Deeplinks

To Search Through Millions of License Plates, Police Should Get a Warrant (Fri, 22 Mar 2019)
Earlier this week, EFF filed a brief in one of the first cases to consider whether the use of automated license plate reader (ALPR) technology implicates the Fourth Amendment. Our amicus brief, filed in the Ninth Circuit Court of Appeals in United States v. Yang, argues that when a U.S. Postal Service inspector used a commercial ALPR database to locate a suspected mail thief, it was a Fourth Amendment search that required a warrant. ALPRs are high-speed, computer-controlled camera systems. Some models can photograph up to 1,800 license plates every minute, and every week, law enforcement agencies across the country use these cameras to collect data on millions of license plates. The plate numbers, together with location, date, and time information, are uploaded to a central server, and made instantly available to other agencies. The data include photographs of the vehicle, and sometimes of its drivers and passengers. ALPRs are typically attached to vehicles, such as police cars, or can be mounted on street poles, highway overpasses, or mobile trailers. One leading commercial database operated by DRN advertises that it contains 6.5 billion plates. DRN is owned by the same company as Vigilant Solutions, and according to testimony from a Vigilant executive in the Yang case, the Vigilant LEARN database used by the Postal Service to locate the defendant includes all of DRN’s records as well as a wealth of data available only to law enforcement agencies. If police want to search through ALPR data, we believe they should get a warrant. In recent years, EFF, the ACLU, and others have called attention to ALPR’s invasive tracking capabilities and its proliferation across the country. We won a major victory when the California Supreme Court agreed with us that the public has a right to know how police use this technology. Starting with Yang, we will be arguing that government use of ALPRs is a search that implicates the Fourth Amendment, and it should require a warrant in routine investigations. ALPRs scan every car, regardless of whether the individual driver is suspected of criminal activity. Similar to cell site location information (CSLI) or GPS tracking, ALPR records can paint a picture of where a vehicle and its occupants have traveled—including sensitive and private places like our homes, doctors’ offices, and places of worship. Commercial vendors operate vast databases of ALPR records, and sell database access to not just law enforcement agencies, but private businesses like repo services and insurance companies. Government employees are frequently able to access records generated by cameras mounted on both private and law enforcement vehicles, giving them access to a vast array of location data. That’s why government use of ALPR could lead to invasive tracking, and necessitates safeguards, such as a warrant requirement. The legal arguments against warrantless ALPR searches are even stronger after a landmark ruling from the Supreme Court last June. The Court’s ruling in United States v. Carpenter involved police tracking a suspect using location data obtained from his cellular provider, but much of its reasoning applies to ALPRs as well. 
For example, Chief Justice Roberts wrote that because nearly everyone uses a cell phone, the government’s tracking ability “runs against everyone,” and “[o]nly the few without cell phones could escape this tireless and absolute surveillance.” ALPR data collection is similarly indiscriminate; anyone who drives on public streets is likely to be tracked and logged in a database available to police. Roberts also pointed to law enforcement’s ability to retrieve CSLI from years in the past, creating a virtual surveillance time machine which “gives police access to a category of information otherwise unknowable.” ALPR databases, too, facilitate retrospective searches of cars whose drivers were not under suspicion at the time they were photographed by an ALPR camera. As we wrote in our amicus brief in Yang, “The confluence of these factors—detailed location data collection about a vast swath of the American population allowing retrospective searches—is why technologies like ALPRs violate expectations of privacy under the Fourth Amendment.” We’ll watch to see what the Ninth Circuit does in Yang, and we’ll be making similar arguments in other ALPR cases soon.
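To see why that kind of retrospective search is so revealing, consider the toy sketch below. It is our own illustration, not any vendor's schema or software; the record layout, plate numbers, and locations are invented. Because every read is stored indiscriminately, a single query on a plate reconstructs a driver's movements long after the fact, with no suspicion required at the time of collection.

```python
from collections import namedtuple
from datetime import datetime

# One row per camera read: plate, when it was seen, and where.
PlateRead = namedtuple("PlateRead", ["plate", "seen_at", "location"])

# A tiny stand-in for a database holding billions of such records.
reads = [
    PlateRead("7ABC123", datetime(2018, 10, 3, 8, 14), "medical clinic lot"),
    PlateRead("4XYZ987", datetime(2018, 10, 5, 17, 40), "shopping mall"),
    PlateRead("7ABC123", datetime(2018, 10, 7, 19, 2), "place of worship"),
]

def history(plate):
    """Return every stored sighting of a plate, oldest first."""
    return sorted((r for r in reads if r.plate == plate), key=lambda r: r.seen_at)

for r in history("7ABC123"):
    print(r.seen_at, r.location)
```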

The U.S. Desperately Needs a “Fiber for All” Plan (Fri, 22 Mar 2019)
We have a real and growing broadband access crisis in the United States. Data from the government and independent analyses show that we are falling behind the rest of the world. This crisis stems from the fact that fiber-to-the-home deployment, the alternative to your gigabit cable monopoly (if you even have that choice), is languishing and slowing down across the board. In contrast to the United States, countries around the world are aggressively modernizing their telecommunications infrastructure. They are actively pushing fiber across the board, with advanced Asian markets like South Korea and Japan already finished and countries in the EU heading towards universal access (including their rural markets). China is predicted to have more than five times the U.S. number of fiber gigabit connections by 2023, reaching around 80 percent of households, or 193.5 million homes. The big difference between the United States and the rest of the advanced economies is that the U.S. is the only country that believes having no plan will solve this issue. We are the only country to completely abandon federal oversight of an uncompetitive, highly concentrated market that sells critical services to all people, yet we expect widely available, affordable, ultra-fast services. But if you live in a low-income neighborhood or in a rural market today, you know very well this is not working, and the status quo is going to cement your local broadband options at either one choice or no choice.
This Means 5G Wireless Is Not Going to Reach Most People
Congress and the FCC have been obsessing over 5G hype, but early estimates are that only about three to nine percent of the market will have 5G access by 2022. It’s important to remember that, no matter what ISPs try to say about 5G, there is no real equivalency between fiber to the home and wireless 5G broadband. The two are not direct competitors, given the superiority of fiber as a transmission medium. The less-spoken truth about 5G networks is that they need dense fiber networks to make them work. One estimate puts the fiber investment that needs to occur at as much as $150 billion—including fiber-to-the-home deployments—in the near future, and we are far below that level of commitment to fiber. In other words, resolving the future of high-speed broadband competition by bringing fiber to all Americans (which would help at least 68 million households stuck in monopoly cable markets) also carries the benefit of ensuring that 5G networks can reach all corners of the country as well.
Where Things Stand Now Without A Fiber Plan
Very small ISPs and local governments with limited budgets are on the front lines of deploying fiber to the home to fix these problems, but policymakers at the federal, state, and local levels need to step up and lead. At least 19 states still have laws that prohibit local governments from deploying community broadband projects. Worse yet, both AT&T and Verizon are actively asking the FCC to make it even harder for small private ISPs to deploy fiber, so that the big incumbents can raise prices and suppress competition, a proposal EFF has urged the FCC to reject. This is why we need to push our elected officials and regulators for a fiber-for-all-people plan to ensure everyone can obtain the next generation of broadband access. Otherwise, the next generation of applications and services won’t be usable in most of the United States. They will be built instead for markets with better, faster, cheaper, and more accessible broadband.
This dire outcome is the central thesis of a recently published book by Professor Susan Crawford (appropriately named Fiber), and EFF agrees with its findings. If American policymakers do not remedy the failings in the US market and actively pursue ways to drive fiber deployment with the goal of universal coverage, then a staggering number of Americans will miss out on the latest innovations that will occur on the Internet, because those innovations will be inaccessible or too expensive. As a result, we will see a worsening of the digital divide, as advances in virtual reality, cloud computing, gaming, education, and things we have not invented yet are going to carry a monopoly price tag for a majority of us—or just not be accessible here. This does not have to be so, but it requires federal, state, and local governments to get to work on policies that promote fiber infrastructure to all people.

This Could Be It: Key Polish Political Party Comes Out Against Article 13 (Fri, 22 Mar 2019)
With only days to go before the final EU debate and vote on the new Copyright Directive (we're told the debate will be at 0900h CET on Tuesday, 27 March, and the vote will happen at 1200h CET), things could not be more urgent and fraught. That's why today's announcement by Poland's Platforma Obywatelska—the second-largest party in the European People's Party (EPP) bloc—is so important. Platforma Obywatelska has said that it will vote to block the entire Copyright Directive unless Article 13—a ground-breakingly terrible Internet law that will lead to widespread filtering of all Europeans' Internet speech, images, and videos—is stricken from the final draft. EPP, a coalition of European national political parties, is the key backer of Article 13 and the largest party in the European Parliament. Without its support, Article 13 is very unlikely to make it through the final vote. The EPP is deeply split on the issue. EPP parties from Luxembourg, Sweden, and the Czech Republic all oppose the measure, so Poland is in good company. The other blocs that strongly back Article 13 are the S&D (socialist) and ALDE (liberal) MEPs. 126 members of the Parliament have expressly pledged to vote against Article 13, and more than 5,000,000 Europeans have signed a petition against it. This is the largest petition in European history! It's vital that Europeans contact their MEPs as soon as possible to urge them to vote against Articles 11 and 13. On Sunday, the streets of Europe will be flooded with demonstrators marching against the Directive. This could be the final battle over the Directive. If it dies in Tuesday's vote, there will be no chance to bring it back before EU elections in May. This is no time to sit on the sidelines. Step up and be heard. They have the money, but we have the people!
Take Action: Stop Article 13

Congress Has a Chance to Finally End the NSA’s Mass Telephone Records Program (Thu, 21 Mar 2019)
Earlier this month, the New York Times published a major story reporting that the NSA has stopped using the authority to run its massive, ongoing surveillance of Americans’ telephone records. After years of fighting mass surveillance of telephone records, the story may make our jobs easier: the NSA has consistently claimed this surveillance was critical to national security. But now it appears that the agency couldn’t properly use the authority Congress granted it in the 2015 USA Freedom Act, so it has simply given up. Coincidentally, EFF had organized a briefing of congressional staff, the day after the Times report, on the controversial surveillance law used to conduct telephone record surveillance: Section 215 of the Patriot Act. As we told Congress, it is long past time to end the telephone records program for good. Now, we’ve signed a letter to House Judiciary Committee leadership repeating that demand, along with a list of other important reforms we’d like to see before Section 215 and two other Patriot Act provisions expire in December. The Times story only added to a feeling of unfinished business from the last time Section 215 was set to sunset, in 2015. When Edward Snowden revealed the NSA’s use of Section 215 to conduct its telephone records program, EFF, the ACLU, and others sued to stop it. The courts, Congress, and public opinion seemed to be on our side: The Second Circuit Court of Appeals ruled that the government’s reliance on the law was “unprecedented and unwarranted,” and shortly afterward, Congress passed the USA Freedom Act, which was intended to stop this mass surveillance. But USA Freedom was incomplete: it still allowed the government to conduct suspicionless, ongoing collection of Americans’ telephone records, although under tighter, more specific controls than the program revealed by Snowden. As information has emerged about how Section 215 has been used (or not used) since the passage of USA Freedom, we have to question even those modest reforms. First, we learned that a law that was supposed to end mass surveillance still allowed the NSA to collect over 500 million telephone records in 2017 alone—a number that sounds a lot like mass surveillance. In partial explanation of that statistic, the NSA reported last June that it had discovered “technical irregularities” resulting in overcollection of telephone records. The agency addressed that discovery by purging all of the records it had collected since the passage of USA Freedom, and the recent New York Times report suggests that rather than addressing these technical irregularities, the government has simply stopped using Section 215 for this purpose. Given this newest chapter in a long, embarrassing history of post-9/11 surveillance, ending the telephone records program is the obvious step for Congress to take. If the NSA can simply delete every single telephone record it has collected since USA Freedom and not even attempt to fix the technical difficulties it encountered, the law authorizing this program should not remain on the books. That is just the beginning of the reforms Congress should be considering, however. Section 215 has become synonymous with the NSA’s database of billions of telephone records, but the law’s actual scope is far broader than that.
Section 215 allows the government to obtain a secret court order requiring third parties, such as Internet providers and financial institutions, to hand over business records or any other “tangible thing” if the Foreign Intelligence Surveillance Court (FISC) deems them “relevant” to an international terrorism, counterespionage, or foreign intelligence investigation. The Snowden revelations focused attention on the NSA’s tortured interpretation of “relevance” to collect telephone records that it knew to be mostly irrelevant, but defenders of civil liberties and civil rights have worried about the “tangible things” language right from the start. Even if Congress entirely outlaws the most well-known use of Section 215, the government will still have the authority to collect “any tangible thing” based on a very loose relevance standard. We still know very little about these other uses of Section 215, and the government is currently mandated to report only the bare minimum of data about them. Congress should hold public hearings on uses of Section 215 to collect information other than telephone records, and investigate whether there are other still-secret uses of the law that would leave Americans “stunned and angry,” such as targeting individuals based on religion or other First-Amendment–protected activities. Our joint letter to Chairman Nadler details these questions as well as other important transparency reforms that fell by the wayside in the legislative debate around USA Freedom. Finally, it’s reasonable to wonder what happens if our legislative and executive branches fail to act before Section 215 sunsets at the end of this year. In that case, the law would revert to a pre-Patriot Act provision from 1998, which allowed the government to collect only a narrow range of business records (not communications records), only from a limited set of companies such as transportation common carriers and other lodging, storage, and vehicle facilities, and only if it could make the specific showing that the records belonged to an “agent of a foreign power.” The government might argue that this would be “throwing the baby out with the bathwater.” But any surveillance law needs to be justified on its own terms, and the intelligence community would still have many other powers at its disposal. In order to fully assess what reforms are needed, Congress and the public must know more about how Section 215 is used. Congress should demand those answers from the government now.
Related Cases: Klayman v. Obama; First Unitarian Church of Los Angeles v. NSA; ACLU v. Clapper

Who Defends Your Data? Report Reveals Peruvian ISPs' Progress on User Privacy, Still Room for Improvement (Thu, 21 Mar 2019)
Hiperderecho, the leading digital rights organization in Peru, in collaboration with the Electronic Frontier Foundation, today launched its second ¿Quién Defiende Tus Datos? (Who Defends Your Data?), an evaluation of the privacy practices of the Internet Service Providers (ISPs) that millions of Peruvians use every day. This year's results are more encouraging than those in the 2015 report, with Telefónica's Movistar making significant improvements in its privacy policy, responses to judicial orders, and commitment to privacy. Five out of the six ISPs now publish specific, detailed policies on how they collect and process personal data. However, the report also revealed that there is plenty of room for improvement, especially when it comes to user notification and Peruvian ISPs' public commitment to privacy. Internet access has grown significantly in Peru in recent years, particularly through mobile networks. Movistar (Telefónica) and Claro (América Móvil) are the main players, making up 70% of the Internet market. For landline connections, these two ISPs connect more than 90% of users in Peru; Movistar alone has 74.4% of them. The report also evaluated four other telecom operators: Bitel, Entel, Olo, and Inkacel. Every day, users provide these companies with specific information about their movements, routines, and relations: a treasure trove of data for government authorities, who can use unnecessary and disproportionate measures to access it. This constant threat from State authorities demands public awareness and oversight. That’s why this new Peru report aims to push companies to counter surveillance measures that are conducted without proper safeguards, and to be transparent about their policies and practices. This year’s report, available in Spanish, evaluated each ISP on five categories:
Privacy Policy: To earn a star in this category, a company must have published a privacy policy that is easy to understand. It should inform the reader about what data is collected from them, how long it is stored, and for what purposes. Partial compliance got a partially filled star.
Judicial Order: Companies earned a star in this category if they require that the government obtain a warrant from a judge before handing over user data (either content or metadata). Compliance with this requirement for the content of communications, but not for metadata, earned a company a half star.
User Notification: To earn a star in this category, companies must promise to inform their customers of a government request at the earliest moment permitted by the law.
Transparency: This category looked for companies publishing transparency reports about government requests for user data. To earn a full star, the report must provide useful data about how many requests have been received and complied with, include details about the type of requests, the government agencies that made the requests, and the reasons provided by the authority, and describe the guidelines and procedures the company adopts when an authority requests the data. We demanded high standards, but partial compliance gained companies part of a star.
Commitment to privacy: This star recognizes companies that have challenged inaccurate or disproportionate requests for access to data. It also rewards companies that have publicly taken a position in favor of their users’ privacy before Congress and other regulatory bodies. Partial compliance is rewarded with a half star.
A chart in the report ranks the six Peruvian telecommunications companies. This latest report awards more stars than the first edition, which was published in 2015. Now, five out of the six ISPs have published their policies with specific information about the collection and processing of personal data. However, Claro and Entel provide this information using highly technical language, which reduced their score. In order to earn a full star, the information provided must be easily understandable; otherwise it is just a formal measure, with little to no effect in empowering users to fight for their rights. Still, all companies detail how long and for which purposes users’ data is stored. Even Olo, which doesn’t publish a privacy policy, added this information to its regular service provision agreement. We also saw progress in the companies’ commitment to demanding a judicial order before handing over data to government authorities. Bitel and Claro were given a half star for explicitly demanding a warrant when the request is for the content of communications. Movistar received a full star for adhering to this commitment for users’ content and metadata. In 2015, only Movistar received any credit in this category, with a half star. Movistar also stands out in the transparency category. The company’s annual transparency report outlines how many requests it has received and complied with and what types of requests it received, as well as the guidelines and procedures the company follows when an authority requests data. Being transparent about the law enforcement guidelines companies follow is crucial to shedding light on how companies deal internally with government requests for data. This information allows users to understand how companies interpret and apply the legal requirements and whether their procedures follow national and international safeguards. Although Bitel and Claro publish the instances in which they hand user data over to government authorities, they do not go into as much detail as Movistar does. There is still much work to be done. No company earned a star for a public commitment to speak up for their users’ privacy, either in the courts or in legislative and regulatory bodies. Similarly, none of the six companies commit to notifying their customers of a government request at the earliest moment allowed by the law. Peru’s new Criminal Procedure Code states that once a judicial measure has been executed and immediate investigations have been carried out, the affected user must be informed of it whenever the object of the investigation permits notification, and as long as it does not endanger life or the physical safety of third parties. By contrast, no restriction on notice is imposed by the controversial Legislative Decree 1182, which regulates direct access by police authorities to location data. Hiperderecho stressed in the report: “Even if the legal obligation is of the judicial authority’s responsibility, there is much more that companies could do in this context. They can keep a record of the interventions made, promote notification to users after the measure expires or make simultaneous notifications with the authorities (…) in a way that users can enforce their right to go to the courts to request reexamination of the measure or to challenge the decisions issued.” Such proactive measures are particularly important because the law only gives users three business days to challenge these measures.
Hiperderecho's report shows that telecommunications companies are making progress when it comes to complying with the law, but they’re not doing as well as they could. Yet the ¿Quién Defiende Tus Datos? reports, much like EFF’s Who Has Your Back? project, are not only about fulfilling established legal rules. Their aim is to push companies to go beyond the requirements of the law. Peru’s companies must do more, and we’ll remain vigilant to ensure that happens. The report is part of a series across Latin America and Spain adapted from EFF’s Who Has Your Back? reports. Last year, Spain’s ETICAS Foundation, Argentina’s ADC, Chile’s Derechos Digitales, Brazil’s Internet Lab, and Colombia’s Karisma Foundation published their own reports.

The Best of Europe’s Web Went Dark Today. We Can’t Let That Be Our Future. (Thu, 21 Mar 2019)
We’re into the final days before members of the European Parliament vote on the Copyright in the Digital Single Market Directive, home of the censoring Article 13 and the anti-news Article 11. Europeans are still urging their MEPs to vote down these articles (if you haven’t already, call now) and stepping up the visibility of their complaints in this final week.
Take Action: Stop Article 13
The first salvo drawing attention to the damage the Directive will cause has come from the European Wikipedias. German Wikipedia has gone completely dark for today, along with the Czech, Slovak, and Danish Wikipedias, German OpenStreetMap, and many more. With confusing rhetoric, the Directive’s advocates have always claimed that they mean no harm to popular, user-driven sites like Wikipedia and OpenStreetMap. They’ve said that the law is aimed only at big American tech giants, even as drafters have scrambled to address the criticism that it affects all of the Internet. Late in the process, the drafters tried to carve out exceptions for “online encyclopedias,” and the German government and European Parliamentarians fought hard – though ultimately failed – to put in effective exceptions for European start-ups and other competitors. Very few of the organizations and communities these exceptions are meant to protect are happy with the end result. The Wikimedia Foundation, which worked valiantly to improve the Directive over its history, came out last week and declared that it could not support its final version. Even though copyright reform is badly needed online, and Wikipedians fought hard to include positive fixes in the rest of the Directive, Article 13 and Article 11 have effectively undermined all of those positive results. As Wikimedia’s experts write:
Despite some good intentions, the wholly problematic inclusion of Articles 11 and 13 mean that fundamental principles of knowledge sharing are overturned: in practice users and projects will have to prove they are allowed to share knowledge before a platform permits an upload. The EU Copyright Directive envisions a technical and legal infrastructure that treats user generated content with suspicion unless proved legal. We cannot support this—it is better to have no reform at all, than to have one including these toxic provisions.
The European lawmakers who see Article 13 and Article 11 as a simple fix for the woes of entertainment and news media companies still don’t get that the Internet isn’t a competing “industry” – it’s an ecosystem. Companies like Google and Facebook are certainly supported by that ecosystem – but so too are the billions of individuals, thousands of European companies, families, and ad-hoc communities of creators, coders, and services. As Wikimedia says, this Directive makes the simplest actions of those Internet users – sharing and linking – suspect. Websites must check everything that users upload, because if a user uploads something that another person claims as their own, the website can be liable for unbounded costs. If Article 11 passes, everyone will have to make a legal assessment when linking to the news, out of fear that the text accompanying their link contains one word too many and triggers Article 11’s licensing requirements. The sites that are shutting down today in protest are, without question, sites that are home to European creators: the very people that the adherents of Articles 13 and 11 claim to be protecting.
That these parts of the European creative community are so concerned about their own future, and the wider ecology of the Net, should be a giant, flashing warning sign to all MEPs. If you’re in Europe, contact your MEP, and join the protests this weekend. The future doesn’t have to be as dark as it looks today.

More Than 130 European Businesses Tell the European Parliament: Reject the #CopyrightDirective (Wed, 20 Mar 2019)
The EU's Copyright Directive will be voted on in the week of March 25 (our sources suggest the vote will take place on March 27th, but that could change); the Directive has been controversial all along, but it took a turn for the catastrophic during the late stages of the negotiation, which yielded a final text that is alarming in its potential consequences for all internet activity in Europe and around the world. More than 5,000,000 Europeans have signed a petition against Article 13 of the Directive, and there has been outcry from eminent technical experts, the United Nations' special rapporteur on free expression, and many other quarters. Now, a coalition of more than 130 EU businesses have entered the fray, led by file storage service NextCloud. Their letter to the European Parliament calls Article 13—which will lead to mass adoption of copyright filters for online services that will monitor and block user-submitted text, audio, video and images—a "dangerous experiment with the core foundation of the Internet’s ecosystem." They also condemn Article 11, which will allow news publishers to decide who can quote and link to news stories and charge for the right to do so. Importantly, they identify a key risk of the Directive, which is that it will end up advantaging US Big Tech firms that can afford monitoring duties, and that will collect "massive amounts of data" sent by Europeans. March 21st is an EU-wide day of action on the Copyright Directive, with large site blackouts planned (including German Wikipedia), and on March 23, there will be mass demonstrations across the EU. Things are getting down to the wire here, folks. Here's the text of the letter; you can find the original, with the full list of signatories, here. The companies signing this letter to the European Parliament are urging you to vote against Articles 11 and 13 of the proposed copyright directive. The text of the trilogue agreement would harm the European economy and seriously undermine the ability of European businesses to compete with big Internet giants like Google. We support the goal of the legislation to protect the rights of creators and publishers, but the proposed measures are inadequate to reap these benefits and also fail to strike a fair balance between creators and all other parts of society. The success of our business enterprises will be seriously jeopardized by these heavy-handed EU regulations. Especially Article 13 is dangerously experimenting with the core foundation of the Internet’s ecosystem. Making companies directly liable for the content of their users forces these businesses to make billions of legal decisions about the legality of content. Most companies are neither equipped nor capable of implementing the automatic content filtering mechanisms this requires, which are expensive and prone to error. Article 11 is creating a completely new intellectual property right for press publishers. The experience with similar laws in Germany and Spain raises serious doubts about the expected benefits, while the negative impact would be very real. An additional layer of exclusive rights would make it harder to clear the necessary legal hurdles to start new projects. It will make entrepreneurs more hesitant to just launch new projects. Europe would lose any chance to play a significant role on the world stage. 
Startups that build services based on aggregated online information would go out of business, and every company that publishes press summaries of their appearance in the media would be in violation of this law. Although the purpose of these regulations is to limit the powers of big US Internet companies like Google or Facebook, the proposed legislation would end up having the opposite effect. Article 13 requires filtering of massive amounts of data, requiring technology only the Internet giants have the resources to build. European companies will be thus forced to hand over their data to them, jeopardizing the independence of the European tech industry as well as the privacy of our users. European companies like ours will be hindered in their ability to compete or will have to abandon certain markets completely. Given all of these issues it is noteworthy that the final trilogue agreement lacks meaningful safeguards for small and medium enterprises. The broad scope of this law would most likely lead to less new companies being founded in Europe and existing companies moving their headquarters out of Europe. For all those reasons we urge every pro-Startup politician to vote against Article 11 and Article 13. We hope EU lawmakers hear the concerns of these businesses and take them to heart. If you live in the EU, consider taking part in the day of action on March 21; and contact your MEP right now.  Take Action Stop Article 13

EFF Submits Consumer Data Privacy Comment to the California Attorney General (Tue, 19 Mar 2019)
The California Consumer Privacy Act (CCPA) requires the California Attorney General to take input from the public on regulations to implement the law, which does not go into effect until 2020. The Electronic Frontier Foundation has filed comments on two issues: first, how to verify consumer requests to companies for access to personal information, and for deletion of that information; and second, how to make the process of opting out of the sale of data easy, using the framework already in place for the Do Not Track (DNT) system.
Verification of Requests
When it comes to verifying requests that users make of businesses to access their own data, EFF asked the Attorney General to carefully balance the interest of the consumer in obtaining their own personal information without undue delay or difficulty, with their interest in avoiding theft of their private data by people who might make fraudulent CCPA requests for data. If a consumer already has a password-protected account, the Attorney General should mandate use of that password to verify the request. Further, the business must ensure that the requester really knows the password, and didn’t just steal a laptop with an open app, by requiring the requester to log out of the account and present the password again. The AG should also encourage, but not require, two-factor authentication as a form of verification in cases where doing so poses no risk to the user. If a consumer does not have a password, the company must be as certain as is reasonably possible that the requester is the subject of the personal information being requested.
Opting Out of Sales
We also encourage the Attorney General to rely on the existing Do Not Track (DNT) system when issuing rules about consumer requests to opt out of data sales. The DNT system combines a technology (a browser header that announces the user prefers not to be tracked online) with a policy framework (how companies should respond to that signal). The DNT header is already widely supported by most major web browsers, including Google Chrome, Mozilla Firefox, and Opera. EFF proposes that the Attorney General require any business that interacts with consumers directly over the Internet to treat a browser’s DNT signal as a request to opt out of the sale of that consumer’s data. We thank the Attorney General’s office for the opportunity to comment on CCPA regulations, and look forward to making further comments about consumer data privacy. To read EFF’s comments in full, please click here.
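To see how simple honoring that signal could be, here is a minimal sketch. It is our illustration, not language or code from EFF's filing; the function name and sample headers are hypothetical. It simply treats a browser's DNT: 1 header as a consumer request to opt out of the sale of their personal information, along the lines EFF proposes.

```python
def requests_opt_out(request_headers):
    """Hypothetical check: treat a Do Not Track signal (the "DNT: 1" header)
    as a consumer request to opt out of the sale of their personal data."""
    return request_headers.get("DNT", "").strip() == "1"

# Example: a business receiving this header would refrain from selling
# this visitor's data, with no further opt-out paperwork required.
incoming_headers = {"DNT": "1", "User-Agent": "ExampleBrowser/1.0"}
if requests_opt_out(incoming_headers):
    print("Visitor has opted out of data sales.")
```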

The European Copyright Directive: What Is It, and Why Has It Drawn More Controversy Than Any Other Directive In EU History? (Tue, 19 Mar 2019)
During the week of March 25, the European Parliament will hold the final vote on the Copyright Directive, the first update to EU copyright rules since 2001. Normally this would be a technical affair watched only by a handful of copyright wonks and industry figures, but the Directive has become the most controversial Directive in EU history, with the petition opposing it attracting more signatures than any other petition in change.org’s history.
How did we get here?
European regulations are marathon affairs, and the Copyright Directive is no exception: it had been debated and refined for years, and as of spring 2017, it was looking like all the major points of disagreement had been resolved. Then all hell broke loose. Under the leadership of German Member of the European Parliament (MEP) Axel Voss, acting as "rapporteur" (a sort of legislative custodian), two incredibly divisive clauses in the Directive (Articles 11 and 13) were reintroduced in forms that had already been discarded as unworkable after expert advice. Voss's insistence that Articles 11 and 13 be included in the final Directive has been a flashpoint for public anger, drawing criticism from the world's top technical, copyright, journalistic, and human rights experts and organizations.
Why can no one agree on what the Directive actually means?
"Directives" are rules made by the European Parliament, but they aren't binding law—not directly. After a Directive is adopted at the European level, each of the 28 countries in the EU is required to "transpose" it by passing national laws that meet its requirements. The Copyright Directive has lots of worrying ambiguity, and much of the disagreement about its meaning comes from different assumptions about what the EU nations will do when they turn it into law: for example, Article 11 (see below) allows member states to ban links to news stories that contain more than a word or two from the story or its headline, but it only requires them to ban links that contain more than "brief snippets"—so one country might set up a linking rule that bans news links that reproduce three words of an article, while other countries might define "snippets" so broadly that very little changes. The problem is that EU-wide services will struggle to present different versions of their sites to people based on which country they're in, and so there's good reason to believe that online services will converge on the most restrictive national implementation of the Directive.
Take Action: Stop Article 13
What is Article 11 (The "Link Tax")?
Article 11 seeks to give news companies a negotiating edge with Google, Facebook, and a few other Big Tech platforms that aggregate headlines and brief excerpts from news stories and refer users to the news companies' sites. Under Article 11, text that contains more than a "snippet" from an article is covered by a new form of copyright and must be licensed and paid for by whoever quotes the text, and while each country can define "snippet" however it wants, the Directive does not stop countries from making laws under which using as little as three words from a news story requires a license.
What's wrong with Article 11/The Link Tax?
Article 11 has a lot of worrying ambiguity: it has a very vague definition of "news site" and leaves the definition of "snippet" up to each EU country's legislature. Worse, the final draft of Article 11 has no exceptions to protect small and noncommercial services, including Wikipedia but also your personal blog.
The draft doesn’t just give news companies the right to charge for links to their articles—it also gives them the right to ban linking to those articles altogether (where such a link includes a quote from the article), so sites can threaten critics writing about their articles. Article 11 will also accelerate market concentration in news media, because giant companies will license the right to link to each other but not to smaller sites, which will not be able to point out deficiencies and contradictions in the big companies' stories.
What is Article 13 ("Censorship Machines")?
Article 13 is a fundamental reworking of how copyright works on the Internet. Today, online services are not required to check everything that their users post to prevent copyright infringement, and rightsholders don't have to get a court order to remove something they view as a copyright infringement—they just have to send a "takedown notice" and the services have to remove the post or face legal jeopardy. Article 13 removes the protection for online services and relieves rightsholders of the need to check the Internet for infringement and send out notices. Instead, it says that online platforms have a duty to ensure that none of their users infringe copyright, period. Article 13 is the most controversial part of the Copyright Directive.
What's a "copyright filter"?
The early versions of Article 13 were explicit about what online service providers were expected to do: they were supposed to implement "copyright filters" that would check every tweet, Facebook update, shared photo, uploaded video, and every other upload to see if anything in it was similar to items in a database of known copyrighted works, and block the upload if they found anything too similar. Some companies have already made crude versions of these filters, the most famous being YouTube's "ContentID," which blocks videos that match items identified by a small, trusted group of rightsholders. Google has spent $100m on ContentID so far.
Why do people hate filters?
Copyright filters are very controversial. All but the crudest filters cost so much that only the biggest tech companies can afford to build them—and most of those are US-based. What's more, filters are notoriously inaccurate, prone to overblocking legitimate material—and lacking in checks and balances, making it easy for censors to remove material they disagree with. Filters assume that the people who claim copyrights are telling the truth, encouraging laziness and sloppiness that catches a lot of dolphins in the tuna-net.
Does Article 13 require "filters"?
Axel Voss and other proponents of Article 13 removed explicit references to filters from the Directive's text in order to win a vote in the European Parliament. But the new text of Article 13 still demands that the people who operate online communities somehow examine and make copyright assessments about everything their users post: hundreds of billions of social media posts, forum posts, and video uploads. Article 13 advocates say that filters aren't required, but when challenged, not one has been able to explain how to comply with Article 13 without using filters. Put it this way: if I pass a law requiring you to produce a large African mammal with four legs, a trunk, and tusks, we definitely have an elephant in the room.
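To make the mechanics concrete, here is a deliberately naive sketch of the kind of matching an upload filter performs. It is our illustration, with invented names and data, not how ContentID or any real filter is built; real systems rely on fuzzy audio, video, and image matching rather than exact hashes, which is part of why they are so expensive to build and so error-prone.

```python
import hashlib

# Hypothetical database of fingerprints submitted by rightsholders.
claimed_fingerprints = {hashlib.sha256(b"a claimed pop song").hexdigest()}

def fingerprint(upload: bytes) -> str:
    """Reduce an upload to a comparable fingerprint (here, a plain hash)."""
    return hashlib.sha256(upload).hexdigest()

def allow_upload(upload: bytes) -> bool:
    """Block anything matching a claimed work. Note what is missing: the
    filter cannot recognize parody, quotation, or other lawful uses, and it
    trusts that every claim in the database is valid."""
    return fingerprint(upload) not in claimed_fingerprints

print(allow_upload(b"a claimed pop song"))      # False: upload blocked
print(allow_upload(b"an original home video"))  # True: upload allowed
```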
Will every online service need filters?
Europe has a thriving tech sector, composed mostly of "small and medium-sized enterprises" (SMEs), and the politicians negotiating the Directive have been under enormous pressure to protect these Made-In-Europe firms from a rule that would wipe them out and turn over permanent control of Europe's Internet to America's Big Tech companies. The political compromise that was struck makes a nod to protecting SMEs but ultimately dooms them. The new rules grant partial limits on copyright liability only for the first three years of an online service's existence, and even these limits are mostly removed once a firm attains over 5m unique visitors (an undefined term) in a given month. Once a European company hits annual revenues (not profits!) of €10m, it has all the same obligations as the biggest US platforms. That means that the 10,000,001st euro a company earns comes with a whopping bill for copyright filters. There are other, vaguer exemptions for not-for-profit services, but without a clear description of what they would mean. As with the rest of the law, it will depend on how each individual country implements the Directive. France’s negotiators, for example, made it clear that they believe no Internet service should be exempted from the Article’s demands, so we can expect their implementation to provide for the narrowest possible exemption. Smaller companies and informal organizations will have to prepare to lawyer up in these jurisdictions, because that’s where rightsholders will seek to sue. A more precise, and hopefully equitable, solution could finally be decided by the European Court of Justice, but such suits will take years to resolve. Both the major rightsholders and Big Tech will strike their own compromise license agreements outside of the courts, and both will have an interest in limiting these exceptions, so it will come down to those same not-for-profit services or small companies to bear the costs required to win those cases and to live in legal uncertainty until they have been decided.
Take Action: Stop Article 13
What about "licenses" instead of "filters"?
Article 13 only requires companies to block infringing uses of copyrighted material: Article 13 advocates argue that online services won't need to filter if they license the catalogues of big entertainment companies. But almost all creative content put online (from this FAQ to your latest tweet) is instantly and automatically copyrighted. Despite what EU lawmakers believe, we don’t live in a world where a few large rightsholders control the copyright of the majority of creative works. Every Internet user is a potential rightsholder. All three billion of them. Article 13 doesn't just require online services to police the copyrights of a few giant media companies; it covers everyone, meaning that a small forum for dog fanciers would have to show it had made "best efforts" to license photos from other dog fancier forums that its own users might repost—every copyright holder is covered by Article 13. Even if an online platform could license all the commercial music, books, comics, TV shows, stock art, news photos, games, and so on (and assuming that media companies would sell them these licenses), it would still somehow have to make "best efforts" to license other users' posts or stop its users from reposting them.
Doesn't Article 13 say that companies shouldn't overblock?
Article 13 has some language directing European countries to make laws that protect users from false copyright takedowns, but while EU copyright sets out financial damages for people whose copyrights are infringed, you aren't entitled to anything if your legitimate posts are censored. So if a company like Facebook, which sees billions of posts a day, accidentally blocks one percent of those posts, that would mean that it would have to screen and rule on millions of users' appeals every single day. If Facebook makes those users wait for days or weeks or months or years for a ruling, or if it hires moderators who make hasty, sloppy judgments, or both, Article 13 gives those users no rights to demand better treatment, and even the minimal protections under Article 13 can be waved away by platforms through a declaration that users' speech was removed because of a "terms of service violation" rather than a copyright enforcement.
Do Article 13's opponents only want to "save the memes"?
Not really. It's true that filters—and even human moderators—would struggle to figure out when a meme crosses the line from "fair dealing" (a suite of European exceptions to copyright for things like parody, criticism and commentary) into infringement, but "save the memes" is mostly a catchy way of talking about all the things that filters struggle to cope with, especially incidental use. If your kid takes her first steps in your living room while music is playing in the background, the "incidental" sound could trigger a filter, meaning you couldn't share an important family moment with your loved ones around the world. Or if a news photographer takes a picture of police violence at a demonstration, or the aftermath of a terrorist attack, and that picture captures a bus-ad with a copyrighted stock-photo, that incidental image might be enough to trigger a filter and block this incredibly newsworthy image in the days (or even weeks) following an event, while the photographer waits for a low-paid, overworked moderator at a big platform to review their appeal. It also affects independent creators whose content is used by established rightsholders. Current filters frequently block original content, uploaded by the original creator, because a news service or aggregator subsequently used that content, and then asserted copyright over it. (Funny story: MEP Axel Voss claimed that AI can distinguish memes from copyright infringement on the basis that a Google image search for "memes" displays a bunch of memes.)
What can I do?
Please contact your MEP and tell them to vote against the Copyright Directive. The Copyright Directive vote is practically the last thing MEPs will do before they head home to start campaigning for EU elections in May, so they're very sensitive to voters right now! And on March 23, people from across Europe are marching against the Copyright Directive. The pro-Article 13 side has the money, but we have the people!
Take Action: Stop Article 13

Here’s Why You Can’t Trust What Cops and Companies Claim About Automated License Plate Readers (Tue, 19 Mar 2019)
Emails Prove ICE Could Access Data from Orange County Shopping Malls, Despite the Companies' Denials
In response to an ACLU report on how law enforcement agencies share information collected by automated license plate readers (ALPRs) with Immigration and Customs Enforcement, officials have been quick to deny and obfuscate, despite documentary evidence obtained directly from ICE itself through a Freedom of Information Act lawsuit. Let’s be clear: you can’t trust what ALPR company Vigilant Solutions and its clients say. It’s time for higher authorities to conduct an audit. Through years of research spanning California (and beyond), EFF has discovered that agencies that access ALPR data are often ignorant or noncompliant when it comes to the transparency and accountability requirements of state law. Furthermore, their agreements with the vendor Vigilant Solutions often include “non-disparagement” and “non-publication” clauses that contractually bind them to Vigilant Solutions’ “media messaging” and prevent agencies from speaking candidly with the press. Meanwhile, training materials created by Vigilant Solutions explicitly recommend that police leave ALPR out of their reports whenever possible. But documents obtained as part of the ACLU’s lawsuit bring another factor into play: sometimes the claims are just jaw-droppingly inaccurate. One email in particular shows exactly how ICE could access data collected at shopping malls through a regional fusion center, despite the mall operator’s and Vigilant Solutions’ repeated denials that it was happening. For background: ALPR is a technology that allows law enforcement and private companies to track the travel patterns of drivers, through networks of cameras that record license plates, along with time, date, and location. That information is uploaded to a database that users can search to find out where a vehicle travelled, reveal what vehicles visited particular locations, and receive real-time alerts on vehicles added to watch lists. It is a mass surveillance technology that captures information on everyone, regardless of whether their vehicle is tied to an investigation. Last summer, EFF volunteer Zoe Wheatcroft, a high school student in Mesa, Ariz., discovered a curious document on a website belonging to the Irvine Company, a real estate developer based in Orange County. The document showed that private security patrols were using ALPR to gather data on customers at Irvine Company-owned shopping malls. As EFF reported, Irvine Company then transferred that information to Vigilant Solutions, a controversial ALPR vendor well-known for selling data to ICE. We asked the mall operator, Irvine Company, to explain itself, but it refused to answer questions. However, after EFF published its report, Irvine Company told reporters ALPR data was not shared with ICE, but only with three local police departments. Then Vigilant Solutions issued a press release saying “the entire premise of the article is false,” and accused EFF of “creating fake news.” Vigilant Solutions also demanded we retract the post and apologize, saying that it was “evaluating potential legal claims” against EFF. What they wouldn’t say publicly is that within two weeks, Irvine Company quietly terminated its whole ALPR program. EFF only learned of this six months later from Irvine Company directly, but the company’s spokesperson refused to tell us the motivation behind ending the surveillance, beyond it being a business decision.
What Really Happened in Orange County
EFF began to investigate Irvine Company’s claims that its ALPR data from the shopping malls was tightly controlled and could never be shared with ICE. We filed public records requests with the police departments that Irvine Company said were the only agencies allowed to access the data. None of them were able to produce any documentation limiting data sharing—or indeed any limitations at all on how the data could be used or shared. Then, earlier this year, the ACLU received more than 1,800 pages of ICE records about the agency’s use of ALPR and Vigilant Solutions’ technology. Buried in the set is an email exchange that shows unequivocally that ICE accessed the Irvine Company’s shopping center data just months before EFF’s report. According to the records: In October 2017, an official with Homeland Security Investigations, an arm of ICE, sent an email to a detective with the La Habra Police Department, who was working out of the regional “fusion center,” the Orange County Intelligence Assessment Center. The ICE HSI specialist asked the detective to run a license plate for them, with no explanation of the purpose of the search, even though documenting a purpose is required by California law. A few hours later, the La Habra detective responded with a PDF attachment exported from Vigilant Solutions’ LEARN software that included the plate scans: "i attached the report... there are a LOT of scans, most of them from fashion island security.. he spends a lot of time parked there.." This email wasn’t just the smoking gun: it was the bullet. The document demonstrates that data could be transferred to ICE.
What They Claimed: The Irvine Company said the data was only shared with the Irvine, Newport and Tustin police departments. “We have been assured through conversations with Vigilant that only those police departments are receiving information,” a spokesperson told the Orange County Register. Vigilant Solutions backed up the claim, writing “As Irvine Company has stated, it is shared with select law enforcement agencies to ensure the security of mall patrons.”
What the Emails Actually Show: A La Habra Police detective had access to mall data through the fusion center. Neither La Habra nor OCIAC is among the three agencies to which data access was supposed to be limited. This raises the question: who else had access to the data? As a fusion center, OCIAC exists to facilitate the exchange of information across agencies. “Intelligence processes—through which information is collected, integrated, evaluated, analyzed, and disseminated—are a primary focus” of the fusion center, according to OCIAC’s website.
What They Claimed: In its press release, Vigilant said, “These law enforcement agencies do not have the ability in Vigilant Solutions’ system to electronically copy this data or share this data with other persons or agencies, such as ICE.”
What the Emails Actually Show: Within hours of receiving the request from ICE, the La Habra detective was easily able to copy the data as a PDF and share it with ICE via email.
EFF reached out to both Irvine Company and Vigilant Solutions prior to publishing this report. Irvine Company would only confirm the date that it stopped the ALPR program, but would provide no further information. Motorola Solutions, which acquired Vigilant Solutions earlier this year, sent the following statement: We are aware of the ACLU of Northern California's recent report on license plate recognition data and assertions regarding data access by the Irvine Company.
The referenced incident predates Motorola Solutions' ownership of Vigilant Solutions, and we are currently working with Vigilant to assess the situation in greater detail. Motorola Solutions is committed to the highest standard of integrity and data protection, which includes ensuring that vehicle location data is accessed only by authorized law enforcement agencies in accordance with applicable laws and industry standards. We also are committed to working with our customers and partners to ensure that use of vehicle location data hosted in our database is appropriately safeguarded to minimize the potential for misuse by any person. Motorola Solutions deeply respects individual privacy rights and is committed to mitigating privacy risks associated with data collection, use and storage. Considering the historic wall of secrecy maintained by Vigilant Solutions and its clients, we believe it is time for a more thorough accounting than just an internal review. We urge the California legislature and the state auditor to investigate Vigilant Solutions and its government clients to find out the truth about how our data is shared with ICE and other agencies and whether these law enforcement agencies are violating state laws regulating the use of this mass surveillance technology. Related Cases: Automated License Plate Readers (ALPR)

Why the Debate Over Privacy Can't Rely on Tech Giants (Sat, 16 Mar 2019)
Ever since the Cambridge Analytica scandal last summer, consumer data privacy has been a hot topic in Congress. The witness table has been dominated by the biggest platforms, with those in lockstep with the tech giants earning the vast majority of attention. However, this week marked the first time that opposing views had a chance to fight back. The Senate Judiciary Committee held a hearing called GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation, and unlike previous hearings, it featured two groups of panelists with contradictory viewpoints. While we still call for a panel that puts consumer advocates and tech giants at the same table to discuss consumer privacy, we appreciate that Judiciary Chair Sen. Lindsey Graham included representatives from DuckDuckGo and Mapbox to discuss how they are able to run successful businesses while also respecting user privacy. It’s clear after this hearing that companies that deliberately over-collect data and sidestep user privacy are making a business choice, and they could choose to operate differently.
Privacy Can Be Good for Business
In his opening statement, DuckDuckGo CEO and founder Gabriel Weinberg said, “Privacy legislation is not anti-advertising…[our] ads won’t follow [the user] around, because we don’t know who you are, where you’ve been, or where you go. It’s contextual advertising versus behavioral advertising.” Press investigations have exposed, time and again, that large tech companies will often choose their profits over your privacy. This underscores the need for stronger privacy laws across the country, and it helps to have another tech CEO tell the Senate that well-drafted privacy legislation can spur more competition and innovation. In fact, Sen. Graham immediately followed up on this point, asking Google’s Senior Privacy Counsel, Will DeVries, to explain how much of Google’s revenue from search terms comes from contextual advertising versus behavioral advertising. Despite being repeatedly pressed by Sen. Graham, DeVries declined to answer and promised to get back to the Senator privately. It’s unfortunate that he couldn’t—or wouldn’t—answer the question. It’s not the first time companies have muddied the waters on this point. Facebook CEO Mark Zuckerberg has previously claimed that users prefer targeted ads, a claim without much merit. It would be useful for Congress (and users) to know whether these companies make such claims because their business models depend on it. We hope Sen. Graham keeps asking that question and receives a real answer. We also doubt the assertion that new privacy laws kill businesses. During the second panel, the Judiciary Committee’s top Democrat, Senator Dianne Feinstein, asked if the GDPR was bad for business. CDT’s Michelle Richardson responded by saying that because the GDPR is so new, we don’t yet know its effects. Richardson also cited a Cisco study finding that organizations in Europe that are ready for the GDPR are benefiting from their privacy investments. As we have said before, the real proof of the GDPR’s provisions will be in how they are enforced, and against whom. Those answers will only emerge as European regulators begin to use their new authorities. Similarly, state laws such as BIPA in Illinois, Vermont’s data privacy law, and the CCPA are still so new that we don’t entirely know their impact. Congress needs to allow the laws to work and the courts to make decisions before it gets involved.
Privacy Doesn’t Have to Be Complicated
Many senators criticized the idea that companies should be allowed to assume that their users fully understand what clicking “I agree” on a terms of service agreement means. While discussing the length and complexity of Google’s privacy policy, Sen. John Kennedy said, “You can hide a dead body in there and no one would ever find it.” And then there is the question of whether users actually have a choice. Freshman Sen. Josh Hawley asked DeVries whether users can fully turn off all of Google’s location tracking services on their Android phones. DeVries responded that location tracking is required to "perform basic functions" on the phone. In other words, no—even if a consumer consciously chooses to turn off location tracking on their Android phone, Google is still tracking them. That’s a big deal, and Sen. Hawley noticed: “Here's my basic concern ... that Americans have not signed up for this…They think they can opt out of the tracking that you're performing, but they can't meaningfully opt out.” DeVries offered to follow up with Sen. Hawley later on Google’s tracking practices, saying, "I understand it's a complicated topic." "I don't think it's that complicated," Sen. Hawley responded. Again, it’s disappointing that DeVries wouldn’t answer the question in a public hearing. Android users should have the right to know why they can’t ever turn off collection of sensitive (and apparently valuable) data.
Build a Floor, Not a Ceiling
States across the country have already enacted laws to create strong protections for user privacy. Republicans and tech industry leaders who resist these restrictions have gone on record calling for federal preemption of state privacy laws. They say they want “one national standard” in order to avoid a "patchwork" of regulations—which could moot an ongoing class action suit against Facebook in Illinois and wipe out the CCPA. During the hearing, we were pleased to hear Senator Feinstein say that people should control their data with opt-in consent and that she would oppose efforts to water down the CCPA through a federal privacy law: “I will not support any federal privacy bill that weakens the California standard.” Senator Richard Blumenthal followed up by saying there is “a bipartisan core of support for adopting a law that regards California as a floor, not a ceiling, in terms of privacy standards for both the expectations of what the standard should be as well as enforcement.” We are glad to see these senators take such a strong stand for privacy protections at the state level. We look forward to working with them and hope Congress will continue inviting different viewpoints to the table to work on strong, comprehensive privacy protections for all Americans.

Our Thoughts on the New Zealand Massacre (Fri, 15 Mar 2019)
EFF is deeply saddened and disturbed by the massacre in New Zealand. We offer our condolences to the survivors and families of victims. This horrific event had an online component; one gunman livestreamed the event, and it appears that he had an active and hateful online presence. Enforcing their terms of use, most web platforms appear to have removed the horrendous video and related content. Incidents involving extreme violence invite hard questions about how platforms can enforce their policies without unfairly silencing innocent voices. Online platforms have the right to remove speech that violates their community standards, as is happening here. But times of tragedy often bring calls for platforms to ramp up their speech-policing practices. Those practices often expand to silence legitimate voices—including those that have long sought to overcome marginalization. It’s understandable to call for more aggressive moderation policies in the face of horrifying crimes. Unfortunately, history has shown that those proposals frequently backfire. When platforms over-censor, they often disproportionately silence the speech of their most vulnerable, at-risk users. Egyptian journalist and anti-torture advocate Wael Abbas was kicked off YouTube for posting videos of police brutality. Twitter suspended his account, which contained thousands of photos, videos, and livestreams documenting human rights abuses. In 2017, YouTube inadvertently removed thousands of videos used by human rights groups to document atrocities in Syria. It is difficult to draw lines between the speech of violent extremists and those commenting on, criticizing, or defending themselves from such attacks. It’s much more difficult to make those judgment calls at the scale of a large Internet platform. To make matters worse, bad actors can often take advantage of overly restrictive rules in order to censor innocent people—often the members of society who are most targeted by organized hate groups. It’s not just 8chan-style trolls, either: state actors have systematically abused Facebook’s flagging process to censor political enemies. On today’s Internet, if platforms don’t carefully consider the ways in which a takedown mechanism invites abuse, creating one risks doing more harm than good. And attempts to use government pressure to push platforms to more exhaustively police speech inevitably result in more censorship than intended. Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines for how online platforms should handle removal of speech. Simply put, the Principles say that platforms should: provide transparent data about how many posts and accounts they remove; give notice to users who’ve had something removed about what was removed, under what rules; and give those users a meaningful opportunity to appeal the decision. The Santa Clara Principles help ensure that platforms’ content moderation decisions are consistent with human rights standards. Moderation decisions are one of the most difficult problems on the Internet today. 
Well-meaning platforms and organizations may disagree on specific community standards, but we should all work together to take steps to ensure that those rules aren’t wielded against the most vulnerable members of society.

Critical Free Speech Protections Are Under Attack in Texas (Thu, 14 Mar 2019)
A bill introduced in Texas threatens the free speech rights of 28 million residents by making it easier to bring frivolous lawsuits against speakers and to harass or intimidate them into silence. EFF has long been concerned about these types of lawsuits, called Strategic Lawsuits Against Public Participation, or SLAPPs, as they use legal claims as a pretext to punish individuals exercising their First Amendment rights. That’s why EFF supports efforts to limit or prevent SLAPPs. Twenty-eight states have so-called “anti-SLAPP” laws, which provide invaluable protections to speakers exercising their First Amendment rights, both online and off. While the laws vary, they typically allow the target of the SLAPP suit to quickly get a court to decide whether the case can go forward, and often require the party bringing the claims to demonstrate they have legitimate legal claims. Anti-SLAPP laws also often allow a victorious target of a SLAPP suit to recover attorneys’ fees from the party who brought the meritless claims. Without anti-SLAPP laws, plaintiffs could bring a meritless claim against speakers that they have no intention of winning—just to stop the speech or inflict financial stress by forcing those targeted by the suits to pay for attorneys to defend against meritless claims. Texas has one of the premier anti-SLAPP laws in the country: the Texas Citizens Participation Act, or TCPA. The law currently applies to a broad range of protected First Amendment activity, including discussing matters of public importance or speaking at a government proceeding. A bill introduced earlier this month, H.B. 2730, would gut these and other important protections. The attempt to substantially weaken and narrow the TCPA is particularly concerning because, since its passage in 2011, the law has disposed of numerous lawsuits filed against Texans who were exercising their free speech rights. Some examples of the TCPA’s success at stopping meritless lawsuits include:
- A Dallas-area couple who were sued by a pet-sitting company when they left a negative Yelp review
- Individuals who complained on Facebook about using a “fascia blaster” treatment, which prompted the company selling the product to sue
- Anonymous speakers who posted comments on a nonprofit’s website and avoided being unmasked by a group of lawyers seeking to find out their identities
- An online critic of a multi-level marketing company who was sued after publishing blog posts that were critical of the company
If H.B. 2730 passes, the protections enjoyed by the speakers described above and others will be severely threatened. The bill eviscerates several key protections of the TCPA. First, the bill narrows the scope of activity protected by the law in a way that will allow those bringing lawsuits against speakers to make an end-run around the TCPA’s protections. In short, H.B. 2730 will allow plaintiffs to argue that because they are alleging the speech was defamatory, the TCPA simply does not apply. The bill also removes key definitions that explain what type of activity is protected by the TCPA, creating uncertainty for speakers as to whether the law would protect them, which will chill speech. Additionally, the bill exempts lawsuits that are based on alleged breach of non-disparagement clauses. These types of contracts are notoriously speech restrictive and have been used by websites and other online services to limit users’ or customers’ ability to criticize products or services. Worse, these terms are often buried deep in form contracts.
H.B. 2730 would also prevent the TCPA from applying to a procedure under Texas law that allows parties to attempt to unmask anonymous online speakers without first filing a lawsuit. EFF has been particularly concerned about the use of this pre-litigation discovery process to target anonymous speakers because it can be abused to harass speakers rather than vindicate legitimate legal claims. We filed a brief last year in support of anonymous speakers who posted on the employer review site Glassdoor after a business attempted to use Texas’ pre-lawsuit discovery process to learn their identities. Although the Texas Supreme Court declined to rule that the TCPA applied to the pre-suit discovery process, its ruling had the practical effect of protecting anonymous speakers. If H.B. 2730 passes, litigants will likely increase their use of Texas’ pre-lawsuit discovery process to attempt to unmask anonymous speakers. That result may well succeed in scaring off online critics and chilling speech. The TCPA needs to be defended. It’s a law that’s protected the free speech of more than 28 million Texans, and it is a national model for other states. If you live in Texas, tell your representatives to oppose H.B. 2730. The Texas Protect Free Speech Coalition is organizing opposition to the bill, and the group’s website has sample letters and information to help you make your voice heard. Regardless of where you live, join us in advocating for new federal anti-SLAPP protections that will protect those sued in federal court. EFF has supported such measures in the past and will be pushing for them again this year. Last year, we also launched an anti-SLAPP coalition of non-profit groups to make sure that dissenting voices aren’t drowned out when they’re hit with a SLAPP. Rolling back laws that protect speakers from harassment isn’t the way to go—in Texas, or anywhere else.

If It Really Wants To Restore Debate, Facebook Should Update Its Ad Policy (Wed, 13 Mar 2019)
Last week, Facebook CEO Mark Zuckerberg announced a new “privacy-focused” direction for the company that, while sounding great in theory, also set off several alarm bells—including concerns about competition as the company moves to make its messaging properties indistinguishable from one another. As usual for Zuckerberg, it’s all frying pans and fires: just a few days later, it seemed the company had accidentally-on-purpose picked a fight with one leading competition critic — Senator and Presidential candidate Elizabeth Warren — by deleting Facebook ads, placed by her campaign, that advocated breaking up the platform. Facebook has since restored the ads, and clarified that they were removed solely because they violated policies against the use of the company’s trademarks in advertising on the platform. The company’s advertising platform policy prohibits advertisers from “represent[ing] the Facebook brand in a way that makes it the most distinctive or prominent feature” of the ad, and use of the logo itself is forbidden. This policy goes well beyond what the law requires. Trademarks are intended to protect consumers by helping ensure that a person can identify a product’s source. If you prefer Coke over Pepsi, a logo helps you know which to buy. But advertisers, whether commercial or political, can normally use a trademark as part of speech criticizing conduct or to comment upon corporations and products, as long as the use doesn’t suggest endorsement. If an advertiser, especially a political campaign, is using Facebook’s trademark to identify the company in a critical comment, it’s unlikely people would think Facebook endorsed it. Given Facebook’s outsized influence on political discourse, the company’s choice to go beyond the law matters a lot. The ability to reference a company by name and logo in critical commentary is an extraordinarily important aspect of free speech and fair use. This is how Verizon can compare its wireless coverage to AT&T’s and Microsoft can compare its voice recognition to Apple’s—and how you can call the “Super Bowl” the Super Bowl in commentary (or criticism) despite the phrase being trademarked. Facebook claims it restored Warren’s advertisements “in the interest of allowing robust debate,” and there’s no reason to look for more nefarious reasons for the takedown than the ads having violated the trademark policy. But trademark has often been used to limit debate, accidentally or intentionally, and if the company shuts down ads that use Facebook trademarks by default, then it is also censoring critics and silencing debate by default. As Warren’s ads point out, using the platform is an important way to spread a political message—over 30% of the world’s population uses the platform monthly. If it truly wants to restore debate, Facebook should stop censoring first and asking questions later. This takedown is also a reminder that there have always been two groups of people on the platform: those with the power to contest censorship and those without. After the ads were restored, Warren wrote that “you shouldn't have to contact Facebook's publicists in order for them to decide to ‘allow robust debate’ about Facebook,” and she’s right. If a candidate with less name recognition had their ads removed for a similar reason, it’s difficult to know if they would have been restored. 
We’re just in the run-up to the 2020 Presidential campaign cycle, so this issue is only going to get more attention over the next twelve months. While no popular politician has taken a bigger swing at Facebook than Warren, criticisms of it are frequent across the political spectrum. The company is interwoven into the political process so deeply at this moment that every action it takes regarding political ads will be scrutinized closely. In a growing number of countries, Facebook requires a verification process to run such ads, which puts the company in control of whether or not an ad is “political.” Earlier this year, Facebook blocked transparency tools that informed users of how they were being targeted by advertisers. If Facebook wants to allow robust debate—and not be at the center of it—it should update its advertising policy, for political ads at least, and stop taking down uses of its trademark by default.

When Facial Recognition Is Used to Identify Defendants, They Have a Right to Obtain Information About the Algorithms Used on Them, EFF Tells Court (Tue, 12 Mar 2019)
We urged the Florida Supreme Court yesterday to review a closely watched lawsuit to clarify the due process rights of defendants identified by facial recognition algorithms used by law enforcement. Specifically, we told the court that when facial recognition is secretly used on people later charged with a crime, those people have a right to obtain information about how the error-prone technology functions and whether it produced other matches. EFF, the ACLU, Georgetown Law’s Center on Privacy & Technology, and the Innocence Project filed an amicus brief in support of the defendant’s petition for review in Willie Allen Lynch v. State of Florida. Prosecutors in the case didn’t disclose information about how the algorithm worked, that it produced other matches that were never considered, or why Lynch’s photo was targeted as the best match. This information qualifies as “Brady” material—evidence that might exonerate the defendant—and should have been turned over to Lynch. We have written extensively about how facial recognition systems are prone to error and produce false positives, especially when the algorithms are used on African Americans, like the defendant in this case. Researchers at the FBI, MIT, and ProPublica have reported that facial recognition algorithms misidentify black people, young people, and women at higher rates than white people, the elderly, and men. Facial recognition is increasingly being used by law enforcement agencies around the country to identify suspects. It’s unfathomable that technology that could help to put someone in prison is used mostly without question or oversight. In Lynch’s case, facial recognition could help to send him to prison for eight years. Undercover police used an older-model cell phone to photograph Lynch at an oblique angle while he was in motion. The photo, which is blurred in places, was run through a facial recognition algorithm to see whether it matched any images in a database of county booking photos. The program returned a list of four possible matches, the first of which was Lynch’s from a previous arrest. His photo was the only one sent on to prosecutors, along with his criminal records. The algorithm used on Lynch is part of the Face Analysis Comparison Examination Systems (FACES), a program operated by the Pinellas County Sheriff’s Office and made available to law enforcement agencies throughout the state. The system can search over 33 million faces from drivers’ licenses and police photos. It doesn’t produce “yes” or “no” responses to matches; it rates matches as likely or less likely. Error rates in systems like this can be significant, and the condition of Lynch’s photo only exacerbates the possibility of errors. FACES is poorly regulated and shrouded in secrecy. The sheriff said that his office doesn’t audit the system, and there’s no written policy governing its use. The sheriff’s office said it hadn’t been able to validate the system, and “cannot speak to the algorithms and the process by which a match is made.” Lynch didn’t learn that he had been identified by a facial recognition algorithm until just days before his final pretrial hearing, although prosecutors had known for months. Prior to that, prosecutors had never disclosed information about the algorithm to Lynch, including that it produced other possible matches. Neither the crime analyst who operated the system nor the detective who accepted the analyst’s conclusion that Lynch’s face was a match knew how the algorithm functioned. 
The analyst said the first-listed photo in the search results is not necessarily the best match—it could be one further down the list. An Assistant State Attorney doubted the system was reliable enough to meet standards used by courts to assess the credibility of scientific testimony and whether it should be used at trial. Lynch asked for the other matches produced by FACES—the court refused. If a human witness who identified Lynch in a line-up said others in the line-up also looked like the criminal, the state would have had to disclose that information, and Lynch could have investigated those alternate leads. The same principle should have required the state to disclose other people the algorithm produced as matches and information about how the algorithm functions, EFF and ACLU told the Florida Supreme Court. When defendants are facing lengthy prison sentences or even the death penalty, tight controls on the use of facial recognition are crucial. Defendants have a due process right to information about the algorithms used and search results.  The Florida Supreme Court should accept this case for review and provide guidance to law enforcement who use facial recognition to arrest, charge, and deprive people of their liberty. Related Cases:  FBI Facial Recognition Documents

The Patent Office Can’t Ignore Law it Dislikes (Tue, 12 Mar 2019)
Last month, we asked EFF supporters to help save Alice v. CLS Bank, the 2014 Supreme Court decision that has helped stem the tide of stupid software patents and abusive patent litigation. The Patent Office received hundreds of comments from you, telling it to do the right thing and apply Alice, not narrow it. Thank you. Last week, EFF submitted its own comments [PDF] to the Patent Office. In our comments, we explain that the Patent Office’s new guidance on patent-eligibility will make it harder—if not impossible—for examiners to apply Supreme Court law correctly. If examiners cannot apply Alice to abstract patent applications, more invalid patents will issue. That’s not only bad for innovation, it also violates fundamental principles of divided government. It is the Supreme Court, not executive branch agencies like the Patent Office, that interprets the laws Congress passes. The Patent Office’s new guidance aims to undermine Alice in two ways. First, the Guidance narrows ineligible abstract ideas to only three possibilities: mental processes, mathematical formulas, and methods of organizing human activity. No Supreme Court or Federal Circuit decision has ever said that only three categories of abstract ideas exist. In fact, the Supreme Court in Alice went out of its way to explain that it was not going to “labor to delimit the precise contours of the ‘abstract ideas’ category in this case.” That omission is not incidental. Instead of defining a precise “abstract idea” category, the Court endorsed an approach that should be familiar to lawyers: figuring out whether the claims in a given case are abstract by comparing them to past cases. That's how the Court determined that the Alice patent—which covered the idea of using a third-party intermediary—was abstract. It was similar to other abstract patents, like one on the idea of hedging risk. Following Alice, courts have repeatedly recognized abstract ideas by comparing them to other abstract ideas. That is the method the Supreme Court has approved, and the Patent Office should instruct its examiners to apply it as well—not effectively rewrite its own wishes into the Supreme Court’s decision. Second, the Guidance creates an entirely new and unprecedented step within the Supreme Court’s two-step test. According to the Patent Office, an application that recites an abstract idea should still get a patent, as long as it integrates the idea into a “practical application.” That means examiners would bypass the critical second step of the Supreme Court’s patent-eligibility test—identifying an "inventive concept." In Alice, the Supreme Court applied the entire two-step test, and did not suggest there were any loopholes. The idea that any "practical application" is enough to get a patent, even without inventiveness, fails to comply with Alice. The Patent Office's new guidance cites a handful of Federal Circuit decisions in support of its approach. But it ignores countless cases in which the Federal Circuit has rejected ineligible abstract ideas that the Patent Office will now almost certainly approve, and it ignores key aspects of Alice itself. The Patent Office has no authority to ignore case law it dislikes. With your help, we will keep fighting to ensure the patent system promotes innovation by limiting patent grants to actual inventions.

The Foilies 2019 (Sun, 10 Mar 2019)
Recognizing the year’s worst in government transparency
The cause of government transparency finally broke through to the popular zeitgeist this year. It wasn’t an investigative journalism exposé or a civil rights lawsuit that did it, but a light-hearted sitcom about a Taiwanese American family set in Orlando, Florida, in the late 1990s. In a January episode of ABC’s Fresh Off the Boat, the Huang family’s two youngest children—overachievers Evan and Emery—decide that if they sprint through all their homework, they’ll have time to plan their father’s birthday party. “Like the time we knocked out two English papers, a science experiment, and built the White House out of sugar cubes,” Evan said. “It opened up our Sunday for filing Freedom of Information requests.” “They may not have figured out who shot JFK,” Emery added. “But we will.” The eldest child, teenage slacker Eddie, concluded with a sage nod, “You know, once in a while, it’s good to know nerds.” Amen to that. Around the world, nerds of all ages are using laws like the United States’ Freedom of Information Act (and state-level equivalent laws) to pry free secrets and expose the inner workings of our democracy. Each year, open government advocates celebrate these heroes during Sunshine Week, an annual advocacy campaign on transparency. But the journalists and researchers who rely on these important measures every day can’t help but smirk at the boys’ scripted innocence. Too often, government officials will devise novel and outrageous ways to reject requests for information or otherwise stymie the public’s right to know. Even today—20 years after the events set in the episode—the White House continues to withhold key documents from the Kennedy assassination files. Since 2015, the Electronic Frontier Foundation (a nonprofit that advocates for free speech, privacy and government transparency in the digital age) has published The Foilies to recognize the bad actors who attempted to thwart the quests for truth of today’s Evans and Emerys. With these tongue-in-cheek awards, we call out attempts to block transparency, retaliation against those who exercise their rights to information, and the most ridiculous examples of incompetence by government officials who handle these public records.
The Corporate Eclipse Award - Google, Amazon, and Facebook
The Unnecessary Box Set Award - Central Intelligence Agency
The (Harlem) Shaky Grounds for Redaction Award - Federal Communications Commission
The Unreliable Narrator Award - President Donald Trump, the U.S. Department of Justice and U.S. District Court Judges
The Cross-Contamination Award - Stanford Law Professor Daniel Ho
The Scanner Darkly Award - St. Joseph County Superior Court
The Cash for Crash Award - Michigan State Police
The Bartering with Extremists Award - California Highway Patrol
The Preemptive Shredding Award - Inglewood Police Department
The What the Swat? Award - Nova Scotia and Halifax Law Enforcement
The Outrageous Fee Request of the Year - City of Seattle
The Intern Art Project Award - Vermont Gov. Phil Scott
The Least Transparent Employer Award - U.S. Department of Justice
The Clawback Award - The Broward County School Board
The Wrong Way to Plug a Leak Award - City of Greenfield, California
If it Looks like a Duck Award - Brigham Young University Police
The Insecure Security Check Award - U.S. Postal Service
The Corporate Eclipse Award - Google, Amazon, and Facebook
Sunshine laws? Tech giants think they can just blot those out with secretive contracts. 
But two nonprofit groups—Working Partnerships and the First Amendment Coalition—are fighting this practice in California by suing the city of San Jose over an agreement with Google that prevents city officials from sharing the public impacts of development deals, circumventing the California Public Records Act. Google’s proposed San Jose campus is poised to have a major effect on the city’s infrastructure, Bloomberg reported. Yet, according to the organizations’ lawsuit, records analyzing issues of public importance such as traffic impacts and environmental compliance were among the materials Google demanded be kept private under its non-disclosure agreements. And it’s not just Google using these tactics. An agreement between Amazon and Virginia includes a provision that the state will give the corporate giant—which is placing a major campus in the state—a heads-up when anyone files a public records request asking for information about the company. The Columbia Journalism Review reported that Facebook has also used this increasingly common strategy to keep cities quiet and the public in the dark about major construction projects.
The Unnecessary Box Set Award - Central Intelligence Agency
[Photo: Six CDs in white paper cases. Courtesy of National Security Counselors]
After suing the CIA to get access to information about Trump’s classified briefings, Kel McClanahan of National Security Counselors was expecting the agency to send over eight agreed-upon documents. What he was not expecting was for the files—each between three and nine pages—to be spread out across six separate CD-ROMs, each burned within minutes of each other, making for perhaps the most unnecessary box set in the history of the compact disc. What makes this “extra silly,” McClanahan said, is that the CIA has previously complained about how burdensome and costly fulfilling requests can be. Yet the CIA could have easily combined the files onto a single disc and saved itself some time and resources. After all, a standard CD-ROM can hold 700 MB, and all of the files took up only 304 KB of space.
The (Harlem) Shaky Grounds for Redaction Award - Federal Communications Commission
After repealing the Open Internet Order and ending net neutrality, Federal Communications Commission Chairman Ajit Pai doubled down on his efforts to ruin online culture. He released a cringe-inducing YouTube video titled “7 Things You Can Still Do on the Internet After Net Neutrality” that featured his own rendition of the infamous “Harlem Shake” meme. (For the uninitiated, the meme is characterized by one person subtly dancing in a room of people to Baauer’s track “Harlem Shake.” Then the bass drops and the crowd goes nuts, often with many people in costumes.) MuckRock editor JPat Brown filed a Freedom of Information Act request for emails related to the video, but the FCC rejected the request, claiming the communications were protected “deliberative” records. 
Brown appealed the decision, and the FCC responded by releasing all the email headers while redacting the contents, claiming that anything more would cause “foreseeable harm.” Brown did not relent, and a year later the FCC capitulated and released the unredacted emails. “So, what did these emails contain that was so potentially damaging that it was worth risking a potential FOIA lawsuit over?” Brown writes. “Pai was curious when it was going live, and the FCC wanted to maintain a veto power over the video if they didn’t like it.” The most ridiculous redaction of all was a tiny black box in an email from the FCC media director. Once removed, all that was revealed was a single word: “OK.”
The Unreliable Narrator Award - President Donald Trump, the U.S. Department of Justice and U.S. District Court Judges
When President Trump tweets attacks on the intelligence community, transparency groups and journalists often file FOIA requests (and subsequently lawsuits) seeking the documents that underpin his claims. The question that often comes up: Do Trump’s smartphone rants break the seal of secrecy on confidential programs? The answer seems to be no. Multiple judges have sided with Justice Department lawyers, concluding that his Twitter disclosures do not mean that the government has to confirm or deny whether records about those activities exist. In a FOIA case seeking documents that would show whether Trump is under investigation, U.S. District Judge Amy Berman Jackson said that the President’s tweets to that effect are “speculation.” Similarly, in a FOIA suit to get more information about the widely publicized dossier of potential ties between Trump and Russia, U.S. District Judge Amit Mehta said that the President’s statements are political rather than “assertions of pure fact.” And so, whether Trump actually knows what he’s talking about remains an open question.
The Cross-Contamination Award - Stanford Law Professor Daniel Ho
One of the benefits of public records laws is that they allow almost anyone—regardless of legal acumen—to force government agencies to be more transparent, usually without having to file a lawsuit. But in Washington State, filing a public records request can put the requester at legal risk of being named in a lawsuit should someone else not want the records to be made public. This is what happened to Sarah Schacht, a Seattle-based open government advocate and consultant. For years Schacht has used public records to advocate for better food safety rules in King County, an effort that led to the adoption of the food safety placards now found in restaurants in the region. After Schacht filed another round of requests with the county health department, she received a letter in November 2018 from Stanford Law School professor Daniel Ho’s attorney threatening to sue her unless she abandoned her request. Apparently, Ho had been working with the health department to study the new food safety and placard regulations. He had written draft studies that he shared with the health department, making them public records. Ho’s threat amounted to an effort to intimidate Schacht out of receiving public records, probably because he had not formally published his studies first. Regardless of motive, the threat was an awful look. But even when faced with the threat, Schacht refused to abandon her request. Fortunately, the lawsuit never materialized, and Schacht was able to receive the records. 
Although Ho’s threats made him look like a bully, the real bad actor in this scenario is Washington State’s public records law. The state’s top court has interpreted the law to require parties seeking to stop agencies from releasing records (in what are sometimes called reverse-FOIA suits) to also sue the original requester along with the government agency.
The Scanner Darkly Award - St. Joseph County Superior Court
[Image: A photocopy of a CD with personal information blacked out. Courtesy of Jessica Huseman]
ProPublica reporter Jessica Huseman has been digging deep into the child welfare system and what happens when child abuse results in death. While following up on a series of strangulations, she requested a copy of a case file from the St. Joseph County Superior Court in Indiana. Apparently, the clerk on the other end simply took the entire file and ran everything through a scanner. The problem was that the file contained a CD-ROM, and that’s not how CD-ROMs work. “Well this is the first time this had happened,” Huseman posted to Twitter, along with the blotchy black-and-white image of the top of the disc. “They scanned a CD as part of my FOI and didn't give me its contents. Cool cool.”
The Cash for Crash Award - Michigan State Police
As tech companies experiment with autonomous vehicles on public roadways, reporters are keeping tabs on how often these cars are involved in collisions. That’s why The Information’s Matt Drange has been filing records requests for the crash data held by state agencies. Some government departments have started claiming that every line of a dataset is its own, individual record and subject to a copy fee. Our winner, the Michigan State Police, proposed to charge Drange a 25-cent fee for each line of a 1.9 million-line dataset, plus $20 for a thumbdrive, for a grand total of $485,645.24, with half of it due up front. Runners-up that quoted similar line-by-line charges include the Indiana State Police ($346,000) and the North Carolina Department of Transportation ($82,000). Meanwhile, Florida’s government released its detailed dataset at no charge at all.
The Bartering with Extremists Award - California Highway Patrol
In 2016, the Traditionalist Worker Party (TWP), an infamous neo-Nazi group, staged a demonstration at the California State Capitol. Counter-protesters fiercely opposed the demonstration, and the scene soon descended into chaos, leaving multiple people injured. When the dust settled, a member of the public (disclosure: also a co-author of this piece) filed a California Public Records Act request to obtain a copy of the permit the white nationalist group filed for its rally. The California Highway Patrol rejected the request for this normally available document, claiming it was related to a criminal investigation. Two years later, evidence emerged during criminal proceedings that a CHP detective had used the public records request as a bargaining chip in a phone call with the TWP protest leader, who was initially reluctant to provide information. The officer told him how the request might reveal his name. “We don’t have a reason to...uh...deny [the request],” the officer said, according to a transcript of the call. But once the organizer decided to cooperate, the officer responded, “I’m gonna suggest that we hold that or redact your name or something...uh...until this thing gets resolved.” In light of these new facts, the First Amendment Coalition filed a new request for the same document. It too was denied. 
The Preemptive Shredding Award - Inglewood Police Department
[Illustration: A cop shredding public documents]
In defiance of the law enforcement lobby, California legislators passed a law (SB 1421) requiring police and sheriffs to disclose officer misconduct records in response to California Public Records Act requests. These documents, often contained in personnel files, had historically been untouchable by members of the public and the press. Almost immediately, police unions across the Golden State began to launch lawsuits to undermine these new transparency measures. But the Inglewood Police Department takes the prize for its efforts to evade scrutiny. Mere weeks before the law took effect on Jan. 1, 2019, the agency began destroying records that were set to become publicly available. “This premise that there was an intent to beat the clock is ridiculous,” Inglewood Mayor James T. Butts Jr. told the LA Times in defending the purge. We imagine Butts would find it equally ridiculous to suggest that the fact he had also been a cop for more than 30 years, including serving in Inglewood and later as police chief of Santa Monica, may have factored into his support for the destruction of records.
The What the Swat? Award - Nova Scotia and Halifax Law Enforcement
[Illustration: A SWAT team busting down the door]
One Wednesday morning in April, 15 Halifax police officers raided the home of a teenage boy and his family. “They read us our rights and told us not to talk," his mother would later tell CBC. “They rifled through everything. They turned over mattresses, they took drawers and emptied out drawers, they went through personal papers, pictures. It was totally devastating and traumatic." You might well wonder, what was the Jack Bauer-class threat to geo-political stability? Nothing at all: The Canadian teen had just downloaded a host of public records from openly available URLs on a government website. At the heart of the ordeal were some seriously terrible security practices by Nova Scotia officials. The website created to host the province’s public records was designed so that every request and response had a nearly identical URL, and it placed no technical restrictions on the public’s ability to access any of the requests. This meant that regular public records requests and individuals’ requests to access government files about them, which included private information, were all stored together and available on the internet for anyone, including Google’s webcrawler, to access. All that was necessary was changing a number identifying the request at the end of the URL. What Nova Scotia officials should have done upon learning about the problems with their own public records website was apologize to the public, thank the teen who found these gaping holes in their digital security practices, and implement proper restrictions to protect people’s private information. They didn’t do any of that, and instead sought to improperly bring the force of Canada’s criminal hacking law down on the very person who brought the problem to light. The whole episode—which thankfully ended with the government dropping the charges—was a chilling example of how officials will often overreact and blame innocent third parties when trying to cover up for their own failings. This horror show just happened to involve public records. Do better, Canada. 
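To make the design flaw concrete, here is a minimal sketch in Python of the kind of enumeration such a portal allows. The URL pattern, parameter name, and ID range below are hypothetical placeholders rather than the province's actual site; the point is only that sequential, unauthenticated document IDs can be walked by anyone with a loop (or by a search engine's crawler following links).

```python
# Minimal sketch of the flaw described above, using a HYPOTHETICAL portal URL.
# When document IDs are sequential and the server never checks whether the
# caller is entitled to a given record, walking the ID range is all it takes
# to retrieve every stored request, including ones containing private data.
import urllib.error
import urllib.request
from typing import Optional

BASE_URL = "https://records.example.gov/AttachmentViewer.ashx?id={}"  # hypothetical

def fetch_document(doc_id: int) -> Optional[bytes]:
    """Return the response body for a document ID, or None if it doesn't resolve."""
    try:
        with urllib.request.urlopen(BASE_URL.format(doc_id), timeout=10) as resp:
            return resp.read()
    except urllib.error.URLError:
        return None

# Enumerate a small, made-up range of IDs.
for doc_id in range(7000, 7010):
    body = fetch_document(doc_id)
    if body is not None:
        print(f"id {doc_id}: {len(body)} bytes retrieved")
```

The fix is equally straightforward in principle: tie each stored record to the requester's authenticated session (or to an unguessable token) and have the server check authorization before serving anything, so that guessing an ID yields nothing.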
The Outrageous Fee Request of the Year - City of Seattle
When self-described transparency advocate and civic hacker Matt Chapman sent his request to Seattle seeking the email metadata from all city email addresses (from/to/BCC addresses, time, date, etc.), he expected some pushback, because it does sound like an incredible amount of data to wrangle. Seattle’s response: All the data can be yours for a measly $33 million. Officials estimated that it would take 320 years’ worth of staff time to review the roughly 32 million emails responsive to Chapman’s request. Oh, and they estimated charging an additional $21,600 for storage costs associated with the records. The fee request is the second highest in the history of The Foilies (the Department of Defense won in 2016 for estimating it would take $660 million to produce records on a particular computer forensic tool). Then the city did something entirely unexpected: It revisited the fee estimate and determined that the first batch of records would cost only $1.25 to process. We get it, math is hard. But wait—that’s not all. After paying for the batches of records with a series of $1.25 checks, Chapman received more than he ever bargained for. Rather than disclosing just the metadata for all 32 million emails, Seattle had given him the first 256 characters of every email. Those snippets included passwords, credit card numbers, and other personally identifying information. What followed was a series of conversations between Chapman, Seattle’s lawyers, and the city’s IT folks to ensure he’d deleted the records and that the city hadn’t just breached its own data via a public records request. Ultimately, Seattle officials in January 2018 began sending the data to Chapman once more, this time without the actual content of email messages. The whole episode doesn’t exactly inspire confidence in Seattle officials’ ability to do basic math, comply with the public records law or protect sensitive information.
The Intern Art Project Award - Vermont Gov. Phil Scott
[Illustration: An intern cutting and pasting]
Seattle isn’t the only government to stumble in response to Matt Chapman’s public records requests for email metadata. The Vermont governor’s office also wins for its scissors-and-glue approach to releasing electronic information. Rather than export the email information as a spreadsheet, the Vermont governor’s office told Chapman it had five interns (three of whom were unpaid) working six hours each, literally “cutting and pasting the emails from paper copies.” Next thing Chapman knew, he had a 43-page hodgepodge collage of email headers correlating with one day’s worth of messages. The governor’s attorney told Chapman it would cost $1,200 to process three more days’ worth of emails. Chapman pushed back and provided his own instructions on exporting the data using a computer and not, you know, scissors and glue. Sure enough, he received a 5,500-line spreadsheet a couple of weeks later at no charge.
The Least Transparent Employer Award - U.S. Department of Justice
In the last few years, we’ve seen some great resignation letters from public servants, ranging from Defense Secretary James Mattis telling President Trump “It’s not me, it’s you” to former Attorney General Jeff Sessions’ forced resignation. But the Trump DOJ seems to have had enough of the tradition and has now determined that U.S. Attorney resignation letters are private in their entirety and cannot be released under the Freedom of Information Act. 
Of course, civil servants should have their private information protected by their employer, but that is precisely what redactions are for. Past administrations have released resignation letters that are critical of executive branch leaders. The change in policy raises the question: What are departing U.S. Attorneys now saying that the government wants to hide?
The Clawback Award - The Broward County School Board
After the tragic Parkland shooting, the South Florida Sun-Sentinel went to court to force the Broward County School Board to hand over documents detailing the shooter’s education and disciplinary record. A judge agreed and ordered the release, as long as sensitive information was redacted. But when reporters copied and pasted the file into another document, they found that the content under the redactions was still there and readable. They broke the story of how the school denied the shooter therapeutic services and alternative education accommodations, but then uploaded the school board’s report with working redactions. Rather than simply double-checking its redactions next time, the school board struck back at the newspaper. It petitioned the court to hold the newspaper in contempt and to prevent anyone from reporting on the legally obtained information. Although the local judge didn’t issue a fine, she lambasted the paper and threatened to dictate exactly what the paper could report about the case in the future (which is itself an unconstitutional prior restraint).
The Wrong Way to Plug a Leak Award - City of Greenfield, California
The Monterey County Weekly unexpectedly found itself in court after the city of Greenfield, California, sued to keep the newspaper from publishing documents about the surprising termination of its city manager. When Editor Sara Rubin asked the interim city manager for the complaint the outgoing city manager filed after his termination, she got nothing but crickets. But then, an envelope containing details of a potential city political scandal appeared on the doorstep of one of the paper’s columnists. The weekly reached out to the city for comment and began preparing for its normal Wednesday print deadline. Then, the morning of publication, the paper got a call saying that it was due in court. The city sued to block publication of the documents, to have the documents returned, and to have the paper reveal the identity of the leaker. Attorney Kelly Aviles gave everyone a fast lesson in the First Amendment, pointing out that the paper had every right to publish. The judge ruled in the paper’s favor, and the city ended up paying all of the Monterey County Weekly’s attorney fees.
If it Looks like a Duck Award - Brigham Young University Police
[Illustration: A duck in a cop uniform]
Brigham Young University’s Police Department is certified by the state,* has the powers of the state, but says that it is not actually a part of government for purposes of the Utah transparency law. After the Salt Lake Tribune exposed that the University punished survivors of sexual assault for coming forward and reporting, the paper tried to get records of communications between the police department and the school’s federally required sexual assault coordinator. BYU pushed back, saying that the police department is not subject to Utah’s Government Records Access and Management Act because the police department is privately funded. This actually turns out to be a trickier legal question than you’d expect. 
Brigham Young University itself isn’t covered by the state law because it is a private school. But the university police force was created by an act of the Utah legislature, and the law covers entities “established by the government to carry out the public’s business.” Investigating crime and arresting people seems like the public’s business. Last summer, a judge ruled that the police department is clearly a state agency, but the issue is now on appeal at the Utah Supreme Court. Sometime this year we should learn if the police are a part of the government or not. *Because the BYU police failed to comply with state law and were not responsive to an internal investigation, the Utah Office of Public Safety notified the department on February 20th that the BYU police department would be stripped of its certification on September 1, 2019. The University police plan to appeal this decision.
The Insecure Security Check Award - U.S. Postal Service
Congressional elections can turn ugly, but the opponent of newly elected U.S. Rep. Abigail Spanberger got a boost when the U.S. Postal Service released Spanberger’s entire personnel file, including her security clearance application, without redaction of highly sensitive personal information. When a third party requests a person’s federal employment file without the employee’s permission, the government agency normally releases only a bare-bones record of employment dates, according to a Postal Service spokesperson. But somehow Rep. Spanberger wasn’t afforded these protections, and the Postal Service has potentially made this mistake in a “small number” of other cases this year. Security clearance applications (Form SF-86) are supposed to be analyzed and investigated by the FBI, raising questions about how the FOIA officer got the information in the first place. The Postal Service has apologized for the mistake, which it says was human error, but maybe security clearance applications should be kept just as secure as the state secrets the clearance is meant to protect. The Foilies were compiled by Electronic Frontier Foundation Senior Investigative Researcher Dave Maass, Staff Attorney Aaron Mackey, Frank Stanton Fellow Camille Fischer, and Activist Hayley Tsukayama. Illustrations by EFF Art Director Hugh D'Andrade. For more on our work visit eff.org.

The Inextricable Link Between Modern Free Speech Law and the Civil Rights Movement (Fri, 08 Mar 2019)
No excuse is needed to celebrate the civil rights icon Rev. Fred Shuttlesworth. But this weekend is an especially appropriate time to recognize his contributions to First Amendment jurisprudence, and the inextricable link between modern free speech law and the civil rights movement of the 1950s and 1960s. This link remains pertinent: the Internet is as important a venue for protest and dissent as streets and newspapers were then, especially in light of recent attacks on this legal legacy. Why this weekend? It marks the anniversaries of the Supreme Court handing down three victories for Shuttlesworth, all three of which shed light on the civil rights-free speech link, and two of which are landmark First Amendment cases of the 20th century. March 9, 2019 marks the 55th birthday of the U.S. Supreme Court’s decision in Abernathy v. Sullivan, 376 U.S. 254 (1964), in which Shuttlesworth was one of the defendants, and of the summarily decided Shuttlesworth v. Birmingham, 376 U.S. 339 (1964). March 10, 2019 marks the 50th birthday of a different Shuttlesworth v. Birmingham, 394 U.S. 147 (1969).1 All of these historically and doctrinally important cases are discussed below.
The Sullivan Cases
Abernathy v. Sullivan was decided in the same opinion as New York Times v. Sullivan.2 The Court’s joint opinion raised the standards required of defamation lawsuits brought by public figures, protecting the rights of both the public and the press to criticize the operations of government. The decision was an historic free speech victory when it was decided in 1964, and it continues to protect the Internet as a forum for free speech today. Because “New York Times v. Sullivan” is the name on the Court’s opinion, and because the decision is frequently talked about as a free press case, the fact that these were also civil rights cases often gets overlooked. But the cases' civil rights history is crucially important for the contemporary debate about speech online. This history reminds us that the development of modern First Amendment law was driven in large part by civil rights concerns. And it reminds us that the First Amendment still serves civil rights concerns today by continuing to demand exacting scrutiny of race-neutral laws subject to race-conscious applications. This history also attains greater relevance in light of Justice Thomas’s recent troubling call for the U.S. Supreme Court to re-examine the landmark Sullivan ruling. Justice Thomas’s statement, while the voice of only one Supreme Court justice, is especially concerning in light of President Trump’s aspiration to “open up the libel laws,” which seems to be aimed at overruling Sullivan. In Sullivan, the Supreme Court firmly rejected the efforts of Southern officials opposed to civil rights to strangle the civil rights movement through crushing defamation liability judgments in state courts in Alabama and elsewhere. Were it not for the Sullivan decision, they might well have succeeded. It’s a lesson we should not forget as we consider today’s debates about free speech online. The opinion addresses issues like what we might now call “fake news” and attacking intermediaries to silence the speakers who rely on them. The strategy of bringing defamation and similar claims to try to drown out political opposition certainly continues today, for example, with recent efforts to sue Greenpeace and other protestors. 
Contrary to Justice Thomas’ remarks in 2019 that “[t]he states are perfectly capable of striking an acceptable balance between encouraging robust public discourse and providing a meaningful remedy for reputational harm,” the Supreme Court in 1964 did not trust Alabama to do so, or to apply other seemingly neutral laws in an acceptable way. That distrust was well-founded. The plaintiff-friendly common law of defamation applied by the Alabama courts, and most states, was just one tool that officials and courts used as part of a widespread effort to suppress the civil rights movement. Indeed, Sullivan was just one of several cases the Supreme Court decided against Alabama officials in 1964 alone, repudiating their efforts to broadly suppress the civil rights movement.3 The civil rights background of the case was not incidental; rather, it played a critical role in the soaring First Amendment victory. The Court framed the case as an instance of government officials using the instruments of state tort law to punish those seeking to change governmental practices through protest and dissent. The Court acknowledged that officials had in effect reinstated the long-discredited law of seditious libel (the crime of criticizing the government). The Supreme Court rightfully recognized that while previous efforts to perpetuate institutionalized racial discrimination employed race-based laws, the use of race-neutral legal concepts4 like defamation law posed a uniquely dangerous threat. Moreover, it was critical to the Court’s analysis that the New York Times was acting as an intermediary for the speech of civil rights activists. The Times was viewed by Southern segregationists as a vital avenue for communicating the messages of the civil rights movement, both through its intermediary function of running advertisements and letters to the editor, as well as through its own reporting. Justice Brennan’s opinion, echoing the ministers’ arguments,5 emphasized that the First Amendment rights it vindicated were not just those of the press (the speech at issue was not the New York Times’ original content, but a paid advertisement) but also of those who relied on the newspaper to disseminate their messages. Justice Brennan acknowledged that “‘editorial advertisements’” were “an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities—who wish to exercise their freedom of speech even though they are not members of the press. The effect would be to shackle the First Amendment in its attempt to secure ‘the widest possible dissemination of information from diverse and antagonistic sources.’”6 The Court further recognized the disastrous effects of civil damages awards on individual speakers: “Whether or not a newspaper can survive a succession of such judgments, the pall of fear and timidity imposed upon those who would give voice to public criticism is an atmosphere in which the First Amendment freedoms cannot survive.”7 Sullivan’s libel suit was just one of several similar attacks on the New York Times. Indeed, prior to filing his lawsuit, L.B. 
Sullivan himself issued a statement condemning “the prejudiced Northern press,” and “their program of racial strife and exploitation and financial gain and spectacular distorted news coverage.”8 These tactics were largely effective: because of the lawsuits, the New York Times pulled its Alabama reporter for several years, sharply limiting its original reporting on events there.9 Both NYT v. Sullivan and Abernathy et al. v. Sullivan were based on the same speech: the March 29, 1960 publication in the New York Times of an advertisement raising money for The Committee to Defend Martin Luther King and The Struggle for Freedom in The South. The ad, “Heed Their Rising Voices,” alleged that law enforcement across the Southeast U.S. had committed various improper acts against nonviolent civil rights demonstrators, what the ad called “an unprecedented wave of terror by those who would deny and negate that document [the U.S. Constitution] which the whole world looks upon as setting the pattern for modern freedom.” The following were among the allegations: In Montgomery, Alabama, after students sang “My Country, ‘Tis of Thee” on the State Capitol steps, their leaders were expelled from school, and truck-loads of police armed with shotguns and tear-gas ringed the Alabama State College Campus.  When the entire student body protested to state authorities by refusing to re-register, their dining hall was pad-locked in an attempt to starve them into submission.   . . . .  Again and again the Southern violators have answered Dr. King’s peaceful protests with intimidation and violence.  They have bombed his home almost killing his wife and child.  They have assaulted his person.  They have arrested him seven times—for “speeding,” “loitering” and similar “offenses.”  And now they have charged him with “perjury”—under which they could imprison him for ten years.   Obviously, their real purpose is to remove him physically as the leader to whom the students and millions of others look for guidance and support, and thereby to intimidate all leaders who may rise in the South.  Their strategy is to behead this affirmative movement, and thus to demoralize Negro Americans and weaken their will to struggle.  The defense of Martin Luther King, spiritual leader of the student sit-in movement, clearly, therefore, is an integral part of the total struggle for freedom in the South.10 The ad listed as signatories 80 prominent persons from entertainment, politics, and the civil rights movement, and included the additional note that “We in the south who are struggling daily for dignity and freedom warmly endorse this appeal,” followed by the names and locations of 20 Southerners, mostly clergy members active in the civil rights movement. Among these endorsers were four prominent Alabama-based clergymen active in Dr. King’s Southern Christian Leadership Conference: Ralph Abernathy, Fred Shuttlesworth, S.S. Seay, Sr., and Joseph Lowery. About two weeks after the publication of the ad, five Alabama officials (Alabama governor John Patterson, Montgomery mayor Earl D. James, and Montgomery city commissioners L.B. Sullivan, Frank Parks, and Clyde Sellers) each demanded that the four Alabama-based ministers and the New York Times retract the statements in the ad.11 The ministers did not respond to the demand, explaining later that they had not authorized the use of their names in the ad and knew nothing about it.12 Each of these officials then filed their own libel lawsuit. 
Each lawsuit named the same defendants: the New York Times and the four Alabama-based ministers. Neither the ad’s creator, the Committee to Defend Martin Luther King, nor any of the other signatories or endorsers—with one notable exception—were named in these lawsuits.13 Sullivan’s case was decided first and resulted in a $500,000 verdict against the ministers, delivered by an all-white jury. James received an identical $500,000 verdict a few months later.14 Because Alabama law required the ministers to post a $2 million bond against those damage awards in order to appeal the case, which they could not do, the state confiscated the ministers’ bank accounts and sold cars and real estate that they owned.15  This financial persecution of the ministers drove the leadership of the Southern Christian Leadership Conference out of “the toughest parts of the South.”16   In appealing the Sullivan verdict, the ministers made not only First Amendment arguments, but due process and equal protection defenses as well. The due process defenses were based on the lack of evidence that they had authorized the ad. The equal protection concerns reflected a series of problems: the trial courtroom was racially segregated, the jury was all-white, and the judge in a related case had said that the 14th Amendment was inapplicable in Alabama courts, which were instead governed by “white man’s justice.”17 In their petition for certiorari, the ministers claimed that if the verdict were not reversed, “not only will the struggles of Southern Negroes towards civil rights be impeded, but Alabama will have been given permission to place a curtain of silence over its wrongful activities.” The ministers’ First Amendment arguments before the Supreme Court claimed infringements on the “freedoms of speech, press, assembly, and association.”18 Their brief portrayed the libel claims as part of a concerted effort to perpetuate segregation through “lynching, violence and intimidation, through restrictive covenants, Black Codes and Jim Crow laws” and “part of a concerted, calculated program to carry out a policy of punishing, intimidating and silencing all who criticize” Alabama’s enforced segregation.19 The broad reach of both the Abernathy case and the New York Times case was acknowledged in the New York Times oral argument, when Justice Goldberg confirmed that the New York Times was not arguing for a special rule for newspapers, but rather for free speech rights generally.20 The Court issued one opinion to resolve both cases, importantly entering judgment in favor of the defendants rather than remanding the case back to Alabama state courts for new trials.21 Sullivan is revered today because it transformed the common law of defamation and firmly pushed back against the use of libel actions to punish political criticism. Prior to the Court’s decision, defamation law in Alabama (like that of most states) allowed a plaintiff to win a defamation lawsuit with a relatively minimal showing. In particular, to state a defamation claim based on statements that naturally tended to injure a person’s reputation, profession, trade, or business, or bring them into public contempt, a plaintiff merely needed to prove that the defendant published the statements to at least one other person, and that the statements were about the plaintiff. A successful plaintiff did not need to prove that anyone believed the statements to be true, or that their reputation was damaged in any way, or that they suffered any particular injury, financial or otherwise. 
The plaintiff did not need to prove that the defendant was at fault – they faced no requirement to prove either that the defendant made a mistake or acted unreasonably, or that the defendant acted with any intent to harm or spread falsehoods. The plaintiff did not have to prove falsity, though a defendant could successfully defend a case by proving that the statement was true. But in Sullivan, the Court changed the longstanding common law in several ways, each of which protects speakers seeking to challenge oppression: Sullivan shifted the burden of proving falsity to the plaintiff (Otherwise, “would-be critics of official conduct may be deterred from voicing their criticism, even though it is believed to be true and even though it is, in fact, true, because of doubt whether it can be proved in court or fear of the expense of having to do so.”).22 Sullivan required plaintiffs who are public officials to prove “actual malice” – that the defendant intended to lie, or recklessly spread statements despite strongly suspecting they were false. The decision recognized the “citizen-critic’s” duty to criticize public officials, and specifically held that a finding of mere negligence was not sufficient for defamation claims brought by public officials.23 Sullivan required that actual malice be proved with “convincing clarity,” a more demanding standard than the preponderance of the evidence standard usually sufficient in civil cases.24 Sullivan held that statements about the operation of government generally are not statements about which a particular official can sue; this would be too close to the government itself suing for libel.25 The Court also assumed that the infamous Alien and Sedition Acts, passed in 1798, were in retrospect unconstitutional, although they had expired without ever being tested by the Supreme Court.26 The Court found that the ministers could not have known of the false statements in the ad, and thus lacked the required actual malice—even if it could be proven that they had authorized the use of their names in the advertisement. This First Amendment ruling thus compelled judgment in their favor, and the Supreme Court found it unnecessary to rule on the ministers’ due process and equal protection arguments.27 The Court’s use of the First Amendment as an implement for civil rights in Sullivan is even more pronounced considering that later in 1964, the Supreme Court issued another significant ruling against Alabama officials. In NAACP v. Alabama, 377 U.S. 288 (1964), the final ruling in the NAACP’s long legal battle to operate in Alabama, the Court rejected the last of the state’s procedural arguments. Alabama had asserted that the NAACP had not properly registered to operate in the state, and the state judge hearing its challenge ordered the NAACP to disclose its membership. In the fourth iteration of the case to make it to the Supreme Court, the Court catalogued the history of the Alabama judiciary’s efforts to evade the Court’s rulings regarding the NAACP. The Court then finally ended the case, defeating another seemingly race-neutral tactic—compelled disclosure of membership lists—that was a common tool of those trying to suppress civil rights activism.28 With the Court’s decision, the NAACP was able to resume operations in Alabama. 
EFF used this same precedent in First Unitarian Church v. NSA to challenge the NSA’s mass collection of telephone records as violating the right of several political groups to freely associate without governmental knowledge of their membership lists. Shuttlesworth v. Birmingham (1964) On the very same day the Supreme Court decided Sullivan, it also decided a different case in favor of Shuttlesworth, upholding his First Amendment rights to speak out against segregation. In Shuttlesworth v. Birmingham (1964), the Court, unanimously and without an opinion, reversed Shuttlesworth’s conviction for interfering with the chief of police during the Birmingham attacks on the Freedom Riders. The Freedom Riders were civil rights activists who, starting in 1961, rode interstate buses through the South to challenge illegal segregation. A group had been stranded in Birmingham after an attack by KKK members, purportedly aided by the local police, prompted the bus drivers there to refuse to drive them to their next stop. A crowd of approximately 300 supporters who showed up at the bus station to offer support to the Freedom Riders was met by the police. During this confrontation, Shuttlesworth was arrested for interfering with the chief of police’s effort to take the Freedom Riders into supposed “protective custody.” Shuttlesworth apparently interfered with this dubious effort by “block[ing] the chief’s path using words with an intent to do so ‘in rudeness and anger.’” Shuttlesworth was convicted, and his conviction was upheld by the Alabama Court of Appeals, which found that even if the chief was not conducting a valid operation, Shuttlesworth could still be convicted of the alternate crime of assault, again based solely on his spoken words. Shuttlesworth v. Birmingham, 41 Ala. App. 1, 2 (1962). The U.S. Supreme Court summarily reversed, seeming to hold that Shuttlesworth’s conviction could not be based on a charge he did not have the opportunity to defend against. Shuttlesworth v. Birmingham (1969) The Supreme Court’s 1969 decision in a different case also titled Shuttlesworth v. Birmingham remains one of the Court’s most important prior restraint cases. We’re relying on it now, in our ongoing challenges to National Security Letter (NSL) gag orders.   In April 1963, Shuttlesworth was one of three ministers who led a procession of 52 people from a church and through Birmingham. The march used the sidewalks and obeyed all traffic signals. The Birmingham police stopped the marchers after four blocks and arrested them for violating the local law that required a permit for any public protest. The law gave the city the power to deny a permit if “in its judgment the public welfare, peace, safety, health, decency, good order, morals or convenience require that it be refused.” Shuttlesworth was convicted and sentenced to 90 days of imprisonment at hard labor, plus almost $100 in fines and costs. After his conviction was affirmed by the Alabama Supreme Court, he appealed to the U.S. Supreme Court. In reversing the conviction, the Supreme Court set forth the requirements that still apply today for permit schemes. These schemes cannot vest officials with permitting authority “without narrow, objective, and definite standards to guide” them. Permit schemes that vest officials with wholly subjective, unguided discretion, as the Birmingham law did, are unconstitutional, and may be ignored without penalty. 
The Court also used the case to affirm that protests, pickets, parades, marches, and demonstrations are indeed “speech” protected by the First Amendment.  Free Speech and Civil Rights Continue to Intertwine This history about the context in which these cases were decided helps clarify their profound importance—not only as a matter of constitutional jurisprudence, but also as a crucial moment in U.S. legal and political history. Far from being a seminal moment only for press freedom, the Sullivan decision ushered in a new era of respect for First Amendment principles across multiple contexts, and reveals how rights ultimately intersect.  As we consider the free speech fights of today, it’s important to recall that the same intersection requires protection now as much as ever.  We need a strong First Amendment today, protecting today’s marginalized voices in their use of online tools to achieve equity and freedom, as much as we did fifty-five years ago.    1. These were not Shuttlesworth’s only Supreme Court victories. In the 1963 term, the Court decided Shuttlesworth v. Birmingham, 373 U.S. 262 (1963), which reversed Shuttlesworth’s conviction for aiding and abetting trespassing based on his acts of recruiting volunteers to take part in a sit-down demonstration at segregated lunch counters. The Supreme Court vacated Shuttlesworth’s conviction, and many others, finding no trespass because the protestors were unconstitutionally excluded from the lunch counters. That case first reached the Supreme Court in 1962, when the Court first allowed Shuttlesworth to appeal his state conviction in federal court. In re Shuttlesworth, 369 U.S. 35 (1962). In the 1965 term, the Court reversed a conviction for loitering after Shuttlesworth led a group of picketers outside a segregated department store. Shuttlesworth v. Birmingham, 382 U.S. 87 (1965). Shuttlesworth was also one of the petitioners in Walker v. Birmingham, 388 U.S. 307 (1967), which affirmed the issuance of an injunction against a planned walking protest, the same protest that gave rise to Shuttlesworth’s conviction later overturned in the 1969 case. Also, in 1958, Shuttlesworth was the legal representative of his daughter in her lawsuit challenging Alabama’s school segregation law. The Supreme Court summarily affirmed the dismissal of that case in Shuttlesworth v. Board of Education, 358 U.S. 101 (1958). 2. Although the Supreme Court decided the cases in one opinion, the cases were briefed and argued separately. 3. Shuttlesworth v. Birmingham (1964) and NAACP v. Alabama (1964) are discussed below. 4. As Professor Christopher Schmidt has written, “By the late 1950s and early 1960s, however, the tactic of defending legalized segregation on its own terms had largely run its course. Legally mandating racial segregation and other forms of overt racial discrimination was rapidly becoming a lost cause .… Unlike the legal battles segregationists waged in the 1940s and 1950s, this new legal attack on the Civil Rights Movement relied on laws that said nothing about race. These were laws regulating disorderly conduct, trespass, disturbing the peace, and defamation. Even tax law became a weapon against the Civil Rights Movement. As the Movement gained momentum, segregationists used these and other race-neutral laws to target civil rights activists and their allies. The race-conscious use of race-neutral law became Jim Crow’s front line of defense.” Christopher W. Schmidt, “New York Times v. 
Sullivan and the Legal Attack on the Civil Rights Movement,” 66 Ala. L. Rev. 293, 295 (2014). Aside from defamation law, civil rights organizations like the NAACP were subject to laws regulating the legal profession, students were subject to disciplinary actions for protesting, organizations and leaders were prosecuted for tax evasion, and disorderly conduct and trespass laws were disproportionately enforced against civil rights protestors. Schmidt at 299-306. 5. The ministers made this argument in several of the cases arising from the ad. For example, their Complaint in Abernathy v. Patterson, a case challenging the seizure of their assets pending the appeal of the state trial court’s Sullivan verdict, alleged that “The defendants herein at some time thereafter conspired and planned under the color of law and utilizing their official positions, as well as the judicial machinery of the State, to deter and prohibit the plaintiffs and their supporters as set forth above, from utilizing their constitutional rights and in particular their right to access to a free press, by instituting fraudulent actions in libel against the plaintiffs, without any basis in law or fact, in the Alabama State courts, arising out of the aforesaid advertisement.” Abernathy v. Patterson, 295 F.2d 452, 454 (5th Cir. 1961). 6. 376 U.S. at 266. 7. Id. at 278. 8. Schmidt at 304-05. 9. William E. Lee, “Citizen-Critics, Citizen Journalists, and the Perils of Defining the Press,” 48 Ga. L. Rev. 757, 759 n.10 (2014). 10. There were apparently several inaccuracies in these statements: “Although Negro students staged a demonstration on the State Capitol steps, they sang the National Anthem and not "My Country, 'Tis of Thee." Although nine students were expelled by the State Board of Education, this was not for leading the demonstration at the Capitol, but for demanding service at a lunch counter in the Montgomery County Courthouse on another day. Not the entire student body, but most of it, had protested the expulsion, not by refusing to register, but by boycotting classes on a single day; virtually all the students did register for the ensuing semester. The campus dining hall was not padlocked on any occasion, and the only students who may have been barred from eating there were the few who had neither signed a preregistration application nor requested temporary meal tickets. Although the police were deployed near the campus in large numbers on three occasions, they did not at any time "ring" the campus, and they were not called to the campus in connection with the demonstration on the State Capitol steps, as the third paragraph implied. Dr. King had not been arrested seven times, but only four, and although he claimed to have been assaulted some years earlier in connection with his arrest for loitering outside a courtroom, one of the officers who made the arrest denied that there was such an assault. [¶] . . . . Although Dr. King's home had, in fact, been bombed twice when his wife and child were there  . . .  the police were not only not implicated in the bombings, but had made every effort to apprehend those who were.” 11. See Parks v. New York Times, 308 F.2d 474, 476 (5th Cir. 1962). 12. According to one account: “John Murray, who helped prepare the ad for the Committee to Defend Martin Luther King, testified that the ministers' names were not in the first version of the ad brought to the Times. 
Bayard Rustin, executive director of the Committee to Defend Martin Luther King, was not satisfied with the draft of the ad and instructed Murray to include the names of ministers whose churches were affiliated with the SCLC. Rustin insisted it was not necessary to get permission for the use of names ‘because they were all part of the movement.’” Lee at 761 n.17. See also Anthony Lewis, Make No Law, at 32 n.5 (1991); Kermit L. Hall & Melvin Urofsky, New York Times v. Sullivan, at 16-17 (Peter Charles Hoffer & N.E.H. Hull eds., 2011). The specific legal theory Sullivan, James, and the others pursued against the ministers was one of ratification or adoption, whereby the silence of the ministers in response to the retraction demand made them responsible for its contents even if they did not in advance approve or know of the inclusion of their names as endorsers. The 5th Circuit ultimately found that such silence, in addition to evidence that the ministers may have benefitted from the ad, since the SCLC ultimately received some of the funds raised, was sufficient to support a finding of ratification. Parks, 308 F.2d at 479. 13. Governor Patterson’s lawsuit also named Dr. King, who was also listed as an “endorser,” as a defendant. Dr. King, in a deposition in the case, similarly testified that he had not authorized the inclusion of his name as a signatory. Some believe that the four individual Alabama residents were sued as a legal maneuver to make sure that the cases were tried in Alabama state courts, rather than removed to federal court, as they likely would have been had a non-Alabama entity, the New York Times, been the only defendant. See Lewis at 13. But the four ministers themselves believed they were specifically targeted as part of a broader effort by Alabama officials to suppress the civil rights movement in that state. See Lee at 758 & n.1, 764. They filed a lawsuit to this effect in which they sought to enjoin enforcement of the Sullivan and James judgments pending appeal. See Abernathy v. Patterson, 295 F.2d 452 (5th Cir. 1961) (affirming dismissal of the case). 14. See Parks, 308 F.2d at 476. The other cases did not go to trial before the Court ruled in New York Times v. Sullivan, effectively ending all of the cases based on the ad. 15. Lee at 759. The ministers filed a separate action to enjoin the state from confiscating and selling their property, but that action was dismissed. Abernathy v. Patterson, 295 F.2d 452 (5th Cir. 1961) (affirming dismissal of the case). 16. Lee at 759 & n.10 (quoting Taylor Branch, Parting the Waters 580 (1988)). 17. Lee at 761 n.16. 18. Lee at 764. 19. Lee at 765. 20. Lee at 766. 21. Lee at 763-64. 22. 376 U.S. at 279. 23. Id. at 282. 24. Id. at 286. 25. Sullivan was not named in the ad. Rather, he (and, in their separate cases, the other officials) claimed that as the city commissioner who oversaw the police, he could maintain a libel lawsuit based on any statement about police misconduct or wrongful arrests. The Supreme Court ruled that an individual official cannot sue based on general criticism of the government, because such general statements do not adequately pertain to the individual and damage their individual reputation. 376 U.S. at 288. 26. Id. at 273-74. 27. 
Some commentators believe that this was purposeful in order to provide maximum protection to civil rights activism going forward – that the due process and equal protection claims could not have been decided without remanding the case back to the Alabama courts and effecting only piecemeal change when much larger change was needed. Lee at 764. 28. Louisiana used this same tactic against the NAACP and other organizations, including the Southern Conference Education Fund. Mississippi passed a law requiring all teachers to disclose the names of all organizations to which they belonged. Arkansas and Virginia had similar laws. See Schmidt at 299-300.

A Privacy-Focused Facebook? We'll Believe It When We See It. (Fri, 08 Mar 2019)
In his latest announcement, Facebook CEO Mark Zuckerberg embraces privacy and security fundamentals like end-to-end encrypted messaging. But announcing a plan is one thing. Implementing it is entirely another. And for those reading between the lines of Zuckerberg’s pivot-to-privacy manifesto, it’s clear that this isn’t just about privacy. It’s also about competition. The Proof is in the Pudding At the core of Zuckerberg’s announcement is Facebook’s plan to merge its three messaging platforms: Facebook’s Messenger, Instagram’s Direct, and WhatsApp. The announcement promises security and privacy features across the board, including end-to-end encryption, ephemerality, reduced data retention, and a commitment to not store data in countries with poor human rights records. This would mean that your messages on any of these platforms would be unreadable to anyone but you and your recipients; could be set to disappear at certain intervals; and would not be stored indefinitely or in countries that are likely to attempt to improperly access your data. Even better, the announcement promises that Facebook will not store your encryption keys for any of these services, as is already the case with WhatsApp. This all sounds great, in theory. But secure messaging is not easy to get right at either the technical or policy level. In technical terms, end-to-end encryption is only part of the story. In practice, the choices that undermine messaging security often lie far from the encryption engine. Strong authentication, for example, is necessary to ensure that you are messaging only with your intended recipients and not with any law enforcement “ghosts.” Automatic backups are another potential chink in the armor; if you choose to have WhatsApp back up your messages, it stores an unencrypted copy of your messages on iCloud (for iPhone) or Google Drive (for Android), essentially undermining the app’s end-to-end encryption. The prospect of merging WhatsApp, Instagram, and Messenger also raises concerns about combining identities that users intended to keep separate. Each of the three uses a different way to establish your identity: WhatsApp uses your phone number; Instagram asks for a username; and Messenger requires your “authentic name.” It’s not unusual for people to use each app for different parts of their life; therapists, sex workers, and activists, for example, face huge risks if they can no longer manage separate identities across these platforms. Zuckerberg’s announcement claims that merging the three apps “would be opt-in and you will be able to keep your accounts separate if you like.” An opt-in—not an opt-out—is an important safety valve and the right choice. Time will tell if a merged “Whatstamessenger” can pull off this promise. Above all, Facebook needs to be transparent about its business model. For example, while end-to-end encryption protects the contents of your messages, it cannot protect the metadata: who the recipients are, when messages are sent, and even where you are. Will Facebook be tracking and retaining that metadata? What about the possibility of a “super-app” model like WeChat’s? Without transparency about how Facebook will monetize its end-to-end encrypted services, users and advocates cannot scrutinize the various pressure points that business model might place on privacy and security. 
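To make the metadata point concrete, here is a minimal illustrative sketch. It is our toy example, not Facebook's or WhatsApp's actual protocol, and it uses the third-party Python cryptography package only as a stand-in for whatever end-to-end scheme a messenger might run. Even when the message body is unreadable to the operator, the routing envelope it needs to deliver the message is not.

```python
# Illustrative sketch (not any real messenger's protocol): even when message
# content is end-to-end encrypted, the "envelope" a server needs for routing
# -- sender, recipient, timestamp -- can remain visible to the operator.
# Requires the third-party `cryptography` package (pip install cryptography),
# used here purely as a stand-in for an E2E scheme.
import json
import time
from cryptography.fernet import Fernet

# In a real E2E design this key lives only on the endpoints, never the server.
endpoint_key = Fernet.generate_key()
cipher = Fernet(endpoint_key)

envelope = {
    "sender": "+15551230000",        # metadata the server sees
    "recipient": "+15559870000",     # metadata the server sees
    "sent_at": int(time.time()),     # metadata the server sees
    # Opaque to the server: only endpoints holding the key can read it.
    "body": cipher.encrypt(b"meet at the clinic at 3pm").decode(),
}

# What the operator could log without ever touching the encryption keys:
server_view = {k: v for k, v in envelope.items() if k != "body"}
print(json.dumps(server_view, indent=2))

# What only the endpoints can recover:
print(cipher.decrypt(envelope["body"].encode()).decode())
```

Whether Facebook tracks, retains, or monetizes that envelope data is exactly the transparency question raised above.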
We could never get on board with a tool—even one that made solid technical choices—unless it was developed and had its infrastructure maintained by a trustworthy group with a history of responsible stewardship of the tool. Zuckerberg’s statement is vague about how Facebook will consult with “safety experts, law enforcement and governments on the best way to implement safety measures,” and what that will mean for how Facebook responds to government data requests. Recent news also does not inspire optimism that Facebook can execute responsible stewardship of security and privacy features. One need look no further than this week’s headlines, for example, about the extent to which Facebook has abused the two-factor authentication security feature to share and expose users’ phone numbers. Pay No Attention to the Competition Concerns Behind the Curtain Facebook’s privacy-focused vision is also a competition move. Zuckerberg’s out-of-character privacy focus in this announcement takes a page out of the Wizard of Oz: “Pay no attention to the competition concerns behind the curtain!” This is clearest when Zuckerberg’s announcement turns to “interoperability,” describing how users will be able to message friends on WhatsApp, Instagram, or Messenger from any one of the three apps. But it appears Facebook’s aim isn’t necessarily to make its messaging properties interoperable, but to make them indistinguishable—at least as far as regulators are concerned. Combining the services beyond recognition might give Facebook a technical excuse to sidestep impending competition and data-sharing regulation. Timing is key here: This privacy announcement comes on the heels of a German order to prevent Facebook from pooling user data without consent. More broadly, Zuckerberg’s idea of interoperability might better be called “consolidation.” The announcement lays out a convenient future in which users have the freedom to communicate however they want...as long as they use Facebook-owned apps or SMS texting to do it. Zuckerberg’s excuse for excluding everyone else’s apps and messengers from this vision is security: “[I]t would create safety and spam vulnerabilities in an encrypted system to let people send messages from unknown apps where our safety and security systems couldn't see the patterns of activity.” But a future in which Facebook is the sole owner and guardian of our communication methods is not good news for user security, choice, and control. If Facebook really cares about interoperability, it should pursue open standards that level the playing field, not a closed proprietary family of apps that entrenches Facebook’s own dominance.

Tell Congress to Stand Up for Real Net Neutrality Protections (Wed, 06 Mar 2019)
When the FCC announced its intention to repeal the 2015 Open Internet Order, Americans spoke up. When the FCC ignored the fact that most Americans support net neutrality, Americans spoke up again, asking Congress to reverse the FCC’s decision. And the Senate listened. This fight continues in the courts, in the states, and, yes, in Congress. The just-introduced Save the Internet Act would restore the 2015 Open Internet Order and prevent the FCC from pulling the same stunt it did in 2017 by ignoring facts and the clear desire of the people. Internet service providers (ISPs) like Verizon, AT&T, and Comcast would once again be banned from engaging in discriminatory data practices like blocking, throttling, and paid prioritization. ISPs would once again be accountable for actions that threaten the free and open Internet, public safety, and competition. Privacy protections against your ISP would once again be restored. There would again be protections for real net neutrality. The Save the Internet Act returns us to the hard-fought-for protections of the 2015 Open Internet Order, and we should not settle for anything less. Bills like H.R. 1101 (Walden), H.R. 1006 (Latta), and H.R. 1096 (McMorris Rodgers), which focus only on blocking, throttling, and paid prioritization, miss the vital point that net neutrality is a principle of fairness: the idea that the provider you pay to get you online doesn’t get to determine your experience once you’re on the Internet. You decide what you want to see and use, without ISPs stacking the deck in a way that benefits them. We cannot let ISPs try to redefine net neutrality as simply bans on three specific actions. Legislation that protects real net neutrality recognizes that there are more than three ways for ISPs to leverage the fact that they control your access to the Internet and Internet services’ access to you. Legislators who truly believe in a free and open Internet will support the Save the Internet Act and not any bill that does less for Americans. Americans of both parties have made their opinion on net neutrality clear. Over and over again, we’ve spoken out. And we’re going to keep doing it until we get the Internet we deserve. Tell your representatives you want them to stand up for real net neutrality. And don’t let them redefine net neutrality by supporting one of the other, net-neutrality-in-name-only bills. Tell them you want them to co-sponsor the Save the Internet Act, and take a stand for Team Internet—not ISPs. Take Action Protect Net Neutrality

OpenAI’s Recent Announcement: What Went Wrong, and How It Could Be Better (Tue, 05 Mar 2019)
Last month, OpenAI revealed an impressive language model that can generate paragraphs of believable text. But OpenAI declined to fully release its research “due to concerns about malicious applications of the technology.” OpenAI released a much smaller model and technical paper, but not the fully-trained model, training code, or full dataset, citing concerns that bad actors could use the model to fuel turbocharged disinformation campaigns. Whether or not OpenAI’s decision to withhold most of its model was correct, its “release strategy” could have been much better. The risks and dangers of models that can automate the production of convincing, low-cost, realistic text are an important debate to bring forward. But the risks of hinting at dangers without backing them up with detailed analysis, while refusing public or academic access, also need to be considered. OpenAI appears to have considered one set of risks without fully considering or justifying the risks it has taken in the opposite direction. Here are the concerns we have, and how OpenAI and other institutions should handle similar situations in the future. Some Background: What Is Language Modeling and Why Is It Important? OpenAI’s new language model does surprisingly well at a number of tasks, most notably generating text from a couple of seed sentences. The information OpenAI has released shows that the research could be a leap forward in language modeling. Language modeling is an area of contemporary machine learning research where a statistical model is trained to assign probabilities to sentences or paragraphs of text (a toy sketch of the basic idea appears below). This model can serve as a building block for a variety of language-related machine learning research tasks. Beyond generating coherent paragraphs of text, language models can also help answer questions, summarize passages, and translate text. Work from the past year has massively improved the state of the art for language modeling, and most of it has been fully open-sourced. Researchers Should Encourage a More Nuanced Dialogue Around AI OpenAI gave access to the full model to a small number of journalists before releasing its research. On release, the media published articles with titles like, “Brace for the robot apocalypse.” The amount of misinformation now spreading about the current capabilities of state-of-the-art language modeling is reminiscent of past media hype around Facebook’s AI research. This points to the necessity for deliberate education around AI capabilities, not existential fear-mongering about vague threats. Unfortunately, thanks to OpenAI’s decision to give model access to journalists instead of sharing the model with the public—or even merely hosting a discussion between journalists and other experts both inside and outside the AI research community—it’s hard to say just how advanced or scary OpenAI’s model really is. Despite the conversations it may have had internally, all OpenAI published was a bullet-point list of general “risks” associated with its model, failing to provide any semblance of a rigorous risk assessment. This blocks independent researchers from studying the risks identified and trying to identify ways to mitigate those risks. This is particularly problematic because the risks OpenAI pointed to could all be recreated by powerful actors anyway. 
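To make the language-modeling idea above concrete, here is a toy sketch written for this post. It is nothing like GPT-2's large transformer network trained on a massive web corpus; it only illustrates the basic task every language model performs: estimate the probability of the next word given the previous ones, then sample from that distribution to generate text.

```python
# A minimal toy "language model": count which word follows which, turn the
# counts into probabilities, and sample from them to continue a seed phrase.
# Requires Python 3.6+ for random.choices.
import random
from collections import Counter, defaultdict

corpus = (
    "the court held that the first amendment protects speech . "
    "the court reversed the conviction . the first amendment protects the press ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """Estimate P(next word | previous word) from the bigram counts."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(seed, length=10):
    """Continue `seed` by repeatedly sampling the next word."""
    words = seed.split()
    for _ in range(length):
        probs = next_word_probs(words[-1])
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(next_word_probs("the"))   # e.g. {'court': 0.33, 'first': 0.33, ...}
print(generate("the court"))
```

Replacing these bigram counts with a large neural network trained on an enormous text corpus is, roughly speaking, what separates this toy from the kind of model OpenAI is withholding.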
Release the Full Model to Academics Since releasing trained language models bolsters academic work on other downstream machine learning tasks, like the ones mentioned above, most work in this field is heavily open-sourced. BERT, Google’s language model and a predecessor to OpenAI’s GPT-2, was published and fully open-sourced less than half a year ago. Since then, it has already generated (and continues to generate) massive waves of downstream language understanding research. OpenAI broke this trend of defaulting to openness by questioning the societal repercussions of releasing fully-trained language models. And when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research. In the AI space, open publication norms are all the more important, given that research capabilities are already so highly centralized. Many frontiers of AI research require massive amounts of computing power and data to train and test, and OpenAI’s GPT-2 language model is no different. In this sort of ecosystem, a lot of groundbreaking AI research comes from private research labs as well as publicly funded sources. Private institutions are already disincentivized from releasing large chunks of their research, like datasets and code that may contain proprietary information. An open research culture, however, provides a social incentive for private entities to publish as much as possible. OpenAI’s decision threatens that open publication culture. That means large privatized AI research centers may start thinking this is an acceptable thing to do. We could start to see fewer publications—and a world that resembles an arms race between corporate giants rather than a collaborative research forum. To minimize this sort of impact, OpenAI should at least offer full model access to academic researchers and continue to encourage a culture of peer-to-peer knowledge sharing in the AI research community. Stop Using “Responsible Disclosure” Analogies to Justify Withholding Research In defending its decision to withhold its fully-trained model, training code, and full dataset, OpenAI characterized its decision as an “experiment in responsible disclosure.” “Responsible disclosure” refers to a process in the information security industry where security researchers withhold vulnerabilities from the public until they are patched. But responsible (or coordinated) disclosure means eventual public disclosure: the goal is always for the knowledge to become public in a reasonable amount of time. OpenAI’s decision here has nothing to do with “responsible disclosure”—this is a misplaced analogy that misunderstands the purpose of the term of art. Rather, OpenAI uses the term to justify withholding research from the public, with no date or plan for final release. The analogy is broken in so many ways as to make it fundamentally useless. In the case of generating “believable fake news,” there is no “vendor” that OpenAI can approach to mitigate the problem. The “vulnerability” and risk is societal. It is us, and society as a whole, who must be informed and take steps to develop ways to detect or otherwise manage the consequences of convincing computer-generated text. 
Even if this research were as dangerous as OpenAI suggests, there is no finite period of time for which failing to disclose it would lessen the risks; in fact, the risks would only increase as powerful institutions begin to reproduce the research, and widespread understanding of the risks would be stymied. Create a Discussion Outside OpenAI Around Release Decisions This incident points to the need for consensus-building among independent researchers—from both public institutions and private corporations and ranging in expertise—before such decisions are made. From this demonstration, we’re not convinced that a single research organization is capable of performing objective and comprehensive evaluations of the ethical and policy impacts of its own work. OpenAI’s post indicates it was hesitant to take on that responsibility, and it defaulted to locking its model down, rather than defaulting to openness. Until the AI research community can come to a consensus on a process for such decisions, we hope that OpenAI and other research organizations will step back—and consult a broader quorum of policy experts and researchers—before making such dramatic “releases” in the future.

Facebook Doubles Down On Misusing Your Phone Number (Mon, 04 Mar 2019)
When we publicly demanded that Facebook stop messing with users’ phone numbers last week, we weren’t expecting the social network to double down quite like this: By default, anyone can use the phone number that a user provides for two-factor authentication (2FA) to find that user’s profile. For people who need 2FA to protect their account and stay safe, Facebook is forcing an unnecessary choice between security and privacy. While settings are available to choose whether “everyone,” “friends of friends,” or “friends” can use your phone number this way, there is no way to opt out completely. The problems with Facebook’s phone number look-up feature are not entirely new. Facebook even promised to disable the functionality last April in the wake of the Cambridge Analytica scandal. Now, others can no longer enter your phone number directly into the Facebook search bar to find your profile. Instead, they can still use your phone number “in other ways, such as when someone uploads your contact info to Facebook from their mobile phone,” a Facebook spokesperson told USA Today. Those "other ways" are what the settings described above control. But whether they have to type it into Facebook’s search bar or into their phone contacts, the result is the same: others can use your phone number to find your Facebook profile. Now, since Facebook started requiring page administrators to enable 2FA last summer, it’s safe to assume that more people have started using the security feature and noticing how Facebook mismanages it. (Although Facebook stopped requiring phone numbers for 2FA enrollment last May, phone number-based 2FA can still be the most usable option for many people.) After a tweet from a Page administrator pointed out this critical problem, Facebook was forced to respond to user concerns and media reports. Facebook’s response has been less than reassuring. TechCrunch reports: When asked specifically if Facebook will allow users to opt out of the setting, Facebook said it won’t comment on future plans. And, asked why it was set to “everyone” by default, Facebook said the feature makes it easier to find people you know but aren’t yet friends with. Last year, Gizmodo and researchers from Northeastern University and Princeton University revealed that the company was using 2FA phone numbers—and even worse, “shadow” contact information that users never directly gave the company—for targeted advertising. Now, the scope of Facebook’s phone number problem seems even wider. In defiance of user expectations and security best practices, it is exposing users’ 2FA phone numbers not only to advertisers but also to, well, anyone. Facebook must fix this before more people are put at risk. It should never have made phone numbers that were provided for security searchable by everyone in the first place.

Congress Invites Industry Advocates to Hearings. Industry Talking Points Ensue. (Mon, 04 Mar 2019)
In back-to-back hearings last week, the House and the Senate discussed what, if anything, Congress should do about online privacy. Sounds fine—until you see who they invited. Congress should be seeking out multiple, diverse perspectives. But last week, both chambers largely invited industry advocates, eager to do the bidding of large tech companies, to the table. The testimony and responses from the industry representatives were predictable: lip service to the idea of strong federal consumer privacy legislation, but few specifics on what those protections should actually look like. These witnesses also continue to advocate for unwritten, vague federal preemption of existing state laws like California’s Consumer Privacy Act (CCPA) or Illinois’s Biometric Information Privacy Act (BIPA). However, there were a few bright spots. In the House, Consumer Protection Subcommittee Chair Rep. Jan Schakowsky kicked off Tuesday’s hearing by asserting that collection of personal information must come with responsibilities for the tech companies: This data isn’t being collected to give you the creeps. It’s being done to control [the] market and make a profit… Without a comprehensive privacy law, the burden has fallen completely on consumers to protect themselves, and this has to end... A person should not need an advanced law degree to avoid being taken advantage of. She also stated that federal legislation must allow aggressive enforcement mechanisms, and that it’s “important to equip regulators and enforcement with the tools and funding necessary to protect privacy,” including a private right of action for individual consumers. Energy & Commerce Committee Chair Rep. Frank Pallone followed in his remarks by saying, “It is time for us to move past the old model that protects the companies using the data and not the people… some data maybe just shouldn't be collected at all.” We were happy that Brandi Collins-Dexter of Color of Change was able to explain the surprising ways data can be used against consumers: “Even data points that feel innocuous can be used as proxies for a protected class.” In the Senate, despite Commerce, Science, & Transportation Chair Sen. Roger Wicker calling for a “preemptive framework” in his opening remarks, the Committee's top Democrat, Sen. Maria Cantwell, said it's "disturbing" to see calls from Republican and tech industry leaders to override state privacy laws in any federal measure: I find this effort somewhat disturbing that with all the litany of things, the privacy violations [we] just went through and as countries are grappling with this, that ... the first thing that people want to organize in D.C. is a preemption effort. Sen. Amy Klobuchar likewise called out industry fearmongering about the difficulties of complying with a “patchwork” of state laws, and stood up for state leadership in privacy legislation: The reason all the states are doing all of this is that we have done nothing here, and part of that is because the companies that you represent have been lobbying against legislation like this for years. Later in the hearing, Sen. Brian Schatz painted a picture of consent fatigue, explaining why we can’t reasonably expect consumers to consent to every use of their data, especially when they have to click on “6-point font while they are on the bus.” We look forward to working with Sen. Schatz to make improvements to his information fiduciary bill as he prepares to reintroduce it this Congress. 
In a panel dominated by industry, we were also happy to see respected academic Woodrow Hartzog defending user rights to privacy. In particular, Hartzog pointed out the broad issues made worse by the “personal data industry complex,” especially for marginalized communities and communities of color: “We are only just beginning to see the human and societal cost of massive data platform dominance.” Real Privacy Despite these bright spots, the industry-heavy witness panels at these hearings were nothing but foxes guarding the henhouse. If Congress wants real privacy for all Americans, it cannot only hear industry’s advice on how to protect consumers. We already know that doesn’t work. In response to outcry over these hearings’ industry-heavy witness panels, Senate Commerce Chairman Roger Wicker has announced that the committee does plan to hold additional hearings on consumer privacy and plans to invite consumer groups. That’s certainly good news for the future, but we are still concerned that Chairman Wicker and Chairman Pallone kicked off the new Congress without giving consumers a meaningful seat at the table.

German Data Privacy Commissioner Says Article 13 Inevitably Leads to Filters, Which Inevitably Lead to Internet "Oligopoly" (Mon, 04 Mar 2019)
German Data Privacy Commissioner Ulrich Kelber is also a computer scientist, which makes him uniquely qualified to comment on the potential consequences of the proposed new EU Copyright Directive. The Directive will be voted on at the end of this month, and its Article 13 requires that online communities, platforms, and services prevent their users from committing copyright infringement, rather than ensuring that infringing materials are speedily removed. In a new official statement on the Directive (English translation), Kelber warns that Article 13 will inevitably lead to the use of automated filters, because there is no imaginable way for the organisations that run online services to examine everything their users post and determine whether each message, photo, video, or audio clip is a copyright violation. Kelber goes on to warn that this will exacerbate the already dire problem of market concentration in the tech sector, and expose Europeans to particular risk of online surveillance and manipulation. That's because under Article 13, Europe's online companies will be required to block all infringement, even if they are very small and specialised (the Directive gives an online community three years' grace period before it acquires this obligation, less time if the service grosses over €5m/year). These small- and medium-sized European services (SMEs) will not be able to afford to license the catalogues of the big movie, music, and book publishers, so they'll have to rely on filters to block the unlicensed material. But if a company is too small to afford licenses, it's also too small to build filters. Google's Content ID for YouTube cost a reported €100 million to build and run, and it only does a fraction of the blocking required under Article 13. That means that they'll have to buy filter services from someone else. The most likely filter vendors are the US Big Tech companies like Google and Facebook, who will have to build and run filters anyway, and could recoup their costs by renting access to these filters to smaller competitors. Another possible source of filtering services is companies that sell copyright enforcement tools like Audible Magic (supplier to Big Tech giants like Facebook), who have spent lavishly to lobby in favour of filters (along with their competitors). As Kelber explains, this means that Europeans who use European services in the EU will nevertheless likely have every public communication they make channeled into offshore tech companies' servers for analysis. These European services will then have to channel much of their revenues to the big US tech companies or specialist filter vendors. Take Action Stop Article 13 So Article 13 guarantees America's giant companies a permanent share of all small EU companies' revenues and access to an incredibly valuable data-stream generated by all European discourse, conversation, and expression. These companies have a long track record of capitalising on users’ personal data to their advantage, and between that advantage and the revenues they siphon off of their small European competitors, they are likely to gain permanent dominance over Europe's Internet. Kelber says that this is the inevitable consequence of filters, and has challenged the EU to explain how Article 13's requirements could be satisfied without filters. He's called for "a thoughtful overhaul" of the bill based on "data privacy considerations," describing the market concentration as a "clear and present danger." We agree, and so do millions of Europeans. 
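A hypothetical sketch of the data flow Kelber warns about may help. Nothing below is a real vendor API: the names are invented, and real copyright filters rely on perceptual audio and video fingerprints rather than simple hashes. The structural point is what matters: a small platform that rents filtering necessarily sends a record of every user upload to the external vendor before publishing it.

```python
# Purely illustrative sketch of renting an upload filter from a third party.
# No real service or API is depicted; the "fingerprint" here is a toy stand-in
# for the perceptual fingerprints real filters use.
import hashlib
from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    filename: str
    data: bytes

class ThirdPartyFilter:
    """Stand-in for a remote filtering service run by a large vendor."""
    def __init__(self):
        self.blocklist = set()   # fingerprints of works the vendor is told to block
        self.seen = []           # everything the vendor learns about the platform's users

    def check(self, platform: str, user_id: str, fingerprint: str) -> bool:
        # The vendor necessarily observes which user on which platform uploaded
        # what, even for uploads that are perfectly lawful.
        self.seen.append((platform, user_id, fingerprint))
        return fingerprint not in self.blocklist

def publish(upload: Upload, vendor: ThirdPartyFilter) -> bool:
    fingerprint = hashlib.sha256(upload.data).hexdigest()   # toy fingerprint
    return vendor.check("small-eu-forum.example", upload.user_id, fingerprint)

vendor = ThirdPartyFilter()
print(publish(Upload("alice", "holiday.mp4", b"original home video"), vendor))
print(len(vendor.seen), "upload record(s) now sit with the filter vendor")
```

Whoever operates that vendor sees a running log of the platform's public activity, which is the concentration and surveillance risk Kelber describes.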
In fact, the petition against Article 13 has attracted more signatures than any other petition in European history and is on track to be the most popular petition in the history of the human race within a matter of days. With less than a month to go before the final vote in the European Parliament on the new Copyright Directive, Kelber's remarks couldn't be more urgent. Subjecting Europeans' communications to mass commercial surveillance and arbitrary censorship is bad for human rights and free expression, but as Kelber so ably argues, it's also a disaster for competition. Take Action Stop Article 13

With FOSTA Already Leading to Censorship, Plaintiffs Are Seeking Reinstatement Of Their Lawsuit Challenging the Law’s Constitutionality (Sat, 02 Mar 2019)
Due to an editing error, a draft version of this article was published prematurely. Internet websites and forums are continuing to censor speech with adult content on their platforms to avoid running afoul of the new anti-sex trafficking law FOSTA. The measure’s vague, ambiguous language and stiff criminal and civil penalties are driving constitutionally protected content off the Internet. The consequences of this censorship are devastating for marginalized communities and groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. Fearing that comments, posts, or ads that are sexual in nature will be ensnared by FOSTA, many vulnerable people have gone offline and back to the streets, where they’ve been sexually abused and physically harmed. Plaintiffs Woodhull Freedom Foundation, Human Rights Watch, Alex Andrews, the Internet Archive, and Eric Koszyk filed suit last June to invalidate the law. A federal judge dismissed the lawsuit last summer without reaching the merits of our arguments that the law is unconstitutional. Instead, the judge found that none of the plaintiffs had standing—meaning they could not show they have been or will be harmed—to challenge the law. EFF is co-counsel for plaintiffs with Davis Wright Tremaine, Walters Law Group, and Daphne Keller. That decision was wrong, and the plaintiffs are appealing it. They filed their opening brief on February 20 explaining how the trial judge got it wrong and why the law should be enjoined. And several organizations filed amicus briefs in the appeals court last week supporting these arguments. Plaintiffs argue that they need not wait until they face a FOSTA enforcement action to challenge in court a law regulating speech. Standing should be determined according to the plaintiffs’ interpretation of the law, not the government’s. The plaintiffs emphasize that they have already been harmed by FOSTA: Woodhull and Andrews by self-censoring out of reasonable fear of prosecution under the law; the Internet Archive by being burdened with uncertain moderation obligations; and Koszyk by losing his platform for advertising when Craigslist shut down its personals and Therapeutic Services sections because of FOSTA. This is more than enough to establish standing, especially under the liberal standing requirements in First Amendment challenges. Plaintiffs' arguments were well supported by the four amicus curiae briefs. Here is a rundown of the arguments presented by amici. Laws chilling speech require special consideration. As we argued in our opening brief, when violations of the First Amendment are at stake, courts have an obligation to take a broad view of who will be affected by any law or regulation that threatens free speech. The Institute for Free Speech agreed and expounded on this issue in its brief. Courts have held that laws implicating First Amendment rights can be challenged before they are enforced as long as there is a credible threat that speech will be chilled. And there is ample evidence that FOSTA has caused organizations to censor their speech (more on that below). Government attorneys defending FOSTA in the case convinced the trial judge not only that the plaintiffs lacked standing, but also that the plaintiffs’ examples of chilled speech would not run afoul of FOSTA. 
That’s no guarantee that the government won’t take FOSTA enforcement action against them, and it does nothing to stop state attorneys general or private individuals from hauling the plaintiffs, and others like them, into court seeking damages or criminal prosecution. As the institute’s lawyers said: This is precisely the kind of case for which First Amendment standing doctrine was developed. It is a pre-enforcement challenge to a statute of startling scope and uncertain meaning, directly regulating a major frontier of First Amendment-protected activity. And Congress chose to decentralize its enforcement, permitting numerous parties, including private litigants and state attorneys general, to bring lawsuits against alleged violators. FOSTA makes it illegal to post content on the Internet that “facilitates” prostitution and also strips Internet sites of the legal protections provided by 47 U.S.C. Section 230, part of the Communications Decency Act. Under Section 230, websites face liability only for content they themselves create and publish, and are shielded from liability for speech contained in comments and opinions submitted to them by third parties. FOSTA, which stands for the Allow States and Victims to Fight Online Sex Trafficking Act, purported to fight sex trafficking by outlawing ads or other content involving sexual exploitation of minors. But instead of focusing on the perpetrators of sex trafficking, FOSTA goes after online speakers, imposing harsh penalties on any speaker who may use the Internet to “facilitate” prostitution or “contribute to sex trafficking.” Within days of its passage, Craigslist took down its personals section, saying it couldn’t risk having its other services jeopardized if someone in the section were accused of violating FOSTA.
FOSTA is the latest attempt by government to censor adult content
The Center for Democracy and Technology urges the court to grant the preliminary injunction against FOSTA, pointing out in its brief that the government has a long history of trying to outlaw adult content, and in every instance those attempts were rejected by the Supreme Court. The Court recognized that laws that threaten to chill protected speech aren’t constitutional. With the advent of the Internet, Congress added Section 230 of the Communications Decency Act to protect websites and online publishers from liability for content—whether benign or offensive—created by third-party speakers. FOSTA tosses Section 230 aside, and marks the first time Congress has ever narrowed the scope of its protections. As CDT says in its brief: Without limits on liability for hosting user speech, such intermediaries are likely to react by significantly limiting what their users can say, including a potentially wide range of lawful speech, from discussions on dating forums about consensual adult sex, to resources for promoting safety among sex workers. Indeed, as discussed in Appellants’ brief, that has already started to happen, with platforms restricting access to information that promotes public health and safety, political discourse, and economic growth.
FOSTA-related censorship is growing every day
Lawyers for 11 rights-based organizations advocating for sex workers, survivors of trafficking, and other abused communities point out that FOSTA epitomizes the continual conflation of sex work with trafficking or prostitution. Voluntary sex work, which includes webcam sex, sensual bodywork, and adult film entertainment, is legal.
You may be offended by their websites or find their posts inappropriate, but their speech is protected. And the vague language of FOSTA means that such speech could very easily be misconstrued as a FOSTA violation. So people who work in those communities have censored themselves online or found forums they’ve used shuttered because of FOSTA. They have lost income or are unable to find community, support, and health and harm reduction services online. Social media platforms like Tumblr have removed all forms of graphic sexual content, which impacts the most stigmatized sexual minorities, such as LGBT, transgender, and intersex communities. Self-censorship because of FOSTA has had violent consequences: a sex worker, fearful of using the Internet for screening because of FOSTA, met with a new client and ended up gang-raped, beaten, and robbed, according to advocates who provided information for the filing. Said the rights groups: FOSTA, like other laws before it, has had a devastating impact on both the individuals it was intended to protect and some of the most marginalized communities in our society … Many sex workers are fearful to communicate online about information such as potentially harmful clients or places to avoid because this could be considered ‘promoting prostitution.’ While this was true pre-FOSTA, the fear that such life-saving speech could be targeted is due to FOSTA. Reddit, Techdirt.com, and Engine Advocacy, a nonprofit tech policy organization advocating for start-ups, told the appeals court about the burden FOSTA has imposed on websites that host user comments. Reddit, for example, felt pressured by FOSTA to close a forum about harm reduction and safety for sex workers to minimize the threat of liability, even though doing so may have increased the actual threat to sex workers that FOSTA was ostensibly designed to reduce. Some forums won’t closely monitor adult content, fearing that if monitors learn too much about subjects being discussed that could potentially violate FOSTA, the forums could be held liable for facilitating or contributing to sex trafficking. As the groups say in the brief: Broadly written rules almost necessarily result in lawful content being removed because there is no way for Reddit to weigh all the voluminous expression it reviews in a sufficiently nuanced and contextualized way to eliminate the risk of content targeted by FOSTA slipping through. This is exactly the kind of censoring behavior that plaintiffs are suing over. It’s real, and it’s happening all the time on the Internet. The court must allow the plaintiffs' challenge to go forward. Related Cases: Woodhull Freedom Foundation et al. v. United States

Massive Database Leak Gives Us a Window into China’s Digital Surveillance State (Fri, 01 Mar 2019)
Earlier this month, security researcher Victor Gevers found and disclosed an exposed database live-tracking the locations of about 2.6 million residents of Xinjiang, China, offering a window into what a digital surveillance state looks like in the 21st century. Xinjiang is China’s largest province-level region and home to the Uighurs, a Turkic minority group. Here, the Chinese government has implemented a testbed police state where an estimated 1 million individuals from these minority groups have been arbitrarily detained. Among the detainees are academics, writers, engineers, and relatives of Uighurs in exile. Many Uighurs abroad worry for their missing family members, who they haven’t heard from for several months and, in some cases, over a year. Although relatively little news gets out of Xinjiang to the rest of the world, we’ve known for over a year that China has been testing facial-recognition tracking and alert systems across Xinjiang and mandating the collection of biometric data—including DNA samples, voice samples, fingerprints, and iris scans—from all residents between the ages of 12 and 65. Reports from the region in 2016 indicated that Xinjiang residents can be questioned over the use of mobile and Internet tools; just having WhatsApp or Skype installed on your phone is classified as “subversive behavior.” Since 2017, the authorities have instructed all Xinjiang mobile phone users to install a spyware app in order to “prevent [them] from accessing terrorist information.” The prevailing evidence of mass detention centers and newly-erected surveillance systems shows that China has been pouring billions of dollars into physical and digital means of pervasive surveillance in Xinjiang and other regions. But it’s often unclear to what extent these projects operate as real, functional high-tech surveillance, and how much they are primarily intended as a sort of “security theater”: a public display of oppression and control to intimidate and silence dissent. Now, this security leak shows just how extensively China is tracking its Xinjiang residents: how parts of that system work, and what parts don’t. It demonstrates that the surveillance is real, even as it raises questions about the competence of its operators.
A Brief Window into China’s Digital Police State
Earlier this month, Gevers discovered an insecure MongoDB database filled with records tracking the location and personal information of 2.6 million people located in the Xinjiang Uyghur Autonomous Region. The records include individuals’ national ID number, ethnicity, nationality, phone number, date of birth, home address, employer, and photos. Over a period of 24 hours, 6.7 million individual GPS coordinates were streamed to and collected by the database, linking individuals to various public camera streams and identification checkpoints associated with location tags such as “hotel,” “mosque,” and “police station.” The GPS coordinates were all located within Xinjiang. This database is owned by the company SenseNets, a private AI company advertising facial recognition and crowd analysis technologies. A couple of days later, Gevers reported a second open database tracking the movement of millions of cars and pedestrians. Violations like jaywalking, speeding, and running a red light are detected, triggering the camera to take a photo and ping a WeChat API, presumably to try to tie the event to an identity.
Database Exposed to Anyone with an Internet Connection for Half a Year
China may have a working surveillance program in Xinjiang, but it’s a shockingly insecure security state. Anyone with an Internet connection had access to this massive trove of information. Gevers also found evidence that these servers were previously accessed by other parties, including a Bitcoin ransomware actor who had left behind entries in the database. To top it off, this server was also vulnerable to several known exploits. In addition to this particular surveillance database, a Chinese cybersecurity firm revealed that at least 468 MongoDB servers had been exposed to the public Internet after Gevers and other security researchers started reporting them. Among these instances: databases containing detailed information about remote access consoles owned by China General Nuclear Power Group, and GPS coordinates of bike rentals.
A Model Surveillance State for China
China, like many other state actors, may simply be willing to tolerate sloppy engineering if its private contractors can reasonably claim to be delivering the goods. Last year, the government spent an extra $3 billion on security-related construction in Xinjiang, and the New York Times reported that China’s police planned to spend an additional $30 billion on surveillance in the future. Even poorly-executed surveillance is massively expensive, and Beijing is no doubt telling the people of Xinjiang that these investments are being made in the name of their own security. But the truth, revealed only through security failures and careful security research, tells a different story: China’s leaders seem to care little for the privacy, or the freedom, of millions of their citizens.
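To make concrete what “exposed” means here: a MongoDB server left open in this way requires no credentials at all, so anyone who finds its address can enumerate and read its contents. The short Python sketch below, using the pymongo driver, shows roughly what that looks like. The server address, database names, and field names are our own hypothetical stand-ins for illustration, not SenseNets’ actual server or schema.

```python
# A minimal sketch (not SenseNets' schema or server) of what an
# unauthenticated MongoDB instance hands to anyone who connects.
# The address below is a placeholder from a reserved documentation range.
from pymongo import MongoClient

client = MongoClient("mongodb://203.0.113.10:27017/",
                     serverSelectionTimeoutMS=5000)  # no credentials required

# Enumerate every database and collection the server exposes.
for db_name in client.list_database_names():
    print(db_name, client[db_name].list_collection_names())

# A tracking record might pair identity fields with a GPS fix and a
# location tag such as "hotel" or "mosque" (field names are illustrative).
events = client["surveillance"]["gps_events"]
for event in events.find().limit(5):
    print(event.get("national_id"), event.get("coordinates"), event.get("tag"))
```

Nothing in that sketch is exotic; it is the default behavior of a database deployed without authentication, which is what makes leaving one on the public Internet so reckless.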

Don’t Sacrifice Fair Use to the Bots (Fri, 01 Mar 2019)
Three years ago, we warned of a string of dangerous new policy proposals on the horizon. Under these proposals, platforms would be forced to implement copyright bots that sniffed all of the media that users uploaded to them, deleting your uploads with no human review. It’s happening. The European Parliament is weeks away from a vote on Article 13, which would force most platforms, services, and communities that host user uploads to install filters to block uploads that seem to match materials in a database of copyrighted works. If the filter detected enough similarity between your video and something from the list of copyrighted works, your video would be banned. Hollywood lobbyists have proposed similar measures here in the U.S. There’s a lot to say about the dangers of Article 13—how it would censor the whole world’s Internet, not just Europe’s; how it would give an unfair advantage to big American tech companies; how it would harm the artists it was supposedly intended to help—but there’s another danger in Article 13 and other proposals to mandate filtering the Internet: they undermine our fair use rights. When platforms over-rely on automated filters to enforce copyright, users must tailor their uploads to those filters. If you’ve ever seen the message, “This video has been removed due to a complaint from the copyright owner,” you’re familiar with YouTube’s Content ID system. Built in 2007, Content ID lets rightsholders submit large databases of video and audio fingerprints and have YouTube continually scan new uploads for potential matches to those fingerprints. Despite its flaws and frequent false positives, Content ID has become the template for copyright bots on other online platforms. It’s also served as a persistent thorn in the side of YouTube creators—particularly those who make fair use of copyrighted works in their videos. As one creator who makes pop culture criticism videos noted, “I’ve been doing this professionally for over eight years, and I have never had a day where I felt safe posting one of my videos even though the law states I should be safe posting one of my videos.” It’s easy to see the impact that Content ID has had on the YouTube community—a simple search reveals hundreds of videos about how to avoid a Content ID takedown, with litanies of guidelines about keeping clips to a certain length, adding a colored border to them, or keeping the copyrighted content in a certain corner of the screen. That’s the problem. The beauty of fair use is its inherent flexibility. The law does not set specific rules about how long a clip may be when you use it in your parody or criticism, or whether it can take up the full screen. But in a filtered Internet, the algorithms create new restrictions on our online speech. The danger of mandatory filtering is that machines will replace human judgment. While European “fair dealing” law doesn’t have the same flexibilities as U.S. fair use, it does allow more than a dozen exceptions and limitations to copyright, including protected uses like caricature, parody, criticism, and incidental inclusion of a copyrighted work—uses that a robot simply can’t be trained to reliably recognize. Implemented thoughtfully, copyright bots can serve as a useful aid to human review, flagging uploads that demand a serious fair use analysis. But the current proposals put forth by big media companies take humans out of the equation.
In doing so, they really take free speech out of the equation. This week is Fair Use Week, an annual celebration of the important doctrines of fair use and fair dealing.
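For readers wondering why a filter can’t make that judgment on its own, the toy sketch below shows the general shape of fingerprint matching: hashed segments of an upload are compared against a reference database, and anything above a similarity threshold gets flagged. This is not YouTube’s actual Content ID code, and the function names, threshold, and data are illustrative assumptions only. The point is structural: the logic sees overlap, never purpose, so a critic quoting a short clip is indistinguishable from a verbatim re-upload.

```python
# Toy sketch (not any platform's real system) of threshold-based
# fingerprint matching. Nothing in this logic can ask *why* a clip was
# used, which is exactly the question a fair use analysis turns on.
from hashlib import sha256

def fingerprint(segments):
    """Hash each fixed-length segment of a work (audio or video frames)."""
    return {sha256(seg.encode()).hexdigest() for seg in segments}

def flag_match(upload_segments, reference_db, threshold=0.1):
    """Flag an upload if more than `threshold` of any reference work appears in it."""
    upload_fp = fingerprint(upload_segments)
    for title, ref_fp in reference_db.items():
        overlap = len(upload_fp & ref_fp) / len(ref_fp)
        if overlap >= threshold:
            return title, overlap  # blocked; no human ever weighs the context
    return None

# A critic quoting a short clip looks exactly like a verbatim re-upload here.
reference_db = {"Blockbuster Movie": fingerprint(f"scene-{i}" for i in range(100))}
criticism_video = [f"scene-{i}" for i in range(12)] + ["my original commentary"]
print(flag_match(criticism_video, reference_db))  # ('Blockbuster Movie', 0.12)
```

Raising or lowering the threshold only changes how much gets caught; no setting teaches the bot what parody or criticism is.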

Stupid Patent of the Month: Veripath Patents Following Privacy Laws (Fri, 01 Mar 2019)
What if we allowed some people to patent the law and then demand money from the rest of us just for following it? As anyone with a basic understanding of democratic principles can see, that is a terrible idea. In a democracy, elected representatives write laws that apply to everyone, ideally based on the public interest. We shouldn’t let private parties “own” legal principles or use technical jargon to re-cast those principles as “inventions.” But that’s exactly what the U.S. Patent Office has allowed two inventors, Nicholas Hall and Steven Eakin, to do. Last September, the government proclaimed that Hall and Eakin are the inventors of “Methods and Systems for User Opt-In to Data Privacy Agreements,” U.S. Patent No. 10,075,451. The owner of this patent, a company called “Veripath,” is already filing lawsuits against companies that make privacy compliance software. With Congress and many states actively engaged in debates over consumer privacy laws, Veripath might soon be using this patent to extract licensing cash from U.S. companies as well.
Privacy-For-Functionality isn’t an “Invention,” it’s a Policy Debate
Claim 1 of the ’451 patent describes a basic data privacy agreement. An API provides personal information from a software application; then the user is asked for a “required permission” for the use of that information. There’s one add-on to the privacy deal: in exchange for the permission, the user gets access to “at least one enhanced function.” The next several claims go on to describe minor variations on this theme. Claim 2 specifies that the “enhanced function” won’t be available to other users. Claim 3 describes the enhanced function as being fewer advertisements; Claim 4 describes offering the enhanced function in exchange for a monetary payment. To say this “method” is well-known is a major understatement. The idea of exchanging privacy for enhanced functionality or better service is so widespread that it has been codified in law. For example, last year’s California Consumer Privacy Act (CCPA) specifically allows a business to offer “incentives” to a user to collect and sell their data. That includes “financial incentives,” or “a different price, rate, level, or quality of goods or services.” The fact that state legislators were familiar enough with these concepts to write them into law is a sign of just how ubiquitous and uninventive they are. This is not technology; this is policy. (An important aside: EFF strongly opposes pay-for-privacy, and is working to remove it from the CCPA. Pay-for-privacy undermines the law’s non-discrimination provisions, and more broadly, creates a world of privacy “haves” and “have-nots.” We’ve long sought this change to the CCPA.)
Follow the Law, Infringe this Patent
Veripath has already sued two companies that help website owners comply with Europe’s General Data Protection Regulation, or GDPR, saying they infringe its patent. Netherlands-based Faktor was sued [PDF] on Feb. 15, and France-based Didomi was sued [PDF] on Feb. 22. Some background: Venpath, Inc., a company with a New York address that appears to be a virtual office, assigned the rights in the ’451 patent to VeriPath just days before the patent issued in September last year. As it happens, the FTC began enforcement proceedings against VenPath last September. The FTC’s complaint [PDF] alleged that VenPath’s website represented that “VenPath participates in and has certified its compliance with the EU-U.S.
Privacy Shield Framework.” The FTC alleged a count of “privacy misrepresentation.” It claimed that VenPath “did not complete the steps necessary to renew its participation in the EU-U.S. Privacy Shield framework after that certification expired in October 2017.” The FTC issued a Decision and Order [PDF] requiring VenPath to remove the misrepresentations. An exhibit [PDF] attached to the complaint shows that one of the named inventors on the patent, Nick Hall, contacted Faktor to ask what its prices were. Hall identified himself as the CEO of VenPath. Once Faktor responded, Veripath sued Faktor in federal court in New York. In its lawsuits, Veripath claims that basic warnings about cookies on websites, a now-common method of complying with the GDPR, violate its patent. The lawsuit against Faktor notes that Faktor’s own website “might not work properly” unless a user consents to having her browser accept cookies. Veripath and its legal team argue that this simple deal—accepting cookie use, in order to visit websites—is enough to infringe the patent. They also claim that Faktor’s Privacy Manager software infringes at least Claim 1 of the patent, and facilitates infringement by others. The ’451 patent should never have been granted. In our view, its claims are clearly ineligible for patent protection under Alice v. CLS Bank. In Alice, the Supreme Court held that an abstract idea (like privacy-for-functionality) doesn’t become eligible for a patent simply because it is implemented using generic technology. Courts have struck down similar claims, like a patent on the idea of conditioning access to content on viewing ads. Even when a patent is invalid, defendants face pressure to settle. Patent litigation is expensive and it can cost tens or hundreds of thousands of dollars just to get through the early stages. To really protect innovation we have to ensure that patents like the ’451 patent are never issued in the first place. The fact that this patent was granted shows the Patent Office is failing to apply the law. We are currently urging the public to tell the Patent Office to stop issuing abstract software patents. You can use our Action Center to submit comments.
TAKE ACTION: Tell the Patent Office to Stop Issuing Abstract Software Patents
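To underline how unremarkable the claimed “method” is, here is a minimal sketch of the opt-in, privacy-for-functionality exchange that Claim 1 describes, with fewer ads as the “enhanced function” along the lines of Claim 3. Everything below is our own hypothetical illustration written for this post; it is not Veripath’s code, and none of the names come from the patent.

```python
# Hypothetical illustration of the everyday "privacy-for-functionality"
# exchange that the '451 patent claims as an invention. All names are
# made up for this sketch; this is not Veripath's code or product.

def get_profile_from_api(user_id):
    # Stand-in for an API that provides personal information from an application.
    return {"user_id": user_id, "email": f"user{user_id}@example.com"}

def ask_required_permission(profile):
    # Stand-in for asking the user for the "required permission" to use that information.
    answer = input(f"May we use {profile['email']} to personalize ads? [y/N] ")
    return answer.strip().lower() == "y"

def serve_page(user_id):
    profile = get_profile_from_api(user_id)
    if ask_required_permission(profile):
        # The "at least one enhanced function": fewer advertisements (cf. Claim 3).
        return {"ads_shown": 1, "content": "full article"}
    return {"ads_shown": 10, "content": "full article"}

if __name__ == "__main__":
    print(serve_page(42))
```

Any working developer could produce something like this in minutes, which is exactly the point: the claims capture a policy arrangement, not a technological invention.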

EFF to the Inter-American System: If You Want to Tackle “Fake News,” Consider Free Expression First (Thu, 28 Feb 2019)
Recent elections across the Americas, from the United States to Brazil, have stirred fears about the impact of “fake news.” Earlier this month, EFF made a submission to the Organization of American States (OAS), the pan-American institution currently investigating the extent and impact of false information across the region. While acknowledging the risks, our testimony warned of the dangers of over-reacting to a perceived online threat at the cost of free expression standards in the region. Over-reaction isn’t just a future hypothetical. During 2018, 17 governments approved or proposed laws restricting online media with the justification of combating online manipulation. Citizens were prosecuted and faced criminal charges in at least ten countries for spreading “fake news.” Disinformation flows are not a new issue, and neither is the use of "fake news" as a label to attack all criticism as baseless propaganda. The lack of a set definition for this term magnifies the problem, rendering its use susceptible to multiple and inconsistent meanings. Time and again, legitimate concerns about misinformation and manipulation have been misconstrued or distorted to entrench the power of established voices and stifle dissent. To avoid these pitfalls, EFF’s submission presented recommendations and stressed that the human rights standards on which the Inter-American System builds its work already provide substantial guidelines and methods to address disinformation without undermining free expression and other fundamental rights. The Americas’ human rights standards — which include the American Convention on Human Rights — declare that restrictions on free expression must (1) be clearly and precisely defined by law, (2) serve compelling objectives authorized by the American Convention, and (3) be necessary and appropriate in a democratic society to accomplish the objectives pursued, as well as strictly proportionate to the intended objective. New prohibitions on the online dissemination of information based on vague ideas, such as “false news,” for example, fail to comply with this three-part test. Restrictions on free speech that vaguely claim to protect the “public order” also fall short of meeting these requirements. The American Convention on Human Rights also says that the right of free expression may not be restricted by indirect methods or means. Since most communication on the Internet is facilitated by intermediaries, such as ISPs and social media platforms, unnecessary and disproportionate measures targeted at them invariably result in undue limitation of the rights of freedom of expression and access to information. Governmental orders to shut down mobile networks or block entire social media platforms, as well as legislation compelling intermediaries to remove content within 24 hours after a user’s notice or to create automated content filters, all in the name of countering “fake news,” clearly constitute an excessive approach that harms free speech and access to information. Holding Internet intermediaries accountable for third-party content stimulates self-censorship by platforms and hinders innovation. Any State’s attempt to tackle disinformation in electoral contexts must carefully avoid undercutting the deep connection between democracy and free expression. The fiercest debates over society and a government’s direction take place during elections, when public engagement is at its height.
While abuses of free speech can and should be addressed through subsequent civil liability for the person responsible for the content, companies should not be turned into a sort of speech police. Experience has proven this is not a wise alternative; private platforms are prone to error and can disproportionately censor the less powerful. When Internet intermediaries establish terms and rules for their platforms, they should do so by following standards of transparency, due process, and accountability, also taking human rights principles into account, including free expression, access to information, and non-discrimination. So what can be done? In our submission, we outlined some guidelines on how to address actions aimed at combating disinformation during elections:
Advancing transparency and accountability in content moderation. Platforms need better practices with regard to notification of users, due process, and available data on content restriction and account suspension, as developed in the Santa Clara Principles.
Deploying better tools for users, including permitting greater user customization of feed and search algorithms, and increasing the transparency of electoral advertising, among others.
Avoiding steps that might undermine personal privacy, including subverting encryption. Denying user security is not an answer to disinformation.
Paying attention to network neutrality and platform competition. Zero-rating practices may discourage users from searching for alternative sources of information or even from reading the actual news piece. Data portability and interoperability, on the other hand, can help to provide more players and sources.
As underscored in EFF’s submission, the abundance of information in the digital world should not be deemed, in itself, a problem. But the responses to the “fake news” phenomenon—if they’re unable to adhere to proper human rights standards—could be.

EFF Implores Nine Companies to Fix It Already! (Thu, 28 Feb 2019)
Changes from Facebook, Google, and Others Could Make Everyone’s Lives Safer and Easier
San Francisco - Technology is supposed to make our lives better, yet many big companies have products with big security and privacy holes that disrespect user control and put us all at risk. The Electronic Frontier Foundation (EFF) is launching a new project called “Fix It Already!” demanding repair for nine issues from tech giants like Facebook and Google. “We chose these nine problems because they are well-known problems and weaknesses in these services that, if fixed, could make a huge difference in many people’s lives,” said EFF Associate Director of Research Gennie Gebhart. “It’s 2019, and it’s time for big tech companies to bring their products in line with what consumers expect and deserve.” “Fix It Already!” takes Facebook to task for re-using customers’ phone numbers to deliver targeted advertising, even if the customer only provided the number for security purposes, like for two-factor authentication or to receive account alerts. Facebook should not re-use your phone number for its advertising purposes, and it should fix its systems to prevent all non-essential uses of this sensitive information. For Google, “Fix It Already!” demands that Android allow users to deny and revoke network permissions for apps. While Android makes it easy to block apps from seeing things like your location data and your contacts, it doesn’t allow users to block the apps’ ability to phone home about every launch, click, or tap in the app. Google should let Android users block snooping apps from exfiltrating data off their phones. “Users need to have control and be able to set clear limits on what is done with their data. Information gathered for one purpose should not be secretly re-used. A targeted advertising business model is no excuse for engaging in creepy behavior,” said EFF Director of Cybersecurity Eva Galperin. “Facebook and Google are relied upon by millions and millions of people, which makes it even more important for them to treat customers and their data with respect.” The “Fix It Already!” list also identifies problems that leave customers insecure. It asks Apple to let users truly encrypt their iCloud backups, tells Twitter to end-to-end encrypt direct messages, and demands that Verizon stop pre-installing spyware on phones. Microsoft, Slack, Venmo, and WhatsApp are also called out in the report. “All the products on our list are supposed to be state-of-the-art, but their failure to fix these obvious problems means that they aren’t taking users’ real needs to heart,” said EFF Technology Projects Director Jeremy Gillula. “We hope that with a little more attention, these companies will now take these issues seriously and fix them already.” For the full “Fix It Already!” list: https://fixitalready.eff.org
Contact:
Gennie Gebhart, Associate Director of Research, gennie@eff.org
Jeremy Gillula, Tech Projects Director, jeremy@eff.org

Fix It Already: Nine Steps That Companies Should Take To Protect You (Thu, 28 Feb 2019)
Today we are announcing Fix It Already, a new way to show companies we’re serious about the big security and privacy issues they need to fix. We are demanding fixes for different issues from nine tech companies and platforms, targeting social media companies, operating systems, and enterprise platforms on issues ranging from encryption design to retention policies. Some of these issues stem from business decisions. Some are security holes. Some are design choices. The common thread? All of these well-known privacy and security issues have attainable fixes and an outsize impact on people’s lives. We want to see companies bring their products in line with what consumers expect and deserve. And we need to hear from you to do it. How have these problems affected you, or people you know? What risks do you face as a result? What workarounds have you used to try to make these products and platforms work for your security and privacy concerns? Head to Fix It Already and tell us—and these companies—what these issues mean to you.
TAKE ACTION: Fix It Already
Android should let users deny and revoke apps’ Internet permissions.
Apple should let users encrypt their iCloud backups.
Facebook should leave your phone number where you put it.
Slack should give free workspace administrators control over data retention.
Twitter should end-to-end encrypt direct messages.
Venmo should let users hide their friends lists.
Verizon should stop pre-installing spyware on its users’ phones.
WhatsApp should get your consent before you’re added to a group.
Windows 10 should let users keep their disk encryption keys to themselves.
It’s 2019. We have the technology to fix these problems, and companies are running out of excuses to neglect security and privacy best practices. We hope that with a little more attention, these companies will take these issues seriously and fix them already.

Antitrust Enforcement Needs to Evolve for the 21st Century (Thu, 28 Feb 2019)
Yesterday, the Federal Trade Commission (FTC) announced the creation of a new task force to monitor competition in technology markets. Given the inadequacies of federal antitrust enforcement over the past generation, we welcome the new task force and reiterate our suggestions for how regulators can better protect technology markets and consumers. Citing the 2002 creation of a task force that reinvigorated antitrust scrutiny of mergers, and ongoing hearings on Competition and Consumer Protection, FTC Chairman Joe Simons said, “[I]t makes sense for us to closely examine technology markets to ensure consumers benefit from free and fair competition.” Bureau Director Bruce Hoffman noted that “[t]echnology markets, which are rapidly evolving and touch so many other sectors of the economy, raise distinct challenges for antitrust enforcement.” We could not agree more. Unfortunately, antitrust enforcement in the U.S. has become strangled in an outmoded economic doctrine that fails to recognize the realities of today’s Internet. We recently submitted comments to the FTC explaining a few key ways to strengthen antitrust enforcement and enable it to better protect competition, the marketplace, and consumer welfare.
Measures of Consumer Welfare Must Include Corporate Censorship Power
Increasingly, consumers “pay” for services that we use online not in dollars, but with our data, which the companies then use without compensation to enable targeted advertising. Given that these services are nominally “free” to consumers, it makes no sense to evaluate consumer welfare solely on the basis of price. The fetish with price among antitrust regulators originated with a group of economists known as the Chicago School. Their stated goal was to ground antitrust in empiricism. But the empirical measures they adopted have grown dramatically underinclusive, and their theories make little sense in the context of today’s corporate Internet. In particular, the most salient “cost” paid by consumers to tech companies is often not a price that we pay, but rather the data that we provide, as well as our agency and autonomy in the face of corporate advertising and platform censorship. In the advertising context, firms monetize user data by selling the privilege of reaching those users to third parties. Because the third parties—not the users themselves—are paying the price of advertising, a price-focused measure of consumer welfare essentially ignores crucial externalities that should inform antitrust analysis. In addition, platform censorship harms users in a dimension unrelated to price. Arbitrary filters—sometimes driven by perceived national security concerns, and just as often by narrow corporate interests like extreme copyright enforcement—often remove speech from the Internet. Users dissatisfied with one service’s practices should be able to migrate to alternative platforms, but that presumes a competitive marketplace that is almost nonexistent on today’s Internet. Federal antitrust regulators should consider these very real costs to consumers when they evaluate proposed mergers, acquisitions, and anti-competitive behavior by companies leveraging longstanding and entrenched monopolies in particular digital markets.
Market Power Is Apparent in Various Online Sectors
Several corporate behemoths dominate today’s Internet, each of which tends to wield monopoly power in at least one particular segment.
Facebook’s share of advertising revenues among social networks in the United States is over 79%, while Google enjoys similar dominance over search tools, Amazon over cloud data infrastructure, Microsoft over operating systems, and Apple over device manufacturing. Among the features of the contemporary marketplace that entrench these monopolists are network effects. Put simply, a platform’s value corresponds to the number of its established users, and the size of these user bases represents a barrier to entry for potential competitors. One of the features that inhibit user choice is the refusal of corporate platforms to allow interoperability. In other contexts, consumers dissatisfied with a service can choose a competing one. But in the context of social media, the established content that a user has generated serves as inertia, increasing the transaction cost of migrating to alternative services, especially those that have not yet established comparable network effects. Platforms do not benefit from this inertia merely passively. Rather, they actively prevent users from migrating—and prevent third parties from developing tools that would help empower users—in at least two ways. First, companies have enforced overbroad claims leveraging the Computer Fraud and Abuse Act. Second, they have expansively interpreted the authorities specified in their user agreements, which are legally suspect under traditional contract law principles as contracts of adhesion lacking any opportunity for negotiation or modification. To address the realities of today’s digital economy, regulators and courts must finally begin to consider harms to consumers beyond price, including corporate platform censorship.
The Essential Facilities Doctrine Could Spark and Fuel Innovation
At the same time that antitrust regulators and courts developed an unsustainable, myopic interpretation of consumer harm, they also sharply limited one of the strongest levers in antitrust law for guarding competition: the “essential facilities” doctrine. It has been applied in cases ensuring that railroads could access bridges over rivers even when their competitors owned the bridges, and that advertisers could run ads in newspapers even when the newspaper might prefer to exclude them in retaliation for those advertisers also buying ads in other advertising media. When a firm wielding monopoly power refuses to allow access to a resource that other firms cannot duplicate, courts can apply the essential facilities doctrine. On the one hand, leveraging a firm’s unique infrastructure might seem like a normal way of doing business. Seen from another perspective, this kind of activity preys on consumers—and competition—by preventing competitors from emerging and forcing users to settle for the first mover. Applications of the essential facilities doctrine might appear aggressive, but applying the doctrine need not impose the kinds of obligations that constrain common carriers. Indeed, common carrier restrictions on social networks would risk imposing harms on speech. In contrast, recognizing essential facilities claims by competitors hampered by an anticompetitive denial of access would promote a diversity of approaches to content moderation, and would help check other platform conduct (such as predatory uses of the Computer Fraud and Abuse Act) that harms users. Essential facilities claims would also encourage the development of new social media platforms and expand competition.
We have argued that the FTC should consider harms to consumers beyond price manipulation, and should revive the essential facilities doctrine, to inform its enforcement of antitrust principles. We anticipate making similar arguments to the Department of Justice (DOJ), and before courts evaluating potential claims in the future. And we hope the new task force, through its work monitoring technology markets, helps focus federal regulators at both the FTC and DOJ on these opportunities. Properly understood, and liberated from the constraints of an outmoded economic theory that defers to the abuses of corporate monopolies, antitrust laws can be a crucial tool to protect the Internet platform economy—and the billions of people who use it—from the dominance of companies wielding monopoly power.

It’s Time for California to Guarantee “Privacy for All” (Wed, 27 Feb 2019)
Update, 2:35 p.m.: The coalition of groups behind Privacy for All has grown since time of publishing. This update reflects the latest count.
Privacy is a right. It is past time for California to ensure that the companies using secretive practices to make money off of our personal information treat it that way. EFF has for years urged technology companies and legislators to do a better job at protecting the privacy of every person. We hoped the companies would realize the value of meaningful privacy protections. Incidents such as the Cambridge Analytica scandal and countless others proved otherwise. Californians last year took an important step in the right direction, by enacting the California Consumer Privacy Act (CCPA). But much work remains to be done. “Privacy for All,” a bill introduced today by Assemblymember Buffy Wicks, builds on the CCPA’s foundation. It promises to give everyone the rights, knowledge, and power to reclaim their own privacy.
Rights for All
Californians have an inalienable, constitutional right to privacy. But the scale and secrecy of corporate monetization of our personal information has outpaced the state’s duty to enforce that fundamental right. Privacy for All improves on the CCPA by ensuring that companies cannot punish someone for exercising their right to privacy by imposing a higher price or providing inferior service. Privacy is not a right reserved for the rich. Privacy for All also establishes a crucial power to protect our privacy: the right to act as our own privacy enforcers. With a private right of action, Privacy for All ensures that every person can go to court to hold companies accountable when they violate the law and refuse to respect our rights.
Knowledge for All
When it comes to protecting our own privacy, consumers are at a huge disadvantage. Companies know what they collect, how they use it, and who they share it with. Consumers usually do not. This knowledge gap has harmful effects. Without knowing where their information goes, people have been unable to exert control over its distribution, sale, and use. There is no way for them to know, for example, that a company has given their information—their zip code, their race, their restaurant preferences—to a firm that uses this information to determine their mortgage rate or credit limit. Seniors with dementia have no way to know when their name ends up on a data broker’s list. The CCPA increases the consumer’s right-to-know. Privacy for All strengthens this right, and makes sure that everyone can learn what information companies have shared and who it’s been shared with.
Power for All
A cornerstone of data privacy is the consumer’s power to decide what a company may do with their data. The CCPA empowers consumers to opt out of the sale of their personal information. Privacy for All would improve the CCPA by making sure that companies that share data, as well as those that sell it, are required to get opt-in consent to do so. Privacy for All would make sure that the law covers all the ways personal information is shared in the modern digital world, including in ways people may not expect. That returns privacy power to the people.
We Support Privacy for All
EFF proudly stands with 30 other privacy and civil rights organizations behind Privacy for All and its commitment to protecting our fundamental right to privacy. Companies have broken their promises that they will do better when it comes to privacy.
Scandals and breaches have shown, time and again, that letting companies dictate privacy policy hurts everyone. California lawmakers and Governor Gavin Newsom have already made clear that privacy is a vital right for the people of this state. It’s time for California's legislators to take the lead once again and ensure Privacy for All.

EFF Supporting California’s Privacy For All Bill, Which Puts People, Not Tech Companies, in Control of Personal Data (Wed, 27 Feb 2019)
Measure Will Improve California’s Landmark Consumer Privacy Law
San Francisco—The Electronic Frontier Foundation (EFF) is standing with Californians demanding more control over their personal data by supporting the Privacy For All bill, which requires tech companies to get people’s permission before sharing and using their private information. “All eyes are on California, which has taken the lead nationwide in passing a historic consumer privacy bill at a time when people across the country are outraged by the privacy abuses they read about every day,” said EFF Legislative Counsel Ernesto Falcon. “Privacy For All improves on the existing privacy law so that consumers can control who gets access to their data and how the data is being used.” Privacy For All was introduced in Sacramento today by Assemblymember Buffy Wicks and has the support of a broad coalition of 14 consumer advocacy groups, including the ACLU, Common Sense Kids Action, Consumer Federation of America, and Privacy Rights Clearinghouse. Privacy For All:
Requires companies to get permission to share personal data, whether they are selling it, loaning it out, or giving app developers access to it. Currently, the California Consumer Privacy Act (CCPA) requires permission only for the sale of personal information. Facebook claims it doesn’t “sell” its customers’ data—but we know it has given it away to developers and companies like Cambridge Analytica—so the existing rule wouldn’t cover Facebook.
Gives Californians the right to know what personal information companies have collected about them, and which companies it was shared with.
Bars companies from retaliating against people who exercise their rights under California’s consumer privacy law by raising prices or subjecting them to bad service.
Gives Californians the right to hold companies accountable for privacy violations by suing the companies in court.
“When it comes to control of their personal information, Californians are at the mercy of companies who enrich themselves at the expense of our privacy,” said Lee Tien, senior staff attorney at EFF. “Privacy For All corrects that imbalance of power and gives consumers the opportunity to block companies from secretly sharing and using their personal information.”
For more on Privacy For All: https://www.eff.org/deeplinks/2019/02/its-time-california-guarantee-privacy-all
For more on CCPA: https://www.eff.org/deeplinks/2018/12/california-lawmakers-defend-and-strengthen-california-consumer-privacy-act
For more on data privacy: https://www.eff.org/deeplinks/2018/12/data-privacy-scandals-and-public-policy-picking-speed-2018-year-review
Contact:
Lee Tien, Senior Staff Attorney and Adams Chair for Internet Rights, lee@eff.org

Watching the Black Body (Tue, 26 Feb 2019)
[This is a guest post authored by Malkia Cyril, executive director of the Center for Media Justice. It was originally published in The End of Trust (McSweeney's 54)]
A reemergence of civil rights–era surveillance strategies is endangering Black activists as tech companies profit.
In December 2017, FBI agents forced Rakem Balogun and his fifteen-year-old son out of their Dallas home. They arrested Balogun on charges of illegal firearms possession and seized a book called Negroes with Guns. After being denied bail and spending five months in prison, Balogun was released with all charges dropped. To his shock, Balogun later discovered that the FBI had been monitoring him for years. He also discovered that he had been arrested that day for one specific reason: he had posted a Facebook update that criticized police. Balogun is considered by some to be the first individual prosecuted under a secretive government program that tracks so-called “Black Identity Extremists” (BIEs). A Black Extremist is what the FBI called my mother, fifty years ago.
History Repeats Itself
There were definitely extreme things about my mother. The pain of living with sickle cell anemia was extreme. The number of books she thought she could fit into a room, piling them high in the living room of our brownstone home: that was extreme. I remember sitting on her shoulders during my first protest, in the early 1980s, against the deportation of Haitian people arriving by boat. Sitting up there, on top of my very small world, listening to extreme story after extreme story of Black bodies washed out to sea for attempting only to seek a better life, I began to understand clearly: being Black in America, and anywhere in the world, was an extreme experience. But was it extreme to want safety, freedom, and justice for herself, her family, her people? Despite the pain and complications of sickle cell anemia, and until the disease took her life in 2005, my mother worked every day as an educator, doing her part to achieve human rights for all oppressed people and specifically for Black people in the United States. Whether at a protest, on the floor of Liberation Bookstore in Harlem, in the darkened home of Japanese activist Yuri Kochiyama, or at a polling site on election day, my mother always took time to tell me and my sister stories about her days stuffing envelopes for the Student Nonviolent Coordinating Committee, then as the citywide coordinator for the Free Breakfast for Children Program, operating in churches throughout New York City at the time. According to my mom, finding her voice in the Black liberation movement was powerful. Yet, because of her voice, up until the moment of her death, my mother’s Black body was also under constant surveillance by the FBI and other government agencies. We felt the FBI’s surveillance of my mother directly in the late 1970s. In order to harass her, the FBI sent my mother’s file both to Health Services, where she worked as the assistant director for mental health programs in New York jails, and to the corrections officers at the jails where she worked. To their credit, Health Services rebuffed the FBI’s intervention. The Office of Corrections, however, banned my mother from the jails, forcing her to supervise her programs from offsite. I remember when, years later, my mother gained access to her FBI file via a Freedom of Information Act request. It was thick, with reams of redacted pages that spoke of police and FBI targeting as far back as the mid-1960s.
Two weeks before my mother died, FBI agents visited our home, demanding that she come in for questioning about a case from the 1970s. My mother could barely walk, was suffering from some dementia, and was dying. She refused. My mother was the target of surveillance because of her commitment to social justice and human rights. Because of this, I grew up with government surveillance as the water in which I swam, the air that I breathed. I came to learn that the FBI has a long history of monitoring civil rights and Black liberation leaders like Ella Baker, Fannie Lou Hamer, and Martin Luther King, Jr. They monitored Black Muslim leaders like Malcolm X. They monitored Black immigrant leaders like Marcus Garvey. I came to learn about the FBI’s Counterintelligence Program, or COINTELPRO, the covert government program started in 1956 to monitor and disrupt the activities of the Communist Party in the United States. Its activities were often illegal, and expanded in the 1960s to target Black activists in the civil rights and Black Power movements, calling these activists—you guessed it—Black Extremists. In 1975, a Senate Committee, popularly known as the Church Committee, was formed to investigate the FBI’s intelligence programs, a response to pressure from a group that released papers exposing the existence of COINTELPRO. In a 2014 piece for The Nation, Betty Medsger outlines the Committee’s conclusion not only that African Americans were being watched by the government more than any other group was, but that the FBI didn’t require any evidence of “extremism” in order to justify the surveillance. For our communities, it didn’t matter if you had never uttered a subversive word, let alone taken part in any violence. As Medsger writes, “being Black was enough.” This warrantless spying on Black activists resulted in dozens of Black deaths by police shooting, and other Black lives swallowed whole for decades by the wrongful incarceration of political prisoners. Men like Kamau Sadiki and women like Safiya Bukhari, whom I grew up calling uncle and aunt, were among them. Ultimately, the Church Committee’s final report concluded that COINTELPRO was a dangerous program. As Robyn C. Spencer explains in Black Perspectives, the report states that the FBI used tactics which increased the “risk of death” while often disregarding “the personal rights and dignity of its victims.” The Committee determined that the FBI used “vaguely defined ‘pure intelligence’ and ‘preventive intelligence’” justifications for its surveillance of citizens who hadn’t committed any crimes—for reasons which had little or nothing to do with the enforcement of law. Given this history, my mother’s history, my history, I was not surprised when Foreign Policy broke the story that an August 2017 FBI intelligence assessment had identified a new designation: the “Black Identity Extremist.” I was not surprised, but I was scared.
So, What Is a “Black Identity Extremist”?
“Black Identity Extremist” is a constructed category, invented by the FBI and documented in an August 2017 assessment entitled “Black Identity Extremists Likely Motivated to Target Law Enforcement Officers.” The FBI fabricated the BIE designation to create suspicion of Black people who respond publicly to police extrajudicial killings, but it doesn’t stop there. The document also focuses heavily on the convergence of what it calls “Moorish [Muslim] sovereign citizen ideology” and Black radicalization as reasons for heightened law enforcement targeting.
As support, the assessment specifically cites the completely unrelated cases of Micah Johnson, a man who shot and killed multiple Dallas police officers during a protest in 2016; Zale H. Thompson, who attacked police in Queens, N.Y., with a hatchet in 2014; Gavin Eugene Long, who murdered multiple police officers in Baton Rouge, La.; and a few other unnamed subjects. In each of these cited incidents, the perpetrators acted alone and without any connection to each other beyond the fact that they were all Black men. Not only are these cases unrelated to each other, but they are all unrelated to the larger organized movement for Black lives in general and the Black Lives Matter Global Network in particular. The FBI’s goal is clear: to fictitiously link democratically protected activities aimed at ending police violence and misconduct with what it calls “premeditated, retaliatory, lethal violence” against police officers. This is not only unethical and unaccountable; it places Black lives in real danger. Even the FBI’s own definition in the assessment is vague and likely unconstitutional: “The FBI defines black identity extremists as individuals who seek, wholly or in part, through unlawful acts of force or violence, in response to perceived racism and injustice in American society, [to establish] a separate black homeland or autonomous black social institutions, communities, or governing organizations within the United States.” This definition—encompassing any act of force conducted, even partially, in response to injustice in society—has no limit. It gives the FBI and prosecutors broad discretion to treat any violence by people who happen to be Black as part of a terrorist conspiracy. It is also absolutely baseless. The fact is, as the Washington Post reported in 2015, police officers are no more likely to be killed by Black offenders than by white offenders. More than half of all officers killed in the line of duty die as a result of accidents in the course of their jobs rather than attacks of any kind. The total number of officers killed in the ambush-style attacks that are central to the BIE narrative remains quite small, and recent officer deaths overall remain below the average of the last decade. The bottom line is: “Black Identity Extremists” do not exist. The FBI’s assessment is rooted in a history of anti-Black racism within and beyond the FBI, with the ugly addition of Islamophobia. What’s worse is that the designation, by linking constitutionally protected political protest with violence by a few people who happen to be Black, serves to discourage vital dissent. Given the FBI’s sordid history, this assessment could also be used to rationalize the harassment of Black protesters and an even more militant police response against them. Despite the grave concerns of advocates, the FBI assessment and designation are already being used to justify both the erosion of racial justice–based consent decrees and the introduction of more than thirty-two Blue Lives Matter bills across fourteen states in 2017. The FBI’s assessment also feeds this unfounded narrative into the training of local law enforcement.
A 2018 course offered by the Virginia Department of Criminal Justice Services, for instance, includes “Black Identity Extremists” in its overview of “domestic terror groups and criminally subversive subcultures which are encountered by law enforcement professionals on a daily basis.”
The High-Tech Policing of Black Dissent
The BIE program doesn’t just remind me of COINTELPRO; it represents its reemergence, this time in full view. Today, though, aided by the tech industry, this modern COINTELPRO has been digitized and upgraded for the twenty-first century. Just as Black Lives Matter and a broader movement for Black lives organize to confront persistently brutal and unaccountable policing, programs like BIE are legalizing the extrajudicial surveillance of Black communities. Police access to social-media data is helping to fuel this surveillance. Big tech and data companies aren’t just standing by and watching the show; they are selling tickets. And through targeted advertising and the direct sale of surveillance technologies, these companies are making a killing. Too many people still believe that civil and human rights violations of these proportions can’t happen in America. They either don’t know that they’ve been happening for centuries or wrongly believe that those days are long over. But right now, American cities with large Black populations, like Baltimore, are becoming labs for police technologies such as drones, cell phone simulators, and license plate readers. These tools, often acquired from FBI grant programs, are being used to target Black activists. This isn’t new. Tech companies and digital platforms have historically played a unique role in undermining the democratic rights of Black communities. In the twentieth century, the FBI colluded with Ma Bell to co-opt telephone lines and tap the conversations of civil rights leaders, among others. Given this history, today’s high-tech surveillance of Black bodies doesn’t feel new or dystopian to me. Quite the opposite. As author Simone Browne articulates beautifully in her book Dark Matters, agencies built to monitor Black communities and harbor white nationalists will use any available technology to carry out the mandate of white supremacy. These twenty-first-century practices are simply an extension of history and a manifestation of current relations of power. For Black bodies in America, persistent and pervasive surveillance is simply a daily fact of life. In fact, the monitoring of Black bodies is much older than either the current high-tech version or the original COINTELPRO. Browne notes that in eighteenth-century New York, “lantern laws” required that enslaved Black people be illuminated when walking at night unaccompanied by a white person. These laws, along with a system of passes that allowed Black people to come and go, Jim Crow laws that segregated Black bodies, and the lynching that repressed Black dissidence with murderous extrajudicial force, are all forms of monitoring that, as Claudia Garcia-Rojas observed in a 2016 interview with Browne for Truthout, have “made it possible for white people to identify, observe, and control the Black body in space, but also to create and maintain racial boundaries.” These are the ongoing conditions that gave birth to the FBI’s BIE designation. It has always been dangerous to be Black in America. The compliance of tech companies and—under the leadership of Attorney General Jeff Sessions and President Trump—the BIE designation escalate that danger exponentially.
For example, while many have long fought for racial diversity in Amazon’s United States workforce, few were prepared for the bombshell that Amazon was selling its facial recognition tool, Rekognition, to local police departments, enabling them to identify Black activists. The problem is made worse by the fact that facial recognition tools have been shown to discriminate against Black faces. When the Center for Media Justice (CMJ), the organization I direct, and dozens of other civil rights groups demanded that Amazon stop selling this surveillance technology to the government, the company defended its practice by saying, “Our quality of life would be much worse today if we outlawed new technology because some people could choose to abuse the technology.” Such appeals assume a baseline of equity in this country that has never existed, ignoring the very real anti-Black biases built into facial recognition software. Amazon’s response also rejects any responsibility for the well-known abuses against Black communities. But these happen daily at the hands of the same police forces who are buying Rekognition. Put simply, whether they acknowledge it or not, Jeff Bezos and his company are choosing profits over Black lives. The proliferation of ineffective, unaccountable, and discriminatory technologies in the hands of brutal law enforcement agencies with a mandate to criminalize legally protected dissent using the FBI’s BIE designation isn’t simply dangerous to Black lives—it’s deadly. In 2016, the ACLU of Northern California published a report outlining how Facebook, Instagram, and Twitter provided users’ data to Geofeedia, a social-media surveillance product used by government officials, private security firms, marketing agencies, and, yes, the police to monitor the activities and discussions of activists of color. These examples show that our twenty-first-century digital environment offers Black communities a constant pendulum swing between promise and peril. On one hand, twenty-first-century technology is opening pathways to circumvent the traditional gatekeepers of power via a free and open internet—allowing marginalized communities of color to unite and build widespread movements for change. The growth of the movement for Black lives is just one example. On the other hand, high-tech profiling, policing, and punishment are supersizing racial discrimination and placing Black lives and dissent at even graver risk. Too often, the latter is disguised as the former. Defending Our Movements by Demanding Tech Company Noncompliance One way to fight back is clear: organize to demand the noncompliance of tech companies with police mass surveillance. And—despite Amazon’s initial response to criticism of its facial recognition technologies—public pressure on these public-facing technology companies to stop feeding police surveillance has succeeded before. To fight back against Geofeedia surveillance, CMJ partnered with the ACLU of Northern California and Color of Change to pressure Facebook, Instagram, and Twitter to stop allowing their platforms and data to be used for the purposes of government surveillance. We succeeded. All three social media platforms have since stopped allowing Geofeedia to mine user data. Both from within—through demands from their own workforce—and from without—through pressure from their users, the public, and groups like CMJ and the ACLU—we can create an important choice for public-facing companies like Amazon, Twitter, IBM, Microsoft, and Facebook. 
We can push them to increase their role in empowering Black activists and to stop their participation in the targeting of those same people. The path forward won’t be easy. As revealed by the Cambridge Analytica scandal, in which more than eighty million Facebook users had their information sold to a political data firm hired by Donald Trump’s election campaign, the high-tech practices used by law enforcement to target Black activists are already deeply embedded in a largely white and male tech ecosystem. It’s no coincidence that Russian actors also used Facebook to influence the 2016 American elections, and did so by using anti-Black, anti-Muslim, and anti-immigrant dog-whistle rhetoric. They know that the prejudices of the general public are easy to inflame. Some in tech will continue to contest for broader law enforcement access to social-media data, but we must isolate them. The point is, demanding the non-cooperation of tech and data companies is one incredibly powerful tool to resist the growing infrastructure threatening Black dissent. In a digital age, data is power. Data companies like Facebook are disguised as social media, but their profitability comes from the data they procure and share. A BIE program, like the surveillance of my mother before it, needs data to function. The role of tech and data companies in this contest for power could not be more critical. Surveillance for Whom? The Myth of Countering Violent Extremism In attempting to justify its surveillance, the government often points to national security. But if the FBI, Attorney General Sessions, and the Department of Justice truly cared for the safety of all people in this country, they would use their surveillance systems to target white nationalists. For years, the growing threat of white-supremacist violence has been clear and obvious. A 2017 Joint Intelligence Bulletin warned that white-supremacist groups “were responsible for 49 homicides in 26 attacks from 2000 to 2016... more than any other domestic extremist movement” and that they “likely will continue to pose a threat of lethal violence over the next year.” Yet little has been done to address this larger threat. A heavily resourced structure already exists that could theoretically address such white-supremacist violence: Countering Violent Extremism (CVE). These programs tap schools and religious and civic institutions, calling for local community and religious leaders to work with law enforcement and other government agencies to identify and report “radicalized extremists” based on a set of generalized criteria. According to a Brennan Center report, the criteria include “expressions of hopelessness, sense of being unjustly treated, general health, and economic status.” The report points out that everyone from school officials to religious leaders is tasked with identifying people based on these measures. Yet despite being couched in neutral terms, CVE has focused almost exclusively on American Muslim communities to date. Recently, the Trump administration dropped all pretense and proposed changing the program’s name from Countering Violent Extremism to Countering Islamic Extremism. As reported by Reuters in February 2017, this renamed program would “no longer target groups such as white supremacists.” The disproportionate focus on monitoring Muslim communities through CVE has also helped justify the disproportionate focus on so-called Black extremism. About 32 percent of U.S.-born Muslims are Black, according to the Pew Research Center. 
In this way, the current repression of Black dissent by the FBI is connected, in part, to the repression of Islamic dissent. As noted above, the BIE designation ties directly to Islam. And, of course, CVE programs were modeled on COINTELPRO, and the BIE designation is modeled on the successful targeting of Muslim communities in direct violation of their civil and human rights. And tech is here, too. CVE works in combination with the reflexive use of discriminatory predictive analytics and GPS monitoring within our criminal justice system. Add to this the growth of the Department of Homeland Security’s Extreme Vetting Initiative, which uses social media and facial recognition to militarize the border and unlawfully detain generations of immigrants. Together, these programs create a political environment in which Black activists can be targeted, considered domestic terrorists, and stripped of basic democratic rights.  We Say Black Lives Matter The FBI’s BIE designation was never rooted in a concern for officer safety or national security. It wasn’t rooted in any evidence that “Black Identity Extremism” even exists. None of those justifications hold water. Instead, it is rooted in a historic desire to repress Black dissidence and prevent Black social movements from gaining momentum. And yet the movement for Black lives has, in fact, gained momentum. I became a member of the Black Lives Matter Global Network after the brutal killing of Trayvon Martin and subsequent acquittal of his killer, George Zimmerman. It was extraordinary to witness the speed and impact with which the movement grew online and in the streets. Spurred on by the bravery of Black communities in Ferguson, Mo., I was proud to be part of that growth: marching in the street, confronting the seemingly endless pattern of Black death by cop. It was an extraordinary feeling to stand with Black people across the country as we shut down police stations in protest of their violence, halted traffic to say the names of murdered Black women, and ultimately forced Democratic candidates to address the concerns of Black voters. The FBI’s BIE designation is a blatant attempt to undermine this momentum. It seeks to criminalize and chill Black dissent and prevent alliances between Black, Muslim, immigrant, and other communities. While Black activists may be the targets of the BIE designation, we aren’t the only ones impacted by this gaslighting approach. Resistance organizers working to oppose the detention, deportation, and separation of immigrant families; those fighting back against fascism and white supremacy; Muslim communities; and others are being surveilled and threatened alongside us. In 2018, we have a Supreme Court that has upheld an unconstitutional Muslim ban alongside White House efforts to deny undocumented families due process; we have an Attorney General and a Department of Justice that endorse social-media spying and high-tech surveillance of people for simply saying and ensuring that Black lives matter. It’s no coincidence that as Black activists are being targeted, the House of Representatives has quietly passed a national “Blue Lives Matter” bill, which will soon move to the Senate, protecting already heavily defended police. This even as the victims of police violence find little justice, if any, through the courts due to the thicket of already existing protections and immunities enjoyed by the police.  The movement for Black lives is a movement against borders and for belonging. 
It demands that tech companies divest from the surveillance and policing of Black communities, and instead invest in our lives and our sustainability. If my mother were alive, she would remind me that a government that has enslaved Africans and sold their children will just as quickly criminalize immigrant parents and hold their children hostage, and call Muslim, Arab, and South Asian children terrorists to bomb them out of being. She would remind me that undermining the civil and human rights of Black communities is history’s extreme arc, an arc that can still be bent in the direction of justice by the same bodies being monitored now. The only remedy is the new and growing movement that is us, and we demand not to be watched but to be seen.

ETS Isn't TLS and You Shouldn't Use It (Tue, 26 Feb 2019)
The good news: TLS 1.3 is available, and the protocol, which powers HTTPS and many other encrypted communications, is better and more secure than its predecessors (including SSL). The bad news: Thanks to a financial industry group called BITS, there’s a look-alike protocol brewing called ETS (or eTLS) that intentionally disables important security measures in TLS 1.3. If someone suggests that you should deploy ETS instead of TLS 1.3, they are selling you snake oil and you should run in the other direction as fast as you can. ETS removes forward secrecy, a feature that is so widely used and valued in TLS 1.2 that TLS 1.3 made it mandatory. This removal invisibly undermines security and has the potential to seriously worsen data breaches. As the ETS / eTLS spec says: "eTLS does not provide per-session forward secrecy. Knowledge of a given static Diffie-Hellman private key can be used to decrypt all sessions encrypted with that key." In earlier versions of TLS and SSL, forward secrecy was an optional feature. If enabled, it ensured that intercepted communications couldn’t be retrospectively decrypted, even by someone who later got a copy of the server’s private key. This remarkable property is so valuable for security that the Internet Engineering Task Force (IETF), which develops Internet standards including TLS, decided that TLS 1.3 would only offer algorithms that provide forward secrecy. The post-facto decryption weakness in TLS 1.2 and earlier versions is now considered a bug. It’s a product of its time, shaped by a number of factors, like government pressure not to implement stronger algorithms, a cloud of patent-related uncertainty around elliptic curve algorithms, and processor speed in the early 2000s. Nowadays, it just makes plain sense to use forward secrecy for all TLS connections. Unfortunately, during the long tenure of TLS 1.2, some companies, mostly banks, came to rely on its specific weaknesses. Late in the TLS 1.3 process, BITS came forward on behalf of these companies and said their members “depend upon the ability to decrypt TLS traffic to implement data loss protection, intrusion detection and prevention, malware detection, packet capture and analysis, and DDoS mitigation.” In other words, BITS members send a copy of all encrypted traffic somewhere else for monitoring. The monitoring devices have a copy of all private keys, and so can decrypt all that traffic. They’d like TLS 1.3 to offer algorithms that disable forward secrecy so they can keep doing this decryption. But there’s a real harm that comes from weakening a critical protocol to provide easier in-datacenter monitoring for a small handful of organizations. Public-facing web servers might also implement the proposed weaker algorithms, either intentionally or accidentally, and this would expose billions of people’s data to easier snooping. Plus, this isn’t even a good way to do in-datacenter monitoring–with control of the servers, an organization can log data at its servers rather than relying on post-hoc decryption. Server-side logging can also redact sensitive data like plaintext passwords that should never be retained. In response to these objections, some IETF participants produced a modest proposal: By tweaking some parameters, they could make a TLS 1.3 server look like it was providing forward secrecy, but actually not provide it. 
This proposal, mentioned earlier and called “Static Diffie-Hellman,” would misuse a number in the handshake that is supposed to be random and discarded after each handshake. Instead of randomly generating that number (the Diffie-Hellman private key), a server using this technique would use the same number for all connections, and make sure to share a copy of it with the devices doing decryption. This would only require changes to servers, not clients, and would look just like the secure version of TLS 1.3. After much discussion, IETF decided not to standardize this modest proposal. Its risks were too great. So BITS took it to another standards organization, ETSI, which was more willing to play ball. ETSI has been working on its weakened variant since 2017, and in October 2018 released a document calling their proposal eTLS. They even submitted public comment asking NIST to delay publication of new guidelines on using TLS 1.3 and recommend eTLS instead. “Enterprise” Transport Security Tries to Use TLS’ Good Name Meanwhile, the IETF caught wind of this and strenuously objected to the misleading use of the name TLS in “eTLS:” “Our foremost concern remains the use of a name that implies the aegis of Transport Layer Security (TLS), a well-known protocol which has been developed by the IETF for over twenty years.” ETSI backed down, and the next revision of their weakened variant will be called “ETS” instead. Instead of thinking of this as “Enterprise Transport Security,” which the creators say the acronym stands for, you should think of it as “Extra Terrible Security.” Internet security as a whole is greatly improved by forward secrecy. It’s indefensible to make it worse in the name of protecting a few banks from having to update their legacy decrypt systems. Decryption makes networks less secure, and anyone who tells you differently is selling something (probably a decryption middlebox). Don’t use ETS, don’t implement it, and don’t standardize it.
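To make the difference concrete, here is a minimal sketch of ephemeral versus static Diffie-Hellman key agreement. It uses the Python cryptography package and illustrates only the key-agreement step, not the real TLS 1.3 handshake or key schedule.

```python
# Illustrative sketch only -- not the actual TLS 1.3 handshake or key schedule.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)

def ephemeral_exchange(peer_public: X25519PublicKey) -> bytes:
    """TLS 1.3 style: a fresh private key per handshake (forward secrecy).

    The private key goes out of scope after this function returns, so a
    later compromise of the server reveals nothing about old sessions.
    """
    private = X25519PrivateKey.generate()
    return private.exchange(peer_public)

# ETS / eTLS style: one long-lived "static" private key, reused for every
# session and shared with passive monitoring devices. Whoever holds this
# key can decrypt every session ever recorded under it.
STATIC_PRIVATE = X25519PrivateKey.generate()

def static_exchange(peer_public: X25519PublicKey) -> bytes:
    return STATIC_PRIVATE.exchange(peer_public)

if __name__ == "__main__":
    client = X25519PrivateKey.generate()
    # Both calls produce a 32-byte shared secret and look identical on the
    # wire, which is exactly why ETS is so hard for clients to detect.
    print(len(ephemeral_exchange(client.public_key())))
    print(len(static_exchange(client.public_key())))
```

The two variants produce indistinguishable traffic on the wire, which is why a client cannot easily tell whether a server has quietly opted for the ETS-style static key.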

More Consumer Data Privacy Hearings Without Enough Consumer Data Privacy Advocates (Tue, 26 Feb 2019)
Last year, the U.S. Senate held a hearing about consumer privacy without a single voice for actual consumers. At the time, we were promised more hearings with more diverse voices. And while a hearing a month later with consumer advocates did seem to be a step forward, this week's two hearings—mostly full of witnesses from tech companies—make us worried about a step back. EFF actively supports new consumer data privacy laws to empower technology users and others. Today, 90 percent of Americans feel they no longer have control over their data when they go online. Laws that impose legal duties on large technology companies that monetize consumer data, coupled with strong enforcement such as a private right of action, will give users back control. In order to create an enforceable law that actually protects consumers, Congress needs to consider many different aspects of the issue. This week, both the House and the Senate are holding hearings on this topic, but unfortunately, instead of hearing a variety of voices and perspectives, Congress once again decided to hear mostly from tech companies. As Members of Congress and Senators prepare for their hearings, we hope they consider EFF’s past materials on consumer privacy legislation, which make clear our concerns and recommendations that should be considered for any privacy law: People should have a right to sue companies that violate their privacy rights. Laws must have strong enforcement in order to be effective. We see a persistent lack of federal enforcement regarding consumers’ private data. For years the FCC has looked the other way while wireless carriers have allowed bounty hunters (or anyone) to purchase consumers’ geolocation data. The FTC ignores Facebook and Google continuing to flout their consent decrees, even after a litany of privacy scandals in the last year alone. It is long past time to allow individuals to protect their own privacy rights. Wide-reaching preemption would be harmful to user privacy. Preemption by Congress in any federal consumer privacy law poses a serious risk to user privacy rights that are already granted by the states. These include California's CCPA and Illinois' Biometric Information Privacy Act (which has been invoked in lawsuits against Facebook and Google for scanning faces without consent). EFF supports the creation of "Information Fiduciaries" for large Internet companies that collect user data. The law of fiduciaries is meant to address the power imbalance between ordinary people and skilled professionals (doctors, lawyers, and accountants for example). We support the creation of an "information fiduciary" rule that would impose a duty of care and loyalty on large Internet companies. Essential to a duty-of-loyalty rule is the ability for the individual to bring their own lawsuit against the business that violates this duty to them. Any proposed legislation should empower users by giving back control over their data. This includes requiring opt-in consent to online data gathering, giving users a right to “data portability,” giving users a “right to know” about data gathering and sharing, and imposing requirements on companies for when customer data is breached. We’ll be watching both of these hearings. Join us at @EFFLive as we share our thoughts in real time.

Governments Must Face the Facts about Face Surveillance, and Stop Using It (Mon, 25 Feb 2019)
It’s time for governments to confront the harmful consequences of using facial recognition technology as an instrument of surveillance. Yet law enforcement agencies across the country are purchasing face surveillance technology with insufficient oversight—despite the many ways it harms privacy and free speech and exacerbates racial injustice. EFF supports legislative efforts in Washington and Massachusetts to place a moratorium on government use of face surveillance technology. These bills also would ban a particularly pernicious kind of face surveillance: applying it to footage taken from police body-worn cameras. The moratoriums would stay in place, unless lawmakers determined these technologies do not have a racial disparate impact, after hearing directly from minority communities about the unfair impact face surveillance has on vulnerable people. We recently sent a letter to Washington legislators in support of that state’s moratorium bill. We also support a proposal in the City of San Francisco that would permanently ban government use and acquisition of face surveillance technology. EFF objects to government use of face surveillance technology for several reasons. These technologies can track everyone who lives and works in public spaces by means of a unique identifying marker that is difficult to change or hide – our own faces. Monitoring public spaces with this technology will chill protests, an important form of free speech. Courts have long recognized that government surveillance has a “deterrent effect” on First Amendment activity. Many governments already employ powerful spying technologies in ways that harm minority communities. This includes spying on the social media of activists, particularly advocates for racial justice such as participants in the Black Lives Matter movement. Also, police watch lists are often over-inclusive and error-riddled, and cameras often are over-deployed in minority areas—effectively criminalizing entire neighborhoods. If past is prologue, we expect police will engage in racial profiling with face surveillance technology, too. Governments often deploy these tools without proper consideration for their technological limits. Several studies, including by Joy Buolamwini of the M.I.T. Media Lab and the ACLU, show that face surveillance technologies are more inaccurate when identifying the faces of young people, women, and minorities. And these spying tools increasingly are being used in conjunction with powerful mathematical algorithms, which often amplify bias. It’s important to consider all of these problems with face surveillance now. Once government builds this spying infrastructure, and starts harvesting and stockpiling a record of where we have been and who we were with, there is the inherent risk that thieves will steal this sensitive data, employees will misuse it, and policymakers will redeploy it in new unforeseen manners. For all of these reasons, companies shouldn’t sell face surveillance technology to governments. EFF supports the effort, led by ACLU, to persuade companies to stop doing so. Face surveillance erodes everyone’s privacy, chills free speech, and has an outsized negative impact on minority communities. So governments should not use these tools. Rather, they must face the facts about how damaging this surveillance technology is to the people they have a duty to protect.

EFF Asks the Supreme Court to Clean Up the Oracle v. Google Mess (Mon, 25 Feb 2019)
EFF has just filed an amicus brief in support of Google’s petition asking the U.S. Supreme Court to review the long-running case of Oracle v. Google. The case asks whether functional aspects of computer programs are copyrightable, and involves two dangerous court opinions that held that functional works are both copyrightable and are not fair use as a matter of law. That Supreme Court review is long overdue. Nine years ago, Oracle filed a copyright suit against Google over the application program interface (API) of the Java programming language. The trial court ruled in Google’s favor, finding the APIs in question weren’t copyrightable. The U.S. Court of Appeals for the Federal Circuit reversed, and Google asked the Supreme Court to review that disastrous ruling. Unfortunately the Court declined, sending the parties back to the trial court to determine whether Google’s use was a fair use. Google won, again, and the Federal Circuit reversed, again. And that means the Supreme Court now has another chance to fix this mess. It should take it. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its holding that APIs are entitled to copyright protection ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the Internet. While that first Federal Circuit decision was dreadful, things got worse. The Federal Circuit had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and despite having concluded in the first appeal that the fair use defense should be decided by a jury, in 2018 the same three Federal Circuit judges reversed that finding and held that Google had not engaged in fair use as a matter of law. By overturning the jury, the court created enormous legal uncertainty for any software developer thinking about reimplementing pre-existing APIs. If the first Federal Circuit opinion means that APIs are copyrightable, and the second opinion means that a jury isn’t allowed to decide that using a competitor’s APIs is a fair use, then there are few, if any, ways that a second competitor can enter a market with an API-compatible product.  So we were happy to hear that Google is asking the Supreme Court to step in. In our amicus brief in support, filed today, rather than focusing on why the Federal Circuit was wrong (we’ve already done that), we explain why the Court should take the case (it doesn’t have to). For one thing, the Federal Circuit was supposed to follow the law of the Ninth Circuit (which it didn’t do). Instead, the court created its own dangerous copyright law for computer software. What is worse, courts around the country are following the Federal Circuit’s opinions, instead of decisions from their local appeals courts (such as the Ninth Circuit) as they’re required to do. And the opinion’s mischief reaches beyond the courts, influencing Copyright Office rulemaking and legal scholarship. In sum, the Federal Circuit has created a copyright mess that only the Supreme Court can fix. We hope that the Supreme Court will agree to review this case and put computer copyright law back on track.   Related Cases:  Oracle v. Google
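For readers unfamiliar with what "reimplementing an API" means in practice, here is a toy sketch in Python. It is a hypothetical example, not the Java SE declarations actually at issue in the case: the "API" is the name and signature that existing programs call, and a compatible reimplementation reproduces that declaration while supplying its own independently written code.

```python
# Toy sketch (hypothetical example, not the Java SE API at issue in the case):
# an "API" is the set of names and signatures callers rely on; a compatible
# reimplementation keeps those names and signatures but supplies its own code.

# --- original library (imagine this shipped as module "vendor_a") ----------
def clamp(value: int, low: int, high: int) -> int:
    """Return value limited to the range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# --- independent reimplementation (imagine module "vendor_b") --------------
# Same declaration -- the part the Federal Circuit held copyrightable --
# but the body is written from scratch. (Redefined here in one file purely
# for illustration; in reality the two would live in separate libraries.)
def clamp(value: int, low: int, high: int) -> int:  # noqa: F811
    """Return value limited to the range [low, high]."""
    return max(low, min(value, high))

# Callers written against the original keep working against either version.
assert clamp(15, 0, 10) == 10
```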

Artists Against Article 13: When Big Tech and Big Content Make a Meal of Creators, It Doesn't Matter Who Gets the Bigger Piece (Mon, 25 Feb 2019)
Article 13 is the on-again/off-again controversial proposal to make virtually every online community, service, and platform legally liable for any infringing material posted by their users, even very briefly, even if there was no conceivable way for the online service provider to know that a copyright infringement had taken place. This will require unimaginable sums of money to even attempt, and the attempt will fail. The outcome of Article 13 will be a radical contraction of alternatives to the U.S. Big Tech platforms and the giant media conglomerates. That means that media companies will be able to pay creators less for their work, because creators will have no alternative to the multinational entertainment giants. Throwing Creators Under the Bus The media companies lured creators' groups into supporting Article 13 by arguing that media companies and the creators they distribute have the same interests. But in the endgame of Article 13, the media companies threw their creator colleagues under the bus, calling for the deletion of clauses that protect artists' rights to fair compensation from media companies, prompting entirely justifiable howls of outrage from those betrayed artists' rights groups. But the reality is that Article 13 was always going to be bad for creators. At best, all Article 13 could hope for was to move a few euros from Big Tech's balance-sheet to Big Content's balance-sheet (and that would likely be a temporary situation). Because Article 13 would reduce the options for creators by crushing independent media and tech companies, any windfalls that media companies made would go to their executives and shareholders, not to the artists who would have no alternative but to suck it up and take what they're offered. After all: when was the last time a media company celebrated a particularly profitable year by increasing its royalty rates? It Was Always Going to Be Filters The initial versions of Article 13 required companies to build copyright filters, modeled after YouTube's "Content ID" system: YouTube invites a select group of trusted rightsholders to upload samples of works they claim as their copyright, and then blocks (or diverts revenue from) any user's video that seems to match these copyright claims. There are many problems with this system. On the one hand, giant media companies complain that these filters are far too easy for dedicated infringers to defeat; and on the other hand, Content ID ensnares all kinds of legitimate forms of expression, including silence, birdsong, and music uploaded by the actual artist for distribution on YouTube. Sometimes, this is because a rightsholder has falsely claimed copyrights that don't belong to them; sometimes, it's because Content ID generated a "false positive" (that is, made a mistake); and sometimes it's because software just can't tell the difference between an infringing use of a copyrighted work and a use that falls under "fair dealing," like criticism, commentary, parody, etc. No one has trained an algorithm to recognise parody, and no one is likely to do so any time soon (it would be great if we could train humans to reliably recognise parody!). Copyright filters are a terrible idea. Google has spent a reported $100 million (and counting) to build a very limited copyright filter that only looks at videos and only enforces claims from a select group of pre-vetted rightsholders. Article 13 covers all possible copyrighted works: text, audio, video, still photographs, software, translations. 
And some versions of Article 13 have required platforms to block infringing publications of every copyrighted work, even those that no one has told them about: somehow, your community message-board for dog-fanciers is going to have to block its users from plagiarising 50-year-old newspaper articles, posts from other message-boards, photos downloaded from social media, etc. Even the milder "compromise" versions of Article 13 required online services to block publication of anything they'd been informed about, with dire penalties for failing to honour a claim, and no penalties for bogus claims. But even as filters block things that aren't copyright infringement, they still allow dedicated infringers to operate with few hindrances. That's because filters use relatively simple, static techniques to inspect user uploads, and infringers can probe the filters' blind-spots for free, trying different techniques until they hit on ways to get around them. For example, some image filters can be bypassed by flipping the picture from left to right, or rendering it in black-and-white instead of color. Filters are “black boxes” that can be repeatedly tested by dedicated infringers to see what gets through. For non-infringers — the dolphins caught in copyright's tuna-nets — there is no underground of tipsters who will share defeat-techniques to help get your content unstuck. If you're an AIDS researcher whose videos have been falsely claimed by AIDS deniers in order to censor them, or police brutality activists whose bodycam videos have been blocked by police departments looking to evade criticism, you are already operating at the limit of your abilities, just pursuing your own cause. You can try to become a filter-busting expert in addition to your research, activism, or communications, but there are only so many hours in a day, and the overlap between people with something to say and people who can figure out how to evade overzealous (or corrupted) copyright filters just isn't very large. All of this put filters into such bad odor that mention of them was purged from Article 13, but despite obfuscation, it was clear that Article 13's purpose was to mandate filters: there's just no way to imagine that every tweet, Facebook update, message-board comment, social media photo, and other piece of user-generated content could be evaluated for copyright compliance without an automated system. And once you make online forums liable for their users' infringement, they have to find some way to evaluate everything their users post. Just Because Artists Support Media Companies, It Doesn't Mean Media Companies Support Artists Spending hundreds of millions of euros to build filters that don't stop infringers but do improperly censor legitimate materials (whether due to malice, incompetence, or sloppiness) will not put any money in artists' pockets. Which is not to say that these won't tilt the balance towards media companies (at least for a while). Because filters will always fail at least some of the time, and because Article 13 doesn't exempt companies from liability when this happens, Big Tech will have to come to some kind of accommodation with the biggest media companies — Get Out Of Jail cards, along with back-channels that media companies can use to get their own material unstuck when it is mistakenly blocked by a filter. (It’s amazing how often one part of a large media conglomerate will take down its own content, uploaded by another part of the same sprawling giant.) 
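As a brief technical aside on the earlier point about filter evasion: the sketch below is a toy illustration using the Pillow imaging library (the input filename is hypothetical, and real systems like Content ID are far more sophisticated). It fingerprints an image with a simple difference hash; mirroring the picture leaves it looking the same to a person but changes the fingerprint, so a naive match against the claimed work fails.

```python
# Toy demonstration only -- real matching systems are far more sophisticated.
# Requires the Pillow imaging library (pip install Pillow).
from PIL import Image, ImageOps

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """A simple "difference hash": fingerprint an image by comparing
    neighbouring pixel brightness in a tiny grayscale thumbnail."""
    small = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

original = Image.open("claimed_work.jpg")  # hypothetical input file
mirrored = ImageOps.mirror(original)       # flip left-to-right

# To a person these look like the same picture; to a naive filter keyed on
# the fingerprint of the claimed work, the mirrored upload no longer matches.
print(hex(dhash(original)))
print(hex(dhash(mirrored)))
```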
But it's pretty naive to imagine that transferring money from Big Tech to Big Content will enrich artists. Indeed, since there's no way that smaller European tech companies can afford to comply with Article 13, artists will have no alternative but to sign up with the major media companies, even if they don't like the deal they're offered. Smaller companies play an important role today in the EU tech ecosystem. There are national alternatives to Instagram, Google, and Facebook that outperform U.S. Big Tech in their countries of origin. These will not survive contact with Article 13. Article 13's tiny exemptions for smaller tech companies were always mere ornaments, and the latest version of Article 13 renders them useless. Smaller tech companies will also be unable to manage the inevitable flood of claims by copyright trolls and petty grifters who see an opportunity. Smaller media companies — often run by independent artists to market their own creations, or those of a few friends — will likewise find themselves without a seat at the table with Big Tech, whose focus will be entirely on keeping the media giants from using Article 13's provisions to put them out of business altogether. Meanwhile, “filters for everything” will be a bonanza for fraudsters and crooks who prey on artists. Article 13 will force these systems to err on the side of over-blocking potential copyright violations, and that's a godsend for blackmailers, who can use bogus copyright claims to shut down artists' feeds, and demand money to rescind the claims. In theory, artists victimised in this way can try to get the platforms to recognise the scam, but without the shelter of a big media company with its back-channels into the big tech companies, these artists will have to get in line behind millions of other people who have been unjustly filtered to plead their case. If You Think Big Tech Is Bad Now... In the short term, Article 13 tilts the field toward media companies, but that advantage will quickly evaporate. Without the need to buy or crush upstart competitors in Europe, the American tech giants will only grow bigger and harder to tame. Even the aggressive antitrust work of the European Commission will do little to encourage competition if competing against Big Tech requires hundreds of millions for copyright compliance as part of doing business — costs that Big Tech never had to bear while it was growing, and that would have crushed the tech companies before they could grow. Ten years after Article 13 passes, Big Tech will be bigger than ever and more crucial to the operation of media companies. The Big Tech companies will not treat this power as a public trust to be equitably managed for all: they will treat it as a commercial advantage to be exploited in every imaginable way. When the day comes that FIFA or Universal or Sky needs Google or Facebook or Apple much more than the tech companies need the media companies, the tech companies will squeeze, and squeeze, and squeeze. This will, of course, harm the media companies' bottom line. But you know who else it will hurt? Artists. Because media giants, like other companies who have a buyer's market for their raw materials — that is, art and other creative works — do not share their windfalls with their suppliers, but they absolutely expect their suppliers to share their pain. When media companies starve, they take artists with them. When artists have no other option, the media companies squeeze them even harder. What Is To Be Done? 
Neither media giants nor tech giants have artists' interests at heart. Both kinds of company are full of people who care about artists, but institutionally, they act for their shareholders, and every cent they give to an artist is a cent they can't return to those investors. One important check on this dynamic is competition. Antitrust regulators have many tools at their disposal, and those tools have been largely idle for more than a generation. Companies have been allowed to grow by merger, or by acquiring nascent competitors, leaving artists with fewer media companies and fewer tech companies, which means more chokepoints where they are shaken down for their share of the money from their work. Another important mechanism could be genuine copyright reform, such as re-organizing the existing regulatory framework for copyright, or encouraging new revenue-sharing schemes such as voluntary blanket licenses, which could allow artists to opt into a pool of copyrights in exchange for royalties. Any such scheme must be designed to fight historic forms of corruption, such as collecting societies that unfairly share out license payments, or media companies that claim those payments for themselves. That’s the sort of future-proof reform that the Copyright Directive could have explored, before it got hijacked by vested interests. In the absence of these policies, we may end up enriching the media companies, but not the artists whose works they sell. In an unfair marketplace, simply handing more copyrights to artists is like giving your bullied kid extra lunch-money: the bullies will just take the extra money, too, and your kid will still go hungry. Artists Should Be On the Side of Free Expression It's easy to focus on media and art when thinking about Article 13, but that's not where its primary effect will be felt. The platforms that Article 13 targets aren't primarily entertainment systems: they are used for everything, from romance to family life, employment to entertainment, health to leisure, politics and civics, and more besides. Copyright filters will impact all of these activities, because they will all face the same problems of false-positives, censorship, fraud and more. The arts have always championed free expression for all, not just for artists. Big Tech and Big Media already exert enormous control over our public and civic lives. Dialing that control up is bad for all of us, not just those of us in the arts. Artists and audiences share an interest in promoting the fortunes of artists: people don't buy books or music or movies because they want to support media companies; they do it to support creators. As always, the right side for artists to be on is the side of the public: the side of free expression, without corporate gatekeepers of any kind.

Win in Washington State: Judge Strikes Down Unconstitutional ‘Cyberstalking’ Law Chilling Free Speech (Sat, 23 Feb 2019)
Great news out of Washington state: a federal judge has ruled that the First Amendment protects speech on the Internet, even from anonymous speakers, and even if it’s embarrassing. EFF has been fighting this statute for a long time. It’s a prime example of how sloppy approaches to combatting “cyberstalking” can go terribly wrong. As we explained in an amicus brief filed in this case by EFF and the ACLU of Washington, the law could potentially block the routine criticism of politicians and other public figures that is an integral part of our democracy. Online harassment requires careful and sophisticated solutions, but this law instead banned using all “electronic communications” intended to “embarrass” someone that are made anonymously or repeatedly or include an obscenity. It’s easy to think of a host of perfectly reasonable criticisms that could be criminalized by this vague and overbroad law: one politician publishing various lists of questionable decisions made by an election challenger; a series of newspaper editorials arguing that a city official should be scorned because of misconduct; or an activist posting multiple videos of a lawmaker doing something unsavory. This is all valuable speech that is protected by the First Amendment, and no state law should be allowed to undermine these rights. We are pleased that the judge has agreed. Related Cases:  Washington State Cyberstalking Law

Cyber-Mercenary Groups Shouldn't be Trusted in Your Browser or Anywhere Else (Fri, 22 Feb 2019)
DarkMatter, the notorious cyber-mercenary firm based in the United Arab Emirates, is seeking to become approved as a top-level certificate authority in Mozilla’s root certificate program. Giving such a trusted position to this company would be a very bad idea. DarkMatter has a business interest in subverting encryption, and would be able to potentially decrypt any HTTPS traffic it intercepted. One of the things HTTPS is good at is protecting your private communications from snooping governments—and when governments want to snoop, they regularly hire DarkMatter to do their dirty work. Membership in the root certificate program is the way in which Mozilla decides which certificate authorities (CAs) get to have their root certificates trusted in Firefox. Mozilla’s list of trusted root certificates is also used in many other products, including the Linux operating system. Browsers rely on this list of authorities, which are trusted to verify and issue the certificates that allow for secure browsing, using technologies like TLS and HTTPS. Certificate Authorities are the basis of HTTPS, but they are also its greatest weakness. Any of the dozens of certificate authorities trusted by your browser could secretly issue a fraudulent certificate for any website (such as google.com or eff.org). A certificate authority (or other organization, such as a government spy agency) could then use the fraudulent certificate to spy on your communications with that site, even if it is encrypted with HTTPS. Certificate Transparency can mitigate some of the risk by requiring public logging of all issued certificates, but is not a panacea. The companies on your browser’s trusted CA list rarely commit such fraud, since not issuing malicious certificates is the foremost responsibility for a certificate authority. But it can and does still happen. The concern in this case is that DarkMatter has made its business spying on internet communications, hacking dissidents’ iPhones, and other cyber-mercenary work. DarkMatter’s business objectives directly depend on intercepting end-user traffic on behalf of snooping governments. Giving DarkMatter a trusted root certificate would be like letting the proverbial fox guard the henhouse. Currently, the standard for being accepted as a trusted certificate authority in the browser is a technical and bureaucratic one. For example, do the organization's documented practices meet the minimum requirements? Can the organization issue standards-compliant certificates? DarkMatter will likely meet those standards, eventually. But the standards don’t take into account an organization’s history of trying to break encryption, or its conflicts of interest. Other organizations have used this fact to game the system in the past and worm their way into our browsers. In 2009, Mozilla allowed CNNIC, the Chinese state certification authority, into the root CA program, after CNNIC assured Mozilla and the larger community that it would not abuse this power to create fake certificates and break encryption. In 2015 CNNIC was caught in a scandal when an intermediate CA authorized by CNNIC issued illegitimate certificates for several Google-owned domains. Google, Mozilla, and others quickly revoked CNNIC’s authority in their browsers and operating systems after learning about the breach of trust. CNNIC is not the only example of this. 
In 2013 Mozilla considered dropping the Swedish company Teliasonera after accusations that it had helped enable government spying. Teliasonera ultimately did not get dropped, but it continues to have security problems to this day. DarkMatter was already given an "intermediate" certificate by another company, called QuoVadis, now owned by DigiCert. That's bad enough, but the "intermediate" authority at least comes with ostensible oversight by DigiCert. Without that oversight, the situation will be much worse. We would encourage Mozilla and others to revoke even this intermediate certificate, given DarkMatter's known practices subverting internet security. Mozilla and other root certificate database maintainers (Microsoft, Google, and Apple) should not trust DarkMatter as a root certificate authority. To do so would not only give DarkMatter, a company which has repeatedly demonstrated its interest in breaking encryption, enormous power; it would also open the door for other cyber-mercenary groups, such as NSO Group or FinFisher, to worm their way in as well. We encourage everyone concerned about DarkMatter being included in the Mozilla trust database to make your feelings known on Mozilla’s security policy mailing list.
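For the curious, here is a minimal sketch, using only the Python standard library, of how a TLS client validates a site's certificate against the local root store and which authority ends up vouching for it; any CA in that store could, in principle, vouch for any site. The hostname below is just an example.

```python
# A quick way to see which certificate authority is vouching for a site,
# using only the Python standard library. The chain is validated against
# whatever root CAs your operating system (or Mozilla bundle) trusts --
# which is exactly why adding an untrustworthy root is so dangerous.
import socket
import ssl

def show_issuer(hostname: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # uses the system's trusted roots
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the subject and issuer as nested tuples of
    # (key, value) pairs; flatten them into dictionaries for printing.
    subject = dict(item for rdn in cert["subject"] for item in rdn)
    issuer = dict(item for rdn in cert["issuer"] for item in rdn)
    print(f"{hostname}: certificate for {subject.get('commonName')}")
    print(f"  issued by: {issuer.get('organizationName')} "
          f"({issuer.get('commonName')})")

show_issuer("www.eff.org")  # example hostname
```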

What’s the Emergency? Keeping International Requests for Law Enforcement Access Secure and Safe for Internet Users (Thu, 21 Feb 2019)
Law enforcement access to data is in the middle of a profound shake-up across the globe. States are pushing to get quicker, deeper, and more invasive access to personal data stored on the global Internet, and are looking to water down the international safeguards around privacy and due process in the name of “speed” and “modernization.” One part of that push is concentrated on the Council of Europe’s Cybercrime Convention (also known as the “Budapest Convention”) — an international instrument, ratified by the United States and over 60 countries around the world, that spells out the procedures, checks, and balances that law enforcement from one nation needs to comply with when requesting digital data held in another jurisdiction. The Council of Europe’s Cybercrime Convention Committee (T-CY) is currently drafting an update in the form of a second additional protocol to the Convention. There’s lots that could go right, and plenty that could go wrong in the drafting of this new protocol. The slow and sometimes confusing mutual legal assistance treaties (MLAT) process could be reformed to better match the speed of the Internet while still protecting civil liberties and due process. Or it could be an opportunity for over-eager States to create new, unprotected methods for law enforcement to pull data from big tech companies, without oversight, notification or ways for affected users to challenge the process. The latest part of that process is a consultation on the rules governing fast-track emergency access to data in the context of mutual legal assistance. EFF, EDRI, and a number of global civil society organizations have responded. In our submission, we welcome the fact that the definition of emergency includes both the words significant and imminent, in order to limit the use of emergency powers to relevant situations where the emergency is close in time. In this context, safety means a threat that would result in serious bodily harm or injury to a natural person. We believe that emergency MLAs provide a mechanism for countries to quickly obtain data held in foreign countries when necessary to prevent a situation in which there is a significant and imminent risk to the life or safety of any natural person, but they also provide an opportunity to create strong legal safeguards for this process. In our joint statement, we explained that emergency MLA should not be used to prevent risks of destruction, flight, or loss of evidence, nor should it be used to prevent financial or property crimes, since those involve no risk to human life. We also explained that although imminent threats to the life and physical well-being of a person may also implicate threats to property, it is important that emergency powers focus on the protection and preservation of human life. Expanding the definition of emergency to include property risks—as has been suggested in international corridors—would open up emergency procedures to too many requests that are unrelated to significant and imminent risks to the life or safety of any natural person. We believe that accountability mechanisms are necessary to prevent the misuse of emergency procedures. These accountability mechanisms could include penalties for blatant or systemic misuse of emergency procedures by a Party to the Convention. 
To further ensure emergency procedures are not being abused, the requests should always be in writing, and Parties should implement a process that compels Requesting Parties to provide a digital or paper trail of all requests in order to facilitate this audit process. Statistical and qualitative reporting on the volume of emergency requests should be published by both requesting and requested Parties on an annual basis. While this should be the case for all manner of MLA procedures, it is particularly vital for emergency mechanisms given their potential for over-reach. Watching the Second Protocol The T-CY aims to finalize the Second Additional Protocol by December 2019. We, along with 93 civil society organizations from across the globe, have requested meaningful civil society participation in the Council of Europe’s (CoE) negotiations of the draft Second Additional Protocol. Civil society groups should be included throughout the entire process—not just during the Council of Europe’s Octopus Conferences or online consultations. The Council of Europe has said that it’s up to the individual countries to conduct consultations, but we’ve heard little from these States to suggest that they intend to reach out to Internet users and advocacy groups. If your country is on the attendees’ list for the Cybercrime Convention T-CY drafting group, write to your government and ask them about a potential consultation, and let us know what you hear.

The Public Deserves a Return to the 2015 Open Internet Order (Thu, 21 Feb 2019)
Congress is actively debating how to fix the FCC’s repeal of the net neutrality rules. But the first bills offered (H.R. 1101 (Walden), H.R. 1006 (Latta), and H.R. 1096 (McMorris Rodgers)) focus narrowly on the “bright line” rules of no blocking, no throttling, and no paid prioritization. A major problem with this approach is that the public supported the 2015 Open Internet Order and a huge array of parties (with the exception of basically just AT&T, Comcast, and Verizon) supported Title II reclassification because of what else was protected. Privacy, competition, and public safety are all worse off when all you do is ban three basic tactics. Restoring the entirety of the 2015 Open Internet Order means protecting the vital components of keeping the Internet a free and open platform. If Congress decides to act, it should not shortchange the American public. Unfortunately, that appears to be where the House of Representatives is heading right now. The Rules Need To Cover Anticompetitive Zero Rating A straight ban on blocking, throttling and paid prioritization would leave out important limits on the practice of exempting certain traffic from data caps, also known as “zero rating.” This practice can be used to drive Internet users to the ISP’s own content or favored partners, squelching competition. A recent Epicenter.works multi-year study on zero-rating practices in the EU has found that countries that allow zero-rating plans have more expensive wireless services than countries that do not. It also found that when ISPs engage in zero-rating practices, only large companies are able to maintain the market relationships needed to be zero-rated. In addition, we already knew how zero rating can be used in anti-competitive ways and how it discriminates against low-income users, which is why EFF supported California’s ban on most harmful zero-rating practices. Ignoring the harms that anticompetitive zero rating does to net neutrality is essentially just doing the bidding of AT&T, which has regularly leveraged its data caps in an anticompetitive way. It is worth noting that the current administration is concerned that AT&T intends to use tactics like this to privilege Time Warner content over that of competitors. Antitrust law may be unsuited to address this problem. As we see more and more of these kinds of vertical mergers, we need rules on zero rating to protect consumers. While the 2015 Open Internet Order’s “general conduct rule” covering zero rating was too vague, a narrower alternative, like that in California’s net neutrality law, would ensure lower prices and keep ISPs from steering users to privileged websites and services. FCC staff had in fact found AT&T’s and Verizon’s zero rating practices to be in violation of the 2015 Order under the general conduct rule (which is not included in the three bills that have been introduced), but those investigations were terminated by FCC Chairman Ajit Pai before he initiated the process to repeal net neutrality. People Want Their ISP Privacy Rights Back When all broadband access companies were classified under Title II of the Communications Act, Section 222 of the Act gave users a legal right to privacy when we use broadband communications. It also imposed a duty on your ISP to protect your personal information. In light of major wireless and broadband companies’ creation of a black market for bounty hunters (and everyone else) to be able to purchase the physical location of any American, it’s really important to restore these privacy rights. 
Over 90 percent of Americans feel that they have lost control of their personal information when they use the Internet, so restoring ISP privacy rules should be a part of any new legislation. Congress made a huge mistake when it reversed and prohibited the widely supported FCC privacy rules that stemmed from the 2015 Open Internet Order. Congress still appears headed in the wrong direction on consumer privacy when it openly entertains preempting strong state privacy laws (such as Illinois’ BIPA and California’s CCPA) not just on behalf of big ISPs but also at the request of Google and Facebook. But should a new communications law come into focus, reinstating Section 222’s protections would yield a huge benefit to users. ISPs are the only entities that are able to track your entire Internet experience because you have to tell them where you want to go. Virtual private networks (VPNs) offer a partial fix at best. It makes little sense for Congress to ignore consumer privacy laws it already has on the books and not reapply them to broadband access companies once again. We Need More Competition in Broadband Internet, Which the 2015 Open Internet Order Promoted Dozens of small ISPs wrote the Federal Communications Commission (FCC) and asked them not to abandon the Open Internet Order because it provided a clear framework to address anticompetitive conduct by the largest players in the market (AT&T, Comcast, and Verizon). Specifically, being classified as a common carrier under Title II of the Act applied Section 251, which required ISPs to “interconnect” with other broadband providers and related market players in good faith. This prevents large players from leveraging their size to harm the competition. The dispute made most famous by comedian John Oliver was between Comcast and Netflix where Comcast demanded new payments from Netflix simply because they had leverage. Large ISPs regularly misrepresent the cost of providing access to video services from their competitors, but the estimated cost to Comcast was a fraction of a penny per hour of viewing HD video and dropping when they demanded new fees. Other disputes exist that are less in the public eye including two between Comcast and unknown edge providers that came to light in a court filing after the passage of California’s net neutrality law (SB 822). Ultimately what this boils down to is whether interconnection charges become a rent-seeking opportunity for big ISPs as they have in many parts of the world. The other pro-competition outcome of classifying broadband companies under Title II was the application of Section 224 of the Communications Act, otherwise known as “pole attachment rights.” Under the Open Internet Order, anyone selling broadband access was given a legal right to access infrastructure such as the poles outside your home that run wires. Given that the close to 60 to 80 percent of the cost of deploying a network can be attributed to local civil works like digging up the roads, equal access to infrastructure already built helps reduce the cost of market entry. Knowing this cost barrier, it should surprise no one that when an ISP owns the infrastructure it will categorically deny access to competitors much like AT&T did with Google Fiber. Today under the Restoring Internet Freedom Order, only telephone companies (like AT&T and Verizon) and cable television companies (like Comcast) have legal rights to infrastructure. 
New entrants that sell competitive broadband access, like Common Networks of Alameda, are forced to explore more difficult workarounds, such as asking residents to offer a portion of their rooftops.
Public Safety Needs a Referee
Despite the fact that Verizon has admitted it was entirely at fault for throttling California firefighters, and then trying to upsell them, during one of the worst fires in the state’s history, the FCC has done nothing to proactively address the problem. This is despite the problem remaining unresolved in Santa Clara County months after the fact. And that is because, without its Title II authority under Section 201 and Section 202, the FCC can do literally nothing about Verizon’s conduct. Such an outcome raised serious questions at the D.C. Circuit’s oral arguments on the Restoring Internet Freedom Order, as judges openly questioned the FCC’s wisdom in letting first responders navigate this field alone despite the FCC’s legal duty to address public safety. As Santa Clara County’s attorney Danielle Goldstein pointed out during oral arguments, it is not rational to expect public safety entities to come to the FCC after an emergency occurs. Given the life-and-death matters involved, avoiding this issue carries extreme risks of recurrence, not because ISPs are bad actors, but because it is not their job to figure out the balancing act between their for-profit duties and the less profitable needs of public safety. That has always been a government responsibility.
There is more at stake in the battle for net neutrality than preventing ISPs from blocking, throttling, or engaging in paid prioritization. Bills that are limited to those three rules ignore the high-speed cable monopoly problem that tens of millions of Americans face and the way a lack of privacy protections harms broadband adoption. These bills miss the larger impact of the 2015 rules and ask the public, which overwhelmingly opposed the Restoring Internet Freedom Order, to accept only a fraction of its benefits. The public deserves better.

The Worst Possible Version of the EU Copyright Directive Has Sparked a German Uprising (Tue, 19 Feb 2019)
Last week's publication of the final draft of the new EU Copyright Directive baffled and infuriated almost everyone, including the massive entertainment companies that lobbied for it in the first place; the artists' groups who endorsed it only to have their interests stripped out of the final document; and the millions and millions of Europeans who had publicly called on lawmakers to fix grave deficiencies in the earlier drafts, only to find those deficiencies made even worse. Thankfully, Europeans aren't taking this lying down. With the final vote expected to come during the March 25-28 session, mere weeks before European elections, European activists are pouring on the pressure, letting their Members of the European Parliament (MEPs) know that their vote on this dreadful mess will be on everyone's mind during the election campaigns. The epicenter of the uprising is Germany, which is only fitting, given that German MEP Axel Voss is almost singlehandedly responsible for poisoning the Directive with rules that will lead to mass surveillance and mass censorship, not to mention undermining much of Europe's tech sector. The German Consumer Association was swift to condemn the Directive, stating: "The reform of copyright law in this form does not benefit anyone, let alone consumers. Since the outcome of the trilogue falls short of the EU Parliament's positions at key points, MEPs are now obliged to refuse to give their consent." A viral video of Axel Voss being confronted by activists has been picked up by politicians campaigning against Voss's Christian Democratic Party in the upcoming elections and has spread to Germany's top TV personalities, like Jan Böhmermann. Things are just getting started. On Saturday, with just two days of organizing, hundreds of Europeans marched on the streets of Cologne against Article 13. A day of action—March 23, just before the first possible voting date for MEPs—is being planned, with EU-wide events. In the meantime, the petition to save Europe from the Directive—already the largest in EU history—keeps racking up signatures and is on track to become the largest petition in the history of the world.

The Payoff From California’s “Data Dividend” Must Be Stronger Privacy Laws (Sat, 16 Feb 2019)
California Governor Gavin Newsom, in his first State of the State Address, called for a “Data Dividend” (what some are calling a “digital dividend”) from big tech. It’s not yet clear what form this dividend will take. We agree with Governor Newsom that consumers deserve more from companies that profit from their data, and we suggest that any “dividend” should take the form of stronger data privacy laws to protect the people of California from abuse by the corporations that harvest and monetize our personal information. In his February 12 address, Governor Newsom said:
“California is proud to be home to technology companies determined to change the world. But companies that make billions of dollars collecting, curating and monetizing our personal data have a duty to protect it. Consumers have a right to know and control how their data is being used. I applaud this legislature for passing the first-in-the-nation digital privacy law last year. But California’s consumers should also be able to share in the wealth that is created from their data. And so I’ve asked my team to develop a proposal for a new Data Dividend for Californians, because we recognize that your data has value and it belongs to you.”
Strengthen the California Consumer Privacy Act
We agree with Governor Newsom that technology users and other Californians have “a right to know and control how their data is being used.” That’s why California began the process of protecting consumer data privacy last year. Specifically, it enacted the law that Governor Newsom described in his address: the California Consumer Privacy Act (CCPA). The CCPA provides consumers the right to know what personal information companies have collected from them, the right to opt out of the sale of that information, and the right to delete some of that information. EFF and other data privacy advocates will work this year to strengthen the CCPA. For example, California needs a private cause of action to enforce the CCPA, so consumers who suffer violations of their data privacy can hold accountable the corporations that violated their rights. The California Attorney General supports this expansion of CCPA enforcement power. The CCPA also should require opt-in consent before corporations share consumers’ data, and not just opt-out consent from corporations selling their data. Presumptions matter, and corporations may share personal information without selling it. Further, California needs a stronger right to know, including better “data portability,” meaning the right to obtain a machine-readable copy of one’s data. Sadly, some big tech companies will work this year to weaken the CCPA. The privacy movement will resist their efforts. With this legislative storm brewing, we are buoyed by Governor Newsom’s address. It signals his intent to stand up for the data privacy of Californians. We hope he will work with privacy advocates to strengthen the CCPA.
No Pay-For-Privacy
Some observers have speculated that by “Data Dividend,” Governor Newsom means payments by corporations directly to consumers in exchange for their personal information. We hope not. EFF strongly opposes “pay-for-privacy” schemes. Corporations should not be allowed to require a consumer to pay a premium, or waive a discount, in order to stop the corporation from vacuuming up—and profiting from—the consumer’s personal information. It is not a good deal for consumers to get a handful of dollars from companies in exchange for surveillance capitalism remaining unchecked.
Privacy is a fundamental human right. It is guaranteed by the California Constitution. The California Supreme Court has ruled that this constitutional protection “creates a right of action against private as well as government entities.” Pay-for-privacy schemes undermine this fundamental right. They discourage all people from exercising their right to privacy. They also lead to unequal classes of privacy “haves” and “have-nots,” depending upon the income of the user. The good news is that the CCPA contains a non-discrimination rule, which forbids companies from discriminating against a consumer because the consumer exercised one of their CCPA privacy rights. For example, companies cannot deny goods or services, charge different prices, or provide a different level of quality. The bad news is that the CCPA’s non-discrimination clause has two unclear and potentially far-reaching exceptions. This year, privacy advocates will seek to eliminate these exceptions, and some business groups will seek to expand them. We hope Governor Newsom will join us in the fight against pay-for-privacy, and for strong legal protection of consumer data privacy. As the Governor powerfully explained this week: “Consumers have a right to know and control how their data is being used.”

EFF to State Department: Respect Freedom of Speech of Chinese Students (Fri, 15 Feb 2019)
EFF joined a letter to Secretary of State Mike Pompeo opposing a proposal to deploy stronger vetting procedures against Chinese students intending to study in the United States because the procedures would threaten the free speech interests of both Chinese students and their American associates. Reuters reported that the Trump administration is considering “checks of student phone records and scouring of personal accounts on Chinese and U.S. social media platforms for anything that might raise concerns about students’ intentions in the United States, including affiliations with government organizations.” In opposing the vetting proposal, we argued that “[p]rospective students may self-censor whom they talk to or what they say on social media out of fear that political discussion about China or the United States will harm their academic prospects—a result sharply at odds with our national commitment to academic freedom and free expression,” and that “monitoring the phone and social media activity of Chinese students also threatens the free speech rights of their American associates—whether family members, friends, or fellow students.” The State Department’s Chinese student vetting proposal follows U.S. Customs and Border Protection’s new program to ask visa applicants from China for their social media handles, which we similarly opposed. These programs focusing on Chinese visitors are part of a broader, concerning strategy by the Trump administration to engage in social media surveillance of both visitors and immigrants to the United States. We joined the letter to Secretary Pompeo along with the Foundation for Individual Rights in Education (FIRE), PEN America, the National Coalition Against Censorship, and Defending Rights & Dissent.

Oakland Renters Deserve Quality Service and The Power To Choose Their ISP (Fri, 15 Feb 2019)
Oakland residents, we need your stories and experience to continue the fight to stop powerful Internet Service Providers (ISPs) from limiting your ability to choose the service that’s best for you. If you live in Oakland and have had trouble acquiring service from the ISP of your choice, EFF wants to know. For years, renters have been denied access to the Internet Service Provider of their choice as a result of pay-to-play schemes. These schemes, promoted by the corporations in control of the largest Internet Service Providers, allow powerful corporations to manipulate landlords into denying their tenants the ability to choose among providers that share their values or offer plans that best meet their needs and budget. This concern was only exacerbated when the FCC repealed the 2015 Open Internet Order. Chairman Pai and the FCC claimed that net neutrality protections were not necessary, as the free market would prevent exploitative practices by allowing customers to vote with their dollars. But with more than half the country having only one choice of high-speed Internet Service Provider, this illusion of choice has never been based in reality. Even in cities like Oakland where many residents ostensibly have a choice, thousands of renters are denied the power of that option by real estate trusts and management firms that deny access to their properties to any provider other than the one offering the most enticing landlord incentives. In January of 2017, San Francisco adopted critical protections to stop these exploitative practices. As a result, San Francisco residents enjoy better, more affordable options than many of their friends and coworkers in neighboring communities. EFF, local residents, advocacy groups, and businesses have begun working with Oakland lawmakers to make sure that the city’s renters can take advantage of these same protections. If you live in Oakland and have experienced difficulty acquiring Internet service from the provider that’s best for you, your City Council representatives want to know.

Designing Welcome Mats to Invite User Privacy (Thu, 14 Feb 2019)
The way we design user interfaces can have a profound impact on the privacy of a user’s data. It should be easy for users to make choices that protect their data privacy. But all too often, big tech companies instead design their products to manipulate users into surrendering their data privacy. These methods are often called “Dark Patterns.” When you purchase a new phone, tablet, or “smart” device, you expect to have to set it up with the needed credentials for it to be fully usable. For Android devices, you set up your Google account. For iOS devices, you set up your Apple ID. For your Kindle, you set up your Amazon account. Privacy by default should be the goal. However, many platforms pair particularly worrisome practices with this on-boarding process, and those practices stand in the way of that goal.
What are “Dark Patterns”?
Harry Brignull, a UX researcher, coined the term “Dark Patterns.” He maintains a site dedicated to documenting the different types of Dark Patterns, where he explains: “Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn't mean to.” The Norwegian Consumer Council (the Forbrukerrådet or NCC) builds on this critical UX concept in a recent report that criticizes “features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question.” On the heels of this report, the NCC filed a complaint against Google on behalf of a consumer. This complaint argues that Google violated the European Union’s General Data Protection Regulation (GDPR) by tricking the consumer into giving Google access to their location information. Likewise, the French data protection agency (the CNIL) recently ruled that some of Google’s consent and transparency practices violate the GDPR. The CNIL fined Google 50 million Euros (equivalent to about 57 million U.S. dollars). The NCC report emphasizes two important steps in the on-boarding process of Android-based devices: the enabling of Web & App Activity and Location History. These two services encompass a wide variety of information exchanges between different Google applications and services. Examples include collection of real-time location data on Google Maps and audio-based searches and commands via Google Assistant. It is possible to disable these services in the “Activity Controls” section of one’s account. But Google’s on-boarding process causes users to unintentionally opt in to information disclosure, then makes it difficult to undo these so-called “choices” about privacy control, which were not ethically presented in the first place. This creates more work for the consumer, who must retroactively opt out. Of course, Google isn’t alone in using Dark Patterns to coerce users into “consenting” to different permissions. For example, in the image immediately below, Facebook Messenger’s SMS feature presents itself when you first download the application. Giving SMS permission would mean making Facebook Messenger the default texting application for your phone. Note the bright blue “OK”, as opposed to the less prominent “Not Now”.
[Screenshot: Facebook Messenger’s SMS permission prompt on install]
Likewise, in the next image, Venmo’s onboarding encourages users to connect to Facebook and sync the contacts from their phones. Note how “Connect Facebook” is presented as the bolder and more apparent option, potentially cross-sharing robust profiles of information from your Facebook network.
[Screenshot: Venmo’s onboarding screen encouraging users to connect Facebook and sync contacts]
These are classic Dark Patterns, deploying UX design against consumer privacy and in favor of corporate profit.
What is “Opinionated Design”?
The common thread between Opinionated Design and Dark Patterns is the power of the designer behind the technology to nudge the user toward actions the business would like the user to take. Of course, UX design can also guide users to protect their safety. “Opinionated Design” uses the same techniques as Dark Patterns, by means of persuasive visual indicators, bolder options, and compelling wording. For example, the Google Chrome security team used the design principles of “Attractiveness of Choice” and “Choice Visibility” to effectively warn some users about SSL hazards, as discussed in their 2015 report. When the safety of the user is valued by the designer and product team, they can guide the user away from particularly vulnerable situations while browsing, as in the case of Google Chrome’s SSL warnings, where explanations and clear guidance toward safety help prevent abuse of a person navigating the web. These are examples of Opinionated Design:
[Screenshots: Chrome’s “wrong host” and “untrusted root” SSL warning pages]
SSL warnings are presented to the user with brief explanations of why the connection is not safe. Note how “Back to safety” is boldly presented to guide the user back from a potential attack.
Privacy by Default
Part of the solution is new legislation that requires companies to obtain opt-in consent that is easy for users to understand before they harvest and monetize users’ data. To do this, UX design must pivot away from using Dark Patterns to satisfy business metrics. Among other things, it should:
- Decouple the on-boarding process for devices and applications from the consent process.
- Visually display equally weighted options on pages that involve consent to data collection, use, and sharing (see the sketch below).
- Recognize that consumers feel uneasy about privacy, and default to the “no” option during setup.
Coercing “consent” for lucrative data bundling may satisfy a temporary metric, but public distrust of your platform will outweigh any gains from unethical design. We must continue this critical discussion around consent and privacy, and urge product designers and managers to build transparency into their applications and devices. Privacy doesn’t have to be painful and costly if it is integrated at the beginning of UX design rather than stapled on at the end.
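To make the “equally weighted options” and default-to-“no” recommendations concrete, here is a minimal sketch of a consent prompt that follows them, using desktop Tkinter as a stand-in for a mobile onboarding screen. The dialog text, widget layout, and function name are illustrative assumptions, not taken from any real product: both choices get identical size and styling, and the privacy-protective choice is the default.

```python
import tkinter as tk

def ask_location_consent() -> bool:
    """Show a consent prompt with equally weighted options and a privacy-protective default."""
    result = {"allow": False}  # defaults to "no" if the user simply closes the window

    root = tk.Tk()
    root.title("Location History")

    tk.Label(
        root,
        text="Allow this app to save a history of the places you visit?",
        wraplength=320,
        padx=20,
        pady=15,
    ).pack()

    def choose(allow: bool) -> None:
        result["allow"] = allow
        root.destroy()

    buttons = tk.Frame(root)
    buttons.pack(pady=(0, 15))

    # Identical width and styling: neither option is visually privileged.
    allow_btn = tk.Button(buttons, text="Allow", width=12, command=lambda: choose(True))
    deny_btn = tk.Button(buttons, text="Don't allow", width=12, command=lambda: choose(False))
    allow_btn.pack(side=tk.LEFT, padx=8)
    deny_btn.pack(side=tk.LEFT, padx=8)

    deny_btn.focus_set()  # keyboard focus starts on the privacy-protective option
    root.mainloop()
    return result["allow"]

if __name__ == "__main__":
    print("Location history enabled:", ask_location_consent())
```

Contrast this with the dark-pattern version of the same screen: a bright, full-width “Turn on” button, a small gray “more options” link, and collection enabled if the user taps through without reading.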

Powerful Permissions, Wimpy Warnings: Installing a Root Certificate Should be Scary (Thu, 14 Feb 2019)
More lessons from "Facebook Research"
Last week, Facebook was caught using a sketchy market research app to gobble up large amounts of sensitive user activity after instructing users to alter the root certificate store on their phones. A day later, Google pulled a similar iOS “research program” app. Both of these programs are a clear breach of user trust that we have written about extensively. This news also drew attention to an area where both Android and iOS could improve. Asking users to alter root certificate stores gave Facebook the ability to intercept network traffic from users’ phones even when that traffic was encrypted, making users' otherwise secure Internet traffic and communications available to Facebook. How the devices alert users to this possibility—the "UX flow"—on both Android and iOS could be improved dramatically. To be clear, Android and iOS should not ban these capabilities altogether, as Apple has already done for sideloaded applications and VPNs. The ability to alter root certificate stores is valuable to researchers and power users, and should never be locked down for device owners. A root certificate allows researchers to analyze encrypted data that a phone’s applications are sending off to third parties, exposing whether they’re exfiltrating credit-card numbers or health data, or peddling other usage data to advertisers. However, Facebook’s manipulation of regular users into granting this ability for malicious reasons shows the need for clearer UX and more obvious messaging.
Confusing prompts for adding root certificates
When regular users are manipulated into installing a root certificate on their device, it may not be clear that this allows the owner of the root certificate to read any encrypted network traffic. On both iOS and Android, users installing a root certificate click through a process filled with vague jargon. This is the explanation users get, full of inaccessible jargon:
Android: “Note: The issuer of this certificate may inspect all traffic to and from the device.”
iOS: “Installing the certificate “<certificate name>” will add it to the list of trusted certificates on your iPhone. This certificate will not be trusted for websites until you enable it in Certificate Trust Settings.”
[Screenshot: Android’s warning before adding a root certificate is some small red text filled with jargon.]
[Screenshot: iOS’s warning is much larger, but doesn’t explain at all what significance this action may have for a non-technical user.]
Regular users probably don’t know about the X.509 certificate ecosystem, who certificate issuers are, what it means to “trust” a certificate, or its relationship to the encryption of their data. On Android, the warning is vague about who has what capabilities: an “issuer … may … inspect all traffic”. On iOS, there’s no explanation whatsoever, even in the “Certificate Trust Settings,” about why this may be a dangerous action.
Security-compromising actions should have understandable messaging for non-technical users
The good news: it’s possible to get this sort of messaging right.
For instance, these dangers also exist in browsers, where the warnings to users are much clearer. Compare the above messaging flow for trusting root certificates on your phone to the equivalent warnings in browsers when you hit a website with a self-signed or untrusted certificate. Chrome warns in very large letters, “Your connection is not private,” and Firefox similarly announces, “Your connection is not secure.” Chrome’s messaging even lists possible types of sensitive data that may be exfiltrated: “passwords, messages, or credit cards." Changing your browser’s root certificate store then involves multiple steps hidden behind an “Advanced” button.
[Screenshot: Chrome’s warning on a website with a self-signed certificate: “Your connection is not private. Attackers might be trying to steal your information from self-signed.badssl.com (for example, passwords, messages, or credit cards).” The messaging is clear and understandable.]
Another good example comes from Facebook itself: when you open a browser developer console on Facebook’s website, a big red “Stop!” appears to prevent users not familiar with the console from doing something dangerous: “Stop! This is a browser feature intended for developers. If someone told you to copy-paste something here to enable a Facebook feature or "hack" someone's account, it is a scam and will give them access to your Facebook account. See https://www.facebook.com/selfxss for more information.” Here, Facebook goes out of its way to warn users about the dangers of using a feature meant for researchers and developers. Facebook’s “market research” app, Android, and iOS did none of this. The answer should not be to vilify root certificates and their capabilities in general. Tools like these prove invaluable to security researchers and privacy experts. At the same time, they should not be presented to general users without abundantly clear messaging and design to indicate their potential dangers.
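To make the power of a trusted root more concrete, here is a minimal sketch (not from the original warnings or apps discussed above) of how a researcher or curious user can check which certificate authority actually vouches for a site’s certificate. If a device has been talked into trusting someone else’s root certificate and its traffic is being intercepted, the issuer printed here will be that added root rather than the site’s usual certificate authority. The hostname is only an example.

```python
import socket
import ssl

def print_certificate_issuer(hostname: str, port: int = 443) -> None:
    """Connect over TLS and print who issued the certificate the server presented."""
    context = ssl.create_default_context()  # validates against the system's trusted root store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed, validated server certificate
            # 'subject' and 'issuer' are tuples of relative distinguished names.
            subject = dict(pair for rdn in cert["subject"] for pair in rdn)
            issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
            print("subject:", subject.get("commonName"))
            print("issuer :", issuer.get("organizationName"), "/", issuer.get("commonName"))

if __name__ == "__main__":
    # If a TLS-intercepting proxy's root certificate is trusted on this machine,
    # the issuer shown will be that proxy's CA instead of the site's public CA.
    print_certificate_issuer("www.eff.org")
```

An unexpected issuer on a site you know well is exactly the kind of signal that the installation prompts above fail to explain to non-technical users.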