Deeplinks

Tell HUD: Algorithms Shouldn't Be an Excuse to Discriminate (Fri, 18 Oct 2019)
Update 10/18: EFF has submitted its comments to HUD, which you can read here.

The U.S. Department of Housing and Urban Development (HUD) recently released a proposed rule that will have grave consequences for the enforcement of fair housing laws. Under the Fair Housing Act, individuals can bring claims on the basis of a protected characteristic (like race, sex, or disability status) when there is a facially-neutral policy or practice that results in unjustified discriminatory effect, or disparate impact. The proposed rule makes it much harder to bring a disparate impact claim under the Fair Housing Act. Moreover, HUD's rule creates three affirmative defenses for housing providers, banks, and insurance companies that use algorithmic models to make housing decisions. As we've previously explained, these algorithmic defenses demonstrate that HUD doesn't understand how machine learning actually works. This proposed rule could significantly impact housing decisions and make discrimination more prevalent.

We encourage you to submit comments to speak out against HUD's proposed rule. Here's how to do it in three easy steps:

1. Go to the government's comments site and click on "Comment Now."
2. Start with the draft language below regarding EFF's concerns with HUD's proposed rule. We encourage you to tailor the comments to reflect your specific concerns. Adapting the language increases the chances that HUD will count your comment as a "unique" submission, which is important because HUD is required to read and respond to unique comments.
3. Hit "Submit Comment" and feel good about doing your part to protect the civil rights of vulnerable communities and to educate the government about how technology actually works!

Comments are due by Friday, October 18, 2019 at 11:59 PM ET.

To Whom It May Concern:

I write to oppose HUD's proposed rule, which would change the disparate impact standard for the agency's enforcement of the Fair Housing Act. The proposed rule would set up a burden-shifting framework that would make it nearly impossible for a plaintiff to allege a claim of unjustified discriminatory effect. Moreover, the proposed rule offers a safe harbor for defendants who rely on algorithmic models to make housing decisions. HUD's approach is unscientific and fails to understand how machine learning actually works.

HUD's proposed rule offers three complete algorithmic defenses if: (1) the inputs used in the algorithmic model are not themselves "substitutes or close proxies" for protected characteristics and the model is predictive of risk or other valid objective; (2) a third party creates or manages the algorithmic model; or (3) a neutral third party examines the model and determines the model's inputs are not close proxies for protected characteristics and the model is predictive of risk or other valid objective.

In the first and third defenses, HUD indicates that as long as a model's inputs are not discriminatory, the overall model cannot be discriminatory. However, the whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own. These combinations of different variables could be close proxies for protected classes, even if the original input variables are not. Apart from combinations of inputs, other factors, such as how an AI has been trained, can also lead to a model having a discriminatory effect, which HUD does not account for in its proposed rule.
The second defense will shield housing providers, mortgage lenders, and insurance companies that rely on a third party’s algorithmic model, which will be the case for most defendants. This defense gets rid of any incentive for defendants not to use models that result in discriminatory effect or to pressure model makers to ensure their algorithmic models avoid discriminatory outcomes. Moreover, it is unclear whether a plaintiff could actually get relief by going after a model maker, a distant and possibly unknown third party, rather than a direct defendant like a housing provider. Accordingly, this defense could allow discriminatory effects to continue without recourse. Even if a plaintiff can sue a third-party creator, trade secrets law could prevent the public from finding out about the discriminatory impact of the algorithmic model. HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that is exactly what the proposed rule will do. Today, a defendant’s use of an algorithmic model in a disparate impact case is considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That is exactly how it should work. I respectfully urge HUD to rescind its proposed rule and continue to use its current disparate impact standard.
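To see concretely why the "clean inputs" defenses miss the point, here is a minimal, hypothetical sketch. The feature names, data, and model below are invented for illustration and are not drawn from any actual housing or underwriting model; the point is only that neither synthetic input correlates with the protected attribute on its own, yet a small model trained on the combination of the two recovers it almost perfectly.

```python
# Hypothetical illustration only: synthetic data and invented feature names.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 10_000

# Two facially "neutral" inputs, e.g. a coarse location code and an income band.
location = rng.integers(0, 2, size=n)
income_band = rng.integers(0, 2, size=n)

# Synthetic protected attribute constructed to track the XOR of the two inputs.
protected = location ^ income_band

# Each input alone is essentially uncorrelated with the protected attribute (~0.0).
print(np.corrcoef(location, protected)[0, 1])
print(np.corrcoef(income_band, protected)[0, 1])

# But a shallow model trained on the *combination* reconstructs it (~1.0 accuracy),
# so the model's output can act as a close proxy even though no single input does.
X = np.column_stack([location, income_band])
model = DecisionTreeClassifier(max_depth=2).fit(X, protected)
print(model.score(X, protected))
```

Checking inputs one at a time, as the first and third defenses contemplate, says nothing about what the model as a whole encodes.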

Massachusetts: Tell Your Lawmakers to Press Pause on Government Face Surveillance (Thu, 17 Oct 2019)
Face surveillance by government poses a threat to our privacy, chills protest in public places, and amplifies historical biases in our criminal justice system. Massachusetts has the opportunity to become the first state to stop government use of this troubling technology, from Provincetown to Pittsfield. Massachusetts residents: tell your legislature to press pause on government use of face surveillance throughout the Commonwealth. Massachusetts bills S.1385 and H.1538 would place a moratorium on government use of the technology, and your lawmakers need to hear from you ahead of an Oct. 22 hearing on these bills.

TAKE ACTION: Pause Government Face Surveillance in Massachusetts

Concern over government face surveillance in our communities is widespread. Polling from the ACLU of Massachusetts has found that more than three-quarters of respondents (79 percent) support a statewide moratorium. The city council of Somerville, Massachusetts voted unanimously in July to ban government face surveillance altogether, becoming the first community on the East Coast to do so. The town of Brookline, Massachusetts is currently considering a ban of its own. In California, the cities of San Francisco, Oakland—and just this week—Berkeley have passed bans as well. EFF has advocated for governments to stop use of face surveillance in our communities immediately, particularly in light of what researchers at MIT's Media Lab and others have found about its high error rates—especially for women and people of color. Even if it were possible to lessen these misidentification risks, however, government use of face recognition technology still poses grave threats to safety and privacy. Regardless of our race or gender, law enforcement use of face recognition technology poses a profound threat to personal privacy, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly documented and analyzed. Tell your lawmakers to support these bills and make sure that the people of Massachusetts have the opportunity to evaluate the consequences of using this technology before this type of mass surveillance becomes the norm in your communities.

Why Fiber is Vastly Superior to Cable and 5G (Thu, 17 Oct 2019)
The United States, its states, and its local governments are in dire need of universal fiber plans. Major telecom carriers such as AT&T and Verizon have discontinued their fiber-to-the-home efforts, leaving most people facing expensive cable monopolies for the future. While much of the Internet infrastructure has already transitioned to fiber, a supermajority of households and businesses across the country still have slow and outdated connections. Transitioning the "last mile" into fiber will require a massive effort from industry and government—an effort the rest of the world has already started. Unfortunately, arguments by the U.S. telecommunications industry that 5G or currently existing DOCSIS cable infrastructure are more than up to the task of substituting for fiber have confused lawmakers, reporters, and regulators into believing we do not have a problem. In response, EFF has recently completed extensive research into the currently existing options for last mile broadband and lays out what the objective technical facts demonstrate. By every measurement, fiber connections to homes and businesses are, by far, the superior choice for the 21st century. It is not even close.

The Speed Chasm Between Fiber and Other Options

As a baseline, there is a divide between "wireline" internet (like cable and fiber) and "wireless" internet (like 5G). Cable systems can already deliver better service to most homes and businesses than 5G wireless deployments because the wireline service can carry signals farther with less interference than radio waves in the air. We've written about the difference between wireless and wireline internet technologies in the past. While 5G is a major improvement over previous generations of wireless broadband, cable internet will remain the better option for the vast majority of households in terms of both reliability and raw speed. Gigabit and faster wireless networks have to rely on high frequency spectrum in order to have sufficient bandwidth to deliver those speeds. But the faster the speed, and the higher the frequency, the more environmental factors such as the weather or physical obstructions interfere with the transmission. Gigabit 5G uses "millimeter wave" frequencies, which can't travel through doors or walls. In essence, the real world environment adds so much friction to wireless transmissions at high speeds that any contention that it can replace wireline fiber or cable internet—which contend with few of those barriers due to insulated wires—is suspect. Meanwhile, fiber systems have at least a 10,000 (yes, ten...thousand) fold advantage over cable systems in terms of raw bandwidth. This translates into a massive advantage for data capacity, and it's why scientists have been able to squeeze more than 100 terabits per second (100,000 Gb/s) down a single fiber. The most advanced cable technology has achieved max speeds of around 10 Gb/s in a lab. Cable has not, and will not, come close to fiber. As we explain in our whitepaper, fiber also has significantly less latency, fewer problems with dropped packets, and will be easier to upgrade in the future.

Incumbents Favor the Status Quo Because It's Expensive for You and Profitable for Them

The American story of broadband deployment is a tragic one where your income level determines whether you have competition and affordable access. In the absence of national coverage policies, low-income Americans and rural Americans have been left behind.
This stands to get worse absent a fundamental commitment to fiber for everyone. Our current situation and outlook for the future did not happen in a vacuum—policy decisions made more than a decade ago, at the advent of fiber deployment in the United States, have proven to be complete failures when it comes to universal access. EFF's review of the history of those decisions in the early 2000s has shown that none of the rationales have been justified by what followed. But it doesn't have to be like this. There is absolutely no good reason we have to accept the current situation as the future. A fundamental refocus on competition, universality, and affordability by local, state, and federal governments is essential to get our house back in order. Policymakers doing anything short of that are effectively concluding that having slower, more expensive cable as your only choice for the gigabit future is an acceptable outcome.
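To put the capacity figures cited above in perspective, here is a rough back-of-the-envelope sketch. The link rates are the ones mentioned in this post (a roughly 10 Gb/s cable lab maximum versus more than 100 terabits per second over a single fiber); the 50 GB file size is an arbitrary example chosen only for illustration, and real-world throughput would of course be lower than these raw rates.

```python
# Back-of-the-envelope transfer times at the raw link rates cited above.
# Idealized numbers: no protocol overhead, no shared capacity.
# The 50 GB file size is an arbitrary example for illustration.

FILE_SIZE_GB = 50
file_size_gigabits = FILE_SIZE_GB * 8  # 1 byte = 8 bits

link_speeds_gbps = {
    "100 Mb/s cable plan": 0.1,
    "1 Gb/s fiber plan": 1,
    "~10 Gb/s cable lab maximum": 10,
    "~100 Tb/s single-fiber lab record": 100_000,
}

for name, gbps in link_speeds_gbps.items():
    seconds = file_size_gigabits / gbps  # ideal transfer time in seconds
    print(f"{name}: {seconds:,.3f} seconds")
```

At the cited single-fiber record, the same transfer that takes minutes over today's plans completes in a few milliseconds, which is the scale of the gap the policy debate is about.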

EFF Urges Congress Not to Dismantle Section 230 (Wed, 16 Oct 2019)
The Keys to a Healthy Internet Are User Empowerment and Competition, Not Censorship

The House Energy and Commerce Committee held a legislative hearing today over what to do with one of the most important Internet laws, Section 230. Members of Congress and the testifying panelists discussed many of the critical issues facing online activity, like how Internet companies moderate their users' speech, how Internet companies and law enforcement agencies are addressing online criminal activity, and how the law impacts competition. EFF Legal Director Corynne McSherry testified at the hearing, offering a strong defense of the law that's helped create the Internet we all rely on today. In her opening statement, McSherry urged Congress not to take Section 230's role in building the modern Internet lightly:

We all want an Internet where we are free to meet, create, organize, share, debate, and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation and for women and marginalized communities to be able to speak openly about their experiences. Chipping away at the legal foundations of the Internet in order to pressure platforms to play the role of Internet police is not the way to accomplish those goals.

[Embedded video: Corynne McSherry on Section 230, via c-span.org: https://www.c-span.org/video/standalone/?c4822786/corynne-mcsherry-section-230]

Recognizing the gravity of the challenges presented, Ranking Member Cathy McMorris Rodgers (R-WA) aptly stated: "I want to be very clear: I'm not for gutting Section 230. It's essential for consumers and entities in the Internet ecosystem. Misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and the ability for small businesses to provide new and innovative services."

We agree. Any change to Section 230 risks upsetting the balance Congress struck decades ago that created the Internet as it exists today. It protects users and Internet companies big and small, and leaves open the door to future innovation. As Congress continues to debate Section 230, here are some suggestions and concerns we have for lawmakers willing to grapple with the complexities and get this right.

Facing Illegal Activity Online: Focus on the Perpetrators

Much of the hearing focused on illegal speech and activity online. Representatives and panelists mentioned examples like illegal drug sales, wildlife sales, and fraud. But there's an important distinction to make between holding Internet intermediaries, such as social media companies and classified ads sites, liable for what their users say or do online, and holding users themselves accountable for their behavior. Section 230 has always had a federal criminal law carve-out. This means that truly culpable online platforms can already be prosecuted in federal court, alongside their users, for illegal speech and activity. For example, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people's ads for illegal drugs. But EFF does not believe prosecuting Internet intermediaries is the best answer to the problems we find online.
Rather, both federal and state government entities should allocate sufficient resources to target the direct perpetrators of illegal online behavior; that is, the users themselves who take advantage of open platforms to violate the law. Section 230 does not provide an impediment to going after these bad actors. McSherry pointed this out in her written testimony: "In the infamous Grindr case... the abuser was arrested two years ago under criminal charges of stalking, criminal impersonation, making a false police report, and disobeying a court order." Weakening Section 230 protections in order to expand the liability of online platforms for what their users say or do would incentivize companies to over-censor user speech in an effort to limit the companies' legal exposure. Not only would this be harmful for legitimate user speech, it would also detract from law enforcement efforts to target the direct perpetrators of illegal behavior. As McSherry noted regarding the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA):

At this committee's hearing on November 30, 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. One year later, there is anecdotal evidence that FOSTA has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

Speech Moderation: User Choice and Empowerment

In her testimony, McSherry stressed that the Internet is a better place for online community when numerous platforms are available with a multitude of moderation philosophies. Section 230 has contributed to this environment by giving platforms the freedom to moderate speech the way they see fit. The freedom that Section 230 afforded to Internet startups to choose their own moderation strategies has led to a multiplicity of options for users—some more restrictive and sanitized, some more laissez-faire. That mix of moderation philosophies contributes to a healthy environment for free expression and association online. Reddit's Steve Huffman echoed McSherry's defense of Section 230 (PDF), noting that its protections have enabled the company to improve on its moderation practices over the years. He explained that the company's speech moderation philosophy is one that prioritizes users making decisions about how they'd like to govern themselves:

The way Reddit handles content moderation today is unique in the industry. We use a governance model akin to our own democracy—where everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works.

In an environment where platforms have their own approaches to content moderation, users have the ultimate power to decide which ones to use. McSherry noted in her testimony that while Grindr was not held liable for the actions of one user, that doesn't mean that Grindr didn't suffer. Grindr lost users, as they moved to other dating platforms. One reason why it's essential that Congress protect Section 230 is to preserve the multitude of platform options. Later in the hearing, Rep.
Darren Soto (D-FL) asked each of the panelists who should be "the cop on the beat" in patrolling online speech. McSherry reiterated that users themselves should be empowered to decide what material they see online: "A cardinal principle for us at EFF is that at the end of the day, users should be able to control their Internet experience, and we need to have many more tools to make that possible." If some critics of Section 230 get their way, users won't have that power. Prof. Danielle Citron offered a proposal (PDF) that Congress implement a "duty of care" regimen, where platforms would be required to show that they're meeting a legal "reasonableness" standard in their moderation practices in order to keep their Section 230 protection. She proposes that courts look at what platforms are doing generally to moderate content and whether their policies are reasonable, rather than what a company did with respect to a particular piece of user content. But inviting courts to determine what moderation practices are best would effectively do away with Section 230's protections, disempowering users in the process. In McSherry's words, "As a litigator, [a reasonableness standard] is terrifying. That means a lot of litigation risk, as courts try to figure out what counts as reasonable."

Robots Won't Fix It

There was plenty of agreement that current moderation was flawed, but much disagreement about why it was flawed. Subject-matter experts on the panel frequently described areas of moderation that were not in their purview as working perfectly fine, and questioned why those techniques could not be applied to other areas. In one disorienting moment, Gretchen Peters of the Alliance to Counter Crime Online asked the congressional committee when they'd last seen a "dick pic" on Facebook, and took their silence as an indication that Facebook had solved the dick pic problem. She then suggested Facebook could move on to scanning for other criminality. Professor Hany Farid, an expert in at-scale, resilient hashing of child exploitative imagery, wondered why the tech companies could not create digital fingerprinting solutions for opioid sales. Many cited Big Tech's work to automatically remove what they believe to be copyright-infringing material as a potential model for other areas—perhaps unaware that the continuing failure of copyright bots is one of the few areas where EFF and the entertainment industry agree (though we think they take down too much entirely lawful material, and Hollywood thinks they're not draconian enough). The truth is that the deeper you look at current moderation—and listen carefully to those directly silenced by algorithmic solutions—the more you understand that robots won't fix it. Robots are still terrible at understanding context, which has resulted in everything from Tumblr flagging pictures of bowls of fruit as "adult content" to YouTube removing possible evidence of war crimes because it categorized the videos as "terrorist content." Representative Lisa Blunt Rochester (D-DE) pointed out the consequences of having algorithms police speech: "Groups already facing prejudice and discrimination will be further marginalized and censored." A lot of the demand for Big Tech to do more moderation is predicated on the idea that they're good at it, with their magical tech tools.
As our own testimony and long experience point out—they're really not, with bots or without. Could they do better? Perhaps, but as Reddit's Huffman noted, doing so means that the tech companies need to be able to innovate without having those attempts result in a hail of lawsuits. That is, he said, "exactly the sort of ability that 230 gives us."

Reforming 230 with Big Tech as the Focus Would Harm Small Internet Companies

Critics of 230 often fail to acknowledge that many of the solutions they seek are not within reach of startups and smaller companies. Techniques like preemptive blocking of content, persistent policing of user posts, and mechanisms that analyze speech in real time to see what needs to be censored are extremely expensive. That means that controlling what users do, at scale, will only be doable by Big Tech. It's not only cost-prohibitive; it also carries a high cost of liability if they get it wrong. For example, Google's ContentID is often held up in the copyright context as a means of enforcement, but it required a $100 million investment by Google to develop and deploy—and it still does a bad job. Google's Katherine Oyama testified that Google already employs around 10,000 people who work on content moderation—a bar that no startup could meet—but even that appears insufficient to some critics. By comparison, a website like Wikipedia, which is the largest repository of information in human history, employs just about 350 staff for its entire operation, and is heavily reliant on volunteers. A set of rules that would require a Google-sized company to expend even more resources means that only the most well-funded firms could maintain global platforms. A minimally-staffed nonprofit like Wikipedia could not continue to operate as it does today. The Internet would become more concentrated, and further removed from the promise of a network that empowers everyone. As Congress continues to examine the problems facing the Internet today, we hope lawmakers remember the role that Section 230 plays in defending the Internet's status as a place for free speech and community online. We fear that undermining Section 230 would harden today's largest tech companies against future competition. Most importantly, we hope lawmakers listen to the voices of the people they risk pushing offline. Read McSherry's full written testimony.

Victory! Berkeley City Council Unanimously Votes to Ban Face Recognition (Wed, 16 Oct 2019)
Berkeley has become the third city in California and the fourth city in the United States to ban the use of face recognition technology by the government. After an outpouring of support from the community, the Berkeley City Council voted unanimously to adopt the ordinance introduced by Councilmember Kate Harrison earlier this year. Berkeley joins other Bay Area cities, including San Francisco and Oakland, which also banned government use of face recognition. In July 2019, Somerville, Massachusetts became the first city on the East Coast to ban the government’s use of face recognition. The passage of the ordinance also follows the signing of A.B. 1215, a California state law that places a three-year moratorium on police use of face recognition on body-worn cameras, beginning on January 1, 2020. As EFF’s Associate Director of Community Organizing Nathan Sheard told the California Assembly, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.” Over the last several years, EFF has continually voiced concerns over the First and Fourth Amendment implications of government use of face surveillance. These concerns are exacerbated by research conducted by MIT’s Media Lab regarding the technology’s high error rates for women and people of color. However, even if manufacturers are successful in addressing the technology’s substantially higher error rates for already marginalized communities, government use of face recognition technology will still threaten safety and privacy, chill free speech, and amplify historical and ongoing discrimination in our criminal justice system. Berkeley’s ban on face recognition is an important step toward curtailing the government’s use of biometric surveillance. Congratulations to the community that stood up in opposition to this invasive and flawed technology and to the city council members who listened.

¿Quién Defiende Tus Datos?: Four Years Setting The Bar for Privacy Protections in Latin America and Spain (Wed, 16 Oct 2019)
Four years have passed since our partners first published Who Defends Your Data (¿Quién Defiende Tus Datos?), a report that holds ISPs accountable for their privacy policies and processes in eight Latin American countries and Spain. Since then, we've seen major technology companies providing more transparency about how and when they divulge their users' data to the government. This shift has been fueled in large part by public attention in local media. The project started in 2015 in Colombia, Mexico, and Peru, joined by Brazil in 2016, Chile and Paraguay in 2017, Argentina and Spain in 2018, and Panama this year. When we started in 2015, none of the ISPs in the three countries surveyed had published transparency reports or any aggregate data about the number of data requests they received from governments. By 2019, the larger global companies with a regional presence in the nine countries surveyed are now doing this. This is a big victory for transparency, accountability, and users' rights. Telefónica (Movistar/Vivo), a global company with a local presence in Spain and in 15 countries in Latin America, has been leading the way in the region, closely followed by Millicom (Tigo) with offices in seven countries in South and Central America. Far behind is Claro (America Movil) with offices in 16 countries in the region. Surprisingly, in one country, Chile, the small ISP WOM! has also stood out for its excellent transparency reporting. Telefónica publishes transparency reports in each of the countries we surveyed, while Millicom (Tigo) publishes transparency reports with data aggregated per specific region. In South America, Millicom (Tigo) publishes aggregate data for Bolivia, Colombia, and Paraguay. In 2018, Millicom (Tigo) also published a comprehensive transparency report for Colombia only. While Claro (America Movil) operates in 16 countries in the region, it has only published a transparency report in one of the countries we surveyed, Chile. Chilean ISPs such as WOM!, VTR, and Entel have all also published their own transparency reports. In Brazil, however, Telefónica (Vivo) is the only Brazilian company that has published a transparency report. All of the reports still have plenty of room for improvement. The level of information disclosed varies significantly company-by-company, and even country-by-country. Telefónica usually discloses a separate aggregate number for different types of government requests—such as wiretapping, metadata, service suspension, content blocking and filtering—in their transparency report. But for Argentina, Telefónica only provides a single aggregate figure that covers every kind of request. And in Brazil, for example, Telefónica Brazil has not published the number of government requests it accepts or rejects, although it has published that information in other countries. Companies have also adopted other voluntary standards in the region, like publishing their law enforcement guidelines for government data demands. For example, Telefónica provides an overview of the company's global procedure when dealing with government data requests. But four other companies operating in Chile, including the small ISP WOM! and Entel, the largest national telecom company, publish more precise guidelines adapted to that country's legal framework.

A Breakdown by Country

Colombia and Paraguay

In 2015, the ¿Quién Defiende Tus Datos? project showed that keeping the pressure on—and having an open dialogue with—companies pay off.
In Colombia, Fundación Karisma's 2015 report investigated five local ISPs and found that none published transparency reports on government blocking requests or data demands. By 2018, five of seven companies had published annual transparency reports on data requests, with four providing information on government blocking requests. Millicom's Transparency Report stood out by clarifying the rules for government access to data in Colombia and Paraguay. Both countries have adopted draconian laws that compel Internet Service Providers to grant direct access to their mobile network to authorities. In Colombia, the law establishes hefty fines if ISPs monitor interception taking place in their systems. This is why tech companies claim they do not possess information about how often and for what periods communications interception is carried out in their mobile networks. In this scenario, transparency reports become irrelevant. Conversely, in Paraguay, ISPs can view the judicial order requesting the interception, and the telecom company is aware when interception occurs in their system, and could potentially publish aggregate data about the number of data requests.

Brazil and Chile

InternetLab's report shows progress in companies' commitment to judicially challenge abusive law enforcement data requests or fight back against legislation that harms users' privacy. In 2016, four of six companies took this kind of action. For example, the mobile companies featured in the research are part of an association that challenged before the Brazilian Supreme Court a law that allows law enforcement agents to access users' data without a warrant in case of human trafficking (Law 13.344/2016). The case is still open. Claro has also judicially challenged a direct request by the police to access subscriber data. This number remained high in 2018 when five out of eight ISPs fought against unconstitutional laws, two of which also challenged disproportionate measures. In contrast, ISPs in Chile have been hesitant to challenge illegal and excessive requests. Derechos Digitales' 2019 report indicates that many ISPs are still failing to confront such requests in the courts on behalf of their users—except one. Entel got top marks because it was the only ISP to refuse the government requests for an individual's data, out of the several ISPs contacted for the same information. Chilean ISPs WOM!, VTR, Claro, and Entel also make clear in their law enforcement guidelines the need for a prior judicial order before handing content and metadata over to authorities. In Derechos Digitales' 2019 report, these four companies, out of the six featured in the research, published law enforcement guidelines. None of these companies took these steps in 2017, the project's first year of operation in Chile. An even more significant achievement can be seen in user notification. ISPs in the region have always been reluctant to lay out a proper procedure for alerting users of government data requests, which was reflected in Chile's 2017 report. In the latest edition, however, WOM!, VTR, and Claro in Chile explicitly commit to user notification in their policies.

Peru

In Peru, three of five companies didn't publish privacy policies in 2015. By 2019, only one failed to provide details on the collection, use, and processing of their users' personal data. Hiperderecho's 2019 report also shows progress in companies' commitment to demand judicial orders to hand over users' data. Bitel and Claro explicitly demand warrants when the request is for content.
Telefónica (Movistar) stands out by requiring a warrant for both content and metadata. In 2015, only Movistar demanded a warrant for the content of the communication.

The Way Forward

Despite the progress seen in Brazil, Colombia, Chile, and Peru, there's still a lot to be done in those countries. We also need to wait for upcoming evaluations for Argentina, Panama, Paraguay, and Spain, which were only recently included in the project. But overall, too many telecom companies—whether large or small, global or local—still don't publish law enforcement guidelines or have not established proper procedures and legal obligations. Those guidelines should be based upon the national legal framework and the countries' international human rights commitments for the government to obtain users' information. Companies in the region equally fall short on committing to request a judicial order before handing over metadata to authorities. Finally, ISPs in the region are still wary of notifying users when governments make requests for user information. This is crucial for ensuring users' ability to challenge the request and to seek remedies when it's unlawful or disproportionate. The same fear keeps many ISPs from publicly defending their users in court and in Congress. For more information, see https://www.eff.org/qdtd and the relevant media coverage about our partners' reports in Colombia, Paraguay, Brazil, Peru, Argentina, Spain, Chile, Mexico, and Panama.

EFF Defends Section 230 in Congress (Wed, 16 Oct 2019)
Watch EFF Legal Director Corynne McSherry Defend the Essential Law Protecting Internet Speech

All of us have benefited from Section 230, a federal law that has promoted the creation of virtually every open platform or communication tool on the Internet. The law's premise is simple. If you are not the original creator of speech found on the Internet, you are not held liable if it does harm. But this simple premise is under attack in Congress. If some lawmakers get their way, the Internet could become a more restrictive space very soon. EFF Legal Director Corynne McSherry will testify in support of Section 230 today in a House Energy and Commerce Committee hearing called "Fostering a Healthier Internet to Protect Consumers." You can watch the hearing live on YouTube and follow along with our commentary @EFFLive.

[Embedded video: live stream of the hearing, via youtube.com: https://www.youtube.com/embed/DaACbUEenZo]

In McSherry's written testimony, she lays out the case for why a strong Section 230 is essential to online community, innovation, and free expression:

Section 230 has ushered in a new era of community and connection on the Internet. People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements—#MeToo, #WomensMarch, #BlackLivesMatter—are universally identified by hashtags.

McSherry also cautions Congress to consider the unintended consequences of forcing online platforms to over-censor their users. When platforms take on overly restrictive and non-transparent moderation processes, marginalized people are often silenced disproportionately. Without Section 230—or with a weakened Section 230—online platforms would have to exercise extreme caution in their moderation decisions in order to limit their own liability. A platform with a large number of users can't remove all unlawful speech while keeping everything else intact. Therefore, undermining Section 230 effectively forces platforms to put their thumbs on the scale—that is, to remove far more speech than only what is actually unlawful, censoring innocent people and often important speech in the process. Finally, Corynne urges Congress to consider the unintended consequences of last year's Internet censorship bill FOSTA before it further undermines Section 230. FOSTA teaches that Congress should carefully consider the unintended consequences of this type of legislation, recognizing that any law that puts the onus on online platforms to discern and remove illegal posts will result in over-censorship. Most importantly, it should listen to the voices most likely to be taken offline. Read McSherry's full testimony.

Congressional Hearing Wednesday: EFF Will Urge Lawmakers to Protect Important Internet Free Speech Law (Tue, 15 Oct 2019)
EFF Legal Director to Testify about How Consumers Benefit from CDA 230

Washington, D.C.—On Wednesday, Oct. 16, Electronic Frontier Foundation (EFF) Legal Director Corynne McSherry will testify at a congressional hearing in support of Section 230 of the Communications Decency Act (CDA)—one of the most important laws protecting Internet speech. CDA 230 shields online platforms from liability for content posted by users, meaning websites and online services can't be punished in court for things that their users say online. McSherry will tell lawmakers that the law protects a broad swath of online speech, from forums for neighborhood groups and local newspapers, to ordinary email practices like forwarding and websites where people discuss their views about politics, religion, and elections. The law has played a vital role in providing a voice to those who previously lacked one, enabling marginalized groups to get their messages out to the whole world. At the same time, CDA 230 allows providers of all sizes to make choices about how to design and moderate their platforms. McSherry will tell lawmakers that weakening CDA 230 will encourage private censorship of valuable content and cement the dominance of those tech giants that can afford to shoulder new regulatory burdens.

McSherry is one of six witnesses who will testify at the House Committee on Energy and Commerce hearing on Wednesday, entitled "Fostering a Healthier Internet to Protect Consumers." Other witnesses include law professor Danielle Citron and representatives from YouTube and Reddit.

WHAT: House Committee on Energy and Commerce hearing, "Fostering a Healthier Internet to Protect Consumers"
WHO: EFF Legal Director Corynne McSherry
WHEN: Wednesday, Oct. 16, 10 am
WHERE: 2123 Rayburn House Office Building, John D. Dingell Room, 45 Independence Ave SW, Washington, DC 20515

For more on Section 230: https://www.eff.org/document/section-230-not-broken

Contact:
Corynne McSherry, Legal Director, corynne@eff.org
India McKinney, Director of Federal Affairs, india@eff.org

Hearing Thursday: EFF’s Rainey Reitman Will Urge California Lawmakers to Balance Needs of Consumers In Developing Cryptocurrency Regulations (Tue, 15 Oct 2019)
Consumer Protection and Choice Should be Paramount

Whittier, California—On Thursday, Oct. 17, at 10 am, EFF Chief Program Officer Rainey Reitman will urge California lawmakers to prioritize consumer choice and privacy in developing cryptocurrency regulations. Reitman will testify at a hearing convened by the California Assembly Committee on Banking and Finance. The session, Virtual Currency Businesses: The Market and Regulatory Issues, will explore the business, consumer, and regulatory issues in the cryptocurrency market.

EFF supports regulators stepping in to hold accountable those engaging in fraud, theft, and other misleading cryptocurrency business practices. But EFF has been skeptical of many regulatory proposals that are vague; designed for only one type of technology; could dissuade future privacy-enhancing innovation; or that might entrench existing players to the detriment of upstart innovators. Reitman will tell lawmakers that cryptocurrency regulations should protect consumers but not chill future technological innovations that will benefit them.

WHAT: Informational Hearing of the California Assembly Committee on Banking and Finance
WHO: EFF Chief Program Officer Rainey Reitman
WHEN: Thursday, October 17, 10 am
WHERE: Rio Hondo Community College, Campus Inn, 600 Workman Mill Rd., Whittier, California 90601

For more about blockchain: https://www.eff.org/issues/blockchain

For more about EFF's cryptocurrency activism:
https://www.eff.org/document/facebook-libra-and-blockchain
https://www.eff.org/deeplinks/2019/05/why-bill-banning-cryptocurrency-purchases-americans-terrible-idea

Contact:
Rainey Reitman, Chief Program Officer, rainey@eff.org

Today: Tell Congress Not to Pass Another Bad Copyright Law (Tue, 15 Oct 2019)
Today, Congress is back in session after a two-week break. Now that they're back, we're asking you to take a few minutes to call and tell them not to pass the Copyright Alternative in Small-Claims Enforcement (CASE) Act. The CASE Act would create an obscure board inside the U.S. Copyright Office which would be empowered to levy huge penalties against people accused of copyright infringement. It could have devastating effects on regular Internet users and little-to-no effect on true infringers. We know the CASE Act won't work because we've seen similar "solutions" fail before.

TAKE ACTION: Tell Congress not to bankrupt users for regular Internet activity

The CASE Act is supposed to help artists by making it easy to make copyright infringement claims and collect "small" amounts of recompense. However, neither the problem of infringement nor this proposed solution is simple. The CASE Act would allow copyright infringement claims to be filed with a "Copyright Claims Board" staffed by "copyright claims officers," who will then make decisions about the merits of the claim and how much is owed to the claimant. What the CASE Act doesn't do is make sure those decisions meet the same requirements and standards that copyright claims have to follow in real courts. For example, filing a copyright infringement claim in a court requires a valid copyright registration so that there is a verifiable record about the owner and date of creation of the work at issue. The CASE Act has no such requirement for claims before the Copyright Claims Board, removing one of the important safeguards for making sure copyright claims are actually valid. The result of the CASE Act could be two different kinds of copyright law cases, with the one created by the Copyright Claims Board being almost impossible to appeal. We already know how dangerous it can be to free expression when systems make copyright claims easy and counterclaims difficult and intimidating. The Digital Millennium Copyright Act's (DMCA) takedown procedures have given us many, many examples. A critic used it to avoid criticism of his criticism. A group called "Straight Pride UK" used it when it looked bad in an interview it did. A Nazi romance movie did not like people making fun of how bad it was. The CASE Act does not adequately consider the free speech implications of making copyright claims easy to bring. The CASE Act would also set a limit of $30,000 in penalties per proceeding. In a world where almost 40% of Americans would struggle to cover an emergency expense of $400, the Copyright Claims Board would have enormous power to ruin the lives of ordinary Americans. This problem has been glossed over in a number of ways. One is emphasis on the supposedly "small claims" nature of the bill, made most clearly by Representative Doug Collins of Georgia, who laughed off the need to discuss this bill by saying the $30,000 limit amounted to "truly small claims." Another is an emphasis on the "voluntary" nature of the CASE Act. The CASE Act is described as voluntary not because everyone involved has agreed to be there, but because everyone involved has not not agreed. It's as complicated as it sounds. The CASE Act would allow people who receive notice of a claim from this brand new Copyright Claims Board to get out of the proceedings by telling the Copyright Office they would like to opt out.
However, the CASE Act doesn't have any requirements about what "opting out" looks like other than that it has to be in accordance with regulations created by the Copyright Office itself. That is no guarantee that opting out will be simple or easy. The Copyright Office's regulations do not tend towards being easy reading for the average person. We see that every three years, when the Copyright Office issues its exemptions for Section 1201 of the DMCA. Section 1201 bans circumventing access controls on copyrighted works. It also empowers the Copyright Office to create exemptions to this prohibition. In certain circumstances—often ones rooted in free expression—you have the right to use copyrighted material without permission or paying the owner. And that "you" means everybody, not just people who make a living as documentary filmmakers, security researchers, and so on. However, the Copyright Office continues to make exemptions too complicated for regular people without lawyers to understand and use. Given this history, it seems more likely than not that, if the CASE Act became law, the Copyright Office would continue in this vein. That is, its regulations would not be made easy to read and comply with. And the decisions of the claims board would focus less on issues of fair use and free expression and more on technicalities and serving the desires of copyright holders. In this environment, copyright trolls and worse would flourish. Copyright trolls make their money through copyright lawsuits, rather than through any legitimate creation. They are not fictional, nor are they a problem of the past. The CASE Act would make it easy for trolls to file a lot of claims. Not only could the trolls collect on those claims, but they could use the $30,000 limit to get their targets to agree to pay less, just to avoid the chance of a huge judgment being awarded by the board. And like in the case of the DMCA takedown system, most regular Internet users would find themselves in a scary, expensive situation if they tried to fight back. DMCA takedown abuse has become a favorite tactic for scammers, and although the law makes it possible to go after fraudulent takedowns and counterclaims, it only happens in rare and extreme situations. Because the system the CASE Act would create is so uneven, and stands to be so complicated, small copyright holders looking for a way to hold bad actors accountable are not going to find it workable. Regular Internet users will be trapped, while those with money, and sophisticated infringers, will be able to navigate whatever opt-out system the Copyright Office creates. So far this year, Congress has rushed the CASE Act through, without holding any hearings where its flaws could be publicly explained and debated. The CASE Act has passed out of committee in both the House and the Senate. Now that Congress is back in D.C., they need to hear from regular Internet users about how dangerous the CASE Act is for them. That's why we're asking you to call Congress today and tell your members of Congress to vote "no" on the CASE Act.

One Weird Law That Interferes With Security Research, Remix Culture, and Even Car Repair (Sat, 12 Oct 2019)
How can a single, ill-conceived law wreak havoc in so many ways? It prevents you from making remix videos. It blocks computer security research. It keeps those with print disabilities from reading ebooks. It makes it illegal to repair people's cars. It makes it harder to compete with tech companies by designing interoperable products. It's even been used in an attempt to block third-party ink cartridges for printers. It's hard to believe, but these are just some of the consequences of Section 1201 of the Digital Millennium Copyright Act, which gives legal teeth to "access controls" (like DRM). Courts have mostly interpreted the law as abandoning the traditional limitations on copyright's scope, such as fair use, in favor of a strict regime that penalizes any bypassing of access controls (such as DRM) on a copyrighted work regardless of your noninfringing purpose, regardless of the fact that you own that copy of the work.   Since software can be copyrighted, companies have increasingly argued that you cannot even look at the code that controls a device you own, which would mean that you're not allowed to understand the technology on which you rely — let alone learn how to tinker with it or spot vulnerabilities or undisclosed features that violate your privacy, for instance. Given how terrible Section 1201 is, we sued the government on behalf of security researcher Matt Green and innovator Andrew "bunnie" Huang — and his company, Alphamax. Our clients want to engage in important speech and they want to empower others to do the same — even when access controls get in the way.   The case was dormant for over two years while we waited for a ruling from the judge on a preliminary matter, but it is finally moving once again, with several of our clients' First Amendment claims going forward. Last month, we asked the court to prohibit the unconstitutional enforcement of the law. That has gotten the attention of the copyright cartels, who are likely to oppose our motion later this month. In their opinion, the already astronomical penalties for actual copyright infringement aren't enough to address the perceived problem, and the collateral damage to our freedom of speech and our understanding of the technology around us are all acceptable losses in their war to control the distribution of cultural works.   EFF is proud to help our clients take on both the Department of Justice and one of the most powerful lobbying groups in the country—to fight for your freedoms and for a better world where we are free to understand the technology all around us and to participate in creating culture together. Related Cases:  Green v. U.S. Department of Justice

Secret Court Rules That the FBI’s “Backdoor Searches” of Americans Violated the Fourth Amendment (Sat, 12 Oct 2019)
But the Court Misses the Larger Problem: Section 702's Mass Surveillance is Inherently Unconstitutional

EFF has long maintained that it is impossible to conduct mass surveillance and still protect the privacy and constitutional rights of innocent Americans, much less the human rights of innocent people around the world. This week, we were once again proven right. We learned new and disturbing information about the FBI's repeated and unjustified searches of Americans' information contained in massive databases of communications collected using the government's Section 702 mass surveillance program. A series of newly unsealed rulings from the federal district and appellate courts tasked with overseeing foreign surveillance show that the FBI has been unable to comply with even modest oversight rules Congress placed on its "backdoor searches" of Americans. Instead, the Bureau routinely abuses its ability to search through this NSA-collected information for purposes unrelated to Section 702's intended national security aims. The size of the problem is staggering. The Foreign Intelligence Surveillance Court (FISC) held that "the FBI has conducted tens of thousands of unjustified queries of Section 702 data." The FISC found that the FBI created an "unduly lax" environment in which "maximal use" of these invasive searches was "a routine and encouraged practice." But as is too often the case, the secret surveillance courts let the government off easy. Although the FISC initially ruled the FBI's backdoor search procedures violated the Fourth Amendment in practice, the ultimate impact of the ruling was quite limited. After the government appealed, the FISC allowed the FBI to continue to use backdoor searches to invade people's privacy—even in investigations that may have nothing to do with national security or foreign intelligence—so long as it follows what the appeals court called a "modest ministerial procedure." Basically, this means requiring FBI agents to document more clearly why they were searching the giant 702 databases for information about Americans. Rather than simply requiring a bit more documentation, we believe the court should have imposed a real constitutional solution: it should require the FBI to get a warrant before searching for people's communications. Ultimately, these orders follow a predictable path. First, they demonstrate horrific and systemic constitutional abuses. Then, they respond with small administrative adjustments. They highlight how judges sitting on the secret surveillance courts seem to have forgotten their primary role of protecting innocent Americans from unconstitutional government actions. Instead, they become lost in a thicket of administrative procedures that are aimed at providing a thin veil of privacy protection while allowing the real violations to continue. Even when these judges are alerted to actual violations of the law, which have been occurring for more than a decade, they retreat from what should now be clear as day: Section 702 is itself unconstitutional. The law allows the government to sweep up people's communications and records of communications and amass them in a database for later warrantless searching by the FBI. This can be done for reasons unrelated to national security, much less supported by probable cause.
No amount of "ministerial" adjustments can cure Section 702's Fourth Amendment problems, which is why EFF has been fighting to halt this mass surveillance for more than a decade.

Opinion Shows FBI Engaged in Lawless, Unconstitutional Backdoor Searches of Americans

These rulings arose from a routine operation of Section 702—the FISC's annual review of the government's "certifications," the high-level descriptions of its plans for conducting 702 surveillance. Unlike traditional FISA surveillance, the FISC does not review individualized, warrant-like applications under Section 702, and instead signs off on programmatic documents like "targeting" and "minimization" procedures. Unlike regular warrants, the individuals affected by the searches are never given notice, much less enabled to seek a remedy for misuse. Yet, even under this limited (and we believe insufficient) judicial review, the FISC has repeatedly found deficiencies in the intelligence community's procedures, and this most recent certification was no different. Specifically, the FISC flagged problems with the FBI's backdoor search procedures. The court noted that in 2018, Congress directed the FBI to record every time it searched a database of communications collected under Section 702 for a term associated with a U.S. person, but that the Bureau was simply keeping a record of every time it ran such a search on all people. In addition, it was not making any record of why it was running these searches, meaning it could search for Americans' communications without a lawful national security purpose. The court ordered the government to submit information, and also took the opportunity to appoint amici to counter the otherwise one-sided arguments by the government, a procedure given to the court as part of the 2015 USA Freedom Act (and which EFF had strongly advocated for). As the FBI provided more information to the secret court, it became apparent just how flagrant the FBI's disregard for the statute was. The court found no justification for the FBI's refusal to record queries of Americans' identifiers, and concluded that the agency was simply disobeying the will of Congress. Even more disturbing was the FBI's misuse of backdoor searches, which is when the FBI looks through people's communications collected under Section 702 without a warrant and often for domestic law enforcement purposes. Since the beginning of Section 702, the FBI has avoided quantifying its use of backdoor searches, but we have known that its queries dwarfed those of other agencies. In the October 2018 FISC opinion, we get a window into just how disparate the number of the FBI's searches is. In 2017, the NSA, CIA and National Counterterrorism Center (NCTC) "collectively used approximately 7500 terms associated with U.S. persons to query content information acquired under Section 702." Meanwhile, the FBI ran 3.1 million queries against a single database alone. Even the FISC itself did not get a full accounting of the FBI's queries that year, or what percentage involved Americans' identifiers, but the court noted that "given the FBI's domestic focus it seems likely that a significant percentage of its queries involve U.S.-person query terms." The court went on to explain that the lax—and sometimes nonexistent—oversight of these backdoor searches generated significant misuse.
Examples reported by the government included tens of thousands of “batch queries” in which the FBI searched identifiers en masse on the basis that one of them would return foreign intelligence information. The court described a hypothetical involving suspicion that an employee of a government contractor was selling information about classified technology, in which the FBI would search identifiers belonging to all 100 of the contractor’s employees. As the court observed, these “compliance” issues demonstrated “fundamental misunderstandings” about the statutory and administrative limits on the use of Section 702 information: queries are supposed to be “reasonably likely to return foreign intelligence information.” Worse, because the FBI did not document its agents’ justifications for running these queries, “it appears entirely possible that further querying violations involving large numbers of U.S.-person query terms have escaped the attention of overseers and have not been reported to the Court.”
With the benefit of input from its appointed amici, the FISC initially saw these violations for what they were: a massive violation of Americans’ Fourth Amendment rights. Unfortunately, the court let the FBI off with a relatively minor modification of its backdoor search query procedures, and made no provision for those impacted by these violations to ever be formally notified, so that they could seek their own remedies. Instead, going forward, FBI personnel must document when they use U.S. person identifiers to run backdoor searches—as required by Congress—and they must describe why these queries are likely to return foreign intelligence. That’s it. Even as to this requirement, which was already what the law required, there are several exceptions and loopholes. This means that at least in some cases, the FBI can still trawl through massive databases of warrantlessly collected communications using Americans’ names, phone numbers, social security numbers and other information and then use the contents of the communications for investigations that have nothing to do with national security.
Secret Court Rulings Are Important, But Miss the Larger Problems With Section 702 Mass Surveillance
It is disturbing that in response to widespread unconstitutional abuses by the FBI, the courts charged with protecting people’s privacy and overseeing the government’s surveillance programs required FBI officials to just do more paperwork. The fact that such a remedy was seen as appropriate underscores how abstract ordinary people’s privacy—and the Fourth Amendment’s protections—have become for both FISC judges and the appeals judges above them on the Foreign Intelligence Surveillance Court of Review (FISCR). But the fact that judges view protecting people’s privacy rights through the abstract lens of procedures is also the fault of Congress and the executive branch, who continue to push the fiction that mass surveillance programs operating under Section 702 can be squared with the Fourth Amendment. They cannot be.
First, Section 702 allows widespread collection (seizure) of people’s Internet activities and communications without a warrant, and the subsequent use of that information (search) for general criminal purposes as well as national security purposes. Such untargeted surveillance and the accompanying privacy invasions are anathema to our constitutional right to privacy and resemble a secret general warrant to search anyone, at any time.
Second, rather than judges deciding in specific cases whether the government has probable cause to justify its surveillance of particular people or groups, the FISC’s role under Section 702 is relegated to approving general procedures that the government says are designed to protect people’s privacy overall. Instead of serving as a neutral magistrate that protects individual privacy, the court is several steps removed from the actual people caught up in the government’s mass surveillance. This allows judges to decide people’s rights in the abstract and without ever having to notify the people involved, much less provide them with a remedy for violations. It also makes the FISC more likely to view procedures and paperwork as sufficient to safeguard people’s Fourth Amendment rights. It’s also why individual civil cases like our Jewel v. NSA case are so necessary. As the Supreme Court stated in Riley v. California, “the Founders did not fight a revolution to gain the right to government agency protocols.” Yet such abstract agency protocols are precisely what the FISC endorses and applies here with regard to your constitutionally protected communications.
Third, because Section 702 allows the government to amass vast stores of people’s communications and explicitly authorizes the FBI to search them, it encourages the very privacy abuses the FISC’s 2018 opinion details. These Fourth Amendment violations are significant and problematic. But because the FISC is so far removed from overseeing the FBI’s access to the data, it does not consider the most basic protection required by the Constitution: requiring agents to get a warrant.
We hope that these latest revelations are a wake-up call for Congress to act and repeal Section 702 or, at minimum, to require the FBI to get individual warrants, approved by a court, before beginning its backdoor searches. And while we believe current law allows our civil litigation, Congress can also remove government roadblocks by providing clear, unequivocal notice, as well as an individual remedy, for those injured by any FBI, NSA, or CIA violations of this right. We also hope that the FISC itself will object to serving as merely an administrative oversight body, and will instead push for more stringent protections for people’s privacy and pay more attention to the inherent constitutional problems of Section 702. But no matter what, EFF will continue to push its legal challenges to the government’s mass surveillance programs and will work to bring an end to unconstitutional mass surveillance.
Related Cases: Jewel v. NSA

EFF to Court: Parody Book Combining Dr. Seuss and Star Trek Themes Is Fair Use (Fri, 11 Oct 2019)
Mash-up is Transformative Work Protected by Copyright Law
San Francisco—The Electronic Frontier Foundation (EFF) urged a federal appeals court today to rule that the creators of a parody book called “Oh The Places You’ll Boldly Go!”—a mash-up of Dr. Seuss and Star Trek themes—didn’t infringe copyrights in the Dr. Seuss classic “Oh The Places You’ll Go!” The illustrated, crowdsourced book combines elements found in Dr. Seuss children’s books, like the look of certain characters and landscapes, with themes and characters from the sci-fi television series Star Trek, to create a new, transformative work of creative expression, EFF said in a brief filed today. Dr. Seuss Enterprises, which licenses Seuss material, sued the book’s creators for copyright infringement. A lower court correctly concluded that the way in which the “Boldly” book borrows and builds upon copyrighted material in the Dr. Seuss book constitutes fair use under U.S. copyright law. EFF, represented by Harvard Law School’s Cyberlaw Clinic and joined by Public Knowledge, the Organization for Transformative Works, Professor Francesca Coppa, comic book writer Magdalene Visaggio, and author David Mack, asked the U.S. Court of Appeals for the Ninth Circuit to uphold the decision.
“The fair use doctrine recognizes that artists and creators must have the freedom to build upon existing culture to create new works that enrich, entertain, and amuse the public,” said EFF Legal Director Corynne McSherry. “Fair use is the safety valve that ensures creators like the authors of ‘Oh The Places You’ll Boldly Go!’ don’t have to beg permission from a copyright holder in order to make works that express new and unique ideas.”
“Oh The Places You’ll Boldly Go!” takes characters and images from five Dr. Seuss books and remakes them into comedic depictions of Captain Kirk, Mr. Spock, and various Star Trek creatures. The book’s visual puns—the multi-color saucer from the cover of “Oh The Places You’ll Go!” is used to create a new kind of starship Enterprise, while a Dr. Seuss character referred to as a “fix-it-up-chappie” is reimagined as Scotty, the ship’s chief engineer—are a form of commentary on the Seuss and Star Trek worlds.
“‘Boldly’s’ creative adaptation of Dr. Seuss works is an example of artistic expression that would be stifled by overly restrictive application of copyright law,” said McSherry.
For the brief: https://www.eff.org/document/dr-seuss-v-comicmixeff-amicus-brief
For more on intellectual property and innovation: https://www.eff.org/issues/innovation
Contact: Corynne McSherry, Legal Director, corynne@eff.org

China’s Global Reach: Surveillance and Censorship Beyond the Great Firewall (Thu, 10 Oct 2019)
Those outside the People’s Republic of China (PRC) are accustomed to thinking of the Internet censorship practices of the Chinese state as primarily domestic, enacted through the so-called "Great Firewall"—a system of surveillance and blocking technology that prevents Chinese citizens from viewing websites outside the country. The Chinese government’s justification for that firewall is based on the concept of “Internet sovereignty.” The PRC has long declared that “within Chinese territory, the internet is under the jurisdiction of Chinese sovereignty.”
Hong Kong, as part of the "one country, two systems" agreement, has largely lived outside that firewall: foreign services like Twitter, Google, and Facebook are available there, and local ISPs have made clear that they will oppose direct state censorship of its open Internet.
But the ongoing Hong Kong protests, and mainland China's pervasive attempts to disrupt and discredit the movement globally, have highlighted that China is not above trying to extend its reach beyond the Great Firewall, and beyond its own borders. In attempting to silence protests that lie outside the Firewall, in full view of the rest of the world, China is showing its hand, and revealing the tools it can use to silence dissent or criticism worldwide. Some of those tools—such as pressure on private entities, including American companies like the NBA and Blizzard—have caught U.S. headlines and outraged customers and employees of those companies. Others have been more technical, and less obvious to Western observers.
The “Great Cannon” takes aim at sites outside the Firewall
The Great Cannon is a large-scale technology deployed by ISPs based in China to inject JavaScript code into customers’ insecure (HTTP) requests. This code weaponizes the millions of mainland Chinese Internet connections that pass through these ISPs. When users visit insecure websites, their browsers will also download and run the government’s malicious JavaScript—which will cause them to send additional traffic to sites outside the Great Firewall, potentially slowing these websites down for other users, or overloading them entirely.
The Great Cannon’s debut in 2015 took down GitHub, where Chinese users were hosting anti-censorship software and mirrors of otherwise-banned news outlets like the New York Times. Following widespread international backlash, this attack was halted. Last month, the Great Cannon was activated once again, aiming this time at Hong Kong protestors. It briefly took down LIHKG, a Hong Kong social media platform central to organizing this summer’s protests.
Targeting the global Chinese community through malware
Pervasive online surveillance is a fact of life within the Chinese mainland. But if the communities the Chinese government wants to surveil aren’t at home, it is increasingly willing to invest in expensive zero-days to watch them abroad, or to otherwise hold their families at home hostage. Last month, security researchers uncovered several expensive and involved mobile malware campaigns targeting the Uyghur and Tibetan diasporas. One constituted a broad “watering hole” attack using several zero-days to target visitors of Uyghur-language websites.
As we’ve noted previously, this represents a sea-change in how zero-days are being used; while China continues to target specific high-profile individuals in spear-phishing campaigns, it is now unafraid to cast a much wider net in order to place its surveillance software on entire ethnic and political groups outside China’s borders.
Censoring Chinese Apps Abroad
At home, China doesn’t need to use zero-days to install its own code on individuals’ personal devices. Chinese messaging and browser app makers are required to include government filtering on their clients, too. That means that when you use an app created by a mainland Chinese company, it likely contains code intended to scan and block prohibited websites or language.
Until now, China has been largely content to keep the activation of this device-side censorship concentrated within its borders. The keyword filtering embedded in WeChat only occurs for users with a mainland Chinese phone number. Chinese-language versions of domestic browsers censor and surveil significantly more than the English-language versions. But as Hong Kong and domestic human rights abuses draw international interest, the temptation to enforce Chinese policy abroad has grown.
TikTok is one of the largest and fastest-growing global social media platforms spun out of Beijing. It heavily moderates its content, and supposedly has localized censors for different jurisdictions. But following a government crackdown on “short video” platforms at the beginning of this year, news outlets began reporting on the lack of Hong Kong-related content on the platform. TikTok’s leaked general moderation guidelines expressly forbid any content criticizing the Chinese government, like content related to the Chinese persecution of ethnic minorities, or about Tiananmen Square.
Internet users outside the United States may recognize the dynamic of a foreign service exporting its domestic decision-making abroad. For many years, America’s social media companies have been accused of exporting U.S. culture and policy to the rest of the world: Facebook imposes worldwide censorship of nudity and sexual language, even in countries that are more culturally permissive on these topics than the U.S. Most services obey DMCA takedown procedures for allegedly copyright-infringing content, even in countries that have their own alternative resolution laws. The influence that the United States has on its domestic tech industries has led to an outsized influence on those companies’ international user base. That said, U.S. companies have, as with developers in most countries, resisted the inclusion of state-mandated filters or government-imposed code within their own applications. In China, domestic and foreign companies have been explicitly mandated to comply with Chinese censorship under the national Cybersecurity Law, which took effect in 2017 and provides aggressive yet vague guidelines for content moderation. China imposing its rules on global Chinese tech companies differs from the United States’ influence on the global Internet in more than just degree.
Money Talks: But Critics Can’t
This brings us to the most visible arm of China’s new worldwide censorship toolkit: economic pressure on global companies. The Chinese domestic market is increasingly important to companies like Blizzard and the National Basketball Association (NBA).
This means that China can use threats of boycotts or the denial of access to Chinese markets to silence these companies when they, or people affiliated with them, express support for the Hong Kong protestors. Already, people are fighting back against the imposition of Chinese censorship on global companies. Blizzard employees staged a walk-out in protest, NBA fans continue to voice their support for the demonstrations in Hong Kong, and fans are rallying to boycott the two companies. But multi-national companies that can control their users’ speech can expect to see more pressure from China as its economic clout grows.
Is China Setting the Standard for Global Enforcement of Local Law?
Parochial “Internet sovereignty” has proven insufficient to China’s needs: domestic policy objectives now require it to control the Internet outside as well as inside its borders. To be clear, China’s government is not alone in this: rather than forcefully opposing and protesting China’s actions, other states—including the United States and the European Union—have been too busy making their own justifications for the extra-territorial exercise of their own surveillance and censorship capabilities. China now projects its Internet power abroad through the pervasive and unabashed use of malware and state-supported DDoS attacks; mandated client-side filtering and surveillance; economic sanctions to limit cross-border free speech; and pressure on private entities to act as a global cultural police. Unless lawmakers, corporations, and individual users are as brave in standing up to authoritarian acts as the people of Hong Kong, we can expect to see these tactics adopted by every state, against every user of the Internet.
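To make the client-side filtering described above more concrete, here is a minimal, purely illustrative sketch of region-gated keyword filtering. It is not code from WeChat or any real app; the keyword list, the +86 phone-prefix check, and the function names are all hypothetical, chosen only to show the general mechanism of censorship that runs on the user's own device.

```python
# Conceptual illustration of region-gated, client-side keyword filtering of
# the general kind described above. This is NOT code from WeChat or any real
# app; the keyword list and the +86 phone-prefix check are hypothetical.
BLOCKED_KEYWORDS = {"example banned phrase"}

def registered_in_mainland(account_phone: str) -> bool:
    # Hypothetical check: treat numbers with China's +86 country code as mainland accounts.
    return account_phone.replace(" ", "").startswith("+86")

def should_display(account_phone: str, message: str) -> bool:
    """Return False if the client would silently drop the message for this account."""
    if registered_in_mainland(account_phone):
        if any(keyword in message.lower() for keyword in BLOCKED_KEYWORDS):
            return False  # filtered on the device, invisibly to both users
    return True

print(should_display("+86 138 0000 0000", "this mentions an example banned phrase"))  # False
print(should_display("+1 415 000 0000", "this mentions an example banned phrase"))    # True
```

Because the check runs inside the app itself, neither sender nor recipient sees any indication that a message was suppressed, which is what makes extending this kind of filtering beyond one country's borders so quiet and so consequential.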

Twitter "Unintentionally" Used Your Phone Number for Targeted Advertising (Wed, 09 Oct 2019)
Stop us if you’ve heard this before: you give a tech company your personal information in order to use two-factor authentication, and later find out that they were using that security information for targeted advertising. That’s exactly what Twitter fessed up to yesterday in an understated blog post: the company has been taking email addresses and phone numbers that users provided for “safety and security purposes” like two-factor authentication, and using them for its ad tracking systems, known as Tailored Audiences and Partner Audiences. Twitter claims this was an “unintentional,” “inadvertent” mistake. But whether this was avarice or incompetence on Twitter’s part, the result confirms some users’ worst fears: that taking advantage of a bread-and-butter security measure could expose them to privacy violations. Twitter’s abuse of phone numbers for ad tracking threatens to undermine people’s trust in the critical protections that two-factor authentication offers.
How Did Your 2FA Phone Number End Up in Twitter’s Ad Tracking Systems?!
Here’s how it works. Two-factor authentication (2FA) lets you authenticate your identity with another piece of information, or “factor,” in addition to your password. It sometimes goes by different names on different platforms—Twitter calls it “login verification.” There are many different types of 2FA. SMS-based 2FA involves receiving a text with a code that you enter along with your password when you log in. Since it relies on SMS text messages, this type of 2FA requires a phone number. Other types of 2FA—like authenticator apps and hardware tokens—do not require a phone number to work. No matter what type of 2FA you choose, however, Twitter makes you hand over your phone number anyway. (Twitter now also requires a phone number for new accounts.) And that pushes users who need 2FA security the most into an unnecessary and painful choice between giving up an important security feature or surrendering part of their privacy.
In this case, security phone numbers and email addresses got swept up into two of Twitter’s ad systems: Tailored Audiences, a tool to let an advertiser target Twitter users based on their own marketing list, and Partner Audiences, which lets an advertiser target users based on other advertisers’ marketing lists. Twitter claims the “error” occurred in matching people on Twitter to these marketing lists based on phone numbers or emails they provided for “safety and security purposes.” Twitter doesn’t say what it means by “safety and security purposes,” but it is not necessarily limited to 2FA. In addition to 2FA information, it could potentially include the phone number you have to provide to unlock your account if Twitter has incorrectly marked it as a bot. Since Twitter forces many people into providing such a phone number to regain access to their account, it would be particularly pernicious if Twitter was using phone numbers gathered from that system for advertising.
What We Don't Know
Twitter’s post downplays the problem, leaving out numbers about the scope of the harm, and details about who was affected and for how long. For instance, if Twitter locked you out of your account and required that you add a phone number to get back in, was your phone number misused for advertising? If Twitter required you to add a phone number when you signed up, for anti-spam purposes, was your phone number misused? When is an email address considered “fair game” for ad targeting and when is it not?
Twitter claims it “cannot say with certainty how many people were impacted by this.” That may be true if they are trying to parse finely who actually received an ad. But that’s an excessively narrow view of “impact.” Every user whose phone number was included in this inappropriate targeting should be considered impacted, and Twitter should disclose that number.
2FA is Not the Problem
Based on what we know, and what else we can reasonably guess about how Twitter users’ security information was misused for ad tracking, Twitter’s explanation stretches the meaning of “unintentionally.” After all, the targeted advertising business model embraced by Twitter (and by most other large social media companies) incentivizes ad technology teams to scoop up data from as many places as they can get away with—and sometimes they can get away with quite a lot. The important conclusion for users is: this is not a reason to turn off or avoid 2FA. The problem here is not 2FA. Instead, the problem is how Twitter and other companies have misused users’ information with no regard for their reasonable security and privacy expectations.
What Next
Twitter needs to come clean about exactly what happened, when, and to how many people. It needs to explain what processes it is putting in place to ensure this doesn’t happen again. And it needs to implement 2FA methods that do not require giving Twitter your phone number.
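For readers wondering what phone-free 2FA looks like in practice, here is a minimal sketch of the time-based one-time password (TOTP) scheme defined in RFC 6238, the method most authenticator apps use. The example secret below is illustrative only; a real secret is generated by the service and shared once, typically via a QR code, and codes are then derived locally from the secret and the clock, so no phone number or SMS delivery is ever involved.

```python
# Minimal sketch of app-based two-factor authentication (TOTP, RFC 6238).
# The shared secret is exchanged once; afterward, codes are computed locally
# from the secret and the current time, with no phone number required.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example secret for illustration only; a real secret comes from the service.
    print(totp("JBSWY3DPEHPK3PXP"))
```

The same math runs on both the user's device and the service's server, so the two sides can compare codes without any message being sent to the user at login time.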

Victory! California Governor Signs A.B. 1215 (Wed, 09 Oct 2019)
California’s Governor Gavin Newsom has officially signed a bill that puts a three-year moratorium on law enforcement’s use of face recognition on body-worn cameras. Under Assemblymember Phil Ting’s bill, A.B. 1215, police departments and law enforcement agencies across the state of California will have until January 1, 2020 to end any existing use of face recognition on body-worn cameras. Three years without police use of this invasive technology means three years without a particularly pernicious and harmful technology on the streets, and it has the potential to facilitate better relationships between police officers and the communities they serve. As EFF’s Associate Director of Community Organizing Nathan Sheard told the California Assembly, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.”
This moratorium brings to the entire state the privacy that some cities in California have already won. In May 2019, San Francisco became the first city in the country to ban police use of face recognition technology, and it was followed in June by Oakland. Because A.B. 1215’s moratorium will end on January 1, 2023, we are encouraging communities across the state to advocate for face recognition bans in their own cities and towns. Take this opportunity to advocate for the end of this harmful technology in your own neighborhoods.
Congratulations to all of the members of the coalition and the Californian residents who made their voices heard on this bill. You helped make this happen.

Bad News From the EU: High Court Blesses Global Takedown Order (Tue, 08 Oct 2019)
The European Union seems to have fallen in love with the idea of requiring service providers to edit the Internet, with predictable consequences for speech. Until recently, there was reason to hope those consequences could be contained. For example, the EU’s highest court recently ruled that the EU’s Right to Be Forgotten does not require Google to delist search results globally, thus keeping the results available to users around the world, even if they are de-indexed from the version of the site associated with a particular EU state. But last week, in a defamation case from Austria, the same court held that the national courts of EU member states can order intermediaries not only to take down defamatory content across all of their services—i.e., globally—but also to take down identical or “equivalent” material.
Perhaps not surprisingly in this political moment, this case started with a thin-skinned politician. The head of the Austrian Greens Party, Eva Glawischnig-Piesczek, sued Facebook, demanding that the company take down a news article posted by a user and related online comments that called her a “lousy traitor,” a “corrupt oaf” and a member of a “fascist party.” An Austrian court found the comments defamatory, and ordered Facebook to both take down the comments throughout its services and block users from repeating them. On appeal, the CJEU had to decide whether the Austrian court’s decision was consistent with EU intermediary law. Under EU law, intermediaries may be held liable for tortious content only if they have knowledge that the content is on their site, and cannot be required to affirmatively monitor for illegal activity. The CJEU found that because Facebook had knowledge of both the specific statements and other statements “equivalent” to them—and therefore would not have to make an independent assessment of illegality—the Austrian court’s order was consistent with EU law.
This is a terrible outcome. First, the actual content in question is clearly lawful in many countries, including the United States. All of the statements found defamatory under Austrian law would be considered non-defamatory rhetorical hyperbole under U.S. law. Indeed, politicians and other public figures can be subject to more severe hyperbole than “corrupt oafs.” That’s one of the ways we hold them, and their egos, in check. Moreover, under U.S. law defamation is inherently contextual. The exact same words that may be capable of a defamatory meaning in one context will not be in another. Thus, even if a court decides a specific phrase is defamatory and orders that the specific statement be removed, it cannot order the removal of future appearances of the same phrase. So it’s pretty disturbing that another country can decide otherwise, and as a practical matter prevent people who don’t even live there from speaking up or even receiving the information. That burden was not even mentioned by the CJEU.
Second, the court effectively concludes that the requirement to prevent similar language from appearing isn’t an affirmative monitoring obligation as long as the “monitoring” is done by filters. While it is likely true that Facebook can develop tools that recognize when someone says “Eva Glawischnig-Piesczek is a corrupt oaf,” it’s not at all clear that those tools could automatically recognize the functional equivalent. Once again, the robots won’t save us.
Third, this ruling sets a precedent that may not just apply to Facebook.
A smaller company faced with a similar order would likely just drastically limit or eliminate user postings altogether. Thus, once again, the EU is helping ensure that today's social media giants need not fear competition, because no one else will have the resources to comply with the growing web of speech regulations. Coming on the heels of the new EU copyright directive, which also requires filtering, this ruling reinforces the EU’s growing role as Internet police—and its willingness to play that role without much regard for its impact on non-EU citizens.
There is one ray of hope in the opinion. The CJEU explains that any blocking order must take account of “the framework of the relevant international law.” One way to assess that would be to look to Article 19 of the Universal Declaration of Human Rights, which holds that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” The courts of member states should consider the impact of any order on free speech rights before issuing a global takedown order.
Facebook has indicated it will challenge the takedown order. There’s no further appeal option in the EU, but it might look to international courts or, following Google’s example when it received a global de-listing mandate from a Canadian court, challenge the order’s enforceability in the United States. Google won that challenge, and it’s likely Facebook would as well. But if so, that will still be small consolation to smaller platforms that cannot afford to litigate these issues in multiple countries. For more on the issues in this case, check out this detailed analysis from Stanford’s Daphne Keller.
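To illustrate why "equivalent" content is so hard to catch automatically, here is a minimal, hypothetical sketch of the kind of exact-match filter a platform might deploy to comply with such an order. The blocklist, posts, and function are invented for illustration and are not any platform's real moderation code.

```python
# Illustrative sketch (not any platform's real filter): an exact-match
# blocklist catches only the literal phrase a court found defamatory,
# while a trivially reworded "equivalent" post slips through and a news
# report that merely quotes the ruling gets caught.
BLOCKED_PHRASES = ["corrupt oaf"]  # phrase from the Austrian case, used here as an example

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked by exact substring matching."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

posts = [
    "She is a corrupt oaf.",                       # blocked: literal match
    "She is an oaf, and corrupt too.",             # missed: same meaning, different wording
    "The court ruled 'corrupt oaf' defamatory.",   # blocked: reporting that quotes the ruling
]
for p in posts:
    print(naive_filter(p), "-", p)
```

Catching genuine equivalents while sparing quotation, reporting, and commentary requires judgments about meaning and context, which is exactly what automated filters are bad at.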

EFF to First Circuit: First Amendment Protects Right to Secretly Audio Record Police (Mon, 07 Oct 2019)
The First Amendment protects the public’s right to use electronic devices to secretly audio record police officers performing their official duties in public. This is according to an amicus brief EFF filed in the U.S. Court of Appeals for the First Circuit. The case, Martin v. Rollins, was brought by the ACLU of Massachusetts on behalf of plaintiffs who are challenging the constitutionality of the Massachusetts anti-eavesdropping statute, which prohibits the secret audio recording of all conversations, even those that are not private. The First Circuit had previously held in Glik v. Cunniffe (2011) that Glik had a First Amendment right to record police officers arresting another man in Boston Common. He had used his cell phone to openly record both audio and video of the incident. The court also held that this did not violate the Massachusetts anti-eavesdropping statute. EFF’s amicus brief argues that people frequently use modern electronic devices to record and share audio and video recordings, especially on social media. These often include newsworthy recordings of fatal police shootings and other police misconduct. Such recordings facilitate police accountability and enhance the public discussion about police use of force and racial disparities in our criminal justice system. EFF’s amicus brief also argues that audio recordings can be particularly helpful in chronicling police misconduct, providing more context beyond the video images, such as when a bystander audio recorded Eric Garner screaming, “I can’t breathe.” Additionally, being able to secretly audio record police officers performing their official duties in public is critical given that many officers retaliate against civilians who openly record them. In addition to the First Circuit’s Glik decision, five other federal appellate jurisdictions have upheld a First Amendment right to record police officers performing their official duties in public: the Third, Fifth, Seventh, Ninth, and Eleventh Circuits. EFF wrote an amicus brief in the Third Circuit case, as well as in a pending case in the Tenth Circuit and a case in the Northern District of Texas that focused on the First Amendment right to record emergency medical personnel and other first responders. The First Circuit reached the right decision in Glik, and we hope the appellate court will take this opportunity to further strengthen the right to record police officers performing their official duties in public by holding that secret audio recording is also protected by the First Amendment.

Facebook Shouldn't Give Politicians More Power Than Ordinary Users (Mon, 07 Oct 2019)
Amidst escalating rhetoric about alleged 'anti-conservative bias' on social media, Facebook has doubled down on its policies exempting (some) politicians from its ordinary fact-checking and from its hate speech rules. Facebook's policies amplify the harm that hateful politicians can do and are not necessary to advance its stated goal of ensuring that 'newsworthy' false or hateful comments are subject to robust reporting and debate.
Tilting the Playing Field
"Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be." - Facebook's Nick Clegg
This is a galling statement from a company that is happy to be the self-appointed referee for everything that the general population may say. As we've documented extensively over the years, Facebook regularly makes value judgments about various types of speech, resulting in content removals or account deactivations. These judgments often go well beyond the legality of speech, instead representing what the company and its executives find acceptable. Of course, it's within the legal rights of a platform to make such determinations, but Facebook shouldn't pretend that it isn't an arbiter of political speech. Those subject to Facebook's rules are within their own rights to insist that the platform do better. What is particularly troubling is for a platform to apply one set of rules to most people, and a more permissive set of rules to another group that holds more political power. In practice, Facebook has been doing this for years, but this latest announcement reaffirms the disparate treatment of different types of users and tries to spin it as a good thing.
Newsworthy Disinformation and Hate
Facebook's fact-checking does pose a danger of giving Facebook too much of a voice in public debate. It could be improved, for example, by letting users specify which sources of information they trust. But if one believes that Facebook has a good fact-checking program, does exempting politicians from the overall policy really lead to people being more aware of when they lie? Or does it mean that users will never see Facebook's usual fact-check show up for falsehoods uttered by politicians? The only clear result is that Facebook will avoid annoying the elected officials who hold power over it.
How about hate and other speech barred by the Facebook rules? It certainly is newsworthy when a politician engages in hateful speech, but the newsworthiness goal could be achieved by documenting or denoting the speech that has been penalized rather than simply letting politicians break the rules without comment. Facebook should not give special privileges to politicians, but it certainly could create special tools to scrutinize the record of politicians and highlight the hateful, false, or violent statements they have sought to publish. On the other hand, if Facebook's true goal is to placate politicians, such measures would surely be counterproductive.
Who Gets to Be a Politician?
Facebook suggests that it is going to avoid playing politics with content moderation decisions on its platform, but it does so more or less overtly in many contexts. Typically, those disfavored by content moderation decisions are marginalized political movements, not the US conservatives who are the most vocal in complaining about content moderation.
For example, Kurdish political movements have complained of being silenced by the platform—which regularly complies with requests from increasingly authoritarian Turkey—as have Chechen independence movements, US leftist groups, and Brazilian political groups. Facebook also has a history of cracking down on fake accounts and anonymous users, while seemingly going soft on law enforcement officers who use sock puppet accounts. At its most blatant, Facebook bans elected officials from parties disfavored by the US government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government's list of designated terrorist organizations—despite not being legally obligated to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obliged to after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.
Dear Facebook: Stop Favoring Politicians
If your rules are inappropriate to apply to politicians, why are they appropriate to apply to the rest of us? Creating exemptions from the rules for people who are already powerful is simply a practical concession, one that will continue to harm the least powerful people in society. It makes us wonder what Facebook's rules might look like if the company considered all of its users' concerns as important as those of aggrieved politicians who want to publish disinformation and hate.

A Race to the Bottom of Privacy Protection: The US-UK Deal Would Trample Cross Border Privacy Safeguards (Fri, 04 Oct 2019)
UPDATE: The UK has now released the text of the UK-US Cloud Act agreement, along with an explanatory memorandum, available here.
Last year, we warned that the passage of the Clarifying Lawful Overseas Use of Data (CLOUD) Act would weaken global privacy standards, opening up the possibility of more permissive wiretapping and data collection laws. Today’s announcement of the U.S.-UK Agreement is the first step in a multi-country effort to chip away at privacy protections in favor of law enforcement expediency.
U.S. Attorney General William Barr and British Home Secretary Priti Patel announced that the U.S. and UK have signed an agreement that will allow each country to bypass the legal regimes of the other and request data directly from companies in certain investigations. The text of the agreement has not yet been released, but the countries were able to enter into such a regime through the controversial powers granted in the U.S. CLOUD Act, the UK Investigatory Powers Act, and the 2019 Crime (Overseas Production Orders) Act. At EFF, we fought against the provisions in these bills that weaken global privacy standards, and we are concerned, based on the U.S. and UK press statement, that this agreement will not include necessary privacy provisions.
Based on reporting, the U.S.-UK agreement sets up a regime in which UK police can get fast, direct access to communications data about non-U.S. persons held by American tech companies. In return, the United States government will be given fast-track access to British companies' data regardless of where that data is stored. This unprecedented arrangement will seriously undermine privacy and other human rights.
Like many international deals, the U.S.-UK negotiations were held behind closed doors, and their details were shrouded in secrecy, even though the CLOUD Act requires that the U.S. Attorney General send any potential agreement to U.S. Congress for review within seven days after certifying a final agreement. The U.S. and UK should not be able to make secret law that binds tech companies and dictates the privacy of their customers, and we will continue to push for this agreement to be made public.
What does the U.S.-UK deal change?
United States law provides strong protections for the content of communications—the text of email and instant messages, the photos we privately share with our friends, our private audio/video chats, and our cloud-based documents. American tech companies are generally forbidden from disclosing this category of data to anyone (including foreign governments) without the consent of the data subject or without an order from a U.S. court determining that there is probable cause that the data in question contains evidence of a crime. The Mutual Legal Assistance Treaty (MLAT) regime, which has been followed for the past few decades and has been adopted by a majority of democratic countries, provides a vehicle for foreign governments to obtain communications content while following the privacy standards set out in U.S. law.
The Clarifying Lawful Overseas Use of Data (CLOUD) Act of 2018 gives the U.S. executive branch the power to enter into bilateral agreements with “qualifying foreign governments”—a set of countries that the U.S. determines satisfy a list of privacy and human rights standards in the statute. These agreements would authorize those governments to request data stored in the U.S. and allow the U.S. to request data stored in foreign countries without going through the MLAT process.
The CLOUD Act also allows foreign police to get information stored in the United States without a probable cause warrant or an order from a U.S. judge. There are prohibitions on foreign governments targeting U.S. citizens and residents, but U.S. persons’ information can still be collected without U.S. oversight if they are communicating with a target but are not the target themselves. In practice, the foreign vs. U.S. person distinction in the law means that some data stored in the U.S. is protected by U.S. law while foreign data stored in the U.S. may be subject to lesser standards.
The new agreement with the UK is the first between the U.S. and a “qualifying foreign government,” and we have specific concerns about the details of the deal. The UK legal standard for access to data is much more permissive than the Fourth Amendment. The UK still allows for general warrants (a practice that fueled U.S. colonists’ grievances against the British government) and can issue a warrant based on a “reason to believe” that there may be evidence “relevant” to a crime, rather than probable cause. The CLOUD Act does have a baseline standard for when and how evidence can be requested from tech companies directly, but there is quite a bit of daylight between the CLOUD Act standard and UK law, and without seeing the agreement we cannot tell if it meets a higher privacy standard.
The press statement frames this deal as necessary to investigate and prosecute child exploitation and terrorism, but under the framework outlined in the CLOUD Act, this agreement can be used to investigate any “serious crime,” meaning that this power can and will be used across the board, from drug investigations to investigations of financial crimes.
If a U.S. company doesn’t want to comply with a UK order—if, for instance, it thinks that the order is too vague or not tied to a “serious crime”—it will likely face penalties in the United Kingdom. Presumably, at this point a U.S. provider could decide to bring the United States government into the picture if it believes that the foreign government’s order is not consistent with the CLOUD Act safeguards. But in practice, there is no mechanism to do so, and it would be burdensome for the government to interject itself in individual cases.
It has also been reported that this agreement will now let the UK wiretap individuals located anywhere on the globe with the assistance of U.S. companies (so long as the target of the wiretap is not a United States person and is not located in the United States). The MLAT process did not allow for real-time interception, and U.S. law has even higher requirements for real-time access to information because of the grave privacy risks involved when the government eavesdrops on private conversations. Letting the UK use the lesser privacy standards under UK law to wiretap information as it passes through U.S. tech companies is an enormous erosion of current data privacy laws. Contrary to a number of recent press reports, however, the agreement cannot authorize the UK government to force companies like Google or Facebook to decrypt encrypted communications. That’s because the CLOUD Act contains a specific provision that prohibits an agreement between the U.S.
and another country from creating “any obligation that providers be capable of decrypting data.” The UK might rely on its horrible Investigatory Powers Act to try to force companies to build backdoors or simply pressure companies to do so voluntarily, but the CLOUD Act doesn’t give those arguments any weight under U.S. law.
Going Forward
We don’t yet know what the agreement says, so the first step is to make sure that it is released to the public. Law (even international agreements) should not be secret. The CLOUD Act contains provisions that mandate that the Attorney General send all proposed agreements to Congress for review. Congress has 180 days after notification to review an agreement and determine whether or not it satisfies the requirements laid out in the CLOUD Act. If an agreement is found wanting or if Congress has any other basis for objection, either chamber can introduce a joint resolution to disapprove of the agreement and stop the executive branch from enacting it. It’s a complicated process, but it is imperative that Congress exercise it. When Congress passed the CLOUD Act, it created a rift in privacy and law enforcement access practices. Now it is Congress’ continuing responsibility to make sure that our rights don’t fall through the cracks.

The Open Letter from the Governments of US, UK, and Australia to Facebook is An All-Out Attack on Encryption (Fri, 04 Oct 2019)
Top law enforcement officials in the United States, United Kingdom, and Australia told Facebook today that they want backdoor access to all encrypted messages sent on all its platforms. In an open letter, these governments called on Mark Zuckerberg to stop Facebook’s plan to introduce end-to-end encryption on all of the company’s messaging products and instead promise that it will “enable law enforcement to obtain lawful access to content in a readable and usable format.” This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply.
The letter comes in concert with the signing of a new agreement between the US and UK that allows law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation that crosses every digital device.
The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations, and to ensuring that the communications infrastructure we rely on is truly working as intended. What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, which might seek to spy on dissidents in the name of combatting terrorism or civil unrest, for example.
The Department of Justice and its partners in the UK and Australia claim to support “strong encryption,” but the unfettered access to encrypted data described in this letter is incompatible with how encryption actually works.
Update 10/8: More than one hundred civil society groups, including EFF, have signed on to our own open letter to Facebook CEO Mark Zuckerberg, encouraging him to continue increasing security on Facebook messaging services. “Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services,” the letter states.
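For readers unfamiliar with why "readable access" and end-to-end encryption are incompatible, here is a minimal sketch using the PyNaCl library (one common set of libsodium bindings, used here purely as an illustration; it is not Facebook's implementation). Only the endpoints hold the private keys, so the service carrying the message never sees plaintext; giving any third party readable access requires changing that design.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Only the two endpoints hold private keys, so neither the platform carrying
# the message nor anyone who compels the platform can read it; adding
# "readable access" for a third party means weakening this design.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# The server only ever relays ciphertext; Bob decrypts on his own device.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 6pm'
```

Any mechanism that let the platform, or a government, recover the plaintext would have to hold an extra key or weaken the math, and that extra access point would be just as available to attackers and authoritarian regimes as to the agencies requesting it.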

Victory! EFF Wins Access to License Plate Reader Data to Study How Law Enforcement Uses the Privacy Invasive Technology (Thu, 03 Oct 2019)
EFF, ACLU SoCal Successfully Sued Los Angeles Police and Sheriff’s Departments For ALPR Data
San Francisco—Electronic Frontier Foundation (EFF) and the American Civil Liberties Union Foundation of Southern California (ACLU SoCal) have reached an agreement with Los Angeles law enforcement agencies under which the police and sheriff’s departments will turn over license plate data they indiscriminately collected on millions of law-abiding drivers in Southern California. The data, which has been deidentified to protect drivers’ privacy, will allow EFF and ACLU SoCal to learn how the agencies are using automated license plate reader (ALPR) systems throughout the city and county of Los Angeles and educate the public on the privacy risks posed by this intrusive technology. A week’s worth of data, composed of nearly 3 million data points, will be examined.
ALPR systems include cameras mounted on police cars and at fixed locations that scan every license plate that comes into view—up to 1,800 plates per minute. They record data on each plate, including the precise time, date, and place it was encountered. The two Los Angeles agencies scan about 3 million plates every week and store the data for years at a time. Using this data, police can learn where we were in the past and infer intimate details of our daily lives, such as where we work and live, who our friends are, what religious or political activities we attend, and much more. Millions of vehicles across the country have had their license plates scanned by police—and more than 99% of them weren’t associated with any crimes. Yet law enforcement agencies often share ALPR information with their counterparts in other jurisdictions, as well as with border agents, airport security, and university police.
EFF and ACLU SoCal reached the agreement with the Los Angeles Police and Sheriff’s Departments after winning a precedent-setting decision in 2017 from the California Supreme Court in our public records lawsuit against the two agencies. The court held that the data are not investigative records under the California Public Records Act that law enforcement can keep secret.
“After six years of litigation, EFF and ACLU SoCal are finally getting access to millions of ALPR scans that will shed light on how the technology is being used, where it’s being used, and how it affects people’s privacy,” said EFF Surveillance Litigation Director Jennifer Lynch. “We persevered and won a tough battle against law enforcement agencies that wanted to keep this information from the public. We have a right to information about how government agencies are using high-tech systems to track our locations, surveil our neighborhoods, and collect private information without our knowledge and consent.”
The California Supreme Court ruling has significance beyond the ALPR case. It set a groundbreaking precedent that mass, indiscriminate data collection by the police can’t be withheld just because the information may contain some data related to criminal investigations.
For more on this case: https://www.eff.org/cases/automated-license-plate-readers-aclu-eff-v-lapd-lasd
For more on ALPRs: https://www.eff.org/pages/automated-license-plate-readers-alpr
Contact: Jennifer Lynch, Surveillance Litigation Director, jlynch@eff.org, communications@aclusocal.org
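As an illustration of the kind of aggregate analysis a week of deidentified scan records makes possible, here is a brief, hypothetical sketch using pandas. The file name and column names (timestamp, camera_id) are assumptions for illustration, not the actual format the agencies will produce; the point is that even without plate numbers, time-and-place records reveal which neighborhoods are scanned most heavily and when.

```python
# Sketch of aggregate analysis over deidentified ALPR records.
# "alpr_week.csv" and its columns (timestamp, camera_id) are hypothetical,
# not the actual export format from the Los Angeles agencies.
import pandas as pd

scans = pd.read_csv("alpr_week.csv", parse_dates=["timestamp"])

# How many plates does each camera capture per day?
per_camera_daily = (
    scans.groupby([scans["timestamp"].dt.date, "camera_id"])
         .size()
         .rename("scans")
         .reset_index()
)
print(per_camera_daily.sort_values("scans", ascending=False).head())

# Scans by hour of day show when surveillance is heaviest.
print(scans.groupby(scans["timestamp"].dt.hour).size())
```

The same properties that make this data useful for public-interest research are what make the underlying identified database so sensitive: location, time, and frequency alone can sketch a detailed picture of a community's movements.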

Coders' Rights Are At Risk in Brazil, and the Harms Could Affect Everyone (Thu, 03 Oct 2019)
A bill pending in the Brazilian Senate (PLS 272/2016) amends the current anti-terrorism law to make it a “terrorist act” to interfere with, sabotage or damage computer systems or databases in order to hinder their operation, when done for a political or ideological motivation. Publicly praising such actions, or other ill-defined terrorism offenses, could lead to a penalty of up to eight years in prison, according to the same bill.
Earlier this year, EFF criticized a set of Brazilian “anti-terrorism” bills that seriously threaten free expression and privacy safeguards. PLS 272/2016 is one of them. Now, the new rapporteur appointed in the Senate’s Constitutional Commission is expected to convene a public hearing and release a new report. Among other key concerns, Brazilian human rights groups have stressed that the bill unduly expands terrorism offenses to cover acts that are already addressed by existing criminal law—targeting them for harsher, disproportionate penalties. Praising or inciting crime and breaking into computer devices are already illegal under the Brazilian Criminal Code. But if the bill passes, similar actions could receive sentences ten times longer or more.
In addition, the Criminal Code’s offense of breaking into computer devices has a far more detailed formulation than the one drafted in the new bill. As laid down in the Code, liability for this crime requires the violation of a security mechanism with the goal of obtaining, changing or destroying data or information without express or tacit authorization from the owner, or "to install vulnerabilities" to obtain an illicit advantage. By contrast, the bill refers only to a "political or ideological motivation" in order to disrupt, hinder or impede the operation of systems or databases.
One could claim that taking control over or sabotaging critical infrastructure and essential services, such as power systems, deserves harsher treatment than other forms of malicious intrusion. However, that is not what PLS 272/2016 is about: those acts are already punished severely by the current anti-terrorism law. To make matters worse, a proposed amendment to the bill drops even these vague requirements for motivation and intent, referring only to "interfere with, sabotage or damage computer systems or databases." If the new rapporteur embraces it, a broad range of acts related to interference with or damage to computer systems could be framed as “terrorist acts.”
Although the current legal definition of terrorism has requirements that limit the application and interpretation of terrorist acts set out by the law, this and other bills overly broaden that definition. For example, the law limits the crime of terrorism to reasons of xenophobia, discrimination or prejudice of race, color, ethnicity and religion. However, the same amendment to PLS 272/2016 expands it to include "other political, ideological or social motivations." Under this amendment, identifying vulnerabilities in a public system and widely publicizing them to push the government to improve its security could be understood as a terrorist act.
This bill simultaneously increases penalties and broadens the language of existing law. Here’s the problem with that: criminal prohibitions aimed at deterring network or device intrusion can easily and detrimentally impact security research. An overly expansive formulation of the criminal offense could target and impair important and positive security research activities.
The Role of Hackers and Security Researchers
Security researchers and hackers have never been more important to the security of the Internet. By identifying and disclosing vulnerabilities, they are able to improve security for every user who depends on information systems for their daily life and work. While they play a key role in uncovering and fixing flaws in the software and hardware that everyone uses, their actions are often misunderstood.
For example, at the 2010 Black Hat technical security conference in Las Vegas, professional security researcher Barnaby Jack publicly demonstrated that it was possible to bypass security measures on ATMs and program them to dispense money. Given the widespread use of ATMs, there is a strong public interest in shedding light on these kinds of security flaws, pushing vendors to respond in a timely fashion to information about vulnerabilities and to build machines and systems with the highest security standards possible. Jack was supposed to have given the talk at the conference the previous year, but his employer at the time, Juniper Networks, pressured him to cancel it after receiving a complaint from an ATM vendor. As a result, the ATMs remained vulnerable for an entire year after Jack first intended to make the flaws publicly known.
EFF's Latam Coders' Rights Project demonstrates that rights recognized by the American Convention on Human Rights provide an important baseline to protect the crucial activities of hackers and security researchers, along with ensuring the secure development of the Internet and other digital technologies. Cybercrime offenses must be precisely tailored and include both malicious intent and actual damage. Penalties must be proportionate, and criminal law cannot serve as a response to socially beneficial behavior by security researchers.
We hope that Brazil’s legislators carefully consider these standards and acknowledge the potential harm a broad, excessive cybercrime provision could impose on society as a whole. We should also take into account that vague, unnecessary, and disproportionate anti-terrorism legislation jeopardizes exactly the core legal values and fundamental rights it is supposed to protect. PLS 272/2016 is a demonstration of this risk, and EFF will continue to monitor its progress.

Adversarial Interoperability (Thu, 03 Oct 2019)
“Interoperability” is the act of making a new product or service work with an existing product or service: modern civilization depends on the standards and practices that allow you to put any dish into a dishwasher or any USB charger into any car’s cigarette lighter. But interoperability is just the ante. For a really competitive, innovative, dynamic marketplace, you need adversarial interoperability: that’s when you create a new product or service that plugs into the existing ones without the permission of the companies that make them. Think of third-party printer ink, alternative app stores, or independent repair shops that use compatible parts from rival manufacturers to fix your car or your phone or your tractor. Adversarial interoperability was once the driver of tech’s dynamic marketplace, where the biggest firms could go from top of the heap to scrap metal in an eyeblink, where tiny startups could topple dominant companies before they even knew what hit them. But the current crop of Big Tech companies has secured laws, regulations, and court decisions that have dramatically restricted adversarial interoperability. From the flurry of absurd software patents that the US Patent and Trademark Office granted in the dark years between the first software patents and the Alice decision to the growing use of "digital rights management" to create legal obligations to use the products you purchase in ways that benefit shareholders at your expense, Big Tech climbed the adversarial ladder and then pulled it up behind them. That can and should change. As Big Tech grows ever more concentrated, restoring adversarial interoperability must be a piece of the solution to that concentration: making big companies smaller makes their mistakes less consequential, and it deprives them of the monopoly profits they rely on to lobby for rules that make competing with them even harder. For months, we have written about the history, theory, and practice of adversarial interoperability. This page rounds up our writing on the subject in one convenient resource that you can send your friends, Members of Congress, teachers, investors, and bosses as we all struggle to figure out how to re-decentralize the Internet and spread decision-making power around to millions of individuals and firms, rather than the executives of a handful of tech giants. Interoperability: Fix the Internet, Not the Tech Companies: a taxonomy of different kinds of interoperability, from “indifferent interoperability” (I don't care if you plug your thing into my product) to “cooperative interoperability” (please plug your thing into my product) to “adversarial interoperability” (dang it, stop plugging your thing into my product!). Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today’s Monopolies: The history of adversarial interoperability and how it drove the tech revolutions of the past four decades, and what we can do to restore it. Interoperability and Privacy: Squaring the Circle: Big Tech companies created a privacy dumpster fire on the Internet, but now they say they can’t fix it unless we use the law to ban competitors from plugging new services into their flaming dumpsters. That’s awfully convenient, don't you think? A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition: Cable TV exists because of adversarial interoperability, which gave it the power to disrupt the broadcasters. 
Today, Big Cable is doing everything it can to stop anyone from disrupting it. ‘IBM PC Compatible’: How Adversarial Interoperability Saved PCs From Monopolization: IBM spent more than a decade on the wrong end of an antitrust action over its mainframe monopoly, but when it created its first PCs, scrappy upstarts like Phoenix and Compaq were able to clone its ROM chips and create a vibrant, fast-moving marketplace. SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects: Microsoft came this close to owning the modern office by locking up the intranet in a proprietary network protocol called SMB...That is, until a PhD candidate released SAMBA, a free/open product that adversarially interoperated with SMB and allows Macs, Unix systems, and other rivals to live on the same LANs as Windows machines. Felony Contempt of Business Model: Lexmark’s Anti-Competitive Legacy: Printer companies are notorious for abusive practices, but Lexmark reached a new low in 2002, when it argued that copyright gave it the right to decide who could put carbon powder into empty toner cartridges. Even though Lexmark failed, it blazed a trail that other companies have enthusiastically followed, successfully distorting copyright to cover everything from tractor parts to browser plugins. Adblocking: How About Nah?: The early Web was infested with intrusive pop-up ads, and adversarial interoperability rendered them invisible. Today, adblocking is the largest boycott in history, doing more to curb bad ads and the surveillance that goes with them than any regulator.

Privacy Allies File Amicus Briefs in Support of EFF’s Jewel v. NSA Case (Wed, 02 Oct 2019)
Organizations raising concerns about mass surveillance, secrecy, and the Fourth Amendment, among other issues, have filed amicus briefs in support of EFF’s Jewel v. NSA case, currently pending in the Ninth Circuit Court of Appeals. The Court of Appeals is set to review the District Court’s decision, which dismissed the case and effectively gave the government the power to decide whether Americans can seek judicial review of mass domestic national security surveillance. EFF filed its brief on September 6. The six amicus briefs described below cover a wide range of issues, helping flesh out a fuller story of why the case was improperly dismissed. Internationally, Mass Surveillance Is No Secret The Center for Democracy and Technology and the New America Foundation’s Open Technology Institute challenged the government’s assertion of the state secrets privilege by cataloguing in detail the extent to which the technical details of bulk fiber optic surveillance are discussed openly by other nations without apparent harm to their national security. These openly discussed details stand in sharp contrast to the US government’s assertion in Jewel that information about how it conducts bulk fiber optic surveillance is so secret that even the protective procedures provided for in Section 1806(f) cannot be used. The brief is based on the simultaneously released report entitled “Not a Secret: Bulk Interception Practices of Intelligence Agencies” by surveillance expert Eric Kind. Relatedly, the South African High Court recently rejected bulk surveillance without succumbing to secrecy claims. State Secrets Privilege Should Not Block the Case The ACLU filed a brief confirming that when Congress created FISA, it sought both to deter the executive branch from engaging in illegal surveillance and to guarantee meaningful remedies to those subjected to it. This included displacing the state secrets privilege that the District Court relied upon in dismissing Jewel. The ACLU also confirmed, as EFF has, that the Ninth Circuit has already decided this in the recent Fazaga decision, where the ACLU is co-counsel. The Fourth Amendment Has Been Violated The National Association of Criminal Defense Lawyers notes that “the mass interception, copying and examination of Plaintiffs’ Internet communications constitutes a search and seizure, triggering the Fourth Amendment’s warrant requirement.” It then specifically responds to the government’s core defense of its actions, explaining how the “special needs” exception to the warrant requirement does not apply. The exception requires two things, and neither occurs here. First, the exception requires that the “primary purpose” of the search not be law enforcement, but NSA surveillance need only have foreign intelligence as “a significant purpose.” The government admits that it uses NSA surveillance for law enforcement, through what Congress calls the “back door,” on a regular basis. Second, the NACDL rightly notes that the scope of the surveillance does not meet the law’s requirement that any searches and seizures be “reasonable,” given how many innocent Americans are caught in the dragnet.
Fourth Amendment Possessory Interests and Prohibition on General Warrants Have Been Violated The Free Speech Coalition, Free Speech Defense and Education Fund, Downsize DC Foundation, DownsizeDC.org, Gun Owners Foundation, Gun Owners of America, Inc., Conservative Legal Defense and Education Fund, Poll Watchers, Policy Analysis Center, the Heller Foundation, and Restoring Liberty Action Committee explain another way that the Jewel plaintiffs have suffered an injury under the Fourth Amendment: their possessory interests. They point out that Justice Gorsuch has analogized the remote storage of digital information to the law of bailments, where title to an item remains with the owner even as the item is held by a third party, meaning that the Fourth Amendment should apply even though the seizures and searches occur when messages are in transit or stored remotely. Amici also agree with us that the foundational American prohibition on general warrants, famously argued by James Otis at the time of the American Revolution, should apply to the government’s bulk surveillance programs. Mass Surveillance Affects Human Rights Defenders, and Parallel Construction Blocks Other Judicial Review Human Rights Watch wrote to emphasize how they and other human rights defenders are routinely subjected to U.S. and foreign surveillance and how such surveillance hinders their work. HRW also argued that the district court’s decision encouraged the government to employ "parallel construction," whereby the government reconstructs evidence obtained using electronic surveillance, such as by having an intelligence agency tip off separate law enforcement officers to seek that evidence, without explaining the basis of the tip. The practice enables the government to hide misconduct and evade judicial review of its actions. Journalists Are Especially Impacted by Mass Surveillance The Reporters Committee for Freedom of the Press noted that journalists' work, especially the confidential reporter-source relationship, depends on digital technologies free of mass surveillance. It also takes special notice of the need for protection “in the context of recent developments in ‘leak’ prosecutions.” We know that hundreds of millions of people around the world are concerned about mass surveillance. We appreciate that so many strong and varied organizations were able to come forward and stand with us in this case. You can read more about Jewel v. NSA here. Related Cases:  Jewel v. NSA

D.C. Circuit Offers Bad News, Good News on Net Neutrality: FCC Repeal Upheld, But States Can Fill the Gap (Wed, 02 Oct 2019)
Users, advocates, and service providers have been waiting for months to find out whether an appellate court will bless the Federal Communications Commission’s effort to repeal net neutrality protections, and whether the FCC can simultaneously force the states to follow suit. The answer: yes, and no. Bound by its interpretation of Supreme Court precedent, the DC Circuit Court of Appeals has held that the FCC’s repeal wasn’t sufficiently irrational to be struck down (many Internet engineers might disagree) but, having abandoned the field, the FCC can’t prevent states from stepping in to protect their own users. We’re disappointed. The FCC is supposed to be the expert agency on telecommunications, but in the case of the so-called “Restoring Internet Freedom Order,” it ignored expertise and issued an order based on an incorrect interpretation of the technical realities of the Internet. But we’re very pleased that the court’s ruling gives states a chance to limit the damage. What Happened In 2017, the FCC voted to repeal the 2015 Open Internet Order, issuing in its place that same “Restoring Internet Freedom Order.” In doing so, the FCC declared that it would no longer oversee broadband Internet service providers (ISPs) and removed strong net neutrality protections. Net neutrality is a foundational principle of the Internet. It is the idea that all data online should be treated in a nondiscriminatory way. In other words, the company providing your Internet access shouldn't be able to determine what you see or how you experience the Internet once you are online. Blocking, throttling, and paid prioritization are famous examples of how companies have violated net neutrality in the past. Americans overwhelmingly support net neutrality, so the FCC’s decision was not a response to consumer demand but rather a giveaway to service providers that had complained, without a shred of evidence, that net neutrality rules would impede broadband investment. That lack of evidence has been a theme of the repeal process. In addition to misrepresenting the economics, the FCC misrepresented how the Internet works, in spite of information given to it by Internet engineers, pioneers, and technologists. The Case After the FCC’s action in 2017, a number of groups filed a lawsuit arguing that the FCC's repeal of the 2015 Open Internet Order was unsustainable. In particular, in light of the facts listed above and more, the FCC’s action was arbitrary, capricious, and contrary to law [pdf]. The FCC was backed by the largest ISPs—the only ones who stand to gain from the lack of net neutrality protections and oversight. Standing up to the FCC was a large number of public interest groups, local governments, and Internet companies large and small. On behalf of technologists who helped develop Internet technologies, EFF filed an amicus brief supporting the petitioners. We made clear that the FCC’s ruling was based on an incorrect understanding of how broadband internet access service (BIAS) works and mischaracterized a number of functions BIAS providers can offer. We also pointed out that the 2017 repeal order completely ignored the negative consequences for speech and innovation that lifting net neutrality protections would have. In attempting to clear a path for ISPs to avoid complying with net neutrality rules, the FCC also included language attempting to prevent states from enacting their own net neutrality protections.
In other words, the FCC issued an order saying that it no longer had authority over ISPs except, apparently, the authority to prevent states from stepping into the vacuum the FCC itself created. That portion of the order, if upheld, could have undermined California’s recently passed net neutrality law. In February 2019, the Court of Appeals for the D.C. Circuit heard oral arguments in this case, Mozilla v. FCC. The arguments lasted for four hours, highlighting not just the conditions that existed in 2017 when the order was issued, but also touching on harms that the order itself causes. Those harms included the effects of the FCC’s repeal on public safety. In 2018, Santa Clara County firefighters found their Verizon Internet service throttled during a state emergency, and when they complained, Verizon told them to buy a more expensive plan. Santa Clara County’s lawyer Danielle Goldstein argued in February that the FCC has a duty to ensure public safety before problems like this occur, rather than just receiving complaints after a disaster happens. While the FCC argued that there was no evidence of concrete harms, Goldstein put it clearly: “The burden is not on us to show that someone has already died.” The Decision In short, the D.C. Circuit upheld the FCC’s ability to repeal net neutrality rules but sent the order back to the agency to resolve three major issues the FCC failed to address: public safety, pole attachment rights, and the Lifeline subsidy program. The court found that the FCC’s factually incorrect assessment of the way that the Internet and its related technologies work was, nonetheless, a “reasonable policy choice.” In other words, whatever outside experts might say about the reality of the Internet, the court had to defer to the FCC’s alternative interpretation of that reality. This is the end result of an “expert” agency deciding not to listen to experts. But the biggest news, at least in the short term, is that the court unequivocally rejected the FCC’s effort to do a favor for the big ISPs and preempt state net neutrality laws. The court didn’t mince words on preemption, stating that the “Commission ignored binding precedent by failing to ground its sweeping Preemption Directive—which goes far beyond conflict preemption—in a lawful source of statutory authority. That failure is fatal.” This means that states can pass their own net neutrality laws without fear that the FCC’s 2017 order stops them from doing so. While there might be other challenges to state laws, there is no FCC ban on them anymore. In particular, California’s S.B. 822—which the state has delayed enforcement of until this case is completely resolved—is in a strong position going forward. In the absence of the FCC standing up for Internet users, and in the wake of this decision, other states can and should be following California’s lead. Finally, the court sent the case back to the FCC to address three issues. On public safety, the court expressed deep concern that the FCC failed to account for the effects of its decision on the life and safety of the public. On pole attachments, the court explained that the FCC’s decision harmed stand-alone broadband providers' ability to get access to the rights of way they need to deploy broadband. This is because the 2018 Order allowed legacy companies like Comcast and Verizon to keep their special federal rights to infrastructure to deploy their services as cable television companies and telephone companies, but fiber broadband companies were out of luck.
Prior to the 2015 Open Internet Order, which resolved this issue (by classifying all broadband as Title II, thus giving all ISPs equal rights), we witnessed efforts by AT&T to block Google Fiber from deploying in Austin, Texas, because AT&T owned the poles. This issue was also raised by dozens of ISPs across the country in opposition to the FCC’s 2018 Order, so it is a good thing the court is requiring the FCC to grapple with this reality. For the Lifeline program, which many low-income users depend on for communications access, the court notes that the “2018 Order . . . facially disqualifies broadband from inclusion in the Lifeline Program.” In other words, only Title II services are eligible for federal financial support to help low-income users afford communications services, and so long as broadband is not a telecom service, low-income users will not receive financial assistance in obtaining access to broadband. What Happens Now The FCC must now grapple with the implications of its decisions, which could result in further litigation. More litigation could continue to prove just how far out on a limb the FCC is going for big ISPs and how much it is leaving the public in the lurch. Congress also has a responsibility to bring this debate to an end and reflect the super-majority opinion of the public that net neutrality should be the law of the land. The House of Representatives has already done its job with the passage of the Save the Internet Act, but the bill remains blocked by the Senate’s inaction, which effectively does the work that big ISPs like AT&T, Verizon, and Comcast want. Congress and the states should both be acting to protect the Internet and its users. EFF will continue to fight for users, and we will continue to fight for laws that are based on how the Internet is built, used, and developing. Take Action Tell the Senate to protect Net Neutrality

Senate Antitrust Hearing Explores Big Tech’s Merger Mania (Tue, 01 Oct 2019)
The Senate Judiciary Committee’s Subcommittee on Antitrust, Competition and Consumer Rights held a hearing last week to explore the competitive impacts of big tech companies’ massive string of mergers with smaller companies in the last handful of years. Before the Senate committee were experts in venture capital spending, the Federal Trade Commission (the agency tasked with merger reviews), and legal experts in antitrust law. EFF believes a hard look at, and update of, mergers and acquisitions policy is one of many actions needed to preserve the life cycle of competition that has been a hallmark of the Internet. In the past, the Internet was a place where a bright idea from someone with modest resources could be leveraged from their home into the next big innovation. We have lost track of that as a small number of corporations now control a vast array of Internet products and services we all depend on, and those incumbents appear to have formed a kill zone around their markets, targeting new entrants for acquisition or substitution. Mergers With Big Tech Have Been Pervasive What is undeniable is that big tech companies have engaged in a massive number of mergers over the years. According to testimony provided by the American Antitrust Institute’s (AAI) witness Dr. Diana Moss, the mergers engaged in by Google, Facebook, Microsoft, Amazon, and Apple are not only prolific but have been on the rise year after year (see AAI's chart below). And yet, according to AAI’s research, big tech mergers faced fewer actual challenges from the government than mergers in other sectors of the economy. The witnesses offered a variety of explanations, such as the law's inability to properly screen Big Tech mergers, which typically involve the acquisition of a substantially smaller company, or the fact that the impact on competition and innovation was not apparent at the time. Market Dominance by Big Tech Has Changed Startups and Venture Capitalists Securing investment from venture capitalists has been a major factor in startups getting off the ground and becoming major corporations. Launching a startup is inherently risky, so investment is assessed on risk factors. We have some compelling evidence of the relationship between risk and investment: one study showed that reducing copyright liability in cloud computing increased investment in cloud computing startups by potentially up to a billion dollars. As committee witness Patricia Nakache, herself a General Partner at a venture capital firm with extensive experience in launching startups, noted, startups fail on average three out of four times. With already low odds of success, the added pressure of incumbents dominating a handful of markets has raised the bar for startups raising money when they seek to challenge the dominant players. Arguably one of the most troubling issues the witnesses raised with the Senate Judiciary Committee is that mergers and acquisitions are now seen as a primary driving force in securing initial investment to launch a startup. In other words, how attractive your company is as a big tech acquisition is now arguably the primary reason a startup gets funded. This makes sense: ultimately these venture capital firms are interested in making money, and if the main source of profit in the technology sector is derived from mergers with big tech, as opposed to competing with them, the investment dollars will flow that way.
This has not happened in a vacuum, though; rather, it is further evidence that antitrust law is in dire need of an update, because lax enforcement has changed investment behavior. The Lack of Competition Today is Not Frozen in Stone The United States has been here before. The very existence of our antitrust laws and competition policy in other business sectors sprang from responses to a less than adequate competitive landscape. In fact, antitrust and competition law played integral roles in the telecommunications market, where the market dominated by AT&T, then the world’s largest corporation, was converted from a regulated monopoly into a regulated competitive market years later. But a strong understanding of the market’s structure is a necessary first step for policymakers, and EFF will continue to support Congressional efforts to explore ways to improve competition in the Internet marketplace.

California: Tell Governor Newsom to Stop Face Surveillance on Police Body Cams (Mon, 30 Sep 2019)
Communities called for police officers to carry or wear cameras with the hope that doing so would improve police accountability, not further mass surveillance. But today, we stand at a crossroads: face recognition technology is now capable of being interfaced with body-worn cameras in real time—a development that has grave implications for privacy and free speech. If California Governor Gavin Newsom signs A.B. 1215 before October 13, he will ensure that California takes the opportunity to hit the brakes on police use of this troubling technology in the state. This gives legislators and citizens time to evaluate the dangers of face surveillance, and it prevents the threat of mass biometric surveillance from becoming the new normal. Take Action No Face Recognition on Body-Worn Cameras EFF joined a coalition of civil rights and civil liberties organizations to support A.B. 1215, authored by California Assemblymember Phil Ting. This bill would prohibit, for three years, the use of face recognition or other forms of biometric technology on a camera worn or carried by a police officer. This technology has harmful effects on our communities today. For example, face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. Ting’s bill, by targeting a particularly pernicious and harmful application of face surveillance, is crucial not only to curbing mass surveillance, but also to facilitating better relationships between police officers and the communities they serve. As EFF activist Nathan Sheard told the California Assembly, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.” A.B. 1215 clearly taps into widespread concern over face surveillance. The Assembly passed an earlier version of the bill with a 45-17 vote on May 9; the Senate sent it to the governor in September with a 22-15 vote. Lawmakers and community members across the country are advancing their own prohibitions and moratoriums on their local government’s use of face surveillance, including the San Francisco Board of Supervisors’ historic May ban on government use of face recognition. Meanwhile, law enforcement use of face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office. We encourage people across the country to support bans in their own communities. Tell Governor Newsom: Sign A.B. 1215 and listen to the growing number of voices that oppose government use of face surveillance.

Help EFF Find Our Next Development Director (Fri, 27 Sep 2019)
EFF’s member base is different from that of any other organization I know. I can’t count how many times someone has seen me in my EFF hoodie and excitedly approached me to show me their membership card. Our members are passionate about protecting civil liberties online, and being EFF members is part of their identity. They’re opinionated, thoughtful, and they understand the deeper moral issues behind today’s technology policy battles. Does that sound like the kind of community you’d like to help build? Then we have a job that might be perfect for you. We’re on the hunt for the newest member of EFF’s leadership team: a director for our fundraising team. Please help us get the word out to folks you know who might be a great fit and help us take our fundraising game to the next level. This is a dream job for the right candidate. You’ll be leading a rock-solid team of 10 fundraising professionals who have already built a community of over 30,000 card-carrying EFF members around the world.  We’re looking for someone who can blend the art of managing a team with the skill of effective fundraising. The right person is going to be a compelling communicator in writing and in person, able to paint an inspiring vision for EFF’s diverse community of supporters and for EFF’s development team. The majority of our funding comes from ordinary individuals, and we want someone with the social intuition to communicate well with everyone regardless of their backgrounds.  We also need someone who can understand EFF’s ethical approach to fundraising. We don’t just advocate for user privacy; we also defend it in our day-to-day practices, refusing to engage in the privacy-invasive practices that are all-too-common in the nonprofit community. We hold the security and privacy of our donors (and potential donors) to the highest standards. Our next Development Director is someone excited by that challenge. The right candidate might not have been a development director in the past. For example, folks who have a lot of experience in management, foundation funding, and major gifts might have come from a background in nonprofit leadership. Maybe you’ve run your own smaller civil liberties nonprofit and are ready to step away from an executive director role, or maybe you have a background in political fundraising. We’re looking for a broad range of work experience, even if you haven’t held the title of “development director” before. This role will be part of EFF’s senior leadership team, which guides the organization along with other directors. That’s why it’s so vital that we find the right person. We’re asking folks to help us get the word out by sharing this position and encouraging your qualified friends to apply. We know that if our big network of EFF friends and fans activates to spread the word via social media and other methods, this listing is sure to get in front of the right candidate. We have awesome benefits and an amazing workplace environment, and you can read more about it and apply on the job description.

The FISA Oversight Hearing Confirmed That Things Need to Change (Fri, 27 Sep 2019)
Section 215, the controversial law at the heart of the NSA’s massive telephone records surveillance program, is set to expire in December. Last week the House Committee on the Judiciary held an oversight hearing to investigate how the NSA, FBI, and the rest of the intelligence community are using and interpreting Section 215 and other expiring national security authorities. Congress last looked at these laws in 2015 when it passed the USA FREEDOM Act, which sought to end bulk surveillance and to bring much-needed transparency to intelligence agency activities. However, the NSA itself has revealed that it has been unable to stay within the limits USA FREEDOM placed on Section 215’s “Call Detail Records” (CDR) authority. In response to these revelations, we’ve been calling for an end to the Call Detail Records program, as well as additional transparency into the government’s use of Section 215. If last week’s hearing made anything clear, it’s this: there is no good reason for Congress to renew the CDR authority. The Call Detail Records Program Needs to End Chairman Nadler began the hearing by asking Susan Morgan of the NSA if she could point to any specific instance where the CDR program helped to avert any kind of attack on American soil. Morgan pushed back on the question, telling Chairman Nadler that the value of an intelligence program should not be measured by whether or not it stopped a terrorist attack, and that as an intelligence professional, she wants to make sure the NSA has every tool in the toolbox available. However, the NSA previously reported that it had deleted all the information it received from the 215 program since 2015. Morgan confirmed that part of the reason the NSA chose to delete all the records was that not all the information was accurate or allowed under the law. In other words, the NSA wants Congress to renew its authority to run a program that violates privacy protections and collects inaccurate information, without providing any way to measure whether the program was at all useful. The agency’s best argument for renewing the legal authorization to use the CDR provision is that it might be useful one day. Rep. Steve Cohen asked the panel if they could reassure his “liberal friends” that there have been meaningful reforms to the program. The witnesses cited some of the reforms from USA FREEDOM, passed in 2015, as evidence of post-Snowden reforms and safeguards. However, their answer did not meaningfully address recent incidents in which the NSA discovered that it had improperly collected information. Documents obtained by the ACLU include an assessment by the NSA itself that the overcollection had a “significant impact on civil liberties and privacy,” which is putting it mildly. Fortunately, the committee did not appear to be convinced by this line of reasoning. As Rep. Sylvia Garcia told Morgan, “If I have a broken hammer in my toolbox, I don’t need to keep it.” We agree. No surveillance authority should exist purely because it might someday come in handy, particularly one that has already been used for illegal mass surveillance. Other Transparency Issues In addition to the CDR program, Section 215 also allows the government to collect “business records” or other “tangible things” related to a specific order. Despite the innocuous name, the business records provision allows intelligence agencies to collect a vast range of documents. But we don’t have a sense of just what kinds of sensitive information are collected, and on what scale.
Rep. Pramila Jayapal pressed the witnesses on whether Section 215 allows the collection of sensitive information such as medical records, driver’s license photographs, or tax records. Reading from the current law, Brad Wiegmann, Deputy Assistant Attorney General, responded that while the statute does contemplate obtaining these records, it also recognizes their sensitive nature and requires the requests to be elevated for senior review. In other words, the DOJ, FBI, and NSA confirmed that, under the right circumstances, they believe the current authority in Section 215 allows the government to collect sensitive records on a showing that they are “relevant” to a national security investigation. Plus, as more and more of our home devices collect information on our daily lives, all the witnesses said they could easily envision circumstances in which they would want footage from Amazon’s Ring, which EFF has already argued is a privacy nightmare. In addition, Rep. Hank Johnson and Rep. Andy Biggs pressed the witnesses on whether the government collects geolocation information under Section 215, and whether there has been guidance on the impact of the Supreme Court’s landmark Carpenter decision on these activities. Wiegmann acknowledged that while there may be some Fourth Amendment issues, the committee would need to hold a classified session to fully answer that question. Additionally, when asked about information sharing with other federal agencies, none of the witnesses were able to deny that information collected under Section 215 could be used for immigration enforcement purposes. Both of these revelations are concerning. Carpenter brought on a sea change in privacy law, and it should be highly concerning to the public and to overseers in Congress that the intelligence community does not appear to have seriously considered its effect on national security surveillance. As it considers whether or not to renew any of the authorities in Section 215, Congress must also consider what meaningful privacy and civil liberties safeguards to include. Relying on the NSA to delete millions of inaccurate records collected over many years is simply insufficient. Secret Laws in Secret Court In 2015, in the wake of Edward Snowden’s revelations about the NSA’s mass spying on Americans, Congress passed USA FREEDOM to modify and reform the existing statute. One of the provisions of that bill specifically requires government officials to “conduct a declassification review of each decision, order, or opinion issued” by the Foreign Intelligence Surveillance Court (FISC) “that includes a significant construction or interpretation of any provision of law.” Both the text of the bill and statements from members of Congress who authored and supported it make clear that the law places new, affirmative obligations on the government to go back, review decades of secret orders and opinions, and make the significant ones public. However, the DOJ has argued in litigation with EFF that this language is not retroactive and therefore only requires the government to declassify significant opinions issued after June 2015. It also remains unclear how the government determines which opinions are significant or novel enough to be published, as well as how many opinions remain completely secret. Allowing the FISC to interpret the impact of that decision on Section 215 programs in secret means that the public won’t know if their civil liberties are being violated.
Releasing all significant FISC opinions, starting from 1978, will not only comply with what Congress required under USA FREEDOM in 2015, it will also help us better understand exactly what the FISC has secretly decided about our civil liberties. Adding a new provision that requires the FISC to detail to Congress how it determines which opinions are significant and how many opinions remain entirely secret would provide additional and clearly needed transparency to the process of administering secret law. Conclusion Despite repeated requests from the members of the panel to describe some way of measuring how effective these surveillance laws are, none of the witnesses could provide a framework. Congress must be able to determine whether any of the programs have real value and whether the agencies are respecting the foundational rights to privacy and civil liberties that protect Americans from government overreach. Back in March, EFF, along with the ACLU, New America's Open Technology Institute, EPIC, and others, sent a letter to the U.S. House Committee on the Judiciary detailing what additional measures are needed to protect individuals’ rights from abuses under the Patriot Act and other surveillance authorities. Hearing members of the Intelligence Community speak before the Judiciary Committee reconfirmed just how essential it is that these new protections and reforms be enacted. We look forward to working with the U.S. House Committee on the Judiciary to end the authority for the Call Detail Records program once and for all and to ensure that there are real transparency mechanisms in the law to protect civil liberties. Related Cases:  Jewel v. NSA

South Africa Bans Bulk Collection. Will the U.S. Courts Follow Suit? (Fri, 27 Sep 2019)
The High Court in South Africa has issued a watershed ruling, holding that South African law does not currently authorize bulk surveillance. The decision is a model that we hope other courts, including those in the United States, will follow. Read the decision here. As an initial matter, the South African court had no trouble making a legal ruling despite the obvious need for secrecy when discussing the details of state surveillance. This willingness to consider the merits of the case stands in sharp contrast to the overbroad secrecy claims of the U.S. government, which have, time and time again, successfully blocked consideration of the merits of bulk surveillance in open, public U.S. courts. The South African court based its ruling on a description of the surveillance provided by the government – no more detailed than the descriptions the U.S. government gives of its own bulk surveillance – as well as the description in the judgment of the European Court of Human Rights case, Centrum For Rattvisa v. Sweden. And yet, in the U.S., this level of detail has been called insufficient to challenge bulk surveillance. South Africa is not an outlier. As the amicus brief by the Center for Democracy and Technology and the Open Technology Institute explains in our Jewel v. NSA case, the governments of the United Kingdom, Sweden, Germany, the Netherlands, Finland, France, and Norway have all openly discussed the bulk surveillance they engage in, with conversations in both legislatures and open courts, including the European Court of Human Rights. The South African court looked to whether there were any current South African laws that authorized bulk surveillance. The court rejected the government’s claim that bulk surveillance was authorized by general language in South Africa’s National Strategic Intelligence Act, which, in several places, authorizes the government “to gather, correlate, evaluate and analyze domestic and foreign intelligence.” The Court’s response is direct and refreshing: “What is evident is that nowhere in this text is there any instruction to mine internet communications covertly.” Later, it confirms: “Nowhere else in the NSIA is there a reference to using interception as a tool of information gathering, still less any reference to bulk surveillance as a tool of information gathering.” The court then considers several other potentially relevant statutes and finds that none of them clearly authorizes bulk surveillance. It concludes that if the government believes that bulk surveillance is so important, “the least that can be required is a law that says intelligibly that the State can do so.” Ultimately, the court rules that more is needed: “Our law demands such clarity, especially when the claimed power is so demonstrably at odds with the Constitutional norm that guarantees privacy.” This is a great ruling for the people of South Africa, with a court firmly recognizing that “no lawful authority has been demonstrated to trespass onto the privacy rights or the freedom of expression rights of anyone, including South Africans whose communications criss-cross the world by means of bulk interception.” It then declares that the activities are “unlawful and invalid.” The South African ruling should be carefully reviewed here in the United States, both by the judiciary and by lawmakers. The U.S. law that the government relies upon for its bulk surveillance is similarly opaque.
Section 702 provides: “Notwithstanding any other provision of law, upon the issuance of an order in accordance with subsection (j)(3) or a determination under subsection (c)(2), the Attorney General and the Director of National Intelligence may authorize jointly, for a period of up to 1 year from the effective date of the authorization, the targeting of persons reasonably believed to be located outside the United States to acquire foreign intelligence information.” As in South Africa, the statute nowhere authorizes bulk surveillance. The most it authorizes is “acquiring” foreign intelligence information, with other provisions requiring “minimization.” What it does not do with regard to bulk surveillance is, in the words of the South African Court, “say intelligibly that the state can do” bulk surveillance. As in South Africa, such vague provisions simply should not be sufficient to “trespass on the privacy rights or the freedom of expression of anyone.” The decision by the South African court also sets an important precedent for how states that operate a wide-ranging surveillance apparatus should consider the privacy concerns of lawyers and journalists, a special protection that the U.S. government often ignores, especially when it comes to surveillance at the U.S. border. We look forward to the American courts recognizing what lawmakers, courts, and governments around the world have already recognized – that bulk surveillance is not a secret and that courts are and must be empowered to decide whether it is legal. Related Cases:  Jewel v. NSA

The Christchurch Call Comes to the UN (Thu, 26 Sep 2019)
On Monday, EFF participated in the Christchurch Call Leaders’ Dialogue at the UN General Assembly in New York in our capacity as a member of the Christchurch Call Advisory Network. The meeting, chaired by the leaders of New Zealand, France, and Jordan, featured speeches from a diverse array of government and tech company leaders, as well as updates to the Christchurch Call process, including the announcement of the advisory network and reforms to the Global Internet Forum to Counter Terrorism (GIFCT). As we noted back in May, we have serious concerns about some elements of the Call, including the lack of clarity around the definition of “terrorism” and the language of “eliminating” terrorist and violent extremist content online. This summer, we co-authored a whitepaper that speaks to the latter concern; in particular, the fact that elimination—particularly without preservation—of some content has led to the erasure of key documentation used by human rights defenders in places like Syria and Ukraine. We have also been frustrated with the sidelining of civil society throughout much of this process, though we appreciate the New Zealand government’s efforts toward inclusion. Another area of concern for us is the GIFCT, an industry-led effort launched by Facebook, Microsoft, Twitter, and YouTube in response to pressure to curb online extremism and “terrorist content.” On Monday, it was announced that the GIFCT would be spinning off to become an “independent organization supported by dedicated technology, counterterrorism and operations teams.” We spoke to company representatives who assured us that the new GIFCT would be far more inclusive of civil society, but it remains unclear to us just how independent it will be. It will still be governed by an industry-led board, the inclusion of civil society appears limited to a multistakeholder forum and an advisory committee, and the new GIFCT will still be largely funded by social media companies. Furthermore, the hash database that GIFCT members share so that content identified as “terrorism” can be removed by multiple companies at once remains opaque, despite demands from civil society for more transparency [PDF]. We don’t know what companies are feeding into the database, how many false positives there are, or how many users appeal such decisions. Lastly, we were troubled to see that some of the governments that have joined the Christchurch Call include those whose leaders are responsible for discriminatory and hateful speech, including the Prime Minister of India Narendra Modi, who spoke on Monday at the Leaders’ Dialogue. Modi’s party, the BJP, is a Hindu nationalist party, and attacks on Muslims in the country have increased considerably under its rule, owing in large part to the prime minister’s rhetoric. We know that regulations intended to curb extremist content are very rarely applied to the political class, which has the most power to incite physical violence. Nevertheless, we were heartened by the efforts of the New Zealand government, by the speeches at the UNGA by our civil society allies, and by the speech of Twitter CEO Jack Dorsey, which included repeated calls for further civil society inclusion in the Christchurch Call process. We will continue to engage with both governments and companies in the process as it moves forward.
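To make the transparency concern concrete, here is a rough, hypothetical sketch of how a shared hash-matching pipeline can work. It is not GIFCT's actual system, which has not been published; real deployments reportedly rely on perceptual hashes that match visually similar media rather than the exact matching shown here, and every name and value below is invented purely for illustration.

```python
# Hypothetical, simplified sketch -- NOT GIFCT's actual (non-public) system.
# It shows the basic flow: one member contributes a content fingerprint,
# and every other member can then block matching uploads automatically.
import hashlib

shared_hash_db = set()  # fingerprints contributed by member companies


def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; a perceptual hash would also catch near-copies."""
    return hashlib.sha256(content).hexdigest()


def contribute(content: bytes) -> None:
    """A member flags content as 'terrorism' and shares only its fingerprint."""
    shared_hash_db.add(fingerprint(content))


def upload_blocked(content: bytes) -> bool:
    """Other members check uploads against the shared list -- without the public
    knowing what was added, by whom, how often it misfires, or how to appeal."""
    return fingerprint(content) in shared_hash_db


contribute(b"bytes of a video one company has flagged")
print(upload_blocked(b"bytes of a video one company has flagged"))    # True
print(upload_blocked(b"bytes of a human rights documentation clip"))  # False
```

Because only fingerprints circulate, outsiders cannot see what content sits in the database, which is exactly why the false-positive and appeals questions above are so hard to answer from the outside.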

European Court’s Decision in Right To Be Forgotten Case is a Win for Free Speech (Thu, 26 Sep 2019)
In a significant victory for free speech rights, the European Union’s highest court ruled that the EU’s Right to Be Forgotten does not require Google to delist search results globally, thus keeping the results available to be seen by users around the world. The EU standard, established in 2014, lets individuals in member states demand that search engines not show search results containing old information about them when their privacy rights outweigh the public’s interest in having continued access to the information. The question before the court was whether Google had to remove the results from all Google search platforms, including Google.com, or just the ones identified with either the individual’s state of residence, in this case Google.fr, or ones identified with the EU as a whole. The Court of Justice of the EU (CJEU) decided that the Right to Be Forgotten does not require such global delisting. Thus, by delisting search results from Google.fr and from any search performed through an IP address identified as being located in France, Google was in compliance with the Right to Be Forgotten. France’s data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), had argued that the Right to Be Forgotten required Google to delist search results from all of its sites, since they were all available to users in France. EFF joined Article 19 and other global free speech groups in urging the Court of Justice to reach this decision and overturn a ruling by CNIL. As the brief explained, a global delisting order would conflict with the rights of users in other nations, including U.S. users protected by the First Amendment. U.S. courts have consistently held that the First Amendment’s protections for expression, petition, and assembly necessarily also protect the rights of individuals to gather information to fuel those expressions, petitions, and assemblies. As we explained in the brief: "In the United States, a right to de-reference publicly available information on data protection grounds would be unconstitutional: the First Amendment to the US Constitution guarantees the right of people to publish information on matters of public interest that they acquire legally, even in the face of significant interests relating to the private life of those involved (Smith v. Daily Mail Publishing Co. 443 US 97 (1979)). This reasoning extends to those situations where there is a significant governmental interest in maintaining the confidentiality of the information in question (Oklahoma Pub. Co. v. Distr. Court 430 US 308 (1977)), where the information concerns judicial procedures (Landmark Communications, Inc. v. Virginia 435 US 829 (1978)), and even where the publisher of the information knows that her or his source obtained the information illegally (Bartnicki v. Vopper 532 US 514 (2001)). The First Amendment also guarantees the right to receive information, including by means of a search engine (see e.g. Langdon v. Google 474 F. Supp. 2d 622 (D. Del. 2007)). . . . The incompatibility of broad de-referencing obligations with US law is especially relevant in the present case given that all major search providers are established in the US…" The CJEU agreed. It found “that numerous third States do not recognise the right to de-referencing or have a different approach to that right. . . .
Furthermore, the balance between the right to privacy and the protection of personal data, on the one hand, and the freedom of information of internet users, on the other, is likely to vary significantly around the world.” Thus, “there is no obligation under EU law, for a search engine operator . . . to carry out such a de-referencing on all the versions of its search engine. . . . [and] a search engine operator cannot be required . . . to carry out a de-referencing on all the versions of its search engine.” The CJEU also found that EU state data protection regulators could only order de-listing in domains associated with other EU member states after conferring with their counterparts from other states. The purpose is to ensure that such an order would be consistent with any other state’s implementation of the Right to Be Forgotten. In a passage that has left commentators scratching their heads, the court emphasized that even though the Right to Be Forgotten “does not currently require” delisting from all versions of Google’s search engine, EU law “does not prohibit such a practice.” The court said an authority in an EU member state may balance an individual’s right to privacy and the freedom of information and, “where appropriate,” order the operator of a search engine to delist search results from all of its versions. It is unclear how to square this with the court’s statement that “a search engine operator cannot be required . . . to carry out a de-referencing on all the versions of its search engine.” Some commentators have suggested that the EU could rewrite the Right to Be Forgotten directives to permit global delisting. Another interpretation is that the court was preserving the ability of individual state authorities to order global delisting as a remedy in extraordinary cases. And yet another interpretation is that the court was simply allowing for the possibility of global delisting orders for violations of other laws, but not the Right to Be Forgotten. So this is unlikely to be the last time the CJEU takes up the issue of global delisting; indeed, another case, presenting a similar issue in the context of a defamation claim, is expected to be decided soon. The ability of one nation to require a search engine to delist results globally would prevent users around the world from accessing information they have a legal right to receive under their own country’s laws. That would allow the most speech-restrictive laws to be applied globally. The CJEU decision rightly rejected that scenario.
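For readers curious how the geographically scoped compliance described above might work in practice, here is a minimal, hypothetical sketch. It is not Google's actual implementation, which is not public; the function, domain check, and URL are invented solely to illustrate delisting on Google.fr and for requests geolocated to France while leaving the result visible to everyone else.

```python
# Hypothetical sketch only -- not Google's actual (non-public) implementation.
# It models the compliance the court described: a delisted result is hidden on
# the French version of the search engine and for requests geolocated to France,
# but remains visible everywhere else in the world.
DELISTED_IN_FRANCE = {"https://example.org/old-article"}  # invented example URL


def result_visible(url: str, search_domain: str, requester_country: str) -> bool:
    if url not in DELISTED_IN_FRANCE:
        return True
    if search_domain == "google.fr":   # suppressed on the French version
        return False
    if requester_country == "FR":      # suppressed for French IP addresses,
        return False                   # even on google.com
    return True                        # still visible to users elsewhere


print(result_visible("https://example.org/old-article", "google.com", "US"))  # True
print(result_visible("https://example.org/old-article", "google.com", "FR"))  # False
print(result_visible("https://example.org/old-article", "google.fr", "FR"))   # False
```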

EFF to HUD: Algorithms Are No Excuse for Discrimination (Thu, 26 Sep 2019)
The U.S. Department of Housing and Urban Development (HUD) is considering adopting new rules that would effectively insulate landlords, banks, and insurance companies that use algorithmic models from lawsuits that claim their practices have an unjustified discriminatory effect. HUD’s proposal is flawed and suggests that the agency doesn’t understand how machine learning and other algorithmic tools work in practice. Algorithmic tools are increasingly relied upon to make assessments of tenants’ creditworthiness and risk, and HUD’s proposed rules will make it all but impossible to enforce the Fair Housing Act into the future. What Is a Disparate Impact Claim? The Fair Housing Act prohibits discrimination on the basis of seven protected classes: race, color, national origin, religion, sex, disability, or familial status. The Act is one of several civil rights laws passed in the 1960s to counteract decades of government and private policies that promoted segregation—including Jim Crow laws, redlining, and racial covenants. Under current law, plaintiffs can bring claims under the Act not only when there is direct evidence of intentional discrimination, but also when they can show that a facially-neutral practice or policy actually or predictably has a disproportionate discriminatory effect, or “disparate impact.” Disparate impact lawsuits have been a critical tool for fighting housing discrimination and ensuring equal housing opportunity for decades. As the Supreme Court has stated, recognizing disparate impact liability “permits plaintiffs to counteract unconscious prejudices and disguised animus” and helps prevent discrimination “that might otherwise result from covert and illicit stereotyping.” What Would HUD’s Proposed Rules Do? HUD’s proposed rules do a few things. They would make it much harder for plaintiffs to prove a disparate impact claim. They would also create three complete defenses related to the use of algorithms that a housing provider, mortgage lender, or insurance company could rely on to defeat disparate impact lawsuits. That means that even after a plaintiff has successfully alleged a disparate impact claim, a defendant could still get off the hook for any legal liability by applying one of these defenses. The defendant’s use of an algorithm wouldn’t merely be a factor the court would consider; it would kill the lawsuit entirely. These affirmative defenses, if adopted, would effectively insulate those using algorithmic models from disparate impact lawsuits—even if the algorithmic model produced blatantly discriminatory outcomes. Let’s take a look at each of the three affirmative defenses, and their flaws. The first defense a defendant could raise under the new HUD rules is that the inputs used in the algorithmic model are not themselves “substitutes or close proxies” for protected classes, and that the model is predictive of risk or some other valid objective. The problem? The whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own. And these combinations of different variables could be close proxies for protected classes, even if the original input variables are not. For example, say you were training an AI to distinguish between penguins and other birds.
Apart from combinations of inputs, other factors, such as how an AI has been trained, can also lead to a model having a discriminatory effect. For example, if a face recognition technology is trained using mostly pictures of men, the deployed technology may produce more accurate results for men than for women. Thus, whether a model is discriminatory as a whole depends on far more than just its express inputs. HUD says its proxy defense allows a defendant to avoid liability when the model is “not the actual cause of the disparate impact alleged.” But showing that the express inputs used in the model are not close proxies for protected characteristics does not mean that the model is incapable of discriminatory outcomes. HUD’s inclusion of this defense shows that the agency doesn’t actually understand how machine learning works.

The second defense a defendant could raise under HUD’s proposed rules has a similar flaw. This defense shields a housing provider, bank, or insurance company if a neutral third party analyzed the model in question and determined—just as in the first defense—that the model’s inputs are not close proxies for protected characteristics and that the model is predictive of credit risk or another valid objective. This has the very same problem as the first defense: proving that the express inputs used in an algorithm are not close proxies for one of the protected characteristics—even when analyzed by a “qualified expert”—does not mean that the model itself is incapable of having a discriminatory impact.

The third defense a defendant could raise under the proposed rules is that a third party created the algorithm. This situation will apply in many cases, as most defendants—i.e., the landlord, bank, or insurance company—will use a model created by someone else. This defense would protect them even if an algorithm they used had a demonstrably discriminatory impact—and even if they knew it was having such an impact. There are several problems with this affirmative defense. For one, it removes any incentive for landlords, banks, and insurance companies to make sure that the algorithms they choose to use do not have discriminatory impacts—or to put pressure on those who make the models to work actively to avoid discriminatory outcomes. Research has shown that some of the models being used in this space discriminate on the basis of protected classes, like race. One recent study of algorithmic discrimination in mortgage rates, for example, found that Black and Latinx borrowers who applied online paid around 5.3 basis points more in interest when purchasing homes than similarly situated non-minority borrowers.
Given this pervasive discrimination, we need to create more incentives to address and root out systemic discrimination embedded in mortgage and risk assessment algorithms, not eliminate the incentives that already exist.

In addition, it is unclear whether aggrieved parties can get relief under the Fair Housing Act by suing the creator of the algorithm instead, as HUD suggests in its proposal. In disparate impact cases, plaintiffs are required under law to point to a specific policy and show, usually with statistical evidence, how that policy results in a discriminatory effect. In a case decided earlier this year, a federal judge in Connecticut held that a third-party screening company could be held liable for a criminal history screening tool that was relied upon by a landlord and led to discriminatory outcomes. However, disparate impact case law around third-party algorithm creators is sparse. If HUD’s proposed rules are implemented, courts will first have to decide whether third-party algorithm creators can be held liable under the Fair Housing Act for disparate impact discrimination before they can even reach the merits of a case.

Even if a plaintiff were able to bring a lawsuit against the creator of an algorithmic model, the model maker would likely attempt to rely on trade secrets law to resist disclosing any information about how its algorithm was designed or functioned. The likely result would be that plaintiffs and their legal teams would only be allowed to inspect and criticize these systems subject to a nondisclosure order, meaning that it would be difficult to share information about their flaws and marshal public pressure to change the ways the algorithms work. Many of these algorithms are black boxes, and their creators want to keep it that way. That’s part of why it’s so important for plaintiffs to be able to sue the landlord, bank, or insurance company implementing the model: to ensure that these entities have an incentive to stop using algorithmic models with discriminatory effects, even if the model maker may try to hide behind trade secrets law to avoid disclosing how the algorithm in question operates. If HUD’s third-party defense is adopted, the public will effectively be walled off from information about how and why algorithmic models are producing discriminatory outcomes—both from the entity that implemented the model and from the creator of the model. Algorithms that affect our rights should be well-known, well-understood, and subject to robust scrutiny, not secretive and proprietary.

HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that’s exactly what the proposal would do. HUD says it just wants to make it easier for companies to make “practical business choices and profit-related decisions.” But these three complete defenses would make it all but impossible to enforce the Fair Housing Act against any party that uses algorithmic models going forward. Today, a defendant’s use of an algorithmic model in a disparate impact case is considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That’s exactly how it should work. HUD’s proposed affirmative defenses are dangerous, inconsistent with how machine learning actually works, and would upend enforcement of the Fair Housing Act going forward.

What is EFF Doing, and What Can You Do?
HUD is currently accepting comments on its proposed rules, due October 18, 2019. EFF will be submitting comments opposing HUD’s proposal and urging the agency to drop these misguided and dangerous affirmative defenses. We hope other groups make their voices heard, too.

Nigeria Misuses Overbroad Cyberstalking Law: Levels Charges Against Political Protester Sowore (Thu, 26 Sep 2019)
EFF has long been concerned that—unless carefully drafted and limited—cyberstalking laws can be misused to criminalize political speech. In fact, earlier this year we celebrated a federal court decision in Washington State in the United States that tossed out an overbroad cyberstalking law. In that case, the law had been used to silence a protester who used strong language and persistence in criticizing a public official. EFF filed an amicus brief cautioning that such laws could be easily misused, and the court agreed with us.

Now the problem has occurred in a high-profile political case in Nigeria. Just this week the Nigerian government formally filed “cyberstalking” charges against Omoyele Sowore, a longtime political activist and publisher of the respected Sahara Reporters online news agency. Sowore had organized political protests in Nigeria under the hashtag #RevolutionNow and conducted media interviews in support of his protest. He was detained along with another organizer between early August and late September before being granted bail. He reports that he has been beaten and denied access to his family and, for a while, denied access to an attorney.

The charges make clear that this prosecution is a misuse of the overbroad cyberstalking statute, passed in 2015. They state that Sowore committed cyberstalking by: “knowingly sent messages by means of press interview granted on 'arise Television' network which you knew to be false for the purpose of causing insult, enmity, hatred and ill-will on the person of the President of the Federal Republic of Nigeria.” That’s it. The prosecution claims that you can “cyberstalk” the President by going on TV and saying allegedly false things about him with a goal of causing “insult” or “ill-will.” This is obviously a misuse of the law and flatly inconsistent with freedom of expression under both Nigerian and international law. The President of Nigeria is a public figure, and criticisms of his policies should be strongly protected. Instead, this prosecution appears to be a textbook case of a poorly drafted law being misused for political purposes.

Similar problems exist with the charge of “treason,” which is also based solely on Sowore’s protest activities and the use of the “#RevolutionNow” slogan. There appears to be a similar political agenda behind the final charges for “financial crimes,” based on Sowore allegedly moving funds between his organization's own bank accounts.

Freedom of expression is a cherished, internationally recognized human right. Nigeria is party to the International Covenant on Civil and Political Rights, and additionally recognizes the right to free expression in its 1999 Constitution under section 39(1). Yet on its face, Nigeria’s constitution (section 45.1) also allows many exceptions to freedom of expression that could essentially eviscerate the right unless carefully interpreted. It’s up to the courts and the prosecutors to protect freedom of expression and interpret any exceptions narrowly and carefully, and up to the legislature not to pass laws that can be so easily misused. We hope that the judges and prosecutors of Nigeria recognize the problem in applying this cyberstalking law to prosecute a political activist. Nigeria has a long and proud tradition of peaceful but powerful political protest. Such protests are key to a functioning democracy.
Protecting core and longstanding human rights such as freedom of expression, especially when that expression is aimed at convincing the public on a political matter, is the obligation of a modern government. If Nigeria is to uphold its international human rights obligations as well as its own traditions, these charges against Sowore and his co-defendant should be dropped immediately.

Related Cases: Washington State Cyberstalking Law

Carnegie Experts Should Know: Defending Encryption Isn't an "Absolutist" Position (Wed, 25 Sep 2019)
In the digital world, strong encryption is how private conversations stay private. It’s also what keeps our devices secure. Encryption is under a new set of attacks by law enforcement, who continue to seek a magic bullet—a technological backdoor that could circumvent encryption, but somehow not endanger privacy and security more broadly. But that circle can’t be squared, and at this point, the FBI and DOJ know that. That’s why, as the government has pushed forward with this narrative, it has increasingly relied on false claims.

Now, a group of prominent academics and policy makers has signed on to a deeply misguided report that attempts to re-frame the debate along the lines that law enforcement agencies have long urged. The paper is the work of a small group convened by the Carnegie Endowment for International Peace, which claims to seek a more “pragmatic and constructive” debate about the “challenges” of encryption. Unfortunately, the report begins with the premise that the “problem” to be solved is that law enforcement agencies sometimes can’t access encrypted devices, then suggests those who disagree with the premise hold “absolutist” positions. It goes on to endorse a version of the discredited “key escrow” scheme that, as we have explained before, just won’t work.

The Carnegie report seeks to differentiate itself from earlier discussions by narrowing areas of disagreement between law enforcement and privacy advocates, seeking to break down the issues into their “component parts.” That’s not a bad idea in itself. But in this case, the separation of the various components ends up just being a way to limit the areas of damage to encryption, focusing on data at rest on a mobile phone. And the report limits this intervention to the strategy it deems most palatable to those with privacy concerns: a system in which phones have a decryption key, specific to that phone. Once police fulfill proper legal process, such as getting a warrant, they’ll get access to the key on the device. Presumably, that will happen via a separate key held by the company that created the device, or another external agent (the report says only that the key will be “held securely”). But building new ways to break into encrypted devices—also known as backdoors—is just a bad idea. Narrowing down the situations and methods under which it takes place doesn’t change that fundamental calculation.

Breaking Encryption Hurts Privacy and Security

As we said when the National Academy of Sciences published a paper on this topic last year, there’s no substitute for strong encryption. If an additional decryption key exists, it can and will be misused. Putting it in the hands of the company that created the phone, and insisting on proper legal procedure, is no guarantee against misuse. Nor would it prevent an attack by an outside actor—a criminal who stole the keys, a rogue government agent who subverted legal process, or an insider at the key-holding company who abuses their access for personal interests. Maintaining strong encryption—in which only the intended recipient of a message can see the message—isn’t an extreme or “absolutist” position. It’s a position that privacy- and security-enhancing technology should work properly, and shouldn’t be broken by design. It’s hard to search for “middle ground” in the debate when middle ground is, by definition, a security flaw.
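Here is a deliberately simplified sketch of that kind of scheme in Python, using the widely available cryptography library. It does not reflect any vendor's actual design, and the names and data are hypothetical; it only shows the structural weakness of an escrowed copy of a device key: whoever obtains the escrow key can unwrap everything.

# Hypothetical sketch of a "key escrow" design, not any real product:
# the phone's data key is also wrapped with an escrow public key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

device_key = Fernet.generate_key()  # protects the data on one phone
phone_data = Fernet(device_key).encrypt(b"private messages, photos, ...")

escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = escrow_public.encrypt(device_key, oaep)  # escrowed copy

# The escrowed copy is the weak point: anyone who obtains escrow_private,
# whether an insider, a thief, or a rogue official, can unwrap every
# device key it protects and read the data.
recovered = escrow_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered).decrypt(phone_data))

The security of every phone in such a system depends on a key that is not in the user's hands, which is exactly the single point of failure described above.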
Second, it’s not just U.S. government agencies that are interested in gaining access to mobile phones. Other governments, including repressive governments, will insist on having similar systems of access for their own police.

We can’t deny that in certain cases, providing exceptional access to law enforcement will provide helpful evidence. But constantly calling encryption a “challenge” to criminal investigations is a circular and disingenuous argument. It’s not much different than the “challenge” to law enforcement presented by any unrecorded, face-to-face conversation between two human beings. On this basis, any human interaction that is not overseen and recorded for law enforcement could be cited as an investigative “challenge.” Privacy does present challenges, but it’s indispensable to our lives. Without privacy, we won’t have the free expression and free debate we need for democracy to thrive.

Moving Beyond Breaking Phones

The FBI and DOJ have spent years arguing to the American people that they should have access to plaintext of every digital conversation that crosses our devices. But that ignores the many other techniques that make it possible to investigate, and draw conclusions about, what has happened in the past—including simple interviews that rely on memory. One of the reasons for the Carnegie working group report’s narrow focus is, in fact, the astonishing amount of data police currently have access to. For instance, cloud services are excluded from consideration, dismissed as “a less worrisome area than encrypted phones or encrypted messaging.” The paper rightly points out that the prevalence of cloud data is already “a tool and source of data for law enforcement.” Even if more cloud data becomes encrypted—as EFF has urged—the adoption of Internet-connected devices will continue to generate data that’s accessible to law enforcement.

The paper also strategically leaves aside other methods of access, such as forced software updates. The authors correctly note that if software updates are mechanisms of access for law enforcement, consumers could lose trust in those updates. And the report acknowledges that this could be even more pronounced in vulnerable communities, citing “minority groups who fear law enforcement targeting.” These trust problems are real, and the “key escrow” system that the authors propose does not magically avoid them.

In the end, we’re disappointed that this thoughtful group chose to examine encryption solely as a “challenge” to police seeking a form of special access. We shouldn’t lose sight of the huge benefits that secure, private encryption provides us all.

How to Make Sure the Tech You Use and Build Reflects Your Values (Tue, 24 Sep 2019)
This article originally appeared in Mozilla's Internet Citizen blog.

Technology should empower you. It should put you in control. You should not feel used by the company that provides it to you. And if you’re a builder of technologies, we believe you should always carry the responsibility to empower your users. Ultimately you should be able to say that you are proud of what you built. But when we regularly see headlines about how our phone company might have sold our location to a stalker, or how Slack is retaining all of our private messages, or how Amazon, Vigilant Solutions, and Palantir are each individually working to provide data to ICE, it’s hard to feel like we’re in control of the technologies we use or build, much less that we have any power to change what is happening in front of us. It’s even harder to think that we have a voice when we hear of companies selling surveillance technologies to governments for use in human rights abuses abroad, whether it’s Cisco selling tools custom-built to help China target minorities, or FinFisher selling spyware to the government of Ethiopia, or NSO Group selling technology to Saudi Arabia that was used to target a U.S.-based journalist.

If you worry about the impact of the tools or services you are building, now is the time to get together with your coworkers to start lobbying for change. That’s why I was so happy to participate in Firefox’s IRL podcast about the Tech Worker Resistance. The podcast featured tech workers who have taken a stand, and who have started creating real change in major companies. Those actions can point the way for the rest of us, especially those with tech skills. As the podcast notes, there are measures we can all take—as employees, contractors, and customers—to help push companies toward becoming far better stewards for the powerful technologies they offer to the world.

The Influence of Tech Workers: Resistance is Not Futile

First, for tech workers: We all know that major technology companies, especially Google, Amazon, Microsoft, Apple, and Facebook, don’t compete directly enough in their services. Their dominant technology silos, ubiquitous networks, and gargantuan hoards of data are creating high barriers to true competition. These giants have become overconfident that they can change the rules on users—removing more and more power from them—without losing their profits or their market share. But there is an area where they do sharply compete: for tech talent. Top talent can make or break a company, and firms work hard to beat out each other in recruiting staff. Once hired, they invest significant time and expense toward keeping the best workers from jumping ship. So if you are working somewhere and you worry about the impact of the tools or services you are building, now is the time to get together with your coworkers to start lobbying for change. Demand that the company that makes its money off of your labors ensures that those labors don’t enable repression at home or around the world.

For many workers, worries rightly center around misuse of surveillance technology by government purchasers or larger users of those technologies, whether by ICE, local law enforcement, or governments around the world. In those situations, one thing employees and customers can do is insist that tech companies selling to potential abusers adopt a robust Know Your Customer program, to make sure that they aren’t selling tools that are being used to repress people or populations.
This is such a reasonable idea that even the U.S. State Department has just issued draft voluntary guidance along these lines for the export of surveillance equipment. Requirements in Know Your Customer programs are based upon those that companies already have to follow in export control and anti-bribery contexts, fundamentally adding the impact on human rights to the things they have to take into account when they sell a product or service. Under the framework, companies providing technologies or technical services either directly or indirectly to governments—especially the kinds of technologies that require ongoing support and upgrades—should investigate who is buying and using their technologies. If there are credible concerns that the products and services will be (or have been) used to facilitate human rights abuses, companies should work to engineer their systems to be resistant to abuse. If they cannot, they should refrain from participating in the business transaction. Of course, this framework is not a panacea—it requires a real commitment and ongoing vigilance to move companies from doing nothing, to issuing good-sounding public statements, to creating actual ongoing accountability. But a Know Your Customer strategy provides a way to move from the moment of protest to sustained better corporate behavior.

The Influence of Tech Users: Strength in Numbers

But what if you aren’t an employee? Customers still have a great deal of leverage of their own. Boycotts, or simply deciding that you won’t use technology that doesn’t reflect your values, can be a strong strategy in some situations, but often it is not enough to spark fundamental change. What else can you do to solve the problem for yourself and others? You can turn yourself from a consumer into a participating co-creator of the tech future you want. You can do this by choosing, supporting, and even helping to build free and open source alternatives.

Another way to advocate for more ethical technology is for Internet users to band together to change laws or public policies. At EFF, we have an activism team that works to create campaigns that can help you tell lawmakers across the country what you think. Right now, you can urge Massachusetts to hit the pause button on face surveillance, tell Congress to end NSA spying on your telephone call detail records, or instruct the Senate to restore true net neutrality. Politicians are particularly interested in technology companies right now, and you can help them focus their energy on approaches that will both help prevent and create accountability for misuse of technologies to facilitate repression both at home and abroad.

It’s easy to feel frustrated and resigned about the state of the technology industry today, whether you work in technology or not. But it would be a terrible mistake to give up and let a few powerful companies run roughshod over our values. If we are willing to put in the hard work today, we can have the future we want, full of exciting new technology and software and services that enhance our lives and empower us—and also make the world a better place.

Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online (Fri, 20 Sep 2019)
Internet Companies Must Adopt Consistent Rules and Transparent Moderation Practices

Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don’t over-censor innocent people in the process? Again and again, social media companies are unable or unwilling to answer these questions.

A recent Senate Commerce Committee hearing regarding violent extremism online illustrated this problem. Representatives from Google, Facebook, and Twitter each made claims about their companies’ efficacy at finding and removing terrorist content, but offered very little real transparency into their moderation processes. Facebook Head of Global Policy Management Monika Bickert claimed that more than 99% of terrorist content posted on Facebook is deleted by the platform’s automated tools, but the company has consistently failed to say how it determines what constitutes a terrorist—or what types of speech constitute terrorist speech. This isn’t new. When it comes to extremist content, companies have been keeping users in the dark for years.

EFF recently published a paper outlining the unintended consequences of this opaque approach to screening extremist content: measures intended to curb extremist speech online have repeatedly been used to censor those attempting to document human rights abuses. For example, YouTube regularly removes violent videos coming out of Syria—videos that human rights groups say could provide essential evidence for future war crimes tribunals. In his testimony to the Commerce Committee, Google Director of Information Policy Derek Slater mentioned that more than 80% of the videos the company deletes using its automated tools are removed before a single person views them, but he didn’t discuss what happens when the company takes down a benign video.

Unclear rules are just part of the problem. Hostile state actors have learned how to take advantage of platforms’ opaque enforcement measures in order to silence their enemies. For example, Kurdish activists have alleged that Facebook cooperates with the Turkish government’s efforts to stifle dissent. It’s essential that platforms consider the ways in which their enforcement measures can be exploited as tools of government censorship.

That’s why EFF and several other human rights organizations and experts have crafted and endorsed the Santa Clara Principles, a simple set of guidelines that social media companies should follow when they remove their users’ speech. The Principles say that platforms should: provide transparent data about how many posts and accounts they remove; give notice to users who’ve had something removed about what was removed, and under what rules; and give those users a meaningful opportunity to appeal the decision. While Facebook, Google, and Twitter have all publicly endorsed the Santa Clara Principles, they all have a long way to go before they fully live up to them. Until then, their opaque policies and inconsistent enforcement measures will lead to innocent people being silenced—especially those whose voices we need most in the fight against violent extremism.

Facebook's Social Media Council Leaves Key Questions Unanswered (Fri, 20 Sep 2019)
Facebook took a big step forward this week in its march to create an "oversight board" to help vet its more controversial takedown decisions, publishing more details about how it will work. Both Facebook and its users will be able to refer cases to the Board to request its review. Is this big step a big deal for online speech? Maybe not, but it's worth paying attention to.

A handful of tech companies govern a vast amount of speech online, including the platforms we use to get our news, form social bonds, and share our perspectives. That governance means, in practice, making choices about what users can say, and to whom. Too often—on their own or under pressure—the speech police make bad choices, frequently at the expense of people who already struggle to make their voices heard and who are underrepresented in the leadership of these companies. EFF has proposed a few ways to improve the way speech is governed online, ever-vigilant to the fact that your freedoms can be threatened by governments, corporations, or other private actors like online mobs. We must ensure that any proposed solution to one of those threats does not make the others even worse. We have six areas of concern when it comes to this kind of social media council, which we laid out earlier this year in response to a broader proposal spearheaded largely by our friends at Article 19. How does Facebook's version stack up?

Independence: A subgroup of board members will be initially selected by Facebook, and they will then work with Facebook to recruit the rest (the goal is to ultimately have 40 members on the Board). Thus, Facebook will have a strong influence on the makeup of the first Board. In addition, the Board is funded through a "Trust," appointed and paid for by Facebook. This structure provides a layer of formal independence, but as a practical matter Facebook could maintain a great deal of control through its power to appoint trustees.

Roles: Some have argued that an oversight board should be able to shape a platform's community standards. Because such a board might have no more legitimacy to govern speech than the company itself, we have worried that it should not be given the power to dictate new rules under the guise of independence. So we think an advisory role is more appropriate, particularly given that the Board is supposed to adhere to international human rights principles.

Subject matter: The Oversight Board is to interpret Facebook's policies, which will hopefully improve consistency and transparency, and may suggest improvements to the rules governing speech on Facebook. We hope that it presses Facebook to improve its policies, just as a wide range of advocates do (and should continue to do).

Jurisdiction: One of the problems with corporate speech controls is that rules and expectations can vary by region. The Facebook proposal suggests that a panel for a given case will include someone from the relevant "region," but it is unclear how a group of eleven to forty Board members can adequately represent the diverse viewpoints of Facebook's global userbase.

Personnel: As noted, the composition of the Board remains an unknown. Facebook has said it will strive for a broad diversity of geographic, gender, political, social, and religious representation and perspectives.

Transparency: It will certainly be a step forward if the Board's public opinions give us more insight into the rules that are supposed to govern speech at Facebook (the actual, detailed rules used internally, not the general rules made public on Facebook.com).
We would like to see more information about what kinds of cases are being heard and how many requests for review the Board receives. New America has a good set of specific transparency suggestions.

In short, Facebook's proposal could improve the status quo. The transparency of the Board's decisions means that we will likely know more than ever about how Facebook is making decisions. It remains to be seen, though, whether Facebook can be consistent in its application of the rules going forward, how "independent" the Oversight Board can be, who will make up the Board, and whether it will take the necessary steps to understand local and subcultural norms. We and other advocates will continue to press Facebook to improve the transparency and consistency of its procedures for policing speech on its platform, as well as the substance of its rules. We hope the Oversight Board will be a mechanism to support those reforms and push Facebook toward better respect for human rights.

What it won't do, however, is fix the real underlying problem: content moderation is extremely difficult to get right, and at the scale at which Facebook is operating, it may be impossible for one set of rules to properly govern the many communities that rely on the platform. As with any system of censorship, mistakes are inevitable. And although the ability to appeal is an important measure of harm reduction, it's no substitute for having fair policies in place and adhering to them in the first place.

EFF to Observe at United Nations General Assembly Leaders' Week Event (Thu, 19 Sep 2019)
EFF has joined the advisory committee of the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online and will be represented at meetings near the United Nations General Assembly early next week. We have been involved in the process since May, when the government of New Zealand convened more than forty civil society actors in Paris for an honest discussion of the Call’s goals and drawbacks. We are grateful to New Zealand’s government for working toward greater inclusion of civil society in the conversation around what to do about violent extremism. But we remain concerned that some of the governments and corporations involved seek to rid the internet of terrorist content regardless of the human cost. As a paper we released this summer in conjunction with Witness and Syrian Archive demonstrates, that cost is very real.

At the moment, companies are scrambling to respond to demands to remove extremist content from their platforms. In doing so, however, they risk removing other expression. That includes videos that might be used as evidence in war crimes tribunals; speech from opposition groups that share key identifiers with US-designated terrorist organizations; and, in some cases, benign imagery that happens to contain a banned symbol in the background. While companies have the right to remove extremist content, they must be transparent about their rules and what they remove, and offer users an opportunity to appeal decisions.

Our involvement in the Christchurch Call advisory committee is just one of several ways in which we’re engaged with this topic. We have also been observing the deliberations in the EU over the so-called terrorism regulation, and are watching the debate closely in the US as well. We will continue our research into the impact of extremist speech regulations on human rights. We have also spoken recently on the topic, at the Chaos Communications Camp in Germany, and will be speaking again soon at NetHui in Wellington, New Zealand.

Hearing Friday: Plaintiffs Challenging FOSTA Ask Court to Reinstate Lawsuit Seeking To Block Its Enforcement (Wed, 18 Sep 2019)
Risk of Prosecution Has Caused Groups to Self-Censor, Platforms to Shut Out Legal Services

Washington D.C.—On Friday, Sept. 20, at 9:30 am, attorneys for five plaintiffs suing the government to block enforcement of FOSTA will ask a federal appeals court to reverse a judge’s decision to dismiss the case. The plaintiffs—Woodhull Freedom Foundation, the Internet Archive, Human Rights Watch, and individuals Alex Andrews and Eric Koszyk—contend that FOSTA, a federal law passed in 2018 that expansively criminalizes online speech related to sex work and removes important protections for online intermediaries, violates their First Amendment rights. Electronic Frontier Foundation (EFF) is counsel for the plaintiffs along with co-counsel Davis Wright Tremaine LLP, Walters Law Group, and Daphne Keller.

FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, makes it a felony to use or operate an online service with the intent to “promote or facilitate the prostitution of another person,” vague terms with wide-ranging meanings that can include speech that makes sex work easier in any way. FOSTA also expanded the scope of other federal laws on sex trafficking to include online speech, and reduced statutory immunities previously provided under Section 230 of the Communications Decency Act. The plaintiffs sued to block enforcement of the law because its overbroad language sweeps up Internet speech about sex, sex workers, and sexual freedom, including harm reduction information and speech advocating decriminalization of prostitution.

A federal judge dismissed the case, ruling that the plaintiffs lacked “standing” because they failed to prove a credible threat that they would be prosecuted for violating FOSTA. Because the court dismissed the case on procedural grounds, it did not rule on whether FOSTA is constitutional. Attorney Robert Corn-Revere, counsel for the plaintiffs, will argue at a hearing on Sept. 20 that the plaintiffs don’t have to wait until they face prosecution before challenging a law regulating speech when, as here, the vague and overbroad prohibitions of the law are causing numerous speakers to censor themselves and their users. FOSTA specifically authorized enforcement by state prosecutors and private litigants, vastly increasing the risk of being sued under the statute and greatly exacerbating the speech-chilling effects of the law. FOSTA has also reportedly generated increased risks for sex workers and frustrated law enforcement efforts to investigate trafficking.

WHAT: Oral argument in Woodhull Freedom Foundation v. U.S.
WHO: Robert Corn-Revere of Davis Wright Tremaine LLP
WHEN: Friday, Sept. 20, at 9:30 am
WHERE: E. Barrett Prettyman U.S. Courthouse and William B. Bryant Annex, Courtroom 31, 333 Constitution Avenue, NW, Washington, DC 20001

For more on this case:
https://www.eff.org/cases/woodhull-freedom-foundation-et-al-v-united-states
https://www.woodhullfoundation.org/our-work/fosta/

For more on FOSTA:
https://www.eff.org/deeplinks/2018/03/how-congress-censored-internet

Contact: David Greene, Civil Liberties Director, davidg@eff.org

Thanks For Helping Us Defend the California Consumer Privacy Act (Wed, 18 Sep 2019)
The California Consumer Privacy Act will go into effect on January 1, 2020—having fended off a year of targeted efforts by technology giants who wanted to gut the law. Most recently, industry tried to weaken its important privacy protections in the last days of the legislative session. Californians made history last year when, after 600,000 people signed petitions in support of a ballot initiative, the California State Legislature answered their constituents’ call for a new data privacy law.

It’s been a long fight to defend the CCPA against a raft of amendments that would have weakened this law and the protections it enshrines for Californians. Big technology companies backed a number of bills that each would have weakened the CCPA’s protections; taken together, this package would have significantly undermined this historic law. Fortunately, the worst provisions of these bills did not make it through the legislature—though it wasn’t for lack of trying. Lawmakers proposed bills that would have opened up loopholes in the law and made it easier for businesses to skirt privacy protections if they shared information with governments, changed definitions in the bill to broaden its exemptions, and made it easier for businesses to require customers to pay for their privacy rights. These bills sailed through the Assembly but were stopped in July by the Senate Judiciary Committee, chaired by Senator Hannah-Beth Jackson. The final amendments to the CCPA that passed through the legislature last week make small changes to the law, and do not weaken its important protections.

We want to thank everyone who called or wrote to their lawmakers to protect the CCPA this year and amplified how important data privacy is to the people of California. Your voices are invaluable to our advocacy. We also appreciate the time that lawmakers, our coalition partners, and other stakeholders devoted to discussions about these amendments. As a result of this hard work, the California State Legislature stood up for the privacy law that it passed last year.

Still, while the CCPA is important for Californians’ consumer data privacy, it needs to be stronger. EFF and other privacy organizations earlier this year advanced two bills to strengthen the CCPA, which met significant opposition from technology industry trade association groups. Most importantly, these bills would have improved enforcement by allowing consumers to bring their own privacy claims to court. We particularly thank Assemblymember Buffy Wicks, Sen. Jackson, and the California Attorney General’s Office for leading the charge to improve the CCPA in the legislature.

More than anything, this year’s CCPA fight shows that when voters speak up for their privacy, it makes a big difference with legislators. We look forward to continuing to work with legislators and our coalition partners to advance measures that improve everyone’s privacy. We also look forward to offering input on the Attorney General’s regulations for the CCPA, expected this fall. And as technology trade groups redouble their efforts to weaken state privacy laws or override them with a national law, we encourage everyone to keep pushing for strong consumer data privacy laws across the country.

Big Tech’s Disingenuous Push for a Federal Privacy Law (Wed, 18 Sep 2019)
This week, the Internet Association launched a campaign asking the federal government to pass a new privacy law. The Internet Association (IA) is a trade group funded by some of the largest tech companies in the world, including Google, Microsoft, Facebook, Amazon, and Uber. Many of its members keep their lights on by tracking users and monetizing their personal data. So why do they want a federal consumer privacy law? Surprise! It’s not to protect your privacy. Rather, this campaign is a disingenuous ploy to undermine real progress on privacy being made around the country at the state level. IA member companies want to establish a national “privacy law” that undoes stronger state laws and lets them continue business as usual. Lawyers call this “preemption.” IA calls this “a unified, national standard” to avoid “a patchwork of state laws.” We call this a big step backwards for all of our privacy. The question we should be asking is, “What are they afraid of?”

Stronger state laws

After years of privacy scandals, Americans across the political spectrum want better consumer privacy protections. So far, Congress has failed to act, but states have taken matters into their own hands. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, makes it illegal to collect biometric data from Illinois citizens without their express, informed, opt-in consent. Vermont requires data brokers to register with the state and report on their activities. And the California Consumer Privacy Act (CCPA), passed in 2018, gives users the right to access their personal data and opt out of its sale. In state legislatures across the country, consumer privacy bills are gaining momentum.

This terrifies big tech companies. Last quarter alone, the IA spent nearly $176,000 lobbying the California legislature, largely to weaken the CCPA before it takes effect in January 2020. Thanks to the efforts of a coalition of privacy advocates, including EFF, that push failed. The IA and its allies are losing the fight against state privacy laws. So, after years of fighting any kind of privacy legislation, they’re now looking to the federal government to save them from the states. The IA has joined TechNet, a group of tech CEOs, and Business Roundtable, another industry lobbying organization, in calls for a weak national “privacy” law that will preempt stronger state laws. In other words, they want to roll back all the progress states like California have made, and prevent other states from protecting consumers in the future. We must not allow them to succeed.

A private right of action

Laws with a private right of action allow ordinary people to sue companies when they break the law. This is essential to make sure the law is properly enforced. Without a private right of action, it’s up to regulators like the Federal Trade Commission or the U.S. Department of Justice to go after misbehaving companies. Even in the best of times, regulatory bodies often don’t have the resources needed to police a multi-trillion dollar industry. And regulators can fall prey to regulatory capture. If all the power of enforcement is left in the hands of a single group, an industry can lobby the government to fill that group with its own people. Federal Communications Commission chair Ajit Pai is a former Verizon lawyer, and he’s overseen massive deregulation of the telecom industry his office is supposed to keep in check. The strongest state privacy laws include private rights of action.
Illinois BIPA allows users whose biometric data is illegally collected or handled to sue the companies responsible. And CCPA lets users sue when a company’s negligence results in a breach of personal information. The IA wants to erase these laws and reduce the penalties its member companies can face for their misconduct in legal proceedings brought by ordinary consumers.

Real changes to the surveillance business model

We don’t know what the IA’s final legislative proposal will say, but its campaign website is thick with weasel words and equivocation. For example, the section on “Controls” says:

Individuals should have meaningful controls over how personal information they provide to companies is collected, used, and shared, except where that information is necessary for the basic operation of the business[.]

The “basic operation” of data brokers involves collecting and selling personal data without your consent. Does that mean you shouldn’t be able to stop them? The rest of IA’s proposals follow the same pattern. The section on “transparency” says that users should be able to know the “categories of entities” that their data is shared with, but not the names of actual companies or people that receive it. This will make it unnecessarily difficult for people to trace how their personal information is bought and sold. The section on “access” says that users’ ability to access their data should not “unreasonably interfere with a company’s business operations.” Again, if a business depends on gathering data about people without their knowledge, will users ever be able to access their information? Sometimes, exercising your privacy rights will mean “interfering” with a company’s business.

The bottom line is that tech companies are happy for Congress to enact a privacy law—as long as it doesn’t affect their “business operations” in any way. In other words, they’d like a privacy law that doesn’t change anything at all. The Internet Association knows which way the wind is blowing. Across the country, people are fed up with Big Tech’s empty promises and serial mishandling of personal data. They want real change, and state legislatures are listening. We must allow states to continue passing innovative new privacy laws. Any federal privacy legislation needs to build a floor, not a ceiling.

Facebook Must Better Limit Its Face Surveillance (Sat, 14 Sep 2019)
Last week, Facebook started sending a small portion of its users a new notification about its face surveillance program, which concludes with two important buttons: “keep off” and “turn on.” This is a step in the right direction: for these users, the default will be no face surveillance, unless the user gives their affirmative opt-in consent. But as EFF recently explained, Facebook will not provide this privacy-protective default to billions of its current users, and it is unclear whether the company will provide it to its new users. Facebook should not subject any of its current or new users to face surveillance, absent their informed opt-in consent. We have two additional objections. First, Facebook’s announcement of this new program fails to mention that the company is acting under FTC compulsion. Second, the notice Facebook is sending to some of its users lacks critical information about the privacy hazards of face surveillance, so people who opt in will not be fully informed.

The FTC Required Facebook to Change Its Face Surveillance Settings

On July 24, 2019, the Federal Trade Commission (FTC) filed a complaint in court against Facebook for violating a 2012 FTC privacy order against the company. Much of this FTC complaint concerns Facebook’s role in the Cambridge Analytica scandal. But the FTC also alleges that, in 2018, Facebook misled 60 million of its users by telling them that the company would not subject them to face surveillance unless they chose to “turn on” the feature. In fact, the feature was on by default. According to the FTC, Facebook made this misleading statement to only some of its users: those the company had not yet moved from its original face surveillance program (which Facebook calls “tag suggestions”) to its current face surveillance program (which the company calls “face recognition”). Also on July 24, the FTC and Facebook filed a proposed order to settle the issues raised by the FTC’s complaint. (EFF at that time objected that this settlement does not solve the problems that led to the Cambridge Analytica scandal.) Part of this FTC settlement requires Facebook, as to its users still using “tag suggestions” at the time of the settlement, to obtain consent before subjecting them to further face surveillance. Thus, the new Facebook program is required by the FTC settlement, though the new Facebook announcement does not mention this.

Facebook’s Incomplete Description of Face Surveillance

The FTC settlement requires Facebook to provide notice, to its remaining “tag suggestions” users, of how Facebook will use and share the “facial recognition templates” of these users. The new notice from Facebook does provide such information. Unfortunately, the FTC did not require Facebook to notify its users of the inherent privacy hazards posed by face surveillance, and Facebook did not do so on its own. As with any kind of personal information, the hazards of corporate collection include theft by outside hackers, misuse by company employees, and seizure by government officials. There also is the risk of “mission creep”—when company leaders seek new ways to profit from old data. Ominously, Facebook has applied to patent face surveillance systems that would link its users’ online profiles to their physical-world activities. Moreover, face templates are a uniquely hazardous form of personal information: most of us cannot hide or change our faces, and the technology that tracks our faces is rapidly improving and proliferating.
In light of this gap in Facebook’s notice, users who opt in to face surveillance might not be doing so on the basis of all the relevant information.

Conclusion

We are pleased that the FTC required Facebook to individually notify some of its users about how the company uses and shares face recognition templates, and forbade the company from applying face surveillance to these users unless they affirmatively opt in. As we explained in our last post, however, we are disappointed that the FTC did not require Facebook to obtain consent before subjecting any of its users to face surveillance. And as we explain in this post, we are also disappointed that Facebook’s notice fails to identify the privacy hazards of face surveillance. This failure is all the more reason to enact strong consumer data privacy laws.

Don't Let Congress Hand Patent Abusers Their Ultimate Wishlist (Fri, 13 Sep 2019)
Congress is considering a bill that would throw out the best defenses against bad patents. The Senate IP Subcommittee recently had a hearing about the Stronger Patents Act, a batch of recurring terrible ideas that has been introduced by Sen. Chris Coons (D-Del.) for the third time in three years. The Stronger Patents Act would tear apart inter partes review (IPR), a critical tool for challenging bad patents. People who are charged with patent violations shouldn’t have to have millions of dollars in the bank to defend themselves. IPR provides a more cost-effective way of evaluating patents than expensive federal court litigation.

TAKE ACTION: PRESERVE OUR DEFENSES AGAINST PATENT ABUSE

Patent trolls, drug companies, and IP lawyer groups have been attacking IPR for years now, and they’re all big supporters of this bill. Big patent owners have grown so used to gaming the patent system that they’re willing to throw out IPRs, despite the fact that these reviews are clearly in the public interest. IPR allows companies to fight back against patent accusations for a fraction of the cost of district court. It also allows organizations like EFF to challenge bogus patents, as we did when we busted the podcasting patent. If the Stronger Patents Act passes, EFF and our supporters won’t be allowed to file challenges anymore.

Taking a second look at patents is in the public interest. In the seven years IPRs have been active, the specialized judges at the Patent Office have thrown out more than 1,500 patents that never should have been issued in the first place. Many of those are, unsurprisingly, software patents. The U.S. Patent Office often issues patents it shouldn’t have, particularly in areas like software, where examiners don’t always have access to the most relevant prior art. The office is funded by the fees paid by patent applicants. PTO examiners spend an average of about 18 hours per application, and that leads to wrongly issued patents. Too often, weak patents get used to threaten small businesses—patents that claim things like picture menus, or crowdfunding, or online contests. The IPR process is the best process, so far, for dealing with those improperly issued patents. When IPR was challenged in court, the Supreme Court upheld the process. The public has an important interest in ensuring that patents stay within their proper bounds.

The Stronger Patents Act has another bad provision that would give huge amounts of leverage directly to patent trolls. Under rules laid out by the Supreme Court in 2007, it’s very hard for patent trolls to get court-ordered injunctions that can knock products off the market. The Stronger Patents Act would undo that rule, giving patent trolls leverage to scare massive cash settlements out of companies. In 2006, Blackberry (then called RIM) paid out a $612 million settlement to a patent-assertion entity when it was threatened with an injunction. That money went straight into the hands of some bad actors in the patent world, who used the capital to invest in—what else—more lawsuits against tech firms. The Stronger Patents Act would wreak havoc on a system that’s already tilted in favor of patent holders. Tell Congress to reject this proposal.

TAKE ACTION: PRESERVE OUR DEFENSES AGAINST PATENT ABUSE

Related Cases: Abstract Patent Litigation

EFF's 2019 Pioneer Awards Winner Remarks and Speeches (Fri, 13 Sep 2019)
EFF’s annual Pioneer Awards ceremony celebrates individuals and groups who have made outstanding contributions to freedom and innovation on the electronic frontier. On Sept. 12, EFF welcomed keynote speaker Adam Savage, who spoke on the importance of storytelling, scientific exploration, and personal discovery. And each of our honorees had important messages to share with us: legendary science fiction author William Gibson reminded us how early science fiction shaped the world we live in now; the inspiring anti-surveillance group Oakland Privacy showed how we can stand together to make lasting differences in how technology is used in our communities today; and trailblazing tech scholar danah boyd challenged everyone in the tech world to shape a better future.

Opening the ceremony was EFF Executive Director Cindy Cohn, who framed the evening by reminding us that we must articulate what that better future looks like and work to make it happen—because "honestly, we don’t have any other choice." Additionally, she underscored how important it is to recognize our past and move toward a better future. "Even now, especially now, we need hope," she said. "In the end, we cannot build a better world unless we envision it and talk about it." Below are transcripts or prepared remarks of the keynote and award winners' speeches. Audio of the entire ceremony is available here, and individual audio recordings of each speech are below.

Opening Remarks by Cindy Cohn

Audio

Thank you so much, Aaron. I am just delighted to see everyone here tonight and to honor these amazing people. Tonight we take a moment to celebrate our community. But as we begin I want to send a moment out for our friend Chelsea Manning, who is again incarcerated by a vindictive government. Our hearts go out to her and we wish she could be with us here tonight.

On to our awardees. Each of them will have an individual introduction, but I think tonight’s awardees represent a great cross-section of the work that is being done to make our digital world better.

Executive Director Cindy Cohn delivers the opening remarks

First, there’s Dr. danah boyd, who has spent her professional life trying to figure out and reflect back to us the ways in which people, especially young people, are interacting with technologies. That would be enough, but danah has now gone far beyond that to both support and inspire other researchers and build a community thinking about how Data and Society do and should interact. Second, there’s Oakland Privacy, who represent what a supporting, inspiring, grassroots community can accomplish – putting the city of Oakland far ahead of the national conversation on these issues. And finally William Gibson, whose imagination and storytelling have framed our digital world, with both its benefits and its perils. William pioneered the vision that we needed, and he did so before EFF and these awards even existed.

We gather tonight in a time of reckoning and change for our community. It’s one where we desperately need to articulate and push for a better technical world because so many people have lost hope: unable to think of the future as anything but a dystopian hellscape, even as they feel trapped behind their phones or their keyboards. Outside our world, the blush of tech-excitement has given way to a tech-lash that is needed. If not conducted thoughtfully, however, this moment threatens those who most need digital tools to keep themselves safe.
It threatens those who have used and are using the Net to find community, support, and solidarity, and join together to find and implement solutions to many, many problems we see pressing against us all. Politicians of all stripes are angry at those big, brand name tech companies, powerful and unaccountable, but for very different and often sharply contradictory reasons. But as they shoot at Big Tech, we know that the public interest Internet, the marginal voices it has empowered and the innovators that could challenge and reform the current status quo, all sit nearby and stand a great risk of becoming collateral damage. We must not let that happen. So far, we’ve seen that many of the efforts to combat the problems of big tech actually threaten to empower and ossify it. I shed no tears for the big companies, who join John Perry’s weary giants of Flesh and Steel as the unwelcome would-be governors of cyberspace. But if we want to move toward an Internet that works for us, where power is shifted to the users and builders and away from the Wall Street financiers and surveillance capitalists who would turn us into insecure, surveilled rats in a maze, we must step up now more than ever. But there’s a reckoning inside our world too. Recent events have demonstrated the need to take a hard look at the shift from technology being a niche issue led by quirky geeks and outcasts to one of big business, with the attendant money and power and corruption. We also need to look at the frankly horrible treatment that some in tech have wrought: from young girls to aspiring women scientists and technologists to contract and gig workers to people of color both in the U.S. and around the world. We must address our roles and our own blind spots in letting this happen to so many. We must address the ways in which our embrace of the hero-narrative, and a hunger for the fruits of innovation, allowed a world in which being a genius made it OK to be an asshole, or much worse. Those days must be over now, and I say good riddance. But this shift requires work by all of us who believe that technology can be a force for good in the world. It won’t happen automatically and the decisions along the way are not simple. We must do it together. We must stand with the survivors and ensure that, as we do so, we work to bring people of good will and good intentions along with us. Barlow said, echoing Alan Kay, that the way to make a better future is to invent it. And it’s true. But as recent events have unfolded, I think that even he would likely have had to reconsider some of his own role in creating some parts of this world. But I also know that Barlow would have wanted the unvarnished truth, and was always hopeful we would find ways to discover it, and that ultimately that truth would help bring us to a better place. Even now when the tools we built to help us see have given us the clarity to uncover the very worst. When we’ve built systems that let everyone speak, we must accept that those new channels will be filled with the voices of those who have long been silenced, who speak their truth and make us confront their pain. We also know that they are filled with those who want to keep them silenced. Even now, especially now, we need hope. In the end, we cannot build a better world unless we envision it and talk about it. Being here with all of you tonight renews my faith that there are so many good, smart, thoughtful and kind people in this community. 
And we know that there are many more of us out there, outside our community, waiting to come in. We must revel in each other and not let the awful things we’ve heard and seen make us turn away from the truth, or each other. So that’s my challenge to all of you tonight. Even as we’re unflinching in talking about and addressing the problems and harms that our current world has created or encouraged or even just rides alongside, we must also articulate what a better future looks like and work to make it happen. Honestly, we don’t have any other choice. Now, on to the celebration part of the evening. Keynote Speech by Adam Savage Audio I want to start by thanking EFF for asking me to be here and deliver this keynote. I've been a supporter and true believer in your mission since its inception. I was lucky enough to be at your 20th birthday party and party with John Perry Barlow, whose long-distance vision of the promise and perils of the Internet was prescient, to say the least. I'm humbled to be in the room with tonight's award winners, each a hero in their own right. Specifically, Mr. Gibson, if you knew how much your books meant to my early days in San Francisco, they equate to me at 24 first coming here in 1990 and the city that I found when I moved here. And so I want to thank you personally for all the time I've spent and the realities that you have weaved. I wanted to talk tonight about facts and stories. I've had a lot of different jobs and even careers in my life so far. Even in hosting MythBusters for 14 years on Discovery Channel, I spent a lot of time trying to figure out what that job actually was. [Photo: Adam Savage delivers the keynote to the 2019 Pioneer Awards] In the first season, newly divorced and going through the particular insanity that befalls all of the recently divorced, three months into filming, I stopped dating entirely just to hunker down and figure out what this new endeavor of hosting a TV show was asking from me, what I had to contribute to it. And the answer would take me more than a decade. At first, I thought I was there to build stuff and talk about it. And then I realized maybe my job is to concoct entertaining scientific methodologies and execute them and talk about them. And then I thought it was to make something explode in every episode. That may have come from a note from the network. In 2006, I met Neil deGrasse Tyson for the first time and did his podcast, and I was sitting across from him, watching him go, and thinking, "Look at this guy. He is like an arrow pointed towards a goal of illuminating science for people." To use a phrase from Mr. Gibson, "He is vat-grown for this job." He is a science communicator. What a great mission. Wait a minute. I'm a science communicator. What a cool mission. Albeit, I'm a science communicator with only a high school diploma. In 2008, we filmed an episode called Lead Balloon in which we made a 14-foot diameter balloon out of 28 pounds of rolled lead. No explosions. No fire. And when we talked to editorial about this episode, they expected that the cut for the lead balloon portion of the episode would maybe be 15 minutes. The first rough cut of Lead Balloon was 55 minutes long. The final cut was so thrilling and rated really well. And I realized that one of the key things that made this episode great was Jamie's and my enthusiasm. If we were engaged, it turns out, so was the audience. 
And that's when I started wearing more costumes on the show, and it's when Jamie started asking questions that had no myth at all attached to them, like, "Well, if you could put square wheels on a car, how fast would you have to go to get a smooth ride?" It took us two tries. The first try, all four of the brakes fell off the car at the same time, an injury I would have trouble doing if you asked me to do it on purpose. On the second try, the answer was 38 miles an hour. It wasn't until season 11 that I realized the simplicity of my job. Storytelling. We were there to tell a story about the search for a hidden truth, to quote Raymond Chandler. Often, a hidden truth in something absurd. That, in fact, it turns out, was all I had ever done for a living. When I spent several years as a graphic designer, and every designer will tell you this, the final design works not because it has the proper information, but because that information tells a story to the person who's looking at it. Your eye is guided to the right parts of the design at the right time. Instead of using time to tell a story like in a movie, a graphic designer uses space to parcel out the information so our brains can process it. When I was working as a model maker in commercials and films, making spaceships, attaching little details to a ship, we called them greebles. Every single greeble has to have a story attached to it, and that story has to be known by the model maker gluing that greeble to that ship. Otherwise, it won't work aesthetically, because the surface details on the Millennium Falcon tell a very different story than the surface details on the Enterprise. The model maker is required to know that story. Otherwise, the story won't scan. And on MythBusters, the story was one of scientific discovery but also of personal discovery. It was about watching Jamie and Kari, Tory, Grant, and I, and Jessi, and the entire team confront new ideas and new materials, and collaborating and learning what they can do, and seeing what we can learn from them. Stories are what make us human. I think that we invented language in order to tell stories. I think the story is the first mover. We don't prioritize stories enough culturally, in my opinion. Every one of us has been annoyed by the self-proclaimed science geek who simply spits out facts they found on Reddit that day. It is an easy mistake to make, because we are trained in school to think like this. Fields like math and science and geography are most often taught in public schools as monolithic groups of facts to memorize by the test next Tuesday. And when you make people memorize endless math tables or state capitals or the freezing point of elements, you lead them to believe a terrible thing, that facts equal knowledge. But they don't. Knowledge comes from taking facts and putting them in a context with each other. That context is narrative. I have a great example. My high school freshman earth science teacher, Dan Frare, was telling us about glaciers, and he was trying to explain the features you saw in glaciers as they were moving. And he was trying to explain how slowly they moved. And he said to us, "The best way to picture a glacier is it's a river on Quaaludes." It was the '80s. In fact, it was so long ago, I would go to Dan Frare's class at lunchtime, because I didn't have any friends. And I would pepper him with questions about science, and he would sit there and chain smoke in school while grading papers. This is a different time. Wait a second. Where was I? Quaaludes. 
Yes. This is a beautiful way to talk about glaciers because it actually gave me a deep understanding of the physics of a glacier in one sentence. He took facts, and he put them in a story and gave my brain that story for the rest of my life. Having told stories in the service of both art and science, I feel uniquely qualified—and you should know, I feel uniquely qualified for very few things—I feel uniquely qualified to tell you that I've come to understand that far from being at either end of a spectrum of human experience, people often say, "Oh, it's both an art and a science." And what we do when we say that is we place those things in opposition to each other and at a distance from each other. And what I have come to understand is that science and art are simply both ways of telling stories, and for the same reason. We use these stories to figure out the shape of the universe around us. I'm telling you all of this to talk about what I see as the two important missions that the EFF has been fulfilling throughout its tenure. One is, of course, the legal and logistical aspect of their job. Fighting in court, writing amicus briefs, and tirelessly using the tools available to them to help all of us enjoy a safer Internet with proper privacy, autonomy, and genuine dignity. But in addition, in order to wake up the public to the realities of the problem, it's not enough to recount just the facts, ma'am. We have to make compelling arguments for why we need privacy and safe spaces as well as free speech and openness. And in addition to the legal vanguard it occupies, EFF is also always working to help people understand what they are fighting for and how the issues affect them. In order to understand the thing, we need to see our place in and adjacent to it. And this is arguably the most difficult part of their job. Tonight's award winners are here for the fight, and just as much, they are here for the stories, because it is a universal human truth that when we share and listen to each other's stories, the world moves forward in a positive way. We are living through a difficult and critical time. I now truly understand the meaning of the famous curse, "May you live in interesting times." And I am genuinely not sure that we're going to make it out of this. It is the central fact of my current and probably all of our current existence. But if we make it out, and I believe this with my whole heart, if we do make it out, it'll be because we have listened to each other's stories and connected with realities different than ours, than the ones we might occupy, and we have worked hard to let all of those stories be told. I hope we do. Thank you so much to EFF, and thank you for your time. Acceptance Speech by danah boyd — "Facing the Great Reckoning Head-On" Audio I cannot begin to express how honored I am to receive this award. My awe of the Electronic Frontier Foundation dates back to my teenage years. EFF has always inspired me to think deeply about what values should shape the internet. And so I want to talk about values tonight, and what happens when those values are lost, or violated, as we have seen recently in our industry and institutions. But before I begin, I would like to ask you to join me in a moment of silence out of respect to all of those who have been raped, trafficked, harassed, and abused. For those of you who have been there, take this moment to breathe. 
For those who haven’t, take a moment to reflect on how the work that you do has enabled the harm of others, even when you never meant to. <silence> The story of how I got to be standing here is rife with pain and I need to expose part of my story in order to make visible why we need to have a Great Reckoning in the tech industry. This award may be about me, but it’s also not. It should be about all of the women and other minorities who have been excluded from tech by people who thought they were helping. [Photo: danah boyd delivers her acceptance speech] The first blog post I ever wrote was about my own sexual assault. It was 1997 and my audience was two people. I didn’t even know what I was doing would be called blogging. Years later, when many more people started reading my blog, I erased many of those early blog posts because I didn’t want strangers to have to respond to those vulnerable posts. I obfuscated my history to make others more comfortable. I was at the MIT Media Lab from 1999–2002. At the incoming student orientation dinner, an older faculty member sat down next to me. He looked at me and asked if love existed. I raised my eyebrow as he talked about how love was a mirage, but that sex and pleasure were real. That was my introduction to Marvin Minsky and to my new institutional home. My time at the Media Lab was full of contradictions. I have so many positive memories of people and conversations. I can close my eyes and flash back to laughter and late night conversations. But my time there was also excruciating. I couldn’t afford my rent and did some things that still bother me in order to make it all work. I grew numb to the worst parts of the Demo or Die culture. I witnessed so much harassment, so much bullying that it all started to feel normal. Senior leaders told me that “students need to learn their place” and that “we don’t pay you to read, we don’t pay you to think, we pay you to do.” The final straw for me was when I was pressured to work with the Department of Defense to track terrorists in 2002. After leaving the Lab, I channeled my energy into V-Day, an organization best known for producing “The Vagina Monologues,” but whose daily work is focused on ending violence against women and girls. I found solace in helping build online networks of feminists who were trying to help combat sexual assault and a culture of abuse. To this day, I work on issues like trafficking and combating the distribution of images depicting the commercial sexual abuse of minors on social media. By 2003, I was in San Francisco, where I started meeting tech luminaries, people I had admired so deeply from afar. One told me that I was “kinda smart for a chick.” Others propositioned me. But some were really kind and supportive. Joi Ito became a dear friend and mentor. He was that guy who made sure I got home OK. He was also that guy who took being called-in seriously, changing his behavior in profound ways when I challenged him to reflect on the cost of his actions. That made me deeply respect him. I also met John Perry Barlow around the same time. We became good friends and spent lots of time together. Here was another tech luminary who had my back when I needed him to. A few years later, he asked me to forgive a friend of his, a friend whose sexual predation I had witnessed first hand. He told me it was in the past and he wanted everyone to get along. I refused, unable to convey to him just how much his ask hurt me. 
Our relationship frayed and we only talked a few times in the last few years of his life. So here we are… I’m receiving this award, named after Barlow, less than a week after Joi resigned from an institution that nearly destroyed me after he socialized with and took money from a known pedophile. Let me be clear — this is deeply destabilizing for me. I am here today in-no-small-part because I benefited from the generosity of men who tolerated and, in effect, enabled unethical, immoral, and criminal men. And because of that privilege, I managed to keep moving forward even as the collateral damage of patriarchy stifled the voices of so many others around me. I am angry and sad, horrified and disturbed because I know all too well that this world is not meritocratic. I am also complicit in helping uphold these systems. What’s happening at the Media Lab right now is emblematic of a broader set of issues plaguing the tech industry and society more generally. Tech prides itself in being better than other sectors. But often it’s not. As an employee of Google in 2004, I watched my male colleagues ogle women coming to the cafeteria in our building from the second floor, making lewd comments. When I first visited TheFacebook in Palo Alto, I was greeted by a hyper-sexualized mural and a knowing look from the admin, one of the only women around. So many small moments seared into my brain, building up to a story of normalized misogyny. Fast forward fifteen years and there are countless stories of executive misconduct and purposeful suppression of the voices of women and sooooo many others whose bodies and experiences exclude them from the powerful elite. These are the toxic logics that have infested the tech industry. And, as an industry obsessed with scale, these are the toxic logics that the tech industry has amplified and normalized. The human costs of these logics continue to grow. Why are we tolerating sexual predators and sexual harassers in our industry? That’s not what inclusion means. I am here today because I learned how to survive and thrive in a man’s world, to use my tongue wisely, watch my back, and dodge bullets. I am being honored because I figured out how to remove a few bricks in those fortified walls so that others could look in. But this isn’t enough. I am grateful to EFF for this honor, but there are so many underrepresented and under-acknowledged voices out there trying to be heard who have been silenced. And they need to be here tonight and they need to be at tech’s tables. Around the world, they are asking for those in Silicon Valley to take their moral responsibilities seriously. They are asking everyone in the tech sector to take stock of their own complicity in what is unfolding and actively invite others in. And so, if my recognition means anything, I need it to be a call to arms. We need to all stand up together and challenge the status quo. The tech industry must start to face The Great Reckoning head-on. My experiences are all-too common for women and other marginalized peoples in tech. And it is also all too common for well-meaning guys to do shitty things that make it worse for those that they believe they’re trying to support. If change is going to happen, values and ethics need to have a seat in the boardroom. Corporate governance goes beyond protecting the interests of capitalism. Change also means that the ideas and concerns of all people need to be a part of the design phase and the auditing of systems, even if this slows down the process. 
We need to bring back and reinvigorate the profession of quality assurance so that products are not launched without systematic consideration of the harms that might occur. Call it security or call it safety, but it requires focusing on inclusion. After all, whether we like it or not, the tech industry is now in the business of global governance. “Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently-abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic. The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great. My ask of you is to honor me and my story by stepping back and reckoning with your own contributions to the current state of affairs. No one in tech — not you, not me — is an innocent bystander. We have all enabled this current state of affairs in one way or another. Thus, it is our responsibility to take action. How can you personally amplify underrepresented voices? How can you intentionally take time to listen to those who have been injured and understand their perspective? How can you personally stand up to injustice so that structural inequities aren’t further calcified? The goal shouldn’t be to avoid being evil; it should be to actively do good. But it’s not enough to say that we’re going to do good; we need to collectively define — and hold each other to — shared values and standards. People can change. Institutions can change. But doing so requires all who harmed — and all who benefited from harm — to come forward, admit their mistakes, and actively take steps to change the power dynamics. It requires everyone to hold each other accountable, but also to aim for reconciliation not simply retribution. So as we leave here tonight, let’s stop designing the technologies envisioned in dystopian novels. We need to heed the warnings of artists, not race head-on into their nightmares. Let’s focus on hearing the voices and experiences of those who have been harmed because of the technologies that made this industry so powerful. And let’s collaborate with and design alongside those communities to fix these wrongs, to build just and empowering technologies rather than those that reify the status quo. Many of us are aghast to learn that a pedophile had this much influence in tech, science, and academia, but so many more people face the personal and professional harm of exclusion, the emotional burden of never-ending subtle misogyny, the exhaustion from dodging daggers, and the nagging feeling that you’re going crazy as you try to get through each day. Let’s change the norms. Please help me. Thank you. 
Acceptance Speech by Oakland Privacy Audio Mike Katz-Lacabe: So I first have to confess I'm not just a member of the EFF. I'm also a client. Thank you to Mitch Stoltz and your team for making sure that public records that I unearth remain available on the Internet for others to see. So as Nash said, Oakland Privacy's strength comes not just from the citizens that volunteer as part of its group, but also from the coalitions that we build. And certainly every victory that is credited to us is the result of many, many other coalition members, whether in some cases it's the EFF or the ACLU or local neighborhood activists. It's really a coalition of people that makes us stronger and helps us get the things done that sometimes we not always deservedly get as much credit for. So I want to make sure to call out those other groups and to recognize that their work is important as well and critical for us. [Photo: EFF's nash presents a 2019 Barlow to members of Oakland Privacy] My work for Oakland Privacy comes from the belief that only from transparency can you have oversight, and from oversight derives accountability. So many examples of technology that have been acquired and used by law enforcement agencies in the Bay Area were never known about by the city councils that oversaw those police agencies. In the city of Oakland, it was seven years after the city of Oakland acquired its stingray cell site simulator that the city of Oakland and the city council became aware of the use of that device by the police. In my city, I live in San Leandro, it was five years before the city council became aware of our city's use of license plate readers and a very notorious photo of me getting out of my car that was taken by a passing license plate reader got published on the Internet. We do our best work when working together. That's been said. Let me give you ... speaking of stories, I'll take off from Adam's talk here. For example, recently journalist Caroline Haskins obtained a bunch of documents pertaining to Ring, you may know the Ring doorbell, and its relationship with police departments. Among them was a post about a party that Ring held at the International Association of Chiefs of Police meeting with basketball player Shaquille O'Neal, where each attendee got five free Ring doorbells. That was highlighted by EFF Senior Investigative Researcher Dave Maass. I, or we as Oakland Privacy, we then found a social media post by the police chief of Dunwoody, Georgia saying, "Hey, look at this great party with Ring, and there's Shaq." Dave then went and took that information, went back and looked at Dunwoody and found that subsequently, a few months later, Dunwoody was proud to announce the first law enforcement partnership with Ring in the state of Georgia. What a coincidence. Oftentimes it's these coalitions working together that result in prying public records free and then establishing the context around them. The work we do involves very, very exciting things: Public records requests, lobbying of public officials and meeting with public officials, speaking at city council meetings and board of supervisors meetings. We're talking, this is, primo excitement here. So, as was mentioned, our work with Oakland Privacy was helpful in getting the first privacy advisory commission, an actual city of Oakland commission going, within the city of Oakland. 
It's this organization, led by chair Brian Hofer, that passes policies regarding surveillance technologies, and not only passes policies but actually digs down and finds out what surveillance technologies the city of Oakland has. It has been a model for cities and counties, and we're proud that our work will continue there in addition to working on many other issues surrounding surveillance. In fact, I would be very happy to tell you that we've had ... just recently the California assembly and the Senate passed a ban on the use of face surveillance on body-worn cameras. Again, our work with coalitions there makes the difference. And now, I would like to introduce another member of Oakland Privacy, Tracy Rosenberg. Tracy Rosenberg: Thank you, Mike, and hi, everyone, and thank you so much for this wonderful award. We are honored. We're splitting up the speaking here because Oakland Privacy is a coalition and is a collective, and that's important to us. We have no hierarchy after all these years, and I've been doing this for five years. All that I get to call myself is a member. That's all I am. I want to highlight, there are people in the audience that are not coming up on stage. J.P. Massar, Don Fogg, Leah Young. There are people that are not here whose names I won't mention since they're not here, but it's always a coalition effort. And this week I've been jumping up and down because the broader coalition that includes EFF and Consumer Reports and ACLU and a bunch of other people, we just stood down the Chamber of Commerce, the tech industry, and pretty much every business in California in order to keep the Consumer Privacy Act intact. There were six people on a whole bunch of conference calls, you don't want to know how many, and somehow we actually did it. It's official as of today. There is power in coalition work. I'm incredibly grateful to Oakland Privacy because I was incredibly upset about the encroaching surveillance state, and I didn't know what to do. And in the end, in 2013, Oakland Privacy showed me what I could do, and I will never be able to repay the group for that. I was thinking back to our first surveillance transparency ordinance in Santa Clara. EFF actually came down, and they took a picture of me speaking at that meeting and put it on their blog, and I thought, I wish I could put into words what lay behind that picture, which was 11 stinking months of going down to Santa Clara and sitting in that room with the goddamn Finance and Governmental Operations Committee where they were trying to bury our ordinance because let's face it, the powers that be don't want transparency. And every month standing there and saying, "I'm not going to let you do that. I'm just not." We succeeded. It became law, I think it was June 7th, 2016, which doesn't feel like that long ago. And now there are 12. Eight of them are here in the Bay Area, a couple in Massachusetts, Seattle, and somehow Nashville did it without us and more power to them. So I think that's pretty much what I kind of want to say here. I mean, what Oakland Privacy does fundamentally is we watch. The logo is the eye of Sauron, and well, I'm not a Tolkien geek, but I deal with what I am a part of. Hey look—I went to a basement, it was all guys. It is what it is. It's a little more gender-balanced now, but not entirely. 
But the point is that eye kind of stands for something important because it's the eye of "we are watching," and in really mechanical terms, we try to track every single agenda of God knows how many city councils there are in the Bay Area. I think we're watching about 25 now, and if a couple more of you would volunteer, we might make that 35. But the point is, and every time there's a little action going on locally that's just making the surveillance state that much worse, we try to intervene. And we show up and the sad truth is that at this point, they can kind of see us coming from a mile away, and they're like, "Oh, great. You guys came to see us." But the point is, that's our opportunity to start that conversation. Oakland is a laboratory, it's a place where we can ... And Oakland's not perfect. All that you need to do is take a look at OPD and you know that Oakland's not perfect. Right? But it's a place where we've been able to ask the questions and we're basically trying to export that as far as it possibly can, and we go there and we ask the questions. And really, the most important part to me and the part that gives me hope is we get a lot of people that come to the basement to talk to us and basically share with us how dystopia is coming, which we know. It's here. There's no hope, right? But when those people find the way to lift up their voices and say no, that's what gives me hope. So thank you. Thank you and Brian Hofer is also going to make a final set of comments. Thank you. Brian Hofer: So my name is Brian Hofer, I recently left Oakland Privacy. I founded Secure Justice with a handful of our coalition partners, some of whom are in this room tonight. And we're going to continue carrying on the fight against surveillance, just like Oakland Privacy. I also had the privilege of chairing the city of Oakland's Commission, as you heard earlier, and it's an honor and a privilege to be recognized by EFF for the same reasons that my former colleagues have been saying, because you've been standing next to us in the trenches. You've seen us at the meetings, lobbying, joined in the long hours waiting at city council meetings late at night just for that two minute opportunity that Nash is now an expert at. You know how much labor goes into these efforts, and so I really want to thank you for standing next to us. This path has been pretty unexpected for me. I quit a litigation job, was unemployed, and I read this East Bay Express article by Darwin BondGraham and Ali Winston based on public record requests that Oakland Privacy members had filed. And there's a little sidebar in that journal that the very next day, just fate I guess, that this upstart group Oakland Privacy was meeting and that I could attend it. It's even more strange to me that I stayed. It was a two hour discussion about papier-mache street puppets and the people asking me if I was a cop when I walked in. Nobody wanted to sit next to me. So when I finally spoke up and asked how many city council members they spoke to, the room got quiet. And so that became my job, because I was the one guy in the suit. At the honorable Linda Lye's going away party a couple months ago, I remarked that if we had lost the Domain Awareness Center vote, I would have never become an activist. I would have returned to my couch. I spent hundreds of hours on that project, and I would have been really disillusioned. But March 4th, 2014, which was the vote, is still the greatest day of my life. 
We generated international headlines by defeating the surveillance state in the true power to the people sense. It was quite a contrast the following morning, on the Oakland Privacy list, when the naysayers thought the world had ended in calamity. Little did they know, that was the formation of the ad hoc privacy commission; we were about to change the conversation around surveillance and community control. EFF is directly responsible for helping us form that privacy commission in Oakland, and so it's my turn to congratulate you. Matt Cagle of the ACLU, Dia Kayyali, and myself were sitting around trying to figure out how to make it a permanent thing, and we noticed that another piece of technology was on the agenda. We didn't have any mandate or authority to write a privacy policy for it. But Dia signed a letter with me asking that we be given that task. It worked, and that established the Privacy Commission as a policy writing instrument that remains today. As our colleagues were saying, that's been the launching pad for a lot of this legislative success around the greater Bay Area. It's the first of many dominoes to fall. I want to close with a challenge to EFF—and not your staff—like any non-profit, they're overworked and underpaid, because I'm sending them work and I don't pay for it. I was supposed to insert an Adam Schwartz joke there. I believe that we're in a fight for the very fabric of this nation. Trump, people think he's a buffoon. He's very effective at destroying our civic institutions. The silent majority is silent, secure in their privilege, or too afraid or unaware how to combat what's going on. So I'm going to tell you a dirty secret about Oakland Privacy: we're not smarter than anyone else. We have no independently wealthy people. We have no connections. We didn't get a seat at the table via nepotism or big donations. We have no funding for the tens of thousands of volunteer hours spent advocating for human rights. And yet as you heard from the previous speakers, the formula of watching agendas, which anyone with an Internet connection can do in their pajamas, submitting public record requests, which anyone can do in their pajamas, and showing up relentlessly, which in Berkeley and Oakland, you can do in your pajamas—that led to a coalition legislative streak that will never be duplicated. That four year run will never happen again. So I ask that you challenge your membership to do the same, pajamas optional. We need numbers. We need people to get off their couch, like me, for the first time. The Domain Awareness Center was literally the first time I ever walked inside Oakland City Hall, and I apologize for the police lingo, but your membership is the force multiplier and it's critical that more folks get involved. If you don't already know, somehow next week turned into facial recognition ban week. Berkeley, Portland, Emeryville, we have our Georgetown national convening where I know EFF will be. It's critical that new diverse faces start showing up instead of the same actors. As Tracy said, they can see us from a mile away. We need more people. In October, we expect four more cities to jump on board. Only one is in California, demonstrating that this isn't just a Bay Area bubble. It's got legs. And like the Domain Awareness Center moment, we've got a chance to change the national conversation, and we better take advantage of it. Thank you for this honor and thank you for this award. Acceptance Speech by William Gibson Audio Thank you, Cory. And thank you, danah boyd. 
I will confess, I was actually ... I will confess I was actually a bit worried about coming down here and getting to this part of the evening and not having heard what she said or something very like it. And I found that a dismaying worry, and it's now been dismissed. So thank you. This is the second time this year that I've received an award I wasn't expecting. The first one, Science Fiction Writers of America's Grand Master Award, I foolishly assumed I was too young for. With this one, though, I'd not thought it a possibility because I'm very probably, and I'm sure I could win a big bet with this, the least technically literate person in this room. I seem to be here, though, I seem to myself to be here, because in the early 80s, knowing nothing whatever about computers, I began to listen to those who did, drawn not by their understanding, but by their vernacular poetics. Because I'm an English major. I got my B.A. in it, my specialty is in comparative literary critical methodologies. And where that also comes in really handy for a novelist is when we get a really shitty review. But what I actually did to come up with that stuff was sit in the bar at '80s SF cons in Seattle and eavesdrop, really really intensely. And then I would deconstruct the poetics of the computer literate. [Photo: Author William Gibson accepts his 2019 Barlow] The first time, for instance, that I heard interface used as an active noun, I physically swooned. Likewise, virus as a term of digital technology. That was where I first heard that as well. Made my eyes bug out, visibly. And if you don't believe me, I'll refer you to a scene in Neuromancer where Case, my street-smart cyberspace cowboy, finding that the going's just gotten particularly rough, issues an urgent call for a modem. Because I had, I confess, no idea what a modem was. But I loved the sound of the word. However, there's another scene in Neuromancer, one in which Case overhears sort of in background, partly what seems to the reader to be an infomercial for children, and it's describing something it calls, "The Matrix," with a capital M, which seems in context to be the sum of all this cyberspace thing that Case is always running around in. But there's also in that little infomercial, there's a strong suggestion that the majority of that, of cyberspace, the majority of the content, is banal, everyday, absolutely quotidian. And by putting that in, I think I actually got that right. I somehow guessed that it all wouldn't be shit-hot cowboys versus a new order of giant corporations. So tonight, receiving this award from EFF, which by the way, I first heard of as a twinkle in John Perry Barlow's eye, though probably over the phone because he could do that. I'm very, very grateful that EFF exists, that it exists today to confront, among other things, the threat of the new order of giant corporations making it their business to gather magnitudes of utterly banal little bits of business about all of us. So thank you, EFF. [Photo: 2019 Barlow recipients danah boyd, Oakland Privacy, and William Gibson with speakers Adam Savage, Cindy Cohn, Cory Doctorow, and Nathan 'nash' Sheard] Special thanks to our sponsors: Airbnb; Dropbox; Matthew Prince; Medium; O'Reilly Media; Ridder, Costa & Johnstone LLP; and Ron Reed for supporting EFF and the 2019 Pioneer Award Ceremony. If you or your company are interested in learning more about sponsorship, please contact nicole@eff.org.

Victory! Individuals Can Force Government to Purge Records of Their First Amendment Activity (Fri, 13 Sep 2019)
The FBI must delete its memo documenting a journalist’s First Amendment activities, a federal appellate court ruled this week in a decision that vindicates the right to be free from government surveillance. In Garris v. FBI, the United States Court of Appeals for the Ninth Circuit ordered the FBI to expunge a 2004 memo it created that documented the political expression of news website www.antiwar.com and two journalists who founded and ran it. The Ninth Circuit required the FBI to destroy the record because it violated the Privacy Act of 1974, a federal law that includes a provision prohibiting federal agencies from maintaining records on individuals that document their First Amendment activity. EFF filed a friend-of-the-court brief in the case that called on the court to robustly enforce the Privacy Act’s protections, particularly given technological changes in the past half century that have vastly increased the power of government to gather, store, and retrieve information about the expression and associations of members of the public. For example, law enforcement can use the Internet to collect and store vast amounts of information about individuals and their First Amendment activities. Congress passed the Privacy Act after documenting a series of surveillance abuses by the FBI and other federal agencies, including the tracking of civil rights leaders like Martin Luther King, Jr., and President Richard Nixon’s spying on political enemies. The law established rules about what types of information the government can collect and keep about people. The Act gives individuals the right to access records the government has on them and change or even delete that information. One of the most protective provisions is a prohibition against maintaining records of First Amendment activity. Law enforcement was given a narrow exception for records that are “pertinent to and within the scope of an authorized law enforcement activity.” As EFF’s brief argued, “The prescient fears of the Act’s authors have been proven true by forty years of technological innovation that have given the federal government unprecedented ability to capture and stockpile data about the public’s First Amendment activity.” In reversing a trial court’s ruling that the FBI did not have to delete the 2004 memo, the Ninth Circuit reviewed the language of the statute and concluded that the FBI did not have an authorized law enforcement purpose for keeping the memo. As the court explained, the Privacy Act’s expungement provision defines “maintain” as "maintain, collect, use, or disseminate." The court said that because the definition is broad, Congress intended for the statute’s protections to apply to all those distinct activities. Simply put, an agency facing an expungement claim under the Privacy Act must show that the record at issue is pertinent to an authorized law enforcement activity both (1) during the initial collection of the record, and (2) during the ongoing storage of that record. Or as the court put it: “That is, if the agency does not have a sufficient current ‘law enforcement activity’ to which the record is pertinent, the agency is in violation of the Privacy Act if it keeps the record in its files.” The decision is a big win in the fight against ever-expanding federal law enforcement surveillance because it provides a meaningful mechanism for individuals to force the deletion of records that document their protected First Amendment activities. 
This is essential in an era when so much political and social advocacy takes place online. As EFF argued in its brief: As with political spying throughout our nation’s history, police scrutiny of First Amendment activity on the Internet chills and  deters expression in this critical democratic forum, and leads to unfairly disparate snooping on the speech of minority communities and political dissidents. Given EFF’s commitment to fighting surveillance, we look forward to building on this case to protect individuals’ rights to speak out against the government. Congratulations to the ACLU of Northern California, who represented the plaintiff in the case, for working to meaningfully restrict the government’s surveillance powers.

Encrypted DNS Could Help Close the Biggest Privacy Gap on the Internet. Why Are Some Groups Fighting Against It? (Fri, 13 Sep 2019)
Thanks to the success of projects like Let’s Encrypt and recent UX changes in the browsers, most page-loads are now encrypted with TLS. But DNS, the system that looks up a site’s IP address when you type the site’s name into your browser, remains unprotected by encryption. Because of this, anyone along the path from your network to your DNS resolver (where domain names are converted to IP addresses) can collect information about which sites you visit. This means that certain eavesdroppers can still profile your online activity by making a list of sites you visited, or a list of who visits a particular site. Malicious DNS resolvers or on-path routers can also tamper with your DNS request, blocking you from accessing sites or even routing you to fake versions of the sites you requested. A team of engineers is working to fix these problems with “DNS over HTTPS” (or DoH), a draft technology under development through the Internet Engineering Task Force that has been championed by Mozilla. DNS over HTTPS prevents on-path eavesdropping, spoofing, and blocking by encrypting your DNS requests with TLS. [Animation: DNS over HTTPS protects domain names and IP addresses from eavesdropping] Alongside technologies like TLS 1.3 and encrypted SNI, DoH has the potential to provide tremendous privacy protections. But many Internet service providers and participants in the standardization process have expressed strong concerns about the development of the protocol. The UK Internet Service Providers Association even went so far as to call Mozilla an “Internet Villain” for its role in developing DoH. ISPs are concerned that DoH will complicate the use of captive portals, which are used to intercept connections briefly to force users to log on to a network, and will make it more difficult to block content at the resolver level. DNS over HTTPS may undermine plans in the UK to block access to online pornography (the block, introduced as part of the Digital Economy Act of 2017, was planned to be implemented through DNS). Members of civil society have also expressed concerns over plans for browsers to automatically use specific DNS resolvers, overriding the resolver configured by the operating system (which today is most often the one suggested by the ISP). This would contribute to the centralization of Internet infrastructure, as thousands of DNS resolvers used for web requests would be replaced by a small handful. That centralization would increase the power of the DNS resolver operators chosen by the browser vendors, which would make it possible for those resolver operators to censor and monitor browser users’ online activity. This capability prompted Mozilla to push for strong policies that forbid this kind of censorship and monitoring. The merits of trusting different entities for this purpose are complicated, and different users might have reasons to make different choices. But to avoid having this technology deployment produce such a powerful centralizing effect, EFF is calling for widespread deployment of DNS over HTTPS support by Internet service providers themselves. This will allow the security and privacy benefits of the technology to be realized while giving users the option to continue to use the huge variety of ISP-provided resolvers that they typically use now. Several privacy-friendly ISPs have already answered the call. We spoke with Marek Isalski, Chief Technology Officer at UK-based ISP Faelix, to discuss their plans around encrypted DNS. 
Faelix has implemented support for DNS over HTTPS on their pdns.faelix.net resolver. They weren’t motivated by concerns about government surveillance, Marek says, but by “the monetisation of our personal data.” To Marek, supporting privacy-protecting technologies is a moral imperative. “I feel it is our calling as privacy- and tech-literate people to help others understand the rights that GDPR has brought to Europeans,” he said, “and to give people the tools they can use to take control of their privacy.” EFF is very excited about the privacy protections that DoH will bring, especially since many Internet standards and infrastructure developers have pointed to unencrypted DNS queries as an excuse to delay turning on encryption elsewhere in the Internet. But as with any fundamental shift in the infrastructure of the Internet, DoH must be deployed in a way that respects the rights of the users. Browsers must be transparent about who will gain access to DNS request data and give users an opportunity to choose their own resolver. ISPs and other operators of public resolvers should implement support for encrypted DNS to help preserve a decentralized ecosystem in which users have more choices of whom they rely on for various services. They should also commit to data protections like the ones Mozilla has outlined in their Trusted Recursive Resolver policy. With these steps, DNS over HTTPS has the potential to close one of the largest privacy gaps on the web.
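For readers who want to see what this looks like in practice, here is a minimal sketch of a DNS-over-HTTPS lookup in Python. It uses the JSON query interface that some public resolvers offer alongside standard RFC 8484 DoH; the endpoint URL, the resolve() helper, and the use of the requests library are illustrative assumptions rather than part of the specification, and a full client such as a browser would typically exchange binary DNS wire-format messages (application/dns-message) instead.

```python
# Illustrative sketch only: a DNS-over-HTTPS lookup via the JSON front-end
# that some public resolvers expose alongside standard (RFC 8484) DoH.
# The endpoint below is an assumed example; substitute the resolver you
# trust, for instance one operated by your own ISP.
import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # example endpoint


def resolve(name, record_type="A"):
    """Look up `name` over HTTPS and return the answer data as a list."""
    # The DNS question rides inside an ordinary HTTPS request, so an
    # on-path observer sees only a TLS connection to the resolver,
    # not the domain name being queried.
    response = requests.get(
        DOH_ENDPOINT,
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    response.raise_for_status()
    return [answer["data"] for answer in response.json().get("Answer", [])]


if __name__ == "__main__":
    # Prints the A records returned for the queried name.
    print(resolve("www.eff.org"))
```

Whichever encoding a client uses, the privacy property is the same: the DNS question and answer travel inside a TLS-protected HTTPS exchange with a resolver the user has chosen, rather than in cleartext across every network between the user and that resolver.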

EFF to Third Circuit: Off-Campus Student Social Media Posts Entitled to Full First Amendment Protection (Thu, 12 Sep 2019)
Special thanks to legal intern Maria Bacha, who was the lead author of this post. EFF, Student Press Law Center (SPLC), Pennsylvania Center for the First Amendment (PaCFA), and Brechner Center for Freedom of Information filed an amicus brief in B.L. v. Mahanoy Area School District urging the U.S. Court of Appeals for the Third Circuit to close a gap in the law to better protect off-campus student speech. B.L., a student at Mahanoy Area High School, had tried out for the varsity cheerleading squad but had been placed on junior varsity. Out of frustration, she posted on Snapchat a selfie with the text “fuck school, fuck softball, fuck cheer, fuck everything” off school grounds on a Saturday. One of B.L.’s friends on Snapchat came across the “snap,” took a screenshot, and shared it with the cheerleading coaches. As a result, the coaches suspended B.L. from the junior varsity squad for one year. B.L.’s father appealed to the school board, which declined to get involved. B.L., through her parents, then filed a lawsuit against the district. The U.S. District Court for the Middle District of Pennsylvania correctly held that B.L.’s off-campus speech was constitutionally protected. Thus, her public high school—a government institution bound by the First Amendment—could not lawfully punish her by suspending her from an extracurricular activity for her profanity. The school district appealed to the Third Circuit. The district court relied on the Third Circuit’s prior decision in Snyder v. Blue Mountain School District (2011) to hold that B.L.’s profanity-laden “snap,” posted off campus and outside of school hours, was fully protected by the First Amendment. In Snyder, the Third Circuit interpreted the Supreme Court’s decision in Bethel School District No. 403 v. Fraser (1986) to hold that a public school may punish a student for vulgar on-campus speech—but that Fraser does not apply to off-campus speech. One issue left open by the Third Circuit in Snyder is whether another Supreme Court student speech decision applies off campus: Tinker v. Des Moines Independent Community School District (1969). That case involved only on-campus speech: students wearing black armbands on school grounds, during school hours, to protest the Vietnam War. The Supreme Court held that the school violated the student protestors’ First Amendment rights by suspending them for refusing to remove the armbands because the students’ speech did not “materially and substantially disrupt the work and discipline of the school,” and school officials did not reasonably forecast such disruption. The Third Circuit in Snyder expressly declined to address the question of whether Tinker’s substantial disruption test applies to off-campus student speech. The district court in this case correctly concluded that, even if Tinker were to apply off campus, B.L.’s off-campus speech could not be punished under Tinker’s substantial disruption test, because her “snap” did not cause a likelihood of substantial disruption or actual substantial disruption in her high school. EFF’s amicus brief endorsed the district court’s decision in support of B.L., and further urged the Third Circuit to reach the question left open by Snyder and expressly hold that Tinker’s substantial disruption test does not apply to off-campus student speech. 
We also wrote that because social media is an increasingly important medium for off-campus student expression, it is even more important today than it was when the Third Circuit issued its decision in Snyder that the court reach this open question. Our brief provided the court with statistics and examples of how social media has increasingly become an important platform for advocacy and activism for young people all over the world, who use it as a tool to promote causes they believe in and advocate for change. Given the high barriers to entry of traditional communication channels, such as broadcast television, young people use social media to raise awareness, disseminate information, and garner supporters for the issues they care about. Social media is also a powerful tool for students seeking to discuss and criticize aspects of their lives at school. Students should be free to express themselves online, from off-campus locations, outside of school hours, about even potentially controversial topics, without having to worry that school officials will claim that their speech somehow caused or may cause a disruption at school. Tinker’s substantial disruption rule does not offer sufficient protection for off-campus student speech and thus the Third Circuit should take this opportunity to hold that students’ off-campus speech is entitled to full First Amendment protection.

Victory! California Senate Votes Against Face Surveillance on Police Body Cams (Thu, 12 Sep 2019)
The California Senate listened to the many voices expressing concern about the use of face surveillance on cameras worn or carried by police officers, and has passed an important bill that will, for three years, prohibit police from turning a tool intended to foster police accountability into one that furthers mass surveillance. A.B. 1215, authored by Assemblymember Phil Ting, prohibits the use of face recognition, or other forms of biometric technology, on a camera worn or carried by a police officer in California for three years.  The Assembly passed an earlier version of the bill with a 45-17 vote on May 9. Today’s vote of the Senate was 22-15. We are pleased that the Senate has listened to the growing number of voices who oppose the way government agencies use face surveillance. The government's use of face surveillance—particularly when used with body-worn cameras in real-time— has grave implications for privacy, free speech, and racial justice. For example, face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. As EFF activist Nathan Sheard told the California Assembly in May, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.” Stopping the use of face surveillance on police cameras for three years gives the state time to evaluate the effect that this technology has on our communities. We hope the California Legislature will follow-up with a permanent ban. Thank you to everyone who contacted their legislators to support this bill. We also wish to thank the bill's sponsor, Assemblymember Ting, as well as the American Civil Liberties Union of Northern California and our many coalition partners for all of their hard work on this bill. Lawmakers and community members across the country are advancing their own prohibitions and moratoriums on their local government’s use of face surveillance, including the San Francisco Board of Supervisors’ historic May ban on government use of face recognition. We encourage communities across the country to enact similar measures in their own cities. A.B. 1215 will now head back to the Assembly for a procedural vote on its latest amendments, before being sent to the governor’s desk. We urge Governor Newsom to sign this important bill into law.