Deeplinks

Court Rules That EFF's Stupid Patent of the Month Post Is Protected Speech (Mon, 20 Nov 2017)
A federal judge has ruled that EFF need not obey an Australian injunction ordering EFF to take down a “Stupid Patent of the Month” blog post and never speak of the patent owner’s intellectual property again. It all started when Global Equity Management (SA) Pty Ltd (GEMSA)’s patent was featured as the June 2016 entry in our Stupid Patent of the Month blog series. GEMSA wrote to EFF accusing us of “false and malicious slander.” It subsequently filed a lawsuit and obtained an injunction from a South Australia court purporting to require EFF to censor itself. We declined and filed a suit in the U.S. District Court for the Northern District of California seeking a declaration that EFF’s post is protected speech. The court agreed, finding that the South Australian injunction can’t be enforced in the U.S. under a 2010 federal law that took aim at “libel tourism,” a practice by which plaintiffs—often billionaires, celebrities, or oligarchs—sued U.S. writers and academics in countries like England where it was easier to win a defamation case. The Securing the Protection of Our Enduring and Established Constitutional Heritage Act (SPEECH Act) says foreign orders aren’t enforceable in the United States unless they are consistent with the free speech protections provided by the U.S. and state constitutions, as well as state law. The court analyzed each of GEMSA’s claims for defamation and found that “[n]one of these claims could give rise to defamation under U.S. and California law,” and accordingly that “EFF would not have been found liable for defamation under U.S. and California law.” For example, GEMSA’s lead complaint was that EFF had called its patent “stupid.” GEMSA protested that its patent is not “in fact” stupid, but the court found that this was clearly protected opinion.
Moreover, the court found “that the Australian court lacked jurisdiction over EFF, and that this constitutes a separate and independent reason that EFF would prevail under the SPEECH Act.” Furthermore, the court found that the Australian order was not enforceable under the SPEECH Act because “U.S. and California [law] would provide substantially more First Amendment protection by prohibiting prior restraints on speech in all but the most extreme circumstances, and providing additional procedural protections in the form of California’s anti-SLAPP law.” After its thorough analysis, the court declared “(1) that the Australian Injunction is repugnant to the United States Constitution and the laws of California and the United States; and (2) that the Australian injunction cannot be recognized or enforced in the United States.” The decision was a default judgment. GEMSA, which has three pending patent lawsuits in the Northern District of California, had until May 23 to respond to our case. That day came and went without a word. While GEMSA knows its way around U.S. courts—having filed dozens of lawsuits against big tech companies claiming patent infringement—it failed to respond to ours. EFF thanks our counsel from Ballard Spahr LLP and Jassy Vick Carolan LLP. Related Cases: EFF v. Global Equity Management (SA) Pty Ltd

Why We're Helping The Stranger Unseal Electronic Surveillance Records (Mon, 20 Nov 2017)
Consider this: Deputy Attorney General Rod Rosenstein has been going around talking about “responsible encryption” for some time now—proselytizing for encryption that’s somehow only accessible by the government—something we all know to be unworkable. If the Department of Justice (DOJ) is taking this aggressive public position about what kind of access it should have to user data, it raises the question: what kind of technical assistance from companies and orders for user data is the DOJ demanding in sealed court documents? EFF’s client The Stranger, a Seattle-based newspaper, has filed a petition with one court to find out.

What’s at Stake?

In a democracy, we as citizens deserve to know what our government is up to, especially its interpretation of the law. A major reason we all knew about the government using the All Writs Act—a law originally passed in 1789—to compel Apple to design a backdoor for the iOS operating system is because the court order was public. However, there are many instances where we may not know what the government is asking. For example, could the government be asking Amazon to turn on the mic on its smart assistant product, the Echo, so it can listen in on people? This is not without precedent. In the past, the government has tried to compel automobile manufacturers to turn on mics in cars for surveillance. Beyond the All Writs Act, we need to know what kind of warrantless surveillance the government is conducting under statutes like the Stored Communications Act (SCA) and the Pen Register Act. For instance, under certain authorities of the SCA, the government can obtain very private details about people’s email records, such as who they communicate with and when, and that in itself can be revealing regardless of the content of the messages. The privacy problems of these non-warrant orders are compounded by the secrecy associated with them.
The government files papers asking for such orders under seal, giving the public no opportunity to scrutinize them or to see how many are actually filed with the court. The people deserve to know, and we support The Stranger’s efforts to seek access to these records. Of course, the government may have good reasons to prevent disclosure of surveillance orders as part of an ongoing investigation, but under the current regime, next to no information is available even about the existence of such requests, including how many are filed each year. There are ways to meet the government’s priorities—by redacting the name of the suspect to avoid tipping them off, for instance—without sacrificing transparency and access to court records for the American people under the First Amendment.

The Specifics of the Case

Our client The Stranger is a Pulitzer Prize-winning newspaper with a history of covering stories that focus on law enforcement surveillance capabilities. In 2013, The Stranger was the first local media organization to report on the surveillance devices installed by the Seattle Police Department that were capable of tracking people’s digital devices around the city. Apart from local law enforcement, The Stranger also covers federal surveillance activities in the city of Seattle. For instance, it investigated the Bureau of Alcohol, Tobacco, Firearms and Explosives’ operation of a network of sophisticated surveillance cameras in the city. To better report on government surveillance capabilities, the newspaper is petitioning the federal court in Seattle—home to companies like Microsoft and Amazon—to unseal government requests for electronic surveillance orders and warrants filed with the court. As the petition points out, the current court procedures are inadequate and counter to the widely recognized presumption of public access and openness to U.S. court records.
In the Western District of Washington, government applications for electronic surveillance warrants or orders are designated as Magistrate Judge (MJ) matters. But for warrantless surveillance orders, the cases are marked as Grand Jury (GJ) proceedings. By default, anything filed as a Grand Jury case is automatically sealed and completely inaccessible to the public. This is troubling.

Support EFF’s Transparency Work

EFF has a long history of fighting for transparency by representing clients in litigation or filing public records requests for state and federal records. If you’d like to show your support for this lawsuit, please support our work and donate today. We would like to thank Geoff M. Godfrey, Nathan D. Alexander, and David H. Tseng of Dorsey & Whitney LLP in Seattle for co-counseling with us in representing The Stranger. Related Cases: The Stranger Unsealing

Will Congress Bless Internet Fast Lanes? (Mon, 20 Nov 2017)
As the Federal Communications Commission (FCC) gets ready to abandon a decade of progress on net neutrality, some in Congress are considering how new legislation could fill the gap and protect users from unfair ISP practices. Unfortunately, too many lawmakers seem to be embracing the idea that they should allow ISPs to create Internet “fast lanes,” also known as “paid prioritization,” one of the harmful practices that violates net neutrality. They are also looking to reassign the job of protecting customers from ISP abuses to the Federal Trade Commission. These are both bad ideas. Let’s start with paid prioritization. In response to widespread public demand from across the political spectrum, the 2015 Open Internet Order expressly prohibited paid prioritization, along with other unfair practices like blocking and throttling. ISPs have operated under the threat or the reality of these prohibitions for at least a decade, and continue to be immensely profitable. But they’d like to make even more money by double-dipping: charging customers for access to the Internet, and then charging services for (better) access to customers. And some lawmakers seem keen to allow it. That desire was all too evident in a recent hearing on the role of antitrust in defending net neutrality principles. Subcommittee Chairman Tom Marino gave a baffling defense of prioritization, suggesting that it’s necessary or even beneficial to users for ISPs to give preferential treatment to certain content sources. Rep. Marino said that users should be able to choose between a more expensive Internet experience and a cheaper one that prioritizes the ISP’s preferred content sources. He likened Internet service to groceries, implying that by disallowing paid prioritization, the Open Internet Order forced more casual Internet users to waste their money: “Families who just want the basics or are on a limited income aren't forced to subsidize the preferences of shoppers with higher-end preferences.” Rep.
Darrell Issa took the grocery metaphor a step further, saying that paid prioritization is the modern-day equivalent of the practice of grocery stores selling prime placement to manufacturers: “Within Safeway, they’ve decided that each endcap is going to be sold to whoever is going to pay the most – Pepsi, Coke, whoever – that’s certainly a prioritization that’s paid for.” That’s an absurd analogy. Unlike goods at a physical store, every bit of Internet traffic can get the best placement, and no one on a limited income is “subsidizing” their richer neighbors. When providers choose to slow down certain types of traffic, they’re not doing it because that traffic is somehow more burdensome; they’re doing it to push users toward the content and services the ISP favors (or has been paid to favor)—the very behavior the Open Internet Order was intended to prevent. ISPs become gatekeepers rather than conduits. As ISPs and content companies have become increasingly intertwined, the dangers of ISPs giving preferential treatment to their own content sources—and locking out alternative sources—have become ever more pronounced. That’s why in 2016 the FCC launched a lengthy investigation into ISPs’ zero-rating practices and whether they violated the Open Internet Order. The FCC focused in particular on cases where an ISP has an obvious economic incentive to slow down competing content providers, as was the case with AT&T prioritizing its own DirecTV services. Some members of Congress fail to see the dangers to users of these “vertical integration” arrangements. Rep. Bob Goodlatte said in the hearing that “Blanket regulation… would deny consumers the potential benefits in cost savings and improved services that would result from vertical agreements.” But if zero-rating arrangements keep new edge providers from getting a fair playing field to compete for users’ attention, services won’t improve at all.
Certainly, an entity with a monopoly could choose to turn every advantage into savings for its customers, but we know from history and common sense that monopolies gouge customers instead. It’s telling—and unfortunate—that one of Ajit Pai’s first actions as FCC Chairman was to shelve the Commission’s zero-rating investigation. The other goal of the hearing was to consider whether to assign net neutrality enforcement power to the Federal Trade Commission instead of the FCC. This is a rehash of a long-standing argument that the best way to defend the Internet is to have ISPs publicly promise to behave. If they break that promise or undermine competition, the FTC can go after them. Federal Trade Commissioner Terrell McSweeny correctly explained why that approach won’t cut it: “a framework that relies solely on backward-looking consumer protection and antitrust enforcement” just cannot “provide the same assurances to innovators and consumers as the forward-looking rules contained in the FCC's open internet order.” For example, as McSweeny noted, large ISPs have a huge incentive to unfairly prioritize certain content sources: their own bottom line. Every major ISP also offers streaming media services, and these ISPs naturally will want to direct users to those offerings. Antitrust law alone can’t stop these practices because the threat that paid prioritization poses isn’t to competition between ISPs; it’s to the users themselves. If the FCC abandons its commitment to net neutrality, Congress can and should step in to put it back on course. That means enacting real, forward-looking legislation that embraces all of the bright-line rules, not just the ones ISPs don’t mind. And it means forcing the FCC to do its job, rather than handing it off to another agency that’s not well-positioned to do the work.

The FISA Amendments Reauthorization Act Restricts Congress, Not Surveillance (Sat, 18 Nov 2017)
The FISA Amendments Reauthorization Act of 2017—legislation meant to extend government surveillance powers—squanders several opportunities for meaningful reform and, astonishingly, manages to push civil liberties backwards. The bill is a gift to the intelligence community, restricting surveillance reforms, not surveillance itself. The bill (S. 2010) was introduced October 25 by Senate Select Committee on Intelligence Chairman Richard Burr (R-NC) as an attempt to reauthorize Section 702 of the FISA Amendments Act. That law authorizes surveillance that ensnares the communications of countless Americans, and it is the justification used by agencies like the FBI to search through those collected American communications without first obtaining a warrant. Section 702 will expire at the end of this year unless Congress reauthorizes it. Other proposed legislation in the House and Senate has used Section 702’s sunset as a moment to move surveillance reform forward, demanding at least minor protections to how 702-collected American communications are accessed. In contrast, Senator Burr’s bill uses Section 702’s sunset as an opportunity to codify some of the intelligence community’s more contentious practices while also neglecting the refined conversations on surveillance happening in Congress today. Here is a breakdown of the bill.

“About” Collection

Much of the FISA Amendments Reauthorization Act (the “Burr bill” for short) deals with a type of surveillance called “about” collection, a practice in which the NSA searches Internet traffic for any mentions of foreign intelligence surveillance targets. As an example, the NSA could search for mentions of a target’s email address. But the communications being searched do not have to be addressed to or from that email address; the communications would simply need to include the address in their text. This is not normal for communications surveillance.
Importantly, nothing in Section 702 today mentions or even hints at “about” collection, and it wasn’t until 2013 that we learned about it. A 2011 opinion from the Foreign Intelligence Surveillance Court—which provides judicial review for the Section 702 program—found this practice to be unconstitutional without strict post-collection rules to limit its retention and use. Indeed, it is a practice the NSA ended in April precisely “to reduce the chance that it would acquire communications of U.S. persons or others who are not in direct contact with a foreign intelligence target.” Alarmingly, it is a practice the FISA Amendments Reauthorization Act defines expansively and provides guidelines for restarting. According to the bill, should the Attorney General and the Director of National Intelligence decide that “about” collection needs to start up again, all they need to do is ask specified Congressional committees. Then, a 30-day clock begins ticking. It’s up to Congress to act before the clock stops. In those 30 days, at least one committee—including the House Judiciary Committee, the House Permanent Select Committee on Intelligence, the Senate Judiciary Committee, and the Senate Select Committee on Intelligence—must draft, vote on, and pass legislation that specifically disallows the continuation of “about” collection, working against the requests of the Attorney General and the Director of National Intelligence. If Congress fails to pass such legislation in 30 days, “about” collection can restart. The 30-day period has more restrictions. If legislation is referred to any House committee because of the committee’s oversight obligations, that committee must report the legislation to the House of Representatives within 10 legislative days. If the Senate moves legislation forward, “consideration of the qualifying legislation, and all amendments, debatable motions, and appeals in connection therewith, shall be limited to not more than 10 hours,” the bill says.
Limiting discussion on “about” collection to just 10 hours—when members of Congress have struggled with it for years—is reckless. It robs Congress of the ability to accurately debate a practice whose detractors even include the Foreign Intelligence Surveillance Court (FISC)—the judicial body that reviews and approves Section 702 surveillance. Worse, the Burr bill includes a process to skirt legislative approval of “about” collection in emergencies. If Congress has not already disapproved “about” collection within the 30-day period, and if the Attorney General and the Director of National Intelligence determine that such “about” collection is necessary for an emergency, they can obtain approval from the FISC without Congress. And if, during the FISC approval process, Congress passes legislation preventing “about” collection—effectively creating both approval and disapproval from two separate bodies—the Burr bill provides no clarity on what happens next. Any Congressional efforts to protect American communications could be thrown aside. These are restrictions on Congress, not surveillance—as well as an open invitation to restart “about” searching.

What Else is Wrong?

The Burr bill includes an 8-year sunset period, the longest period included in current Section 702 reauthorization bills. The USA Liberty Act—introduced in the House—sunsets in six years. The USA Rights Act—introduced in the Senate—sunsets in four. The Burr bill also allows Section 702-collected data to be used in criminal proceedings against U.S. persons so long as the Attorney General determines that the crime involves one of several enumerated subjects. Those subjects include death, kidnapping, serious bodily injury, incapacitation or destruction of critical infrastructure, and human trafficking. The Attorney General can also determine that the crime involves “cybersecurity,” a vague term open to broad abuse. The Attorney General’s determinations in these situations are not subject to judicial review.
The bill also includes a small number of reporting requirements for the FBI Director and the FISC. These are minor improvements that are greatly outweighed by the bill’s larger problems.

No Protections from Warrantless Searching of American Communications

The Burr bill fails to protect U.S. persons from warrantless searches of their communications by intelligence agencies like the FBI and CIA. The NSA conducts surveillance on foreign individuals living outside the United States by collecting communications both sent to and from them. Often, U.S. persons are communicating with these individuals, and those communications are swept up by the NSA as well. Those communications are then stored in a massive database that can be searched by outside agencies like the FBI and CIA. These unconstitutional searches do not require a warrant and are called “backdoor” searches because they skirt U.S. persons’ Fourth Amendment rights. The USA Liberty Act, which we have written extensively about, creates a warrant requirement when government agents look through Section 702-collected data for evidence of a crime, but not for searches for foreign intelligence. The USA Rights Act creates warrant requirements for all searches of American communications within Section 702-collected data, with “emergency situation” exemptions that require judicial oversight. The Burr bill offers nothing.

No Whistleblower Protections

The Burr bill also fails to extend workplace retaliation protections to intelligence community contractors who report what they believe is illegal behavior within the workforce. This protection, while limited, is offered by the USA Liberty Act. The USA Rights Act takes a different approach, approving new, safe reporting channels for internal government whistleblowers.

What’s Next?

The Burr bill has already gone through markup in the Senate Select Committee on Intelligence. This means that it could be taken up for a floor vote by the Senate. Your voice is paramount right now.
As 2017 ends, Congress is slammed with packages on debt, spending, and disaster relief—all of which require votes in less than six weeks. To cut through the logjam, members of Congress could potentially attach the Burr bill to other legislation, robbing surveillance reform of its own vote. It’s a maneuver that Senator Burr himself, according to a Politico report, approves. Just because this bill is ready doesn’t mean it’s good. Far from it, actually. We need your help to stop this surveillance extension bill. Please tell your Senators that the FISA Amendments Reauthorization Act of 2017 is unacceptable. Tell them surveillance requires reform, not regression.

TAKE ACTION: Stop the Burr bill from extending NSA spying 8 years

Related Cases: Jewel v. NSA

Time Will Tell if the New Vulnerabilities Equities Process Is a Step Forward for Transparency (Thu, 16 Nov 2017)
The White House has released a new and apparently improved Vulnerabilities Equities Process (VEP), showing signs that there will be more transparency into the government’s knowledge and use of zero-day vulnerabilities. In recent years, the U.S. intelligence community has faced questions about whether it “stockpiles” vulnerabilities rather than disclosing them to affected companies or organizations, and this scrutiny has only ramped up after groups like the Shadow Brokers have leaked powerful government exploits. According to White House Cybersecurity Coordinator Rob Joyce, the form of yesterday’s release and the revised policy itself are intended to highlight the government’s commitment to transparency because it’s “the right thing to do.” EFF agrees that more transparency is a prerequisite to any debate about government use of vulnerabilities, so it’s gratifying to see the government take these affirmative steps. We also appreciate that the new VEP explicitly prioritizes the government’s mission of protecting “core Internet infrastructure, information systems, critical infrastructure systems, and the U.S. economy” and recognizes that exploiting vulnerabilities can have significant implications for privacy and security. Nevertheless, we still have concerns over potential loopholes in the policy, especially how they may play into disputes about vulnerabilities used in criminal cases. The Vulnerabilities Equities Process has a checkered history. It originated in 2010 as an attempt to balance conflicting government priorities. On one hand, disclosing vulnerabilities to vendors and others outside the government makes patching and other mitigation possible. On the other, these vulnerabilities may be secretly exploited for intelligence and law enforcement purposes.
The original VEP document described an internal process for weighing these priorities and reaching a decision on whether to disclose, but it was classified, and few outside of the government knew much about it. That changed in 2014, when the NSA was accused of long-term exploitation of the Heartbleed vulnerability. In denying those accusations and seeking to reassure the public, the government described the VEP as prioritizing defensive measures and disclosure over offensive exploitation. The VEP document itself remained secret, however, and EFF waged a battle to make it public using a Freedom of Information Act lawsuit. The government retreated from its initial position that it could not release a single word, but our lawsuit concluded with a number of redactions remaining in the document. The 2017 VEP follows the same basic structure as the previous process: government agencies that discover previously unknown vulnerabilities must submit them to an interagency group, which weighs the “equities” involved and reaches a determination of whether to disclose. The process is facilitated by the National Security Council and the Cybersecurity Coordinator, who can settle appeals and disputes. Tellingly, the new document publicly lists information that the government previously claimed would damage national security if released in our FOIA lawsuit. The government’s absurd overclassification and withholdings extended to such information as the identities of the agencies that regularly participate in the decision-making process, the timeline, and the specific considerations used to reach a decision. That’s all public now, without any claim that it will harm national security. Many of the changes to the VEP do seem intended to facilitate transparency and to give more weight to policies that were previously not reflected in the official document. For example, Annex B to the new VEP lists “equity considerations” that the interagency group will apply to a vulnerability.
Previously, the government had argued that a similar, less-detailed list of considerations published in a 2014 White House blog post was merely a loose guideline that would not be applied in all cases. We don’t know how this more rigorous set of considerations will play out in practice, but the new policy appears to be better designed to account for complexities such as the difficulty of patching certain kinds of systems. The new policy also appears to recognize the need for swift action when vulnerabilities the government has previously retained are exploited as part of “ongoing malicious cyber activity,” a concern we’ve raised in the Shadow Brokers case. The new policy also mandates yearly reports about the VEP’s operation, including an unclassified summary. Again, it remains to be seen how much insight these reports will provide, and whether they will prompt further oversight from Congress or other bodies, but this sort of reporting is a necessary step. In spite of these positive signs, we remain concerned about exceptions to the VEP. As written, agencies need not introduce certain vulnerabilities to the process at all if they are “subject to restrictions by partner agreements and sensitive operations.” Even vulnerabilities which are part of the process can be explicitly restricted by non-disclosure agreements. The FBI avoided VEP review of the Apple iPhone vulnerability in the San Bernardino case due to an NDA with an outside contractor, and such agreements are apparently extremely common in the vulnerabilities market. And exempting vulnerabilities involved in “sensitive operations” seems like an exceptionally wide loophole, since essentially all offensive uses of vulnerabilities are sensitive. Unchecked, these exceptions could undercut the process entirely, defeating its goal of balancing secrecy and disclosure. 
Finally, we’ve seen the government rely on NDAs, classification, and similar restrictions to improperly and illegally withhold material from defendants in criminal cases. As the FBI and other law enforcement agencies increasingly use exploits to hack into unknown computers, the government should not be able to hide behind these secrecy claims to shield its methods from court scrutiny. We hope the VEP doesn’t add fuel to these arguments. Related Cases:  EFF v. NSA, ODNI - Vulnerabilities FOIA

Court Rules Platforms Can Defend Users’ Free Speech Rights, But Fails to Follow Through on Protections for Anonymous Speech (Thu, 16 Nov 2017)
A decision by a California appeals court on Monday recognized that online platforms can fight for their users’ First Amendment rights, though the decision also potentially makes it easier to unmask anonymous online speakers. Yelp v. Superior Court grew out of a defamation case brought in 2016 by an accountant who claims that an anonymous Yelp reviewer defamed him and his business. When the accountant subpoenaed Yelp for the identity of the reviewer, Yelp refused and asked the trial court to toss the subpoena on grounds that the First Amendment protected the reviewer’s anonymity. The trial court ruled that Yelp did not have the right to object on behalf of its users and assert their First Amendment rights. It next ruled that even if Yelp could assert its users’ rights, it would have to comply with the subpoena because the reviewer’s statements were defamatory. It then imposed almost $5,000 in sanctions on Yelp for opposing the subpoena. The trial court’s decision was wrong and dangerous, as it would have prevented online platforms from standing up for their users’ rights in court. Worse, the sanctions sent a signal that platforms could be punished for doing so. When Yelp appealed the decision earlier this year, EFF filed a brief in support [.pdf]. The good news is that the Fourth Appellate District of the California Court of Appeal heard those concerns and reversed the trial court’s ruling regarding Yelp’s ability – known in legal jargon as “standing” – to assert its users’ First Amendment rights. In upholding Yelp and other online platforms’ legal standing to defend their users’ anonymous speech, the court correctly recognized that the trial court’s ruling would have a chilling effect on anonymous speech and the platforms that allow it. The court also threw out the sanctions the trial court issued against Yelp. We applaud Yelp for fighting a bad court decision and standing up for its users in the face of court sanctions.  
Although we’re glad that the court affirmed Yelp’s ability to fight for its users’ rights, another part of Monday’s ruling may ultimately make it easier for parties to unmask anonymous speakers. After finding that Yelp could argue on behalf of its anonymous reviewer, the appeals court agreed with the trial court that Yelp nevertheless had to turn over information about its user on grounds that the review contained defamatory statements about the accountant. In arriving at this conclusion, the court adopted a test that provides relatively weak protections for anonymous speakers. That test requires that plaintiffs seeking to unmask anonymous speakers make an initial showing that their legal claims have merit and that the platforms provide notice to the anonymous account being targeted by the subpoena. Once those prerequisites are met, the anonymous speaker has to be unmasked. EFF does not believe that the California court’s test adequately protects the First Amendment rights of anonymous speakers, especially given that other state and federal courts have developed more protective tests. Anonymity is often a shield used by speakers to express controversial or unpopular views that allows the ensuing debate to focus on the substance of the speech rather than the identity of the speaker. Courts more protective of the First Amendment right to anonymity typically require that before unmasking speakers, plaintiffs must show that they can prove their claims—similar to what they would need to show at a later stage in the case. And even when plaintiffs prove they have a legitimate case, these courts separately balance plaintiffs’ need to unmask the users against those speakers’ First Amendment rights to anonymity. By not adopting a more protective test, the California court’s decision potentially makes it easier for civil litigants to pierce online speakers’ anonymity, even when their legal grievances aren’t legitimate. 
This could invite a fresh wave of lawsuits designed to harass or intimidate anonymous speakers rather than vindicate actual legal grievances. We hope that we’re wrong about the implications of the court’s ruling and that California courts will take steps to prevent abuse of unmasking subpoenas. In the meantime, online platforms should continue to stand up for their users’ anonymous speech rights and defend them in court when necessary.

EFF Urges DHS to Abandon Social Media Surveillance and Automated “Extreme Vetting” of Immigrants (Thu, 16 Nov 2017)
EFF is urging the Department of Homeland Security (DHS) to end its programs of social media surveillance and automated “extreme vetting” of immigrants. Together, these programs have created a privacy-invading integrated system to harvest, preserve, and data-mine immigrants’ social media information, including use of algorithms that sift through posts using vague criteria to help determine who to admit or deport. EFF today joined a letter from the Brennan Center for Justice, Georgetown Law’s Center on Privacy and Technology, and more than 50 other groups urging DHS to immediately abandon its self-described “Extreme Vetting Initiative.” Also, EFF’s Peter Eckersley joined a letter from more than 50 technology experts opposing this program. This follows EFF’s participation last month in comments from the Center for Democracy & Technology and dozens of other advocacy groups urging DHS to stop retaining immigrants’ social media information in a government record-keeping system called “Alien Files” (A-Files).

DHS has for some time collected social media information about immigrants and foreign visitors. DHS recently published a notice announcing its policy of storing that social media information in its A-Files. Also, DHS announced earlier this year that it is developing its “Extreme Vetting Initiative,” which will apply algorithms to the social media of immigrants to automate decision-making in deportation and other procedures. These far-reaching programs invade the privacy and chill the freedoms of speech and association of visa holders, lawful permanent residents, and naturalized U.S. citizens alike. These policies not only invade privacy and chill speech, they are also likely to discriminate against immigrants from Muslim nations. Furthermore, other countries may imitate DHS’s policies, including countries where civil liberties are nascent and freedom of expression is limited.

Storing Social Media Information in the A-Files Chills First Amendment Rights

The U.S.
government assigns alien registration numbers to people immigrating to the United States and to non-immigrants granted authorization to visit. In addition to containing these alien registration numbers, the government’s A-File record-keeping system stores the travel and immigration history of millions of people, including visa holders, asylees, lawful permanent residents, and naturalized citizens.

In our previous post on DHS’s new A-Files policy, we outlined the many problems with the government’s use of this record-keeping system to store, share, and use immigrants’ social media information. In the new comments, we urge DHS to stop storing social media information in the A-Files for the following reasons:

Chilled Expression. Activists, artists, and other social media users will feel pressure to censor themselves or even disengage completely from online spaces. Afraid of surveillance, the naturalized and U.S.-born citizens with whom immigrants engage online may also limit their social media presence by sanitizing or deleting their posts.

Privacy of Americans Invaded. DHS’s social media surveillance plan, while directed at immigrants, will burden the privacy of naturalized and U.S.-born citizens, too. Even after immigrants are naturalized, DHS will preserve their social media data in the A-Files for many years. DHS’s sweeping surveillance will also invade the privacy of the many millions of U.S.-born Americans who engage with immigrants on social media.

Creation of Second-Class Citizens. DHS’s 100-year retention of naturalized citizens’ social media content in A-Files means a life-long invasion of their privacy. Effectively, DHS’s policy will relegate over 20 million naturalized U.S. citizens to second-class status.

Unproven Benefits. While DHS claims that collecting social media can help identify security threats, research shows that expressive Internet conduct is an inaccurate predictor of one’s propensity for violence.
Furthermore, potential bad actors can easily circumvent social media surveillance by deleting their content or altering their online personas. Also, the meaning of social media content is highly idiosyncratic. Posts replete with sarcasm and allusions are especially difficult to decipher. This task is further complicated by the rising use of non-textual information like emojis, GIFs, and “likes.”

Immigrants feel increasingly threatened by the policies of the Trump administration. Social media surveillance contributes to a climate of fear among immigrant communities, and deters First Amendment activity by immigrants and citizens alike. Thus, EFF urges DHS not to retain social media content in immigrants’ A-Files.

"Extreme Vetting" of Immigrants is Ineffective and Discriminatory

In July, DHS’s Immigration and Customs Enforcement (ICE) sought the expertise of technology companies to help it automate its review of social media and other information for purposes of immigration enforcement. Specifically, ICE documents reveal that DHS seeks to develop: “processes that determine and evaluate an applicant’s probability of becoming a positively contributing member of society as well as their ability to contribute to national interests”; and “methodology that allows [the agency] to assess whether an applicant intends to commit criminal or terrorist acts after entering the United States.”

In the November letter, we urge DHS to abandon “extreme vetting” for many reasons.

Chilling of Online Expression. ICE’s scouring of social media to make deportation and other immigration decisions will encourage immigrants, and Americans who communicate with immigrants, to censor themselves or delete their social media accounts. This will greatly reduce the quality of our national public discourse.

Technical Inadequacy. ICE’s hope to forecast national security threats via predictive analytics is misguided. The necessary computational methods do not exist.
Algorithms designed to judge the meaning of text struggle to identify the tone of online posts, and most fail to understand the meaning of posts in other languages. Flawed human judgment can make human-trained algorithms similarly flawed.

Discriminatory Impact. ICE never defines the critical phrases “positively contributing member of society” and “contribute to national interests.” They have no meaning in American law. Efforts to automatically identify people on the basis of these nebulous concepts will lead to discriminatory results. Moreover, these vague and overbroad phrases originate in President Trump’s travel ban executive orders (Nos. 13,769 and 13,780), which courts have enjoined as discriminatory. Thus, extreme vetting would cloak discrimination behind a veneer of objectivity.

In short, EFF urges DHS to abandon “extreme vetting” and any other efforts to automate immigration enforcement. DHS should also stop storing social media information in immigrants’ A-Files. Social media surveillance of our immigrant friends and neighbors is a severe intrusion on digital liberty that does not make us safer.

Stupid Patent Data of the Month: the Devil in the Details (Wed, 15 Nov 2017)
A Misunderstanding of Data Leads to a Misunderstanding of Patent Law and Policy

Bad patents shouldn’t be used to stifle competition. A process to challenge bad patents when they improperly issue is important to keeping consumer costs down and encouraging new innovation. But according to a recent post on a patent blog, post-grant procedures at the Patent Office regularly get it “wrong” and improperly invalidate patents. We took a deep dive into the data relied upon by patent lobbyists and found that, contrary to their arguments, it undermines the very claims they’re making.

The Patent Office has several procedures to determine whether an issued patent was improperly granted to a party that does not meet the legal standard for patentability of an invention. The most significant of these processes is called inter partes review, and it is essential to reining in overly broad and bogus patents. The process helps prevent patent trolling by providing a target with a low-cost avenue for defense, so it is harder for trolls to extract a nuisance-value settlement simply because litigating is expensive. The process is, for many reasons, disliked by some patent owners. Congress is taking a new look at this process right now as a result of patent owners’ latest attempts to insulate their patents from review.

An incorrect claim about inter partes review (IPR) and other procedures like IPR at the Patent Trial and Appeal Board (PTAB) has been circulating, and was recently repeated in written comments at a congressional hearing by Philip Johnson, former head of intellectual property at Johnson & Johnson. Josh Malone and Steve Brachmann, writing for a patent blog called “IPWatchdog,” are the source of this error. In their article, cited in the comments to Congress, they claim that the PTAB is issuing decisions contrary to district courts at a very high rate.
We took a closer look at the data they use, and found that the rate of disagreement is actually quite small: about 7%, not the 76% claimed by Malone and Brachmann. How did they get it so wrong? To explain, we’ll have to get into the nuts and bolts of how such an analysis can be run.

Malone and Brachmann relied on data provided by a service called “Docket Navigator,” which collects statistics and documents related to patent litigation and enforcement. The search they used was to see how many cases Docket Navigator marked as a finding of “unpatentable” (from the Patent Office) and a finding of “not invalid” (from a district court). This is a very, very simplistic analysis. For instance, it would consider an unpatentability finding by the PTAB about Claim 1 of a patent to be inconsistent with a district court finding that Claim 54 is not invalid. It would consider a finding of anticipation by the PTAB to be inconsistent with a district court rejecting an argument for invalidity based on a lack of written description. These are entirely different legal issues; different results are hardly inconsistent.

EFF, along with CCIA, ran the same Docket Navigator search Malone and Brachmann ran for patents found “not invalid” and “unpatentable or not unpatentable,” generating 273 results, and a search for patents found “unpatentable” and “not invalid,” generating 208 results (our analysis includes a few results that weren’t yet available when Malone and Brachmann ran their search). We looked into each of the 208 results that Docket Navigator returned for patents found unpatentable and not invalid. Our analysis shows that the “200” number, and consequently the rate at which the Patent Office is supposedly “wrong” based on a comparison to the times a court supposedly got it “right,” is well off the mark.
We reached our conclusions based on the following methodology:

We considered “inconsistent results” to occur any time the Patent Office reached a determination on any one of the conditions for patentability (namely, any of 35 U.S.C. §§ 101, 102, 103, or 112) and the district court reached a different conclusion based on the same condition for patentability, with some important caveats, as discussed below. For example, if the Patent Office found claims invalid for lack of novelty (35 U.S.C. § 102), we would not treat a district court finding of claims definite (35 U.S.C. § 112(b)) as inconsistent.

We did not distinguish between a finding of invalidity or lack of invalidity based on lack of novelty (35 U.S.C. § 102) or obviousness (35 U.S.C. § 103), as these bases are highly related. For example, if the Patent Office determined claims unpatentable based on anticipation, we would mark as inconsistent any jury finding that the claims were not obvious.

We did not consider a decision relating to the validity of one set of claims to be inconsistent with a decision relating to the validity of a different, distinct set of claims. For example, if the Patent Office found claims 1-5 of a patent not patentable, we would not consider that inconsistent with a district court finding claims 6-10 not invalid. We would count as inconsistent, however, any two differing decisions that overlapped in terms of claims, even if there was not identity of claims.

We distinguished between the conditions for patentability of 35 U.S.C. § 112. For example, a district court finding of definiteness under 35 U.S.C. § 112(b) would not be treated as inconsistent with a Patent Office finding of lack of written description under 35 U.S.C. § 112(a).

We did not consider a district court decision to be inconsistent with a Patent Office decision if that district court decision was later overturned by the Federal Circuit.
However, we did treat a Patent Office decision as inconsistent with a district court decision even if that Patent Office decision was later reversed.1 For example, if the Patent Office found claims to be not patentable, but the Patent Office was later reversed by the Federal Circuit, we would still mark that decision as inconsistent with the district court. We even counted Patent Office decisions as inconsistent in the five cases where they were affirmed by the Federal Circuit and therefore were correct according to a higher authority than a district court. We did this in order to ensure we included results tending to support Malone and Brachmann’s thesis that the Patent Office was reaching the “wrong” results.

We excluded fourteen results that were not the result of any district court finding. Specifically, several patents were included because of findings by the International Trade Commission, an agency (like the Patent Office) that hears cases in a non-Article III court and that does not have a jury. Those results would not meet Malone and Brachmann’s thesis of patents being held “valid in full and fair trials in a court of law.”

We excluded two results that should not have been included in the set and appear to be a coding error by Docket Navigator. These results were excluded because there was no final decision from the Patent Office as to unpatentability.

Here’s what we found of the 194 remaining cases:

A plurality of the results (n=85) were only included because the Patent Office determined claims were unpatentable based on failure to meet one or more requirements for patentability (usually 35 U.S.C. § 102 or 103) and a district court found the claims met other requirements for patentability (usually 35 U.S.C. § 101 or 112). That is, the district court made no finding whatsoever relating to the reasons why the Patent Office determined the claims should be canceled. Thus the Patent Office and the court did not disagree as to a finding on validity.
For example, the Docket Navigator results include U.S. Patent No. 5,563,883. The Patent Office determined claims 1, 3, and 4 of that patent were unpatentable based on obviousness (35 U.S.C. § 103). A district court determined that those same claims, however, met the definiteness requirements (35 U.S.C. § 112(b)). The Federal Circuit affirmed the Patent Office’s decision invalidating the claims, and the district court never decided whether those claims were obvious at all.

A further 46 results were situations where either (1) the patent owner requested the Patent Office cancel claims or (2) claims were stipulated to be “valid” as part of a settlement in district court. Thus the Patent Office and the court findings were not inconsistent because at least one of them did not reach any decision on the merits. For example, the Docket Navigator results include U.S. Patent No. 6,061,551. A jury found claims not invalid, but the Federal Circuit reversed that finding, holding the claims invalid. After that determination, the patent owner requested an adverse judgment at the Patent Office. As another example, the Docket Navigator results include U.S. Patent No. 7,676,411. The Patent Office found claims invalid as abstract (35 U.S.C. § 101) and obvious (35 U.S.C. § 103). Because the parties stipulated that this patent was “valid” as part of a settlement, which is generally not considered to be a merits determination, this patent is also tagged as “not invalid” by Docket Navigator.

A further 15 results were not inconsistent for a variety of reasons. For example, five results were not inconsistent because the Patent Office and the district court considered different patent claims. As another example, U.S. Patent No. 7,135,641 represented an instance where a jury found claims not invalid, but the district court judge reversed that finding post-trial. As another example, in the district court, U.S.
patent 5,371,734 was held “not invalid” on summary judgment, but that determination was later reversed by the Federal Circuit.

Under this initial cut, only 48 of the entries arguably could be considered to have inconsistent or disagreeing results between the Patent Office and a district court. But in the majority of those cases (n=28), a judge or jury considered one set of prior art when determining whether the claim was new and nonobvious, while the Patent Office considered a different set. It is not surprising that the two forums would consider different evidence. Patent Office proceedings generally only consider certain types of prior art (printed publications). That a district court proceeding may result in a finding of “not invalid” based on, e.g., prior use, is not an inconsistent result.

Eliminating those results where the Patent Office was considering completely different arguments and art means the total number of times the Patent Office arguably reached a different conclusion than a district court is only 20, out of the 273 times a district court determined a patent “not invalid” for some reason. That means that the Patent Office is “inconsistent” with district courts only about 7% of the time, not 76% of the time.

It is also important to keep in mind that there have been over 1,800 final decisions in inter partes review, covered business method review, or post-grant review proceedings. In all of those, only 20 times did the Patent Office reach a conclusion that may be considered inconsistent with the district court in ways that negatively impact patent owners. That’s a rate of only around 1%, which is remarkably low.

Moreover, inconsistent results happen even within the court system. For example, in Abbott v. Andrx, 452 F.3d 1331, the Federal Circuit found that Abbott’s patent was likely to be held invalid. But only one year later, in Abbott v.
Andrx, 473 F.3d 1196, the Federal Circuit found that the same patent was likely to be found not invalid. The two different results were explained by the fact that the two defendants had presented different defenses. This is not unusual. Thus the fact that there may be different results doesn’t lead to a conclusion that the whole system is faulty.

An analysis like ours with respect to this data set takes time, and a few cases might slip through the cracks or be incorrectly coded, but the overall result demonstrates that the vast majority of patent owners are never subject to inconsistent results between district court and the Patent Office.

It is disappointing that Johnson, Malone, and Brachmann made claims that the data don’t support, but this episode teaches a valuable lesson: when using data sets, it is important to understand what, exactly, the data is and how to interpret it. Unfortunately, here it looks like Malone and Brachmann’s misunderstanding of the results provided by Docket Navigator propagated to Johnson’s testimony, and it would likely have traveled further if no one had looked harder at it. We’ve used both Docket Navigator and Lex Machina in our analyses on numerous occasions, and even in briefs we submit to courts. Both services provide extremely valuable information about the state of patent litigation and policy. But their usefulness is diminished when the data they present are not understood. As always, the devil is in the details.
1. For this reason, our results differ slightly from those of CCIA, reported here. CCIA did not treat decisions as inconsistent if the Patent Office decision was later affirmed on appeal. Five patents we considered inconsistent in our analysis were excluded in CCIA’s analysis. Each approach has merit.
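The headline arithmetic above can be sketched in a few lines. This is a rough illustration only: every count is taken from the post itself, and the variable names are our own shorthand, not anything from Docket Navigator.

```python
# Sketch of the post's headline arithmetic. All counts come from the
# article; variable names are illustrative shorthand.

arguably_inconsistent = 48   # entries left after excluding non-conflicts
different_prior_art = 28     # forums weighed different evidence or art

# Entries where the two forums genuinely disagreed on comparable records:
truly_inconsistent = arguably_inconsistent - different_prior_art  # 20

not_invalid_findings = 273   # district court "not invalid" results
final_ptab_decisions = 1800  # final IPR/CBM/PGR decisions (approximate)

rate_vs_courts = truly_inconsistent / not_invalid_findings
rate_overall = truly_inconsistent / final_ptab_decisions

print(f"disagreement vs. district courts: {rate_vs_courts:.1%}")  # about 7%
print(f"share of all final PTAB decisions: {rate_overall:.1%}")   # about 1%
```

Running the numbers this way makes the gap plain: 20 genuine disagreements out of 273 “not invalid” findings is roughly 7%, nowhere near the claimed 76%.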

Announcing the Security Education Companion (Wed, 15 Nov 2017)
The need for robust personal digital security is growing every day. From grassroots groups to civil society organizations to individual EFF members, people from across our community are voicing a need for accessible security education materials to share with their friends, neighbors, and colleagues. We are thrilled to help. Today, EFF has launched the Security Education Companion, a new resource for people who would like to help their communities learn about digital security but are new to the art of security training.

It’s rare to find someone with not only technical expertise but also a strong background in pedagogy and education. More often, folks are stronger in one area: someone might have deep technical expertise but little experience teaching, or, conversely, someone might have a strong background in teaching and facilitation but be new to technical security concepts. The Security Education Companion is meant to help these kinds of beginner trainers share digital security with their friends and neighbors in short awareness-raising gatherings.

Lesson modules guide you through creating sessions for topics like passwords and password managers, locking down social media, and end-to-end encrypted communications, along with handouts, worksheets, and other remixable teaching materials. The Companion also includes a range of shorter “Security Education 101” articles to bring new trainers up to speed on getting started with digital security training, foundational teaching concepts, and the nuts and bolts of planning a workshop.

Teaching requires mindful facilitation, thoughtful layering of content, sensitivity to learners’ needs and concerns, and mutual trust built up over time.
When teaching security in particular, the challenge includes communicating counterintuitive security concepts, navigating different devices and operating systems, recognizing learners’ different attitudes toward and past experiences with various risks, and taking into account a constantly changing technical environment. What people learn—or don’t learn—has real repercussions. Nobody knows this better than the digital security trainers currently pushing this work forward around the world, and we’ve been tremendously fortunate to learn from their expertise. We’ve interviewed dozens of U.S.-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re working hard to ensure that the Companion supports, complements, and adds to the existing collective body of training knowledge and practice. We will keep adding new materials in the coming months, so check back often as the Companion grows and improves. Together, we look forward to improving as security educators and making our communities safer.

Visit SEC.EFF.ORG, a resource for people teaching digital security to their friends and neighbors.

Appeals Court’s Disturbing Ruling Jeopardizes Protections for Anonymous Speakers (Wed, 15 Nov 2017)
A federal appeals court has issued an alarming ruling that significantly erodes the Constitution’s protections for anonymous speakers—and simultaneously hands law enforcement a near unlimited power to unmask them. The Ninth Circuit’s decision in U.S. v. Glassdoor, Inc. is a significant setback for the First Amendment. The ability to speak anonymously online without fear of being identified is essential because it allows people to express controversial or unpopular views. Strong legal protections for anonymous speakers are needed so that they are not harassed, ridiculed, or silenced merely for expressing their opinions. In Glassdoor, the court’s ruling ensures that any grand jury subpoena seeking the identities of anonymous speakers will be valid virtually every time. The decision is a recipe for disaster precisely because it provides little to no legal protection for anonymous speakers.

EFF applauds Glassdoor for standing up for its users’ First Amendment rights in this case and for its commitment to do so moving forward. Yet we worry that without stronger legal standards—which EFF and other groups urged the Ninth Circuit to apply (read our brief filed in the case)—the government will easily compel platforms to comply with grand jury subpoenas to unmask anonymous speakers.

The Ninth Circuit Undercut Anonymous Speech by Applying the Wrong Test

The case centers on a federal grand jury in Arizona investigating allegations of fraud by a private contractor working for the Department of Veterans Affairs. The grand jury issued a subpoena to Glassdoor, which operates an online platform that allows current and former employees to comment anonymously about their employers, seeking the identities of eight accounts that posted about the contractor. Glassdoor challenged the subpoena by asserting its users’ First Amendment rights. When the trial court ordered Glassdoor to comply, the company appealed to the U.S. Court of Appeals for the Ninth Circuit.
The Ninth Circuit ruled that because the subpoena was issued by a grand jury as part of a criminal investigation, Glassdoor had to comply absent evidence that the investigation was being conducted in bad faith. There are several problems with the court’s ruling, but the biggest is that in adopting a “bad faith” test as the sole limit on when anonymous speakers can be unmasked by a grand jury subpoena, it relied on a U.S. Supreme Court case called Branzburg v. Hayes.

In challenging the subpoena, Glassdoor rightly argued that Branzburg was not relevant because it dealt with whether journalists had a First Amendment right to protect the identities of their confidential sources in the face of grand jury subpoenas, and more generally, whether journalists have a First Amendment right to gather the news. This case, however, squarely deals with Glassdoor users’ First Amendment right to speak anonymously. The Ninth Circuit ran roughshod over the issue, calling it “a distinction without a difference.”

But here’s the problem: although the law is all over the map as to whether the First Amendment protects journalists’ ability to guard their sources’ identities, there is absolutely no question that the First Amendment grants anonymous speakers the right to protect their identities. The Supreme Court has repeatedly ruled that the First Amendment protects anonymous speakers, often by emphasizing the historic importance of anonymity in our social and political discourse. For example, many of our founders spoke anonymously while debating the provisions of our Constitution.

Because the Supreme Court in Branzburg did not outright rule that reporters have a First Amendment right to protect their confidential sources, it adopted a rule that requires a reporter to respond to a grand jury subpoena for their source’s identity unless the reporter can show that the investigation is being conducted in bad faith. This is a very weak standard and difficult to prove.
By contrast, because the right to speak anonymously has been firmly established by the Supreme Court and in jurisdictions throughout the country, the tests for when parties can unmask those speakers are more robust and protective of their First Amendment rights. These tests more properly calibrate the competing interests between the government’s need to investigate crime and the First Amendment rights of anonymous speakers. The Ninth Circuit’s reliance on Branzburg effectively eviscerates any substantive First Amendment protections for anonymous speakers by not imposing any meaningful limitation on grand jury subpoenas. Further, the court’s ruling puts the burden on anonymous speakers—or platforms like Glassdoor standing in their shoes—to show that an investigation is being conducted in bad faith before setting aside the subpoena.

The Ninth Circuit’s reliance on Branzburg is also wrong because the Supreme Court ruling in that case was narrow and limited to the situation involving reporters’ efforts to guard the identities of their confidential sources. As Justice Powell wrote in his concurrence, “I … emphasize what seems to me to be the limited nature of the Court’s ruling.” The standards in that unique case should not be transported to cases involving grand jury subpoenas to unmask anonymous speakers generally. However, that’s what the court has done—expanded Branzburg to now apply in all instances in which a grand jury subpoena targets individuals whose identities are unknown to the grand jury.

Finally, the Ninth Circuit’s use of Branzburg is improper because there are a number of other cases and legal doctrines that more squarely address how courts should treat demands to pierce anonymity. Indeed, as we discussed in our brief, there is a whole body of law that applies robust standards to unmasking anonymous speakers, including the Ninth Circuit’s previous decision in Bursey v. U.S., which also involved a grand jury.
The Ninth Circuit Failed to Recognize the Associational Rights of Anonymous Online Speakers

The court’s decision is also troubling because it takes an extremely narrow view of the kind of anonymous associations that should be protected by the First Amendment. In dismissing claims by Glassdoor that the subpoena chilled its users’ First Amendment rights to privately associate with others, the court ruled that because Glassdoor was not itself a social or political organization such as the NAACP, the claim was “tenuous.”

There are several layers to the First Amendment right of association, including the ability of individuals to associate with others, the ability of individuals to associate with a particular organization or group, and the ability of a group or organization to maintain the anonymity of members or supporters. Although it’s true that Glassdoor users are not joining an organization like the NAACP or a union, the court’s analysis ignores that other associational rights are implicated by the subpoena in this case. At minimum, Glassdoor’s online platform offers the potential for individuals to organize and form communities around their shared employment experiences. The First Amendment must protect those interests even if Glassdoor lacks an explicit political goal.

Moreover, even if it’s true that Glassdoor users may not have an explicitly political goal in commenting on their current or past employers, they are still associating online with others with similar experiences to speak honestly about what happens inside companies, what their professional experiences are like, and how they believe those employers can improve. The risk of being identified as a Glassdoor user is a legitimate one that courts should recognize as analogous to the risks of civil rights groups or unions being compelled to identify their members. Disclosure in both instances chills individuals’ abilities to explore their own experiences, attitudes, and beliefs.
The Ninth Circuit Missed an Opportunity to Vindicate Online Speakers’ First Amendment Rights

Significantly absent from the court’s decision was any real discussion of the value of anonymous speech and its historical role in our country. That is a shame, because the case would have been a great opportunity to affirm the importance of First Amendment protections for online speakers. EFF has long fought for anonymity online because we know its importance in fostering robust expression and debate. Subpoenas such as the one issued to Glassdoor deter people from speaking anonymously about issues related to their employment. Glassdoor provides a valuable service because its anonymous reviews help inform other people’s career choices while also keeping employers accountable to their workers and potentially the general public. The Ninth Circuit’s decision appeared unconcerned with this reality, and its “bad faith” standard places no meaningful limit on the use of grand jury subpoenas to unmask anonymous speakers. This will ultimately harm speakers, who can now be more easily targeted and unmasked, particularly if they have said something controversial or offensive. 

Who Has Your Back in Colombia? Our Third-Annual Report Shows Progress (Mi, 15 Nov 2017)
Fundación Karisma, in cooperation with EFF, has released its third annual ¿Dónde Estan Mis Datos? report, the Colombian version of EFF’s Who Has Your Back. And this year’s report has some good news.

According to the Colombian Ministry of Information and Communication Technologies, broadband Internet penetration in Colombia is well over 50% and growing fast. Like users around the world, Colombians put their most private data online, including their online relationships; their political, artistic, and personal discussions; and even their minute-by-minute movements. And all of that data necessarily has to go through one of a handful of ISPs. But without transparency from those ISPs, how can Colombians trust that their data is being treated with respect?

This project is part of a series across Latin America, adapted from EFF’s annual Who Has Your Back? report. The reports evaluate mobile and fixed ISPs to see which stand with their users when responding to government requests for personal information. While there’s definitely room for improvement, the third edition of the Colombian report shows substantial progress.

The full report is available only in Spanish from Fundación Karisma, but here are some highlights.

This third annual report goes further in evaluating companies than ever before. The 2017 edition doesn’t just look at ISPs’ data practices; it evaluates whether companies have corporate policies on gender equality and accessibility, whether they publicly report data breaches, and whether they’ve adopted HTTPS to protect their users and employees. By and large, the companies didn’t do very well on the new criteria, but that’s part of the point. Reports like this help push companies to do better.

That’s especially clear when looking at the criteria evaluated in previous years, where there has been significant improvement.

New for 2017, the Colombian ISP ETB has released the country’s first transparency report. 
This type of report, which lists the number and type of legal demands for data from government and law enforcement, is essential to helping users understand the scope of Internet surveillance and make informed decisions about storing their sensitive data or engaging in private communications. We’ve long urged companies to release these reports regularly, and we’re happy to see a Colombian ISP join in.

In addition, this year’s report shows that more companies than ever are releasing public information about their data protection policies and related corporate policies. We applaud this transparency, especially when their policies go further than the law requires, as is the case with both Telefónica and ETB.

Finally, more companies than ever are taking the proactive step of notifying their users of data demands, even when they are not formally required to do so. This commitment is important because it gives users a chance to defend themselves against overreaching government requests. In most situations, a user is in a better position than a company to challenge a government request for personal information, and of course, the user has more incentive to do so.

We’re proud to have worked with Fundación Karisma to push for transparency and users’ rights in Colombia and look forward to seeing further improvement in years to come.

20 Years of Protecting Intermediaries: Legacy of 'Zeran' Remains a Critical Protection for Freedom of Expression Online (Di, 14 Nov 2017)
This article first appeared on Nov. 10 in Law.com. At the Electronic Frontier Foundation (EFF), we are proud to be ardent defenders of §230. Even before §230 was enacted in 1996, we recognized that all speech on the Internet relies upon intermediaries, like ISPs, web hosts, search engines, and social media companies. Most of the time, it relies on more than one. Because of this, we know that intermediaries must be protected from liability for the speech of their users if the Internet is to live up to its promise, as articulated by the U.S. Supreme Court in ACLU v. Reno, of enabling “any person … [to] become a town crier with a voice that resonates farther than it could from any soapbox” and hosting “content … as diverse as human thought.” As we hoped—and based in large measure on the strength of the Fourth Circuit’s decision in Zeran—§230 has proven to be one of the most valuable tools for protecting freedom of expression and innovation on the Internet. In the past two decades, we’ve filed well over 20 legal briefs in support of §230, probably more than on any other issue, in response to attempts to undermine or sneak around the statute. Thankfully, most of these attempts were unsuccessful. In most cases, the facts were ugly—Zeran included. We had to convince judges to look beyond the individual facts and instead focus on the broader implications: that forcing intermediaries to become censors would jeopardize the Internet’s promise of giving a voice to all and supporting more robust public discourse than ever before possible. This remains true today, and it is worth remembering now, in the face of new efforts in both Congress and the courts to undermine §230’s critical protections.

Attacks on §230: The First 20 Years

The first wave of attacks on §230’s protections came from plaintiffs who tried to plead around §230 in an attempt to force intermediaries to take down online speech they didn’t like. 
Zeran was the first of these, with an attempt to distinguish between “publishers” and “distributors” of speech that the Fourth Circuit rightfully rejected. As we noted above, the facts were not pretty: the plaintiff sought to hold AOL responsible after an anonymous poster used his name and phone number on an AOL message board to indicate—incorrectly—that he was selling horribly offensive t-shirts about the Oklahoma City bombing. The court rightfully held that §230 protected against liability for both publishing and distributing user content. The second wave of attacks came from plaintiffs trying to deny §230 protection to ordinary users who reposted content authored by others—i.e., an attempt to limit the statute to protecting only formal intermediaries. In one case, Barrett v. Rosenthal, this attack succeeded in the California Court of Appeal. But in 2006, the California Supreme Court ruled that §230 protects all non-authors who republish content, not just formal intermediaries like ISPs. This ruling—which was urged by EFF as amicus along with several other amici—still protects ordinary bloggers and Facebook posters in California from liability for content they merely republish. Unsurprisingly, the California Supreme Court’s opinion included a four-page section dedicated entirely to Zeran. Another wave of attacks, also in the mid-2000s, came as plaintiffs tried to use the Fair Housing Act to hold intermediaries responsible when users posted housing advertisements that violated the law. Both Craigslist and Roommates.com were sued over discriminatory housing advertisements posted by their users. The Seventh Circuit, at the urging of EFF and other amici, held that §230 immunized Craigslist from liability for classified ads posted by its users—citing Zeran first in a long line of cases supporting broad intermediary immunity. 
Despite our best efforts, however, the Ninth Circuit found that §230 did not immunize Roommates.com from liability if, indeed, it was subject to the law. The majority opinion ignored both us and Zeran, citing the case only once in a footnote responding to the strong dissent. It found that Roommates.com could be at least partially responsible for the development of the ads because it had forced its users to fill out a questionnaire about housing preferences that included options that the plaintiffs asserted were illegal. The website endured four more years of needless litigation before the Ninth Circuit ultimately found that it hadn’t actually violated any anti-discrimination laws at all, even with the questionnaire. The court left its earlier opinion intact, however, and we were worried the exception carved out in Roommates.com would wreak havoc on §230’s protections. It luckily hasn’t been applied broadly by other courts—undoubtedly thanks in large part to Zeran’s stronger legal analysis and influence.

The Fight Continues

We are now squarely in the middle of a fourth wave of attack—efforts to hold intermediaries responsible for extremist or illegal online content. The goal, again, seems to be forcing intermediaries to actively screen users and censor speech. Many of these efforts are motivated by noble intentions, and the speech at issue is often horrible, but these efforts also risk devastating the Internet as we know it. Some of the recent attacks on §230 have been made in the courts. So far, they have not been successful. In these cases, plaintiffs are seeking to hold social media platforms accountable on the theory that providing a platform for extremist content counts as material support for terrorism. Courts across the country have universally rejected these efforts. The Ninth Circuit will be hearing one of these cases, Fields v. Twitter, in December. But the current attacks are unfortunately not only in the courts. 
The more dangerous threats are in Congress. Both the House and Senate are considering bills that would exempt charges under federal and state criminal and civil laws related to sex trafficking from §230’s protections—the Stop Enabling Sex Traffickers Act (S. 1693) (SESTA) in the Senate, and the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865) in the House. While the legislators backing these laws are largely well meaning, and while these laws are presented as targeting commercial classified ads websites like Backpage.com, they don’t stop there. Instead, SESTA and its House counterpart punish small businesses that just want to run a forum where people can connect and communicate. They will have disastrous consequences for community bulletin boards and comment sections, without making a dent in sex trafficking. In fact, it is already a federal criminal offense for a website to run ads that support sex trafficking, and §230 doesn’t protect against prosecutions for violations of federal criminal laws. Ultimately, SESTA and its House counterpart would impact all platforms that host user speech, big and small, commercial and noncommercial. They would also impact any intermediary in the chain of online content distribution, including ISPs, web hosting companies, websites, search engines, email and text messaging providers, and social media platforms—i.e., the platforms that people around the world rely on to communicate and learn every day. All of these companies come into contact with user-generated content: ads, emails, text messages, social media posts. Under these bills, if any of this user-generated content somehow related to sex trafficking, even without the platform’s knowledge, the platform could be held liable. Zeran’s analysis from 20 years ago demonstrates why this is a huge problem. 
Because these bills would have far-reaching implications—like every other legislative proposal for limiting §230—they would open Internet intermediaries, companies, nonprofits, and community-supported endeavors alike to massive legal exposure. Under this cloud of legal uncertainty, new websites, along with their investors, would be wary of hosting open platforms for speech—or of even starting up in the first place—for fear that they would face crippling lawsuits if third parties used their websites for illegal conduct. They would have to bear litigation costs even if they were completely exonerated, as Roommates.com was after many years. Small platforms that already exist could easily go bankrupt trying to defend against these lawsuits, leaving only larger ones. And the companies that remained would be pressured to over-censor content in order to proactively avoid being drawn into a lawsuit. EFF is concerned not only because this would chill new innovation and drive smaller players out of the market. Ultimately, these bills would shrink the spaces online where ordinary people can express themselves, with disastrous results for community bulletin boards and local newspapers’ comment sections. They threaten to transform the relatively open Internet of today into a closed, limited, censored Internet. This is the very result that §230 was designed to prevent. Since Zeran, the courts have recognized that without strong §230 protections, the promise of the Internet as a great leveler—amplifying and empowering voices that have never been heard, and allowing ideas to be judged on their merits rather than on the deep pockets of those behind them—will be lost. Congress needs to abandon its misguided efforts to undermine §230 and heed Zeran’s time-tested lesson: if we fail to protect intermediaries, we fail to protect online speech for everyone.

EFF’s Street-Level Surveillance Project Dissects Police Technology (Di, 14 Nov 2017)
Step onto any city street and you may find yourself subject to numerous forms of police surveillance—many imperceptible to the human eye. A cruiser equipped with automated license plate readers (also known as ALPRs) may have just logged where you parked your car. A cell-site simulator may be capturing your cell-phone data incidentally while detectives track a suspect nearby. That speck in the sky may be a drone capturing video of your commute. Police might use face recognition technology to identify you in security camera footage. EFF first launched its Street-Level Surveillance project in 2015 to help inform the public about the advanced technologies that law enforcement agencies are deploying in our communities, often without any transparency or public process. We’ve scored key victories in state legislatures and city councils, limiting the adoption of these technologies and how they can be used, but the surveillance continues to spread, agency by agency. To combat the threat, EFF is proud to release the latest update to our work: a new mini-site that shines light on a wide range of surveillance technologies, including ALPRs, cell-site simulators, drones, face recognition, and body-worn cameras. Designed with community advocates, journalists, and policymakers in mind, Street-Level Surveillance seeks to answer the pressing questions about police technology. How does it work? What kind of data does it collect? How are police using it? Who’s selling it? What are the threats, and what is EFF doing to defend our rights? We also offer resources specially tailored for criminal defense attorneys, who must confront evidence collected by these technologies in court. These resources are only a launching point for advocacy. Campus and community organizations working to increase transparency and accountability around the use of surveillance technology can find additional resources and support through our Electronic Frontier Alliance. 
We hope you’ll join us in 2018 as we redouble our efforts to combat invasive police surveillance. 

Despite A Victory on IP, the TPP's Resurgence Hasn't Cured Its Ills (Sa, 11 Nov 2017)
Update: The official Ministerial statement on the new Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), including the schedule of suspended provisions, was released on November 11. Ever since the United States withdrew from the Trans-Pacific Partnership (TPP) back in January, the remaining eleven countries have been quietly attempting to bring a version of the agreement into force. Following some initial confusion, it was finally announced today that they have reached an "agreement in principle" on "core elements" of a deal. Even so, Canada's trade minister, François-Philippe Champagne, has confirmed that the agreement is far from being finalized, recognizing that more work was needed on some key issues. Meanwhile, the TPP has been renamed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), and an official statement is due to be released on Saturday, November 11. However, what we already know is that almost the entire Intellectual Property (IP) chapter—the source of some of the most controversial elements of the original agreement—has been suspended. Back in August, EFF wrote to the TPP ministers explaining why it would make no sense to include copyright term extension in the agreement, because literally none of the remaining parties to the TPP would benefit from doing so. The apparent decision of the eleven TPP countries to exclude not only the copyright provisions but nearly the entire IP chapter from the agreement more than vindicates this. As we have explained at length elsewhere, IP simply isn't an appropriate topic to be dealt with in trade negotiations, where issues such as the length of copyright and bans on circumventing DRM are traded off against totally unrelated issues like dairy quotas and sources of yarn used in garment manufacturing. It is important to note that the agreement's IP chapter has only been "suspended". Ever since the U.S. 
pulled out of the TPP, the other countries involved have been trying to salvage the deal by suspending contentious elements. Suspending issues is a common tactic in trade negotiations, as it allows countries to declare victory despite major areas of disagreement. Moreover, suspending provisions does not stop countries from discussing them. As Michael Geist has pointed out, the IP chapter may still be subject to negotiation as part of working groups. At present there is also little clarity on how the suspended provisions would be treated if the U.S. rejoins the agreement. The eleven countries could ratify an agreement that automatically reinstates these provisions when the U.S. comes back. If the countries end up being bound by provisions that they have not agreed to because the U.S. rejoins, the suspension of the IP chapter would not count for much. Nevertheless, the exclusion of so much of the IP chapter at this stage of the negotiations is a strong rejection of US-oriented provisions and a good sign for copyright standards being discussed at other trade venues. Canada, which has the second biggest economy among remaining TPP countries after Japan, is simultaneously negotiating the North American Free Trade Agreement (NAFTA) and will need to ensure consistency across NAFTA and the TPP. Other TPP nations such as Vietnam and Japan are involved in the Regional Comprehensive Economic Partnership (RCEP) negotiations. Although the IP chapter was the worst of the TPP, it was not the only concerning part of the agreement for users. There are provisions elsewhere in the agreement that pose a threat to user rights and that we remain concerned about. For example, the telecommunications chapter establishes a hierarchy of interests in which unfettered trade in telecommunications services and measures to protect the security and confidentiality of messages are prioritized over the privacy of users' personal data. 
The investment chapter includes an investor-state dispute settlement (ISDS) process, which enables multinational companies to challenge any new law or government action at the federal, state, or local level in a country that is a signatory to the agreement. Such provisions not only make no sense in trade agreements but are also an affront to democracy and a threat to any law designed to protect the public interest. The electronic commerce chapter—with its weak support for privacy, its toothless provisions on net neutrality, and the poor trade-off made between access to the source code of imported products and the security of end users—also remains part of the agreement and is unlikely to change much. Any renegotiation of the agreement can only be successful if member states improve upon and fix the broken process of trade negotiations that led us to this point. The TPP negotiations have been carried out in secret, without public participation or even visibility into the draft document, although corporate lobbyists had direct access to the texts and the ability to influence the agreement. Even when member states have initiated consultations on the TPP at the national level, brief consultation periods between submissions and ministerial meetings have left stakeholders frustrated and with the sense that it is just "consultation theatre". The only way we can trust that the TPP agreement will reflect users' interests is if the reopened negotiations are inclusive, transparent, and balanced, and create avenues for meaningful consultation and participation from stakeholders. The decision to exclude some of the most dangerous threats to the public's rights to free expression, access to knowledge, and privacy online is a big win for users, if indeed the TPP countries follow through with that decision as now seems likely. However, the TPP was, and remains, a bad model for Internet regulation. 

Another Court Overreaches With Site-Blocking Order Targeting Sci-Hub (Fr, 10 Nov 2017)
Nearly six years ago, Internet user communities rose up and said no to the disastrous SOPA copyright bill. This bill proposed creating a new, quick court order process to compel various Internet services—free speech’s weak links—to help make websites disappear. Today, despite the failure of SOPA, a federal court in Virginia issued just such an order, potentially reaching many different kinds of Internet services. The website in the crosshairs this time was Sci-Hub, a site that provides free access to research papers that are otherwise locked behind paywalls. Sci-Hub and sites like it are a symptom of a serious problem: people who can’t afford expensive journal subscriptions, and who don’t have institutional access to academic databases, are unable to use cutting-edge scientific research. Sci-Hub’s continued popularity both in the U.S. and in economically disadvantaged countries demonstrates the unfair imbalance in access to knowledge that prompted the site’s creation. Sci-Hub is also less revolutionary than its critics often imagine: it continued a longstanding tradition of informal sharing among researchers. Whatever the legality of Sci-Hub itself, the remedy pursued in this case by the American Chemical Society and awarded by the court is a dangerous overreach. Because Sci-Hub didn’t appear in court to defend itself, the court issued a default judgment. ACS, a scientific publisher, asked the court for an injunction to stop the infringement it claimed in the suit. But the injunction ACS proposed was incredibly broad: it purported to cover not only Sci-Hub but “any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries.” None of these companies were named in the suit. 
In fact, ACS probably couldn’t name them as legitimate defendants, because simply providing services to an infringing website, or including it in search results, doesn’t make an Internet service legally responsible for the infringement. What’s more, the Digital Millennium Copyright Act limits the remedies that courts can impose against many kinds of Internet intermediaries, including hosting services and search engines. That’s a vital protection for all Internet users, because without it, the services that help us access and communicate information over the Internet would face the impossible and error-prone task of policing innumerable users’ use of innumerable copyrighted works. Even attempting this would likely be so costly and daunting as to drive new Internet businesses out of the market, leaving today’s Internet behemoths (who can afford to do some of the policing that major media companies demand) in full control. ACS bypassed both the DMCA and basic copyright law to get a court order directed at Internet intermediaries. It simply filed a proposed injunction labeling search engines, domain registrars, and so on as “entities in privity” with Sci-Hub. A magistrate judge adopted their proposal as-is. The Computer and Communications Industry Association stepped in at that point with an amicus brief. They pointed out that injunctions can only be directed to a named party, or to those in “active concert or participation” with them. The “active concert” rule keeps a party from avoiding a court order by acting through an associate or coconspirator. It’s not a free pass to write a court order that binds anyone who does business with a defendant, especially where the law involved (here, copyright) excludes those third parties from liability. CCIA also pointed out that “privity” is a vague term with no fixed meaning in this context. It could potentially sweep in everyone who had ever engaged in the smallest business dealings with Sci-Hub. 
Unfortunately, while the court removed the vague “privity” language from the injunction, it proceeded to issue the order, still directed at an open-ended swath of Internet companies that neither knew of nor caused Sci-Hub’s copyright infringement. We hope that any Internet companies who get served with this order will challenge it in court rather than follow it blindly. If a domain name registrar, search engine, or other intermediary can be considered to be “in active concert” with a website that infringes, simply because they provide a basic service to that website, then the protections of copyright law and the DMCA can be rendered meaningless. Some Internet companies, including CloudFlare, have fought back against overbroad orders like this one and have succeeded in narrowing them. Companies can step up and defend their users by insisting on proper procedure and valid orders before helping to take down a website, even one that appears to be infringing. Internet users will reward companies that stand up for the rule of law and fight the tools of censorship.

House Judiciary Committee Forced Into Difficult Compromise On Surveillance Reform (Fr, 10 Nov 2017)
The House Judiciary Committee on Wednesday approved the USA Liberty Act, a surveillance reform package introduced last month by House Judiciary Committee Chairman Bob Goodlatte (R-VA) and Ranking Member John Conyers (D-MI).  The bill is seen by many as the best option for reauthorizing and reforming Section 702 of the FISA Amendments Act of 2008, which is set to expire in less than two months. Some committee members described feeling forced to choose between supporting stronger surveillance reforms or advancing the Liberty Act, and voiced their frustration about provisions that only partly block the warrantless search of Americans’ communications when an amendment with broader surveillance reforms was introduced by Reps. Zoe Lofgren (D-CA) and Ted Poe (R-TX).  Complicating their deliberations was the fact that the Senate Select Committee on Intelligence has already reported out a bill with far fewer surveillance protections. Ranking Member Conyers reiterated the conundrum: “We have been assured in explicit terms that if we adopt this amendment today, leadership will not permit this bill to proceed to the house floor.” He continued: “We have an opportunity to enact some meaningful reform. The alternative is no reform, and after all the work that we’ve put in, I don’t want this amendment to endanger the underlying legislation.” Rep. Jerry Nadler (D-NY) summed up much of the internal conflict: “I rise in opposition to this amendment, though I wish I didn’t have to.” Rep. Sheila Jackson Lee (D-TX) also appeared frustrated with the situation: “I’ll put on record that I resent being held hostage by leadership that does not know the intensity of the work and the responsibilities of the judiciary committee.” When asked to clarify her vote in advancing the USA Liberty Act, Jackson Lee said “I am perplexed, but will be working to join in moving the bill forward.” Rep. 
Jordan (R-OH) spoke up, too: “We’re the Judiciary Committee, charged with one thing and one thing only: defend the Constitution. Respect the Constitution. Adhere to the amendments in that great document, particularly, today, the Fourth Amendment. This is a darned good amendment.” Rep. Ted Lieu (D-CA) also invoked his Constitutional duty: “Ultimately it’s important that we support the Constitution. That’s why we’re here. That’s the oath we took. I’m going to support the amendment.” We appreciate the votes and the voices of Reps. Louie Gohmert (R-TX), Raúl Labrador (R-ID), Andy Biggs (R-AZ), Steve Cohen (D-TN), Ted Deutch (D-FL), David Cicilline (D-RI), Pramila Jayapal (D-WA), Jamie Raskin (D-MD), Conyers, Nadler, Jordan, Poe, Lofgren and Lieu.

TSA Plans to Use Face Recognition to Track Americans Through Airports (Do, 09 Nov 2017)
The “PreCheck” program is billed as a convenient service to allow U.S. travelers to “speed through security” at airports. However, the latest proposal released by the Transportation Security Administration (TSA) reveals the Department of Homeland Security’s greater underlying plan to collect face images and iris scans on a nationwide scale. DHS’s programs will become a massive violation of privacy that could serve as a gateway to the collection of biometric data to identify and track every traveler at every airport and border crossing in the country. Currently, TSA collects fingerprints as part of its application process for people who want to apply for PreCheck. So far, TSA hasn’t used those prints for anything besides the mandatory background check that’s part of the process. But this summer, TSA ran a pilot program at Atlanta’s Hartsfield-Jackson Airport and at Denver International Airport that used those prints and a contactless fingerprint reader to verify the identity of PreCheck-approved travelers at security checkpoints at both airports. Now TSA wants to roll out this program to airports across the country and expand it to encompass face recognition, iris scans, and other biometrics as well.

From Pilot Program to National Policy

While this latest plan is limited to the more than 5 million Americans who have chosen to apply for PreCheck, it appears to be part of a broader push within the Department of Homeland Security (DHS) to expand its collection and use of biometrics throughout its sub-agencies. For example, in pilot programs in Georgia and Arizona last year, Customs and Border Protection (CBP) used face recognition to capture pictures of travelers boarding a flight out of the country and walking across a U.S. 
land border and compared those pictures to previously recorded photos from passports, visas, and “other DHS encounters.” In the Privacy Impact Assessments (PIAs) for those pilot programs, CBP said that, although it would collect face recognition images of all travelers, it would delete any data associated with U.S. citizens. But what began as DHS’s biometric travel screening of foreign citizens morphed, without congressional authorization, into screening of U.S. citizens, too. Now the agency plans to roll out the program to other border crossings, and it says it will retain photos of U.S. citizens and lawful permanent residents for two weeks and information about their travel for 15 years. It retains data on “non-immigrant aliens” for 75 years. CBP has stated in PIAs that these biometric programs would be limited to international flights. However, over the summer, we learned CBP wants to vastly expand its program to cover domestic flights as well. It wants to create a “biometric” pathway that would use face recognition to track all travelers—including U.S. citizens—through airports from check-in, through security, into airport lounges, and onto flights. And it wants to partner with commercial airlines and airports to do just that. Congress seems poised to provide both TSA and CBP with the statutory authority to support these plans. As we noted in earlier blog posts, the “Building America’s Trust” Act would require the Department of Homeland Security (DHS) to collect biometric information from all people who exit the U.S., including U.S. and foreign citizens. And the TSA Modernization Act, introduced earlier this fall, includes a provision that would allow the agencies to deploy “biometric technology at checkpoints, screening lanes, bag drop and boarding areas, and other areas where such deployment would enhance security and facilitate passenger movement.” The Senate Commerce Committee approved the TSA bill in October. 
DHS Data in the Hands of Third Parties

These agencies aren’t just collecting biometrics for their own use; they are also sharing them with other agencies like the FBI and with “private partners,” to be used in ways that should concern travelers. For example, TSA’s PreCheck program has already expanded outside the airport context. The vendor for PreCheck, a company called Idemia (formerly MorphoTrust), now offers expedited entry for PreCheck-approved travelers at concerts and stadiums across the country. Idemia says it will equip stadiums with biometric-based technology, not just for security, but also “to assist in fan experience.” Adding face recognition would allow Idemia to track fans as they move throughout the stadium, just as another company, NEC, is already doing at a professional soccer stadium in Medellín, Colombia, and did at an LPGA championship event in California earlier this year. CBP is also exchanging our data with private companies. As part of CBP’s “Traveler Verification Service,” it will partner with commercial airlines and airport authorities to get access to the facial images of travelers that those non-government partners collect “as part of their business processes.” These partners can then access CBP’s system to verify travelers as part of the airplane boarding process, potentially doing away with boarding passes altogether. As we saw earlier this year, several airlines are already planning to implement their own face recognition services to check bags, and some, like JetBlue, are already partnering with CBP to implement face recognition for airplane boarding.

The Threat to Privacy and Our Freedom to Travel

We cannot overstate how big a change this will be in how the federal government regulates and tracks our movements, or the huge impact this will have on privacy and on our constitutional “right to travel” and right to anonymous association with others.
Even as late as May 2017, CBP recognized that its power to verify the identification of travelers was limited to those entering or leaving the country. But the TSA Modernization Act would allow CBP and TSA to collect any biometrics they want from all travelers—international and domestic—wherever they are in the airport. That’s a big change, and one we shouldn’t take lightly. Private implementation of face recognition at airports only makes this more ominous. All Americans should be concerned about these proposals because the data collected—your fingerprint, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes. That creates a risk that individuals will be implicated for crimes and immigration violations they didn’t commit. These systems are notoriously inaccurate and contain out-of-date information, which poses a risk to all Americans. But because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less capable of identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them. This vast data collection will also create a huge security risk. As we saw with the 2015 Office of Personnel Management data breach and the 2017 Equifax breach, no government agency or private company is capable of fully protecting your private and sensitive information. But losing your Social Security or credit card numbers to fraud is nothing compared to losing your biometrics. While you can change those numbers, you can’t easily change your face. Join EFF in speaking out against these proposals by emailing your senator and filing a comment opposing TSA’s plan today.

Take Action: No Airport Biometric Surveillance

SESTA Approved by Senate Commerce Committee—Still an Awful Bill (Wed, 08 Nov 2017)
The Senate Commerce Committee just approved a slightly modified version of SESTA, the Stop Enabling Sex Traffickers Act (S. 1693). SESTA was, and continues to be, a deeply flawed bill. It would weaken 47 U.S.C. § 230 (commonly known as “CDA 230” or simply “Section 230”), one of the most important laws protecting free expression online. Section 230 says that for purposes of enforcing certain laws affecting speech online, an intermediary cannot be held legally responsible for any content created by others. SESTA would create an exception to Section 230 for laws related to sex trafficking, thus exposing online platforms to an immense risk of civil and criminal litigation. What that really means is that online platforms would be forced to take drastic measures to censor their users. Some SESTA supporters imagine that compliance with SESTA would be easy—that online platforms would simply need to use automated filters to pinpoint and remove all messages in support of sex trafficking and leave everything else untouched. But such filters do not and cannot exist: computers aren’t good at recognizing subtlety and context, and with severe penalties at stake, no rational company would trust them to. Online platforms would have no choice but to program their filters to err on the side of removal, silencing a lot of innocent voices in the process. And remember, the first people silenced are likely to be trafficking victims themselves: it would be a huge technical challenge to build a filter that removes sex trafficking advertisements but doesn’t also censor a victim of trafficking telling her story or trying to find help. Along with the Center for Democracy and Technology, Access Now, Engine, and many other organizations, EFF signed a letter yesterday urging the Commerce Committee to change course.
We explained the silencing effect that SESTA would have on online speech:

“Pressures on intermediaries to prevent trafficking-related material from appearing on their sites would also likely drive more intermediaries to rely on automated content filtering tools, in an effort to conduct comprehensive content moderation at scale. These tools have a notorious tendency to enact overbroad censorship, particularly when used without (expensive, time-consuming) human oversight. Speakers from marginalized groups and underrepresented populations are often the hardest hit by such automated filtering.”

It’s ironic that supporters of SESTA insist that computerized filters can serve as a substitute for human moderation: the improvements we’ve made in filtering technologies in the past two decades would not have happened without the safety provided by a strong Section 230, which provides legal cover for platforms that might harm users by taking down, editing, or otherwise moderating their content (in addition to shielding platforms from liability for illegal user-generated content). We find it disappointing, but not necessarily surprising, that the Internet Association has endorsed this deeply flawed bill. Its member companies—many of the largest tech companies in the world—will not feel the brunt of SESTA in the same way as their smaller competitors. Small Internet startups don’t have the resources to police every posting on their platforms, which will uniquely pressure them to censor their users—that’s particularly true for nonprofit and noncommercial platforms like the Internet Archive and Wikipedia. It’s not surprising when a trade association endorses a bill that would give its own members a massive competitive advantage.
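The over-removal dynamic described above can be sketched with a toy example. This is purely illustrative (the blocklist, messages, and function are hypothetical, not any real platform's moderation code): a filter that matches keywords rather than meaning cannot tell an illegal advertisement apart from a victim asking for help or a news headline.

```python
# Hypothetical illustration: a naive keyword filter matches words,
# not context, so it removes innocent speech along with its target.

BLOCKLIST = {"escort", "trafficking", "for sale"}  # assumed terms

def naive_filter(message: str) -> bool:
    """Return True if the message would be removed by the filter."""
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

messages = [
    "Escort available tonight, cash only",                # intended target
    "I escaped a trafficking ring, where can I get help?",  # victim seeking help
    "Senate debates new trafficking bill",                  # news headline
]

for msg in messages:
    print(naive_filter(msg), msg)  # all three are flagged for removal
```

All three messages trip the filter, even though only the first is the kind of content the law targets. With legal penalties attached to misses, a platform's rational response is to widen the blocklist, which removes even more legitimate speech.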
If you rely on online communities in your day-to-day life; if you believe that your right to speak matters just as much on the web as on the street; if you hate seeing sex trafficking victims used as props to advance an agenda of censorship; please take a moment to write your members of Congress and tell them to oppose SESTA.

Take Action: Tell Congress to Stop SESTA

Here's How Congress Should Respond to the Equifax Breach (Tue, 07 Nov 2017)
There is very little doubt that Equifax’s negligent security practices were a major contributing factor in the massive breach of 145.5 million Americans’ most sensitive information. In the wake of the breach, EFF has spent a lot of time thinking through how to ensure that such a catastrophic breach doesn’t happen again and, just as importantly, what Congress can do to ensure that victims of massive data breaches are compensated fairly when a company is negligent with their sensitive data. In this post, we offer some suggestions that would go a long way toward accomplishing those goals.

A Federal Victims Advocate to Research and Report on Data Breaches

When almost half of the country has been affected by a data breach, it’s time for Congress to create a support structure for victims at the federal level. Once a consumer’s information is compromised, there is a complex process to wade through to figure out whom to call, what kind of protections to place on one’s credit information, and what legal remedies are available to hold those responsible accountable. To make it easier for consumers, a position should be created within the executive branch and given dedicated resources to support data breach victims. This executive branch official, or even department, would be charged with producing rigorous research reports on the harm caused by data breaches. This is important because the federal courts have made it very hard to sue companies like Equifax. The judiciary has effectively blocked litigation by setting too high a standard for plaintiffs to prove they were harmed by a data breach. Federal research and data on the financial harm Americans have faced would help bridge that gap. If attorneys can point to authoritative empirical data demonstrating that their clients have been harmed, they can hold companies like Equifax accountable for their failures to secure data.
Federal Trade Commission Needs to Have Rule-making Authority

Speaking of the executive branch, the Federal Trade Commission (FTC) has a crucial role to play in dealing with data breaches. As it stands now, federal regulators have little power to ensure that entities like Equifax aren’t negligent in their security practices. Though Americans rely on credit agencies to get essential services—apartments, mortgages, and credit cards, to name a few—there isn’t enough oversight and accountability to protect our sensitive information, and that’s concerning. Equifax could easily have prevented this catastrophic breach, but it didn’t take steps to do so. The company failed to patch its servers against a vulnerability that was being actively exploited, and on top of that, Equifax bungled its response to the data breach by launching a new site that could be easily imitated. To ensure strong security, Congress needs to empower an expert agency like the FTC, which has a history of and expertise in data security. This can be accomplished by restoring the FTC’s rule-making authority to set security standards and enforce them. The FTC is currently limited to intervening only in matters of unfair and deceptive business practices, and this authority is inadequate for addressing the increasingly sophisticated technological landscape and collection of personal data by third parties.

Congress Should Not Preempt State Data Breach Laws

While empowering executive agencies to address data breaches, Congress should take care to ensure that states don’t lose their own laws dealing with data breaches. Any federal law passed in response to the data breach should be the foundation—not the ceiling—upon which states can build according to their needs. States are generally more capable of quickly responding to changing data collection practices.
For example, California has one of the strongest laws when it comes to notifying people that their information was compromised in a data breach. Among other things, it prescribes a timeline for notifying victims and the manner in which it should be done. By the time a company has to comply with California’s laws, it has the infrastructure in place to notify the rest of the country. Given this, Congress should not pass a law that would gut states’ ability to have strong, consumer-friendly data breach laws.

Create a Fiduciary Duty for Credit Bureaus to Protect Information

Congress must also acknowledge the special nature of credit bureaus. Very few of us chose to have our most sensitive information hoarded by an entity like Equifax over which we have no control. Yet the country’s financial infrastructure relies on credit bureaus to execute even the most basic transactions. Since credit bureaus occupy a privileged position in our society’s economic system, Congress needs to establish that they have a special obligation and a fiduciary duty to protect our data. Ultimately, companies like Equifax, Experian, and TransUnion serve a purpose, but they lack a duty of care toward the individuals whose data they have harvested and sell, because those individuals are not the bureaus’ customers. Without obligations to adequately protect consumer data, we will likely see lax security that leads to more breaches on the scale of Equifax.

Give People Their Day in Court

The first big problem for those seeking a remedy for data breaches is just getting into court at all, especially in sufficient numbers to make a company take notice. Too many people impacted by data breaches learn to their great dismay that somewhere in the fine print they agreed to a mandatory arbitration clause.
This means that they cannot go to court at all, or must engage in individual arbitration rather than a class-action lawsuit. After the Equifax breach, much of the focus has been on binding arbitration clauses because of the company’s egregious attempt to use one to deny people their day in court. Given the scale of the breach and the harm, companies like Equifax shouldn't be able to prevent people from going to court in exchange for weak assistance like credit-monitoring services. As Congress debates how to protect Americans’ legal rights after a breach, the focus should go beyond just prohibiting mandatory arbitration clauses. Congress should preserve, protect, and create an unwaivable private right of action for Americans to sue companies that are negligent with sensitive data.

We Don’t Need Additional Criminal Laws

A knee-jerk reaction to a significant breach like Equifax’s is to suggest that we need additional criminal laws aimed at those who are responsible. The reality is, we don’t know who was behind the Equifax breach, so we cannot hold them accountable. More significantly, knowing their identity does nothing to ensure that Equifax actually applies crucial security patches when they are available. We don’t need increased criminal penalties—we need to incentivize protecting the data in the first place. Another good reason for this is that additional criminal anti-hacking laws more often end up hurting security researchers and hackers who want to do good. For instance, in Equifax’s case, a security researcher had warned the company about its security vulnerabilities months before the actual breach happened, yet the company seems to have done nothing to fix them. The security researcher couldn't go public with the findings without risking significant jail time and other penalties.
Without a meaningful way for security testers to raise problems in a public setting, companies have little reason to keep up with the latest security practices beyond fearing the resulting negative publicity. If Congress uses the Equifax breach to enhance or expand criminal penalties for unauthorized access under laws like the Computer Fraud and Abuse Act (CFAA), we’d all be worse off for it. Laws shouldn’t impede security testing or make it harder to discover and report vulnerabilities.

Free Credit Freezes, Not Credit Monitoring Services

Lastly, Congress needs to provide guidance on the immediate aftermath of a data breach. It has become almost standard practice to offer credit-monitoring services to data breach victims. In reality, these services offer little protection to victims of data breaches. Many of them are inadequate in the alerts they send consumers, and more fundamentally, there’s little utility in being informed of improper use of one’s credit information after it has already been exploited. Consumers will still potentially have to spend hours getting their information cleared up with the various credit bureaus and entities where the information was used fraudulently. Instead, Congress should legislate that victims of data breaches get free credit freezes at all major credit bureaus, which are much more effective in preventing financial harm. There are proposals in Congress along these lines, and we are glad to see that. There's no question that the Equifax breach has been a disaster. We at EFF are working with congressional offices to pass sensible reforms to ensure that it doesn't happen again.

Trump’s Blocking People From His Twitter Account Violates the First Amendment, EFF Tells Court (Mon, 06 Nov 2017)
Agencies’ and Officials’ Social Media Posts Are Vital Communications That Can’t Be Denied to People Whose Views Officials Don’t Like

New York, New York—President Donald Trump's blocking of people on Twitter who criticize him violates their constitutional right to receive government messages transmitted through social media and to participate in the forums created by them, the Electronic Frontier Foundation (EFF) told a court today. Public agencies and officials, from city mayors and county sheriff’s offices to U.S. Secretaries of State and members of Congress, routinely use social media to communicate opinions, official positions, services, and important public safety and policy messages. Twitter has become a vital communications tool for government, allowing local and federal officials to transmit information when natural disasters such as hurricanes and wildfires strike, hold online town halls, and answer citizens’ questions about programs. President Trump’s frequent use of Twitter to communicate policy decisions, air opinions on local and global events and leaders, and broadcast calls for congressional action has become a hallmark of his administration. In July, the Knight First Amendment Institute filed suit in the U.S. District Court for the Southern District of New York alleging that the president and his communications team violated the First Amendment by blocking seven people from the @realDonaldTrump Twitter account because they criticized the president or his policies. The seven individuals include a university professor, a surgeon, a comedy writer, a community organizer, an author, a legal analyst, and a police officer. In a brief filed today siding with the plaintiffs, EFF maintains that President Trump’s use of his Twitter account is akin to past presidents’ adoption of new communication technologies to engage directly with the public. President Franklin D.
Roosevelt delivered “fireside chats” to Americans over the radio, and presidential debates began being televised in the 1960s. It would be impermissible for a president to block certain individuals from receiving their messages, whether delivered by bullhorn, radio, or television. It should be the same for communications delivered via Twitter. On the local level, mayors use their Twitter feeds to direct residents to emergency services during storms and hurricanes, while fire chiefs use their feeds to transmit evacuation orders and emergency contact information. Citizens rely heavily on these channels for authoritative and reliable information in times of public safety crisis. It’s unthinkable, and unconstitutional, that certain people would be blocked from these messages because they sent a tweet criticizing the official or office maintaining the Twitter account. “Governmental use of social media platforms to communicate to and with the public, and allow the public to communicate with each other, is pervasive. It is seen all across the country, at every level of government. It is now the rule of democratic engagement, not the exception,” said EFF Civil Liberties Director David Greene. “The First Amendment prohibits the exclusion of individuals from these forums based on their viewpoint. President Trump’s blocking of people on Twitter because he doesn’t like their views infringes on their right to receive public messages from government and participate in the democratic process.”

For the brief: https://www.eff.org/document/knight-first-amendment-institute-v-trump

For information about the lawsuit: https://knightcolumbia.org/content/knight-institute-v-trump-lawsuit-challenging-president-trumps-blocking-critics-twitter

Contact:
David Greene
Civil Liberties Director
davidg@eff.org

Sen. Feinstein Supports "Backdoor" Warrants, So Why Don’t Reps. Nunes and Schiff? (Sun, 05 Nov 2017)
As the deadline for renewing and reforming key portions of the NSA’s spying apparatus looms less than two months away, two of the most important members of the House Intelligence Committee have stayed remarkably quiet in the conversation. Congress just introduced multiple bills to extend Section 702 of the Foreign Intelligence Surveillance Act, a law that authorizes controversial NSA surveillance programs and is set to expire at the end of this year. Some of the bills include various ways to fix what is called the "backdoor search" loophole. Currently, the NSA "incidentally" collects the communications of countless Americans and stores those communications in vast databases. The FBI routinely searches through these databases for information about U.S. citizens and lawful permanent residents. The FBI does not obtain any probable cause warrants for these searches, skirting Fourth Amendment protections and earning these searches the title of "backdoor searches." Two California representatives are key to this debate: Rep. Devin Nunes (R-Calif.), the Chair of the House Permanent Select Committee on Intelligence, and Rep. Adam Schiff (D-Calif.), the Ranking Member. The House Intelligence Committee has a responsibility to oversee the intelligence agencies that use Section 702 to justify surveillance. Rep. Bob Goodlatte (R-Va.) and Rep. John Conyers (D-Mich.) introduced the USA Liberty Act last month. They are respectively the Chair and Ranking Member of the House Judiciary Committee, which oversees surveillance that may impact Americans. The bill includes a few surveillance reforms, including a requirement that FBI agents must obtain a warrant to access Section 702-collected content in criminal investigations. Other California elected officials have taken a stand for limiting the NSA’s spying powers. Sens. 
Dianne Feinstein (D) and Kamala Harris (D) made waves by introducing an amendment in the Senate Select Committee on Intelligence last week to shut the backdoor search loophole.  Sen. Feinstein’s position was clear: "The Fourth Amendment of our Constitution provides basic privacy rights for all Americans. I believe the Supreme Court has been clear that in order to access the content of an American’s communications, the government is required to get a probable cause warrant. The same standard should apply to Section 702." While we have criticized Sen. Feinstein's support for Section 702, we appreciate her opposition to warrantless backdoor searches now.
Many members of Congress are rightly revisiting prior support for unchecked surveillance powers. Sens. Feinstein and Harris may be responding to the concerns of their California constituents, as Californians have a long history of supporting privacy and civil liberties. In 2015, Governor Jerry Brown signed into law the California Electronic Communications Privacy Act, which bars state and local government agencies from compelling companies to hand over digital communications, or otherwise acquiring such data, without first obtaining a warrant. More than a decade earlier, California passed a law that required websites and online services to publicly post their privacy policies online. And in 1972, California amended its constitution to enshrine privacy as a right: "All people are by nature free and independent and have inalienable rights. Among these are enjoying and defending life and liberty, acquiring, possessing, and protecting property, and pursuing and obtaining safety, happiness, and privacy." As the Los Angeles Daily News editorial board wrote this week, “the mere invocation of ‘national security’ does not, and should not, suspend the constitutional rights of Americans.” The editorial board continued: “If the information of Americans must be collected, the NSA and other federal agencies should get a warrant to do so, as the Fourth Amendment demands.” Many members of Congress who supported NSA surveillance programs in the past may be updating their positions because they are concerned about a vast and intrusive NSA intelligence apparatus helmed by President Trump. As Michelle Richardson of the Center for Democracy and Technology has written, we know significantly more about NSA surveillance programs today than we did when Section 702 was last taken up by Congress: lawmakers should be very concerned about the programs’ scope, lack of privacy protections, and near-constant compliance problems.
Legislators who supported the program before should feel free to change their minds. Reps. Nunes and Schiff have a rare and powerful opportunity to align the national discourse on digital surveillance tools with the interests of the Californians they represent. They can help end warrantless "backdoor" searches.

Internet Association Endorses Internet Censorship Bill (Sat, 04 Nov 2017)
A trade group representing giants of Internet business, from Facebook to Microsoft, has just endorsed a “compromise” version of the Stop Enabling Sex Traffickers Act (SESTA), a bill that would be disastrous for free speech and online communities. Just a few hours after Senator Thune’s amended version of SESTA surfaced online, the Internet Association rushed to praise the bill’s sponsors for their “careful work and bipartisan collaboration.” The compromise bill has all of the same fundamental flaws as the original. Like the original, it does nothing to fight sex traffickers, but it would silence legitimate speech online. It shouldn’t really come as a surprise that the Internet Association has fallen in line to endorse SESTA. The Internet Association doesn’t represent the Internet—it represents the few companies that profit the most off of Internet activity. Amazon and eBay would be able to absorb the increased legal risk under SESTA. They would likely be able to afford the high-powered lawyers to survive the wave of lawsuits against them. Small startups, including would-be competitors, would not. It shouldn’t escape our notice that the Internet giants are now endorsing a bill that will make it much more difficult for newcomers ever to compete with them. IA also doesn’t represent Internet users. It doesn’t represent the marginalized voices who’ll be silenced as platforms begin to over-rely on automated filters (filters that will doubtless be offered as a licensed service by large Internet companies). It doesn’t represent the LGBTQ teenager in South Dakota who depends every day on the safety of his online community. It doesn’t represent the sex worker who will be forced off of the Internet and onto a dangerous street.
The Internet Association can tell itself and its members whatever it wants—that it held its ground for as long as it could despite overwhelming political opposition, that the law will motivate its members to make amazing strides in filtering technologies—but there is one thing that it simply cannot say: that it has done something to fight sex trafficking. Again and again and again, experts in sex trafficking have spoken out to say that SESTA is the wrong solution, that it will put trafficking victims in more danger, and that it will remove the very tools that law enforcement uses to rescue victims. It’s shameful that a small group of lobbyists with an agenda of censorship have presented themselves to lawmakers as the unanimous experts in sex trafficking. It’s embarrassing that it’s worked so well. A serious problem calls for serious solutions, and SESTA is not a serious solution. At the heart of the sex trafficking problem lies a complex set of economic, social, and legal issues. A broken immigration system and a torn safety net. A law enforcement regime that puts trafficking victims at risk for reporting their traffickers. Officers who aren’t adequately trained to use the online tools at their disposal, or who use them against victims. And yes, if there are cases where online platforms themselves directly contribute to unlawful activity, it’s a problem that the Department of Justice won’t use the powers Congress has already given it. These are the factors that deserve intense deliberation and debate by lawmakers, not a ham-fisted attempt to punish online communities. The Internet Association let the Internet down today. Congress should not make the same mistake.

Stop SESTA: Tell Congress That the Internet Association Does Not Speak for the Internet

Senator Thune's Bill Is Just As Bad As SESTA (Sat, 04 Nov 2017)
In advance of a markup of the Stop Enabling Sex Traffickers Act (S. 1693) (“SESTA”), scheduled for November 8 in the Senate Committee on Commerce, Science, and Transportation, Senator John Thune (R-SD) has floated a manager’s amendment [.pdf] that is intended to replace the current text of SESTA. Unfortunately, Sen. Thune’s bill is not an improvement over SESTA.

Amendments to Section 230

Sen. Thune’s bill, like the current SESTA language, would expand Internet intermediary liability for user-generated content by weakening Section 230 immunity (47 U.S.C. § 230). Specifically, both bills would expose online platforms to state criminal prosecutions and to federal and state civil actions. As we’ve explained, these changes are not necessary because Section 230 is not broken. Section 230 strikes a reasonable policy balance that allows the most egregious online platforms to bear responsibility for illegal third-party content, while generally preserving platform immunity so that free speech and innovation can thrive online.

Amendments to Federal Criminal Law

Sen. Thune’s bill, like the current SESTA language, would amend the federal criminal sex trafficking statute (18 U.S.C. § 1591) to sweep up online platforms that “assist, support, or facilitate” sex trafficking (given that Section 230 doesn’t apply to federal criminal law). As we’ve explained, the words “assist, support, or facilitate” are extremely vague and broad. Courts have interpreted “facilitate” in the criminal context simply to mean “to make easier or less difficult.” A huge swath of innocuous intermediary products and services would fall within these newly prohibited activities, given that online platforms by their very nature make communicating and publishing “easier or less difficult.” Additionally, both Sen.
Thune’s bill and the current SESTA language oddly place this new liability within a new definition of “participation in a venture.” Importantly, this would do nothing to change the existing state-of-mind standard in the last paragraph of Section 1591(a), which provides that sex trafficking liability attaches when an individual or entity acts in reckless disregard of the fact that sex trafficking is happening. This means that online platforms could be criminally liable even when they do not actually know that sex trafficking is going on—much less intend to assist in it.

Retroactivity

Sen. Thune’s bill, like the current SESTA language, has a retroactivity provision, meaning that liability would arise even when the relevant conduct happened before the enactment of the Act. This provision has significant due process implications. EFF is deeply disappointed to see some large tech industry companies lining up to endorse this new version of SESTA. We are glad to see Engine and our other Stop SESTA allies continue to oppose it. Like the original bill, this version is deeply flawed and would do nothing to fight sex trafficking.
>> mehr lesen

The Term “Homegrown Violent Extremist” Needs Transparency (Fr, 03 Nov 2017)
The Department of Defense has broadened surveillance to encompass a new type of potential U.S.-based threat, but it has not publicly described the criteria it uses to evaluate the threat. According to documents revealed by Human Rights Watch through a Freedom of Information Act request, the Department of Defense can now conduct surveillance of U.S. persons who some in the government refer to as “homegrown violent extremists.” But the absence of clear and publicly articulated guidelines about this new surveillance category raises concerns about abuse—concerns exacerbated by the Defense Department’s opaque, multi-armed surveillance regime that already allows little room for oversight. The Air Force Office of Special Investigations, in a revealed slide deck presentation, calls this new type of monitored person a “homegrown violent extremist,” or HVE for short. Two examples of events caused by “homegrown violent extremists” are given: the shootings in San Bernardino, Calif. in 2015, and in Orlando, Fla. the year after. The inclusion of so-called HVEs stems from an expansion of the type of “counterintelligence” surveillance the Department of Defense conducts, as a 2016 Department of Defense manual explains. With the change, DoD is now permitted to conduct surveillance of Americans “reasonably believed to be acting for, or in furtherance of, the goals or objectives of an international terrorist or international terrorist organization, for purposes harmful to the national security of the United States.” Aside from the two examples given, the new interpretation—and the corresponding slide deck—offer little insight into how an individual is designated an HVE and what type of legal process the DoD uses to conduct that surveillance.
Human Rights Watch pressed the Department of Defense for clarity: “As an example of ‘homegrown violent extremists,’ the Defense Department official who commented to Human Rights Watch pointed to individuals who ‘may be self-radicalized via the internet, social media, etc., and then plan or execute terrorist acts in furtherance of the ideology or goals of a foreign terrorist group.’ However, the official did not respond to a question about the criteria the executive branch uses when designating a U.S. person a ‘homegrown violent extremist’ for the purposes of this policy.” Human Rights Watch’s attempt to understand the “homegrown violent extremist” classification was met with stonewalling. We need answers to these questions. We do not know how U.S. persons are identified as “homegrown violent extremists.” We do not know, once identified as HVEs, what type of surveillance DoD believes is permitted, or how the data it collects is used. We do not know if a journalist researching international terrorism could be monitored, or if Twitter users who unwittingly retweet propaganda could be monitored. We understand the safety concerns. Yes, international terrorist organizations can now create online propaganda that can influence a U.S. person into sympathy, support, and even violence. But we also need a clear understanding of how DoD makes a determination that someone is or has become an “HVE.” A policy where innocent users are swept up in DoD surveillance, simply for clicking the wrong link, or viewing or sharing the wrong content online, is unacceptable. Expanding this type of surveillance specifically to DoD is new, and we need to understand more. A new agency means different rules, different protocols, and potentially different legal standards for approval. This morass should be cleared. Surveillance regimes, as the government has built them, are obscured from public view.
The government’s unwillingness to discuss this expanded surveillance again leaves us in the dark, unable to fully understand and unequipped to question. We demand something simple—more transparency. To learn more about one of the government’s most powerful surveillance authorities, read about Section 702 here. To tell your representatives that you won’t accept government agents going through your emails without a warrant, go here.
>> mehr lesen

Verizon Asks the Federal Communications Commission to Prohibit States from Protecting User Privacy (Fr, 03 Nov 2017)
After lobbying Congress to repeal consumer privacy protections over ISPs, Verizon wants the Federal Communications Commission (FCC) to do it a favor and preempt states from restoring their privacy rights. While Congress repealed the previous FCC’s privacy rule, it left the underlying Section 222 intact. As a result, dozens of state bills were then introduced to restore broadband privacy, mirroring Section 222 of the Communications Act. Verizon’s two-pronged attack on privacy protections for Internet users would require the FCC to not only abandon federal privacy protections (which are part of ISPs’ Title II common carrier obligations), but to also prohibit states from protecting the privacy of their residents. The states, however, have a vital role to play in protecting Internet subscribers, particularly given the rollback of federal protections. It would be unwise for the FCC to attempt to block such protections at Verizon’s behest, and it would be on shaky legal footing if it tried to do so. Legally, Congress has the power to override state laws that interfere with federal regulation, subject to important limits set forth in the Constitution. This power is called “preemption” – Congress can “preempt” state law. Because preemption interferes with states’ ability to govern conduct within their borders, courts do not simply assume that all action by federal regulators can overturn state laws. Contrary to Verizon’s claims that the FCC has clear authority to preempt on privacy, it would be legally unwise and potentially unlawful for the FCC to preempt the states.

Nothing in the Communications Act Prohibits States From Passing Their Own Privacy Laws that Go Beyond Federal Protections

The Communications Act does not give the FCC the express power to bar states from protecting the privacy of Internet users.
The only provision in the Act that bars states from any kind of conduct with regard to privacy is Section 222, which provides that states cannot undermine federal privacy protections, but may go further than federal law requires in protecting privacy so long as state law complements the federal law. Even the House author of the broadband privacy repeal, Congresswoman Marsha Blackburn, saw that no express statutory text exists to preempt state privacy laws. That is why she included the following language in her Browser Act legislation that seeks to impose privacy rules on ISPs and a range of Internet companies:

No State or political subdivision of a State shall, with respect to a provider of a covered service subject to this Act, adopt, maintain, enforce, or impose or continue in effect any law, rule, regulation, duty, requirement, standard, or other provision having the force and effect of law relating to or with respect to the privacy of user information.

That legislation has not been passed into law, meaning that Congress has not preempted the ability of states to protect online privacy. Absent any clear preemption of state power, Verizon resorts to a series of unavailing arguments that the power is implicitly granted to the FCC by other provisions of law (Sections 706, 303, 153, and 230, and the Congressional Review Act repeal law). We address each in turn.

The Vague and Open-Ended Language of Section 706 Does Not Contain a Hidden Authority to Override State Laws

Section 706 states that the FCC should address barriers to broadband deployment and competition. The problem with relying on this vague and open-ended provision for substantial authority is that Congress did not explain what it meant, and courts have struggled to articulate a principled outer bound for this power. Proponents of this theory argue that the FCC can take any action it wants, override any state law, if it concludes such an action will promote broadband deployment.
If Congress ever grants an agency such power, one hopes it will at least be clear that it is doing so, and not use vague language of the kind in Section 706, which left unclear whether Congress was granting the FCC any authority at all, rather than simply urging it to use its existing powers for a particular goal. Ironically, one very important FCC official thought Section 706 conferred no power to the FCC to block state laws. His name is Ajit Pai, the current FCC Chairman. "I very much doubt that section 706 gives the Commission the authority to preempt any state laws, even those governing private actors." -FCC Commissioner Pai in his 2015 dissent to the FCC's effort to preempt state laws banning municipal broadband But let’s not just take Chairman Pai’s word for it. Even under a very aggressive reading of Section 706’s grant of authority, the FCC would have to prove that protecting user privacy is a barrier to competition and deployment, and nothing indicates that is remotely true. In fact, a number of ISPs have explicitly told the FCC they had no new barriers to deployment or investment as common carriers subject to privacy rules. In essence, Verizon would need the FCC to make some unsubstantiated assumptions about privacy protections despite the Department of Commerce, Federal Trade Commission, and the FCC itself having found that privacy protections appear to improve broadband adoption as more sensitive information is passed online.

Title I’s Lack of Statutory Text Cuts Against Preemption

Title I is the alternative “classification” of broadband Internet service, and dominant ISPs like Verizon prefer it to Title II because they have successfully gutted it via a series of court challenges. Title I was a poor basis for the FCC’s authority because of the near-complete absence of statutory text on privacy, non-discrimination, and competition. That in turn means the FCC cannot legally enforce network neutrality, privacy, and other policies that would help competitive entry.
However, while this silence on privacy aided Verizon when it sought to hamstring the FCC, it undermines its current argument for wide-reaching preemption powers. Because the statute does not govern privacy or expressly bar states from doing so, it cannot preempt state laws unless those laws interfere with federal regulation of interstate commerce. Protecting user privacy, however, is an intrastate activity (meaning it does not have to involve crossing borders), and states have historically passed numerous privacy laws that complement federal law. For example, both Nevada and Minnesota have ISP privacy laws on the books today that Verizon is asking the FCC to strike down. California has the California Electronic Communications Privacy Act (CalECPA) and the Student Online Personal Information Protection Act (SOPIPA), and California’s state constitution provides an affirmative right to privacy that has resulted in Comcast paying fines when it unlawfully disclosed customers’ personal information. These and numerous other state laws affecting ISPs would be swept up by Verizon’s request.

Statements of Policy are Not Authorizations Granted By Congress

The Communications Act includes policy statements favoring less regulation, rather than more, but a federal appeals court has told us that policy statements do not amount to a legal grant of authority. For example, when the FCC attempted to uphold Network Neutrality under Title I in the past, the D.C. Circuit held that the FCC lacked the legal power to do so, rejecting the theory that policy statements confer statutory powers.

Not a Single Court Case Exists To Sustain the Argument That the Congressional Review Act Preempts State Law

The purpose of the Congressional Review Act (CRA) was to prohibit federal agencies from interpreting federal laws in a specific manner while placing a block on “substantially similar” regulations by those agencies.
That has a strong impact on how federal law is applied, but only to the extent federal enforcers are allowed to apply it. It is with some irony that Verizon’s association, CTIA, has argued at the state level that the CRA has done nothing, yet argues at the federal level that it is a massive and powerful block against state privacy laws. The reach of the CRA in particular has not been litigated because, prior to 2017, the CRA had been used only once. However, the traditional legal standards governing preemption still apply, and nowhere does the CRA contain express statutory language preempting any state laws. EFF has strong doubts that the CRA, with its mechanism of restricting federal agencies, would grant those same federal agencies new powers to block states from acting in their own capacity.

FCC Authority Over the Airwaves Also Does Not Directly Preempt Privacy Law

The general authority of the FCC to regulate the deployment of wireless networks and licenses under Section 303 (also referred to as Title III authority) does grant the FCC the power to preempt states, but that preemption authority has its limits. For example, the FCC can block localities when they try to regulate interference or technical standards, but it cannot preempt states from regulating what is displayed on your bill from the wireless company. The FCC is also the sole entity that can decide whether a particular frequency is used for radio, television, or mobile broadband. It is not clear that the FCC can reach so far under its Title III authority to block states that want to regulate business practices that are unrelated to the underlying service being offered. The practice of monetizing the personal information of users with third parties is explicitly a business practice and wholly unnecessary to the provisioning of wireless broadband service.
You do not need to monetize someone’s web browsing history in order to provide them a wireless network function, particularly given that Americans already pay substantial subscription fees for that service. It is also worth noting that the cellular industry has long lived under privacy rules that were intended to also apply to mobile broadband until Congress intervened.

The FCC Should Reject Verizon’s Request to Overreach on its Legal Authority

For all of the complaints lobbed at the FCC for overreaching in its efforts to address the ISP market, it should not be lost on the Commission that Verizon is asking it to overreach on Verizon’s behalf. The FCC should reject the request outright and not cut ISPs’ state lobbyists a break by unlawfully stepping in on state power. Not only would such a move be ill-advised legally, but it would actively harm the privacy rights of all Americans and frustrate their right to seek a response from their locally elected state legislatures.
>> mehr lesen

US Federal Court Rejects Global Search Order (Fr, 03 Nov 2017)
After years of litigation in two countries, a federal court in the US has weighed in on a thorny question: Does Google US have to obey a Canadian court order requiring Google to take down information around the world, ignoring contrary rules in other jurisdictions? According to the Northern District of California, the answer is no. The case is Google v. Equustek, and it’s part of a growing trend in which courts around the world order companies to take actions far beyond the borders those courts usually respect. It started as a simple dispute in Canada between British Columbia-based Equustek Solutions and Morgan Jack and others, known as the Datalink defendants. Equustek accused them of selling counterfeit Equustek routers online. The defendants never appeared in court to challenge the claim, which meant that Equustek effectively won without the court ever considering whether the claim was valid. That was all normal enough, but Equustek also argued that California-based Google facilitated access to the defendants’ sites. Although Google was not named in the lawsuit and everyone agreed that Google had done nothing wrong, it voluntarily took down specific URLs that directed users to the defendants’ products and ads under the Canadian Google.ca domains. Equustek wanted more and so it persuaded a Canadian court to order Google to delete the allegedly infringing search results from all other Google domains, including Google.com and Google.co.uk. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld that decision. Here’s the thing: a court in one country has no business issuing a decision affecting the rights of citizens around the world. As EFF explained in numerous filings in the case, a global de-indexing order conflicts with rights recognized in the U.S., such as the right to access information and the protections of Section 230 of the Communications Decency Act.
The Canadian order set a dangerous precedent that would be followed by others, creating a race to the bottom as courts in countries with far weaker speech protections would feel empowered to effectively edit the Internet. Unfortunately, the Supreme Court of Canada dismissed those concerns, stating: If Google has evidence that complying with such an injunction would require it to violate the laws of another jurisdiction, including interfering with freedom of expression, it is always free to apply to the British Columbia courts to vary the interlocutory order accordingly. Google now appears to have that evidence. In an order granting Google's request for a preliminary injunction, Judge Edward Davila held that Section 230 protected Google's activities in indexing the website at issue, and that the Canadian order was therefore unenforceable in the United States. By forcing intermediaries to remove links to third-party material, the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet. Google can now seek a permanent injunction and take Judge Davila's order back to British Columbia and ask the court to modify the original order. The California ruling is a ray of hope on the horizon after years of litigation, but it is far from a satisfying outcome. While we're glad to see the court in California recognize the rights afforded by Section 230 of the Communications Decency Act, most companies will not have the resources to mount this kind of international fight. If the current trend continues, many overbroad and unlawful orders will go unchallenged. Courts presented with a request for such an order must step up and require plaintiffs to meet a high burden – including proving that the requested order doesn’t run contrary to the rights of everyone it will affect.     Related Cases:  Google v. Equustek
>> mehr lesen

EFF to ICANN: Don't Pick Up the Censor's Pen (Do, 02 Nov 2017)
EFF is at ICANN's 60th meeting in Abu Dhabi this week. Along with other members of ICANN's Non-Commercial Users Constituency, we are here to stand up for the rights of ordinary Internet users in the development and implementation of ICANN policies over Internet domain names. In two previous posts, one focused on ICANN's registrars (those who sell domain names to users), and the other focused on its registries (those who administer an entire top-level domain such as .org, .trade, or .eu), we have highlighted how these parties can become free speech weak links for censorship of online speech. In this, the third installment in that series of posts, we turn our attention to ICANN itself. For years now, ICANN has been under pressure from its own powerful Intellectual Property Constituency (IPC) and from law enforcement agencies to take stronger action to eliminate objectionable content from the Internet by facilitating the cancellation of domain names. In June 2015, ICANN addressed these demands in a blog post with the self-explanatory title, ICANN Is Not the Internet Content Police. But as you might have expected, that hasn't been the end of it. And ICANN largely has itself to blame, by including in its 2013 revision to its agreement with registrars a provision requiring registrars to "receive reports of abuse involving Registered Names" and to "take reasonable and prompt steps to investigate and respond appropriately." This leaves ICANN open to an argument that goes, "No of course we are not asking you to become the content police, we are simply asking you to enforce your own contracts with registrars, when they refuse to carry out their obligations to be content police." ICANN appears to have voluntarily taken on further responsibility for addressing "abuse involving" domain names through its appointment this year of a Consumer Safeguards Director with a background in law enforcement.
EFF attended and reported on the first webinar held by the new Director, in which he downplayed the significance of his role, stating that it does not carry any enforcement powers. Yet a draft report [PDF] of ICANN's Competition, Consumer Trust and Consumer Choice Review Team recommends that strict new enforcement and reporting obligations should be made compulsory for any new top-level domains that ICANN adopts in the future. ICANN's Non-Commercial Stakeholder Group (NCSG) has explained [PDF] why many of these recommendations would be unnecessary and harmful. A subteam of this same Competition, Consumer Trust and Consumer Choice Review Team has also recently released a draft proposal [PDF] for the creation of a new DNS Abuse Dispute Resolution Procedure (DADRP) that would allow enforcement action to be taken by ICANN against an entire registry if that registry's top-level domain has too many "abusive" domain names in it. One of the top-level domains that has been highlighted as having a large number of "abusive" domain names is .trade, which EFF uses as the domain for its Open Digital Trade Network. If this proposed DADRP goes ahead, registries could come under pressure to go on a purge of domains if they wish to avoid being sanctioned by ICANN. Many of the above ICANN initiatives turn upon the question of what activities constitute "abuse involving" a domain name. This week, the NCSG issued a statement, which EFF participated in developing, that adopts a clear position on this pivotal question: Domain abuse involves cases in which the domain itself is causing problems, such as domains that facilitate fraud and exploit confusion, support phishing via confusing or deceptive strings, or domains that support botnet command and control operations. We are concerned that the concept of “domain abuse” is being stretched to include various forms of allegedly “illegal” or “undesirable” content on webpages, listservs and email addresses associated with domain names. 
This includes the use of domain names for political speech, personal expression and competitive discussions. An overly-broad definition of “domain abuse” would require ICANN or contracted parties to become a decision-maker or judge on whether a webpage was a copyright-infringement or fair use, involved legal use of a trademark to criticize a company’s products or practices or trademark infringement, and whether hate speech, whose definition and legal status varies from country to country, was legitimate or not.    We believe that content that is allegedly illegal or objectionable is not “domain abuse” and is best addressed through other, well-established legal and regulatory methods, or through cooperative and self-regulatory action by Internet service providers. Neither ICANN nor its contracted parties should try to make DNS policy become the vehicle for global content regulation.  This statement draws a line in the sand. ICANN has an important role in helping to keep the DNS system itself secure against malicious actors. Going beyond this line would require ICANN to adjudicate claims about the legality or propriety of particular Internet content. ICANN is not equipped for this role, and lacks legitimacy as a content regulator. Claims about the legality of Internet content should be resolved by courts of law, not by a DNS administrator nor by its contracted parties. ICANN should retain its existing, limited role in the technical administration of a secure and stable domain name system—and should not pick up the censor's pen. This is the third of a series of three articles in which EFF asks ICANN, its registrars, and its registries, not to pick up the censor's pen.
>> mehr lesen

Do Not Track Implementation Guide Launched (Mi, 01 Nov 2017)
Today we are releasing the implementation guide for EFF’s Do Not Track (DNT) policy. For years users have been able to set a Do Not Track signal in their browser, but there has been little guidance for websites as to how to honor that request. EFF’s DNT policy sets out a meaningful response for servers to follow, and this guide provides details about how to apply it in practice. At its core, DNT protects user privacy by excluding the use of unique identifiers for cross-site tracking, and by limiting the retention period of log data to ten days. This short retention period gives sites the time they need for debugging and security purposes, and to generate aggregate statistical data. From this baseline, the policy then allows exceptions when the user's interactions with the site—e.g., to post comments, make a purchase, or click on an ad—necessitate collecting more information. The site is then free to retain any data necessary to complete the transaction. We believe this approach balances users’ privacy expectations with the ability of websites to deliver the functionality users want. Websites often integrate third-party content and rely on third-party services (like content delivery networks or analytics), and this creates the potential for user data to be leaked despite the best intentions of the site operator. The guide identifies potential pitfalls and catalogs providers of compliant services. It is common, for example, to embed media from platforms like YouTube, SoundCloud, and Twitter, all of which track users whenever their widgets are loaded. Fortunately, Embedly, which offers control over the appearance of embeds, also supports DNT via its API, displaying a poster instead and loading the widget only if the user clicks on it knowingly. Knowledge makes the difference between willing tracking and non-consensual tracking. Users should be able to choose whether they want to give up their privacy in exchange for using a site or a particular feature.
This means sites need to be transparent about their practices. A great example of this is our biggest adopter, Medium, which does not track DNT users who browse the site and gives clear information about tracking to users when they choose to log in. Medium’s previous log-in panel illustrated this, and the DNT language is currently being added to its new interface. The guide exists as a Git repository and will evolve. We want your contributions and invite you to use it as a space to share advice on web privacy engineering. If you have suggestions for other DNT-compliant service providers, please submit them. We are also looking for configurations for Windows servers to limit log collection (we are providing example code for Nginx, Apache, and Logrotate). In the future, EFF will add sections dedicated to advertising and commenting systems. When sites respect DNT, they show respect for users, reduce the risks of leaks, keep identifying data beyond the reach of law enforcement requests, and have their resources unblocked by tracker blockers such as Privacy Badger, Disconnect, and AdNauseam. From 2018, there will be an additional reason. Any site collecting data from users in the European Union will be subject to strict limitations on their collection and processing practices, regardless of where they are based. Violations are punishable with large fines: up to €20 million or 4% of global turnover! EFF’s DNT policy is not a comprehensive solution to the obligations created by the General Data Protection Regulation, but it is the right start. To dive in and learn more about DNT implementation, check out the guide here.
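The two core mechanics described above, detecting the user's Do Not Track signal (the DNT: 1 request header) and enforcing the ten-day log retention window, can be sketched in a few lines of Python. This is only an illustration of the concept; the function names and log-entry format here are hypothetical and not part of EFF's policy or guide:

```python
from datetime import datetime, timedelta

# EFF's DNT policy limits retention of identifying log data to ten days.
DNT_RETENTION = timedelta(days=10)

def wants_dnt(headers):
    """Return True if the client sent the Do Not Track signal (DNT: 1)."""
    return headers.get("DNT", "").strip() == "1"

def purge_expired(log_entries, now=None):
    """Drop log entries older than the ten-day retention window.

    Each entry is assumed to be a dict with a "time" key holding a
    datetime; this log format is made up for the example.
    """
    now = now or datetime.utcnow()
    return [e for e in log_entries if now - e["time"] < DNT_RETENTION]
```

In a real deployment the header check would sit in the request-handling path, and the retention purge would run on a schedule (for example via logrotate, one of the tools the guide provides example configurations for).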
>> mehr lesen

This Weekend: Celebrate the Life and Work of Aaron Swartz at the Internet Archive (Mi, 01 Nov 2017)
On November 4 and 5, the Internet Archive will host the Fifth Annual Aaron Swartz Day and Hackathon. Aaron would have turned 31 on November 8. The late activist, political organizer, programmer, and entrepreneur was a dear friend of EFF’s who made a lasting imprint on the Internet and the digital community. Aaron’s life was tragically cut short after overzealous prosecutors sought to make an example out of him under the Computer Fraud and Abuse Act for using MIT’s computer network to download millions of academic articles from the online archive JSTOR. At EFF, we carry on Aaron’s legacy every day through our work on open access and CFAA reform. And this weekend, we’ll join our friends at the Internet Archive in celebrating Aaron’s life and work. This weekend’s events include a two-day hackathon focused on SecureDrop, the whistleblower submission system Aaron created just before he passed away, and a Saturday evening memorial event. Speakers at the memorial event include Chelsea Manning, Gabriella Coleman, Barrett Brown, EFF’s Cindy Cohn, Aaron Swartz Day co-founder Lisa Rein, and many more. Aaron died on January 11, 2013, at the age of 26, after being charged with 11 counts under the notoriously vague and draconian CFAA for systematically downloading academic journal articles from JSTOR. Facing decades in federal prison, Aaron took his own life. Aaron’s case stands as an example of how prosecutors abuse the CFAA’s vague language and harsh penalties to craft trumped up criminal charges for any behavior involving a computer they don’t like. Systematically downloading academic journal articles does not rise to the level of culpability that Congress had in mind when it enacted the CFAA—i.e., malicious computer break-ins for the purpose of causing damage or stealing information.
But the law makes it illegal to intentionally access any computer connected to the Internet “without authorization” or in excess of authorization without actually telling us what “without authorization”—the statute’s most critical term—means. This overly vague language likely seemed innocuous to some back in 1986, but it has opened the statute up to rampant abuse by those seeking to stretch its reach. EFF has been pushing for CFAA reform for years, and we increased those efforts after Aaron’s death. Since 2013, we’ve pushed for the passage of Aaron’s Law, which would reduce the CFAA’s disproportionately harsh penalties, shield security researchers and innovators from prosecution for doing their work, and clarify that violating a website’s terms of service is not a crime. Unfortunately, so far our efforts in Congress have been blocked, with tech giants like Google, Facebook, and Oracle shamefully unwilling to support reform even as the law needlessly claims lives and results in massively overbroad sentences. We’ve also been fighting the CFAA in court. Over the past few years, we’ve convinced multiple federal courts of appeal that violations of private computer use restrictions cannot give rise to CFAA liability. This year, we urged the Supreme Court to take up US v. Nosal, a long-running CFAA case that would have provided the high court with the opportunity to clarify once and for all that the CFAA was meant to target malicious computer break-ins—not to enforce computer use preferences. The court unfortunately turned down that opportunity, rejecting Nosal’s petition for Supreme Court review. We’re disappointed in this decision, but we’ll continue to advocate for a narrow interpretation of the CFAA’s vague language in lower courts across the country.
We’ll soon be filing an amicus brief in the Ninth Circuit Court of Appeals in a case challenging LinkedIn’s use of the CFAA as a tool to limit access to publicly available data—an abusive use of the CFAA that we know would have disappointed Aaron. While he was alive, Aaron railed hard against the idea of government-funded scientific research being unavailable to the public, and his passion continues to motivate the open access community. While EFF continues to push for reforms to the CFAA, it's crucial to keep in mind that if open access were the standard for scientific research, then sharing it wouldn't be a crime at all—and Aaron never would have been charged in the first place. As part of our work fighting for open access to data, EFF strongly supports the Fair Access to Science and Technology Act (FASTR), a bill that would require every federal agency that spends more than $100 million on grants for research to adopt an open access policy. The bill gives each agency flexibility to implement an open access policy suited to the work it funds—so long as research is available to the public after an “embargo period.” One of the points of debate around FASTR is how long that embargo period should be. Last year, the Senate Homeland Security and Governmental Affairs Committee approved FASTR unanimously, but only after extending that embargo period from six to twelve months—a change that put FASTR in line with the 2013 White House open access memo. That’s the version that was recently reintroduced in the Senate. The House bill sets the embargo period at six months. In the fast-moving world of scientific research, even six months is an eternity to wait for open access, let alone twelve. That said, FASTR would serve as an essential first step on which to build further reforms—and one we hope Aaron would be proud of. We hope to see some of you at this weekend’s Aaron Swartz Day celebration.
For more information about the hackathon or to buy tickets to the Saturday memorial event, visit the Internet Archive’s event page. And to support EFF’s efforts on open access and CFAA reform, visit https://supporters.eff.org/donate.  Related Cases:  United States v. David Nosal

Stupid Patent of the Month: Bad Patent Goes Down Using Procedures at Patent Office Threatened by Supreme Court Case (Tue, 31 Oct 2017)
At the height of the first dot-com bubble, many patent applications were filed that took common ideas and put them on the Internet. This month’s stupid patent, U.S. Patent No. 6,738,155 (“the ’155 patent”), is a good example of that trend. The patent is titled “System and method of providing publishing and printing services via a communications network.” Generally, it relates to a “printing and publishing system” that provides “workflow services...using a communication network.” The original application was filed in 1999, and the patent issued in 2004. The ’155 patent has a significant litigation history. Starting in 2013, its owner, CTP Innovations, LLC,1 filed over 50 lawsuits alleging infringement, and told a court it intended to file as many as 200 additional cases. CTP claimed [PDF] that infringement of its patent by the printing and graphic communications industry was “ubiquitous.” In response to CTP’s claims of infringement, several defendants challenged the patent at the Patent Office, using a procedure called “inter partes review” (or “IPR” for short). The IPR procedure allows third parties to argue to the Patent Office that a patent shouldn’t have been granted because what was claimed in the patent was either known or obvious (novelty and non-obviousness being two requirements for being awarded a patent) at the time it was allegedly invented. The challenger presents what’s called “prior art,” that is, material known to the public before the alleged invention. The challenger uses the prior art to show that the patent’s claims weren’t new, or were obvious, when the application was filed. A patent owner is then given the chance to show why they are entitled to a patent. Here is claim 10 of the ’155 patent, one of the claims challenged by the defendants:
10. A method of providing printing and publishing services to a remote client in real time using a communication network, the method comprising: storing files on a computer server, the files containing information relating to images, text, art, and data; providing said files to a remote client for the designing of a page layout; generating a portable document format (PDF) file from the designed page layout; generating a plate-ready file from said PDF file; and providing said plate-ready file to a remote printer.
Here’s how the Patent Office panel presiding over the IPR described [PDF] claim 10: Claim 10 is drawn to a method that requires: (1) storing files; (2) providing the files to a remote user for designing a page layout; (3) generating a PDF from the designed page layout; (4) generating a “plate-ready file” from the PDF; and (5) providing the plate-ready file to a remote printer. To show that this claim should be cancelled, the challenger relied on several pieces of prior art, arguing that claim 10 of the ’155 patent was obvious. During the IPR, the parties generally did not dispute that steps (1)-(4) were disclosed by the prior art. The only dispute noted by the Patent Office about one particular prior art combination, known as “Dorfman and Apogee,” was whether sending a file to a remote printer (step (5)) was new or non-obvious. The Patent Office originally found [PDF] that even though the prior art disclosed all the other parts of the alleged invention, it didn’t disclose sending files to a remote printer. That was enough to rule that claim 10 was new and non-obvious, and in favor of the patent owner. We don’t think that minor difference from the prior art should matter. The ’155 patent doesn’t claim to have invented how to send files to a remote printer (nor could it in 1999, as a quick search reveals).
Such a trivial change shouldn’t allow someone to claim a monopoly, especially when everyone was doing things “on the Internet” in 1999. For this reason, this patent is worthy of our award. Fortunately, the Patent Office changed its mind [PDF] on the patentability of claim 10 and sending files remotely, after the challenger pointed out that the prior art did disclose doing exactly that. In January 2017, the Patent Office ruled that claim 10, as well as claims 11-17, 19, and 20, should be cancelled, and CTP did not appeal that decision. Thanks to IPR, CTP can no longer use many of the claims of the ’155 patent to sue others. Indeed, it does not appear that CTP has brought suit against the 200 parties it threatened to sue. IPR is now facing an existential threat: the Supreme Court is deciding whether it is constitutional for the Patent Office to double-check its work after a patent has issued. We think it is. As this short story shows, the Patent Office sometimes misses things in the prior art and, unsurprisingly, often allows patents that it shouldn’t. The public should be able to point out those mistakes to the Patent Office and not have to pay patent owners for things that rightfully belong to the public.
1. It turned out that at the time CTP filed its lawsuits, it didn’t actually own the patent [PDF].

Who Speaks for The Billions of Victims of Mass Surveillance? Tech Companies Could (Mon, 30 Oct 2017)
Two clocks are ticking for US tech companies in the power centers of the modern world. In Washington, lawmakers are working to reform FISA Section 702 before it expires on December 31st, 2017. Section 702 is the main legal basis for US mass surveillance, including the programs and techniques that scoop up the data transferred by non-US individuals to US servers. Upstream surveillance collects communications as they travel over the Internet backbone, and downstream surveillance (better known as PRISM) collects communications from companies like Google, Facebook, and Yahoo. Both programs have used Section 702’s vague definitions to justify the wholesale seizure of Internet and telephony traffic: any foreign person located outside the United States could be subjected to surveillance if the government thinks that surveillance would acquire “foreign intelligence information”—which here means information about a foreign power or territory that “relates to [] the national defense or the security [or] the conduct of the foreign affairs of the United States.” Without fixes to Section 702’s treatment of foreign users, the customers of American Internet services will continue to have personal information and communications sucked up, without limit, into American intelligence agency databases. Meanwhile, in Luxembourg, at the heart of the EU, the European Court of Justice (CJEU) is due to take a renewed look at how US law protects the privacy rights of European customers, and decide whether it's sufficiently protective for American companies to be permitted to transfer European personal data to servers in the United States. The two ticking timers are inextricably linked. Last time the CJEU reviewed US privacy law, in Schrems v. Data Protection Commissioner, they saw no indication that the US mass surveillance program was necessary or proportionate, and noted that foreign victims of surveillance had no right of redress for its excesses. 
US law, they stated, was insufficient to protect Europeans, and they declared the EU-US Data Protection Safe Harbor agreement void, instantly shutting down a major method for transferring personal data legally between the US and Europe. Now another similar case is weaving through the courts for review by the CJEU. Without profound changes in US law, its judges will almost certainly make the same decision, stripping away yet more methods that US Internet companies might have to process European customers' data. This time, though, it won't be possible to fix the problem by papering it over (as the weak Privacy Shield agreement did last time). The only long-term fix will be to give non-Americans the rights that European courts and international human rights law expect. Sadly, no company has yet stepped forward to defend the rights of their non-American customers. Last week, Silicon Valley companies, including Apple, Facebook, Google, Microsoft and Twitter, wrote a lukewarm letter of support for the USA Liberty Act, characterizing this troublesome surveillance reauthorization package as an improvement to “privacy protections, accountability, and transparency.” The companies made no mention of the rights of non-Americans who rely on US companies to process their data. The USA Liberty Act reauthorizes NSA surveillance programs for six years and makes some adjustments to government access to American communications. But the bill fails to include any legal protections for innocent foreigners abroad. Instead, the bill offers a “sense of Congress”—a statement about Congressional intention with no legal weight or enforceability—that NSA surveillance “should respect the norms of international comity by avoiding, both in actuality and appearance, targeting of foreign individuals based on unfounded discrimination.” Previous discussions of 702 reform included demanding better justifications for seizing data.
The law could, at the very least, better define “foreign intelligence” so that not every person in the world could potentially be considered a legitimate target for surveillance. Based on these ideas, the companies could call for substantively better treatment of their foreign customers, but they have chosen to say nothing. Why? It may be that they feel such protections are unlikely to pass the current Congress. But such reforms definitely won’t pass Congress unless they are proposed or supported by major Washington players like the tech giants. Much of the existing statutory language of US surveillance reform, in the USA Freedom Act and now in the USA Liberty Act, was unimaginable until advocates spoke up for it. The other reason may be that it’s safer to keep quiet. If the tech companies point out that Section 702’s protections are weak, that will draw the attention of the European courts, and undermine the testimony of Facebook’s lawyers in the Irish courts that everything is just fine in American surveillance law. If so, the companies are engaged in dangerous wishful thinking, because that ship has already sailed. In the early stages of the current CJEU case, in the Irish High Court, Facebook and the US government both argued that current US law was sufficiently protective of foreigners' privacy rights. They lost that argument. And without US legal reform, they're almost certain to lose at the CJEU, the next port of call for the case.
The companies need to remember what that court said in the first Schrems decision: Legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life, as guaranteed by Article 7 of the Charter of Fundamental Rights of the European Union. Likewise, legislation not providing for any possibility for an individual to pursue legal remedies in order to have access to personal data relating to him, or to obtain the rectification or erasure of such data, does not respect the essence of the fundamental right to effective judicial protection, as enshrined in Article 47 of the Charter of Fundamental Rights of the European Union. In other words, it's not American business practices that need to change: it's American law. Section 702 reform, currently being debated in Congress, is the Internet companies' last chance to head off the chaos of a rift between the EU and the US. By pushing for improvements for non-US persons in the proposed bills renewing Section 702 (or fighting for Section 702 to be rejected outright), they could stave off the European courts' sanctions and reassure non-American customers that they really do care about their privacy. There's still time, but the clocks are ticking. If America's biggest businesses step up and tell Congress that the privacy of non-Americans matters, and that reform bills like the Liberty Act must contain improvements in transparency, redress, and minimization for everyone, not just Americans, they'll get an audience in Washington. They’ll also be heard in the rest of the world. Since the Snowden revelations, non-American customers of US internet communication providers have repeatedly asked them: “How can we trust you? You say you have nothing to do with PRISM, and you zealously protect your users’ data.
But how do we know when the US government comes knocking, you’ll have your foreign users’ backs?” Standing up in D.C. and speaking for the rights of their customers would send a powerful message that American companies believe that non-American Internet users have privacy rights too, no matter what American lawmakers currently believe. Staying quiet sends another signal entirely: that while they might prefer a world where the law protects their foreign customers, they’re unwilling to make a noise to make that world a reality. Their customers — and competitors — will draw their own conclusions.

EFF Files Brief in Support of Ability to Challenge Bad Patents at the Patent Office (Mon, 30 Oct 2017)
The Patent Office doesn’t always do the best job. That’s how Personal Audio managed to get a patent on podcasting, even though other people were podcasting years before Personal Audio first applied for a patent. As we’ve detailed on many occasions, patents are often granted on things that are known and obvious, giving rights to patent owners that actually belong to the public. As a result, it’s important for the public to have the ability to challenge bad patents. Unfortunately, challenging bad patents in court can be hard and very expensive. In court, challenges are often decided by a judge or jury with little technical knowledge. Courts also require a high level of proof (“clear and convincing”) that can be hard to come by, especially after the passage of time. In order to help alleviate that problem, in 2011 Congress passed the America Invents Act, which created new procedures at the Patent Office to challenge patents. Those challenges are heard by an expert panel and can lead to the patent’s cancellation if a challenger can show “by a preponderance of the evidence” that the patent should not have issued in the first place. This procedure, known as inter partes review or IPR for short, has been controversial. Some patent owners claim that IPRs make it too easy to invalidate patents. EFF and others have supported the IPR process, because it provides an efficient alternative to litigation for companies threatened by bad patents and because it provides an opportunity for groups like EFF to challenge bad patents that harm the public interest. A company called Oil States is challenging the procedure at the Supreme Court, arguing that it violates the Constitution because it allows a panel of experts at the Patent Office to decide a patent’s validity, rather than a judge and jury. 
Together with Public Knowledge, Engine Advocacy, and R Street Institute, EFF filed an amicus brief explaining why that’s incorrect, and why members of the public should remain free to challenge bad patents at the Patent Office. In our amicus brief, we detail the long history of patents being used as a public policy tool, and how Congress has long controlled how and when patents can be canceled. We explain how the Constitution sets limits on granting patents, and how IPR is a legitimate exercise of Congress’s power to enforce those limits. We also discuss why IPRs make policy sense, and why they were created in the first place. The Patent Office often does a cursory job reviewing patent applications. There is some justification for this, given that the Office receives over 600,000 patent applications per year. The vast majority of the resulting patents will never be valuable and will never be asserted against others. Given that it is hard to tell during the application phase which patents are going to become economically important, it makes some sense to focus energy on more closely reviewing patents only when they do become important. IPRs allow for that “second look” to make sure the Patent Office didn’t make a mistake in issuing a patent, and are generally only brought to challenge patents that have become economically valuable. But if Oil States’ argument is successful, a company can take advantage of the more-than-lax Patent Office examination to get a patent, and then prevent that “second look.” The public will be burdened with massive costs and uncertainty, forced to challenge those patents only in court, in front of judges and juries who, despite best efforts, are often overwhelmed by technology. Inter partes review is one of the few ways members of the general public can challenge bad patents. It’s the procedure EFF used to challenge the infamous podcasting patent that was used to threaten small podcasters.
The Patent Office found that the claims EFF challenged shouldn’t have been issued, and that decision was affirmed by the U.S. Court of Appeals for the Federal Circuit. (The case remains on appeal as Personal Audio has requested that the appeals court rehear the case en banc.) More recently, the Initiative for Medicines, Access & Knowledge (I-Mak) has used inter partes review to challenge patents held by Gilead on a drug used to combat Hepatitis C. I-Mak estimates [PDF] that patents on the drug increase the costs to consumers by approximately $10 billion. The Oil States case is one of the most important cases in patent law in the last decade, if not longer. Many interested parties have filed briefs (copies are available on SCOTUSblog). We hope the Supreme Court recognizes that IPRs are a reasonable—and constitutional—Congressional response to bad patents.      Related Cases:  EFF v. Personal Audio LLC

A Win for Music Listeners in Florida: No Performance Right in Pre-1972 Recordings (Sat, 28 Oct 2017)
Another court has ruled that the public can keep playing old music that almost everyone believed it could lawfully play. The Florida Supreme Court, following in the footsteps of New York State’s high court, ruled yesterday that its state law, which governs sound recordings made before 1972, doesn’t include a right to control public performances of sound recordings, including radio play. Both this decision and the reasoning behind it are good news for digital music companies and radio listeners. This case stems from a broader debate about copyright in sound recordings. Although federal copyrights in sound recordings cover reproduction and distribution, they don’t include a general right to control public performances, except for “digital audio transmissions” like Internet and satellite radio. That’s why AM and FM radio stations, and businesses like restaurants that play music, have never had to pay record labels or recording artists, nor ask their permission. (Songwriters and music publishers do get paid for public performances, typically through the collecting societies ASCAP, BMI, and SESAC.) But recordings made before February 15, 1972 aren’t covered by federal law at all. Instead, they fall under a patchwork of state laws and court decisions, most of them pre-Internet. The labels have tried for many decades to win a performance right, but so far neither Congress nor state legislatures have created one. The strange status of pre-1972 recordings created an opportunity for recording artists and labels to try getting from the courts what Congress has never given them: a right to control public performances. Flo & Eddie, a company owned by two members of the 1960s rock band the Turtles, sued Sirius XM and other services in at least three states, claiming they should not be allowed to play Turtles tracks and other pre-1972 recordings without permission and payment, even though that's what people had been doing for over 50 years.
EFF filed amicus briefs in many of these cases. In Florida, we teamed up with attorney Dineen Pashoukos Wasylik. We argued that copyright holders should only be given new rights when absolutely necessary, and creating those rights is a job for legislatures, not courts. We also pointed out that new rights under copyright (like the digital public performance right Congress created in 1996) are always coupled with appropriate limitations. Flo & Eddie’s request for an unlimited public performance right would create unpredictable legal risks for digital music services, broadcasters, and even restaurants. Each of these cases ultimately reached a state high court. The New York Court of Appeals (New York's highest court) ruled last year that New York common law didn’t include a public performance right. Yesterday, the Florida Supreme Court reached the same conclusion. The Florida court’s thoughtful decision looked closely at that state’s history of statutes and court decisions. It concluded that Florida common law has never given sound recording copyright holders a right to control performances. Echoing EFF’s concerns, the court noted that public performance rights under federal law are “limited,” while a state common law right for pre-1972 recordings would be “unfettered.” “Such a decision,” said the court, “would have an immediate impact on consumers beyond Florida’s borders and would affect numerous stakeholders who are not parties to this suit.” This decision will probably be more important for small businesses and new music services than for Sirius XM. Even before this case was decided, Sirius XM entered into a class action settlement with Flo & Eddie, setting up a process for tracking and paying for its plays of pre-1972 recordings. The Florida and New York high court decisions will mean that other music users can choose to enter compensation agreements for pre-1972 recordings, but won’t be forced to. 
That’s a plus for digital music innovation and a relief for small businesses like restaurants. One state—California—has yet to decide this question. The California Supreme Court might still choose to give pre-1972 recording copyright holders a big stick to wield against music services, radio stations, and other businesses. We’re going to ask them to follow New York and Florida, and not create new copyrights in old recordings. Related Cases:  Pre-1972 Sound Recordings State Law Copyright Litigation

Twitter’s Ban on Russia Today Ads is Dangerous to Free Expression (Sat, 28 Oct 2017)
Freedom of speech “presupposes that right conclusions are more likely to be gathered out of a multitude of tongues, than through any kind of authoritative selection. To many this is, and always will be, folly; but we have staked upon it our all.”- United States v. Associated Press, 52 F. Supp. 362, 372 (S.D.N.Y. 1943) (opinion of the court by Judge Learned Hand), aff'd, 326 U.S. 1 (1945). On October 26, Twitter decided to ban “advertising from all accounts owned by Russia Today (RT) and Sputnik,” two Russian state-owned media outlets. Twitter was reacting to an assessment by the United States intelligence community that RT and Sputnik interfered with the U.S. election on behalf of the Russian government, as well as Twitter’s (non-public) internal research. Many may be tempted to celebrate Twitter’s decision as a move to protect democracy from an authoritarian state. We fear it’s just the opposite. There seems to be little question that the Russian government uses Russia Today and Sputnik to stir up division and influence foreign politics, including the last U.S. presidential election. But it would be ironic if our response to that effort was to step back from defending freedom of expression. For example, the First Amendment and Article 19 of the Universal Declaration of Human Rights forbid a state actor from doing what Twitter has done. A ban on all advertising from a particular entity, knocking out everything from articles covering a cheese rolling festival to coverage of an election, would be an over-broad prior restraint on speech. Of course Twitter is not a state actor, and has the right to moderate its platform. But it should use this right wisely. For decades platforms have chosen to be as content-neutral as possible, and the paradigm of a content-neutral platform has become increasingly valuable as private social media entities have emerged as the most common means of online communication. 
(Remember when Twitter said it was “the free speech wing of the free speech party”?) Private censorship is contagious, and we don’t want it spreading into a global pandemic. But now social media companies are increasingly abandoning that approach, with dangerous consequences. For example, now that Twitter has punished these state-owned media outlets for their actions, it’s harder to resist calls by other countries to do the same. State-owned media outlets are extremely common throughout the world—consider the BBC, France Télévisions, or Al Jazeera—and they often write about elections. They may be fairly neutral or deeply biased, but they are still a type of news media. Other countries may not like the effects of U.S. government-sponsored media like Voice of America, Radio Marti or Radio Free Europe, and demand similar treatment. (Indeed, Russia’s Foreign Ministry has already said an unspecified “response will follow.”) The promotion ban is also a very blunt instrument. To be fair, Twitter didn’t go so far as to ban the accounts, limit your ability to retweet Russia Today, or—in a worst case scenario—prohibit users from linking to Russian state media in their own posts. Limiting the policy to paid promotion limits the damage. But as social media sites make reach more and more dependent on paid promotion, this distinction makes less and less of a difference, and still impacts the reader’s free expression right to receive information. What is worse, the ban is likely to lead to further pressure on anonymous speech. The ban’s effectiveness relies on the notion that Twitter will be better at identifying accounts controlled by Russia than Russia will be at opening disguised accounts to promote its content. To make it really effective, Twitter may have to adopt new policies to identify and attribute anonymous accounts, undermining both speech and user privacy.
Given the problems with attribution, Twitter will likely face calls to ban anyone from promoting a link to suspected Russian government content. EFF, along with many other civil society groups, drafted the Manila Principles to create a framework to help ensure intermediaries do not improperly inhibit free expression, either voluntarily or as a result of a legal order. Under those principles, public and governmental pressure should not force Twitter to restrict content; only a court order should be able to do that. Yet that’s exactly what happened here, with no apparent right of appeal. Indeed, Twitter has gone far beyond what a U.S. court would order in the first place. For example, U.S. electoral rules do not support a total ban on paid promotions even if the promoter violated the laws governing foreign nationals’ participation in U.S. elections. Under FEC rules, foreigners may fund ads if they are not “election influencing” in the sense that they do not mention candidates, political offices, political parties, incumbent federal officeholders or any past or future election. But U.S. law “does not restrain foreign nationals from speaking out about issues or spending money to advocate their views about issues.” We can understand why Twitter might seek to disassociate itself from profiting off of Russia’s campaigns to disrupt open societies, and appreciate the irony of Twitter’s plan to redirect the revenue it has accrued to projects designed to support external research on the problem. But there are better, more targeted ways to fight improper interference. For example, Twitter and other social media sites can still take action against fraudulent accounts and other, more shadowy aspects of the Russian information operations. For accounts like Sputnik and RT, which are openly funded by and work closely with the Russian government, social media companies can also take steps to clearly signal to the user the origin of a post and its connection to a government.
But by simply removing particular media outlets from the opportunities to promote themselves that other outlets enjoy, Twitter slides further down the slippery slope toward a world where the social media platforms on which we all rely abandon any pretense of neutrality.  Neutral platforms with strong policies against content censorship, especially those with worldwide reach, are vital for freedom of expression, and necessary for a free and open society.

It's Time for Congress to Pass an Open Access Law (Fri, 27 Oct 2017)
The public should be able to read and use the scientific research we paid for. That’s the simple premise of the Fair Access to Science and Technology Research Act, or FASTR (S. 1701, H.R. 3427). Despite broad support on both sides of the aisle, FASTR has been stuck in Congressional gridlock for four years. As we celebrate Open Access Week, please take a moment to urge your members of Congress to pass this common-sense law. Under FASTR, every federal agency that spends more than $100 million on grants for research would be required to adopt an open access policy. The bill gives each agency flexibility to choose a policy suited to the work it funds, as long as research is available to the public after an embargo period. (The Senate bill sets the embargo at a year, while the House bill sets it at six months. EFF supports an embargo period of six months or shorter.) Sen. Rand Paul recently incorporated the text of FASTR into his BASIC Research Act (S. 1973), a bill that would place several new requirements on government agencies that fund research, including adding a “taxpayer advocate” to every federal panel that approves research grants. Sen. Paul’s bill is clearly driven by a skepticism toward what he sees as “silly research.” We doubt that Sen. Paul’s bill will gain much momentum, but make no mistake: there’s nothing silly about the public being able to access government-funded scientific research. Someone’s income and institutional connections shouldn’t dictate whether they can read cutting-edge scientific research. From the most ardent supporters of federal support for science to its most vocal critics, everyone should be able to agree on a common-sense open access law. Please write your members of Congress and urge them to support FASTR. EFF is proud to participate in Open Access Week.

Certbot Development Livestream (Halloween Edition!) (Fri, 27 Oct 2017)
UPDATE: Tune in to the livestream here! Do you want to know what it’s like to be an open-source developer? Want to see how we work on Certbot behind the scenes? This Halloween, an EFF Certbot developer will be live-streaming their work on Twitch to show you what it’s like to work on Certbot, chat with other developers, and answer your questions. On October 31st at 10:30 am Pacific time, tune in to the livestream and join us! We will update this post with the livestream link, as well as tweet it from EFF’s Twitter account. There’s a growing trend of open-source developers live coding through streaming platforms like Twitch, and we’re excited to jump in. Certbot is one of EFF’s many tech tools. It offers all domain owners and website administrators a convenient, free way to move their websites from insecure HTTP to secure HTTPS. Gone are the days of expensive certificates that are hard to configure—Certbot deploys Let’s Encrypt certificates with easy-to-follow, interactive instructions based on your webserver and operating system. Let’s Encrypt is a certificate authority (CA) operated by the Internet Security Research Group. CAs play a crucial identification and verification role in the web encryption ecosystem—and Let’s Encrypt is one of the world’s largest, having issued over 100 million certificates to over 7 million unique domains. Join us for the livestream here to see an open-source developer in action, plus learn more about Certbot and how you and your colleagues can use it to make the web more secure. If you can’t make it, check out certbot.eff.org to learn more about what our Certbot developers are working on.
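For readers curious what that looks like in practice, here is a rough sketch of a typical first run, assuming a Debian-style server running Apache; the package names, the plugin flag, and the domain `example.com` are placeholders that vary with your setup, and certbot.eff.org generates the exact instructions for your webserver and operating system.

```shell
# Install Certbot and its Apache plugin (Debian/Ubuntu-era package names;
# other distributions use different packages).
sudo apt-get install certbot python-certbot-apache

# Request a Let's Encrypt certificate and let Certbot configure Apache
# interactively; replace example.com with your own domain.
sudo certbot --apache -d example.com

# Let's Encrypt certificates are short-lived, so verify that automated
# renewal works without actually renewing anything.
sudo certbot renew --dry-run
```

From there, Certbot walks you through the remaining steps interactively, including redirecting HTTP traffic to HTTPS if you choose.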

Oakland Privacy and the Fight for Community Control (Fr, 27 Okt 2017)
Many groups in the Electronic Frontier Alliance work to ensure that their neighbors have the tools they need to maintain control of their information. Others devote their efforts to community organizing or advocacy, assuring that authorities respect the civil and privacy rights of people in their community. For over four years, Oakland Privacy has been a notable example of the latter. Initially organizing as the Occupy Oakland Privacy Working Group, Oakland Privacy began meeting in July of 2013, with a mission to stop Oakland’s Domain Awareness Center (DAC). The DAC, first approved by the Oakland City Council as a port security monitoring system, was moving toward approval of a second phase by the summer of 2013. Phase II would have expanded the DAC into a city-wide surveillance apparatus combining feeds from cameras, microphones, and other electronic monitoring assets throughout the city. Local authorities and their partners would have had an unprecedented ability to surveil the people of Oakland. As one might expect, the proposal raised significant concerns for Oakland residents. Oakland Privacy members recognized that a successful campaign would require a broad coalition of local partners and national civil rights advocates. Working with organizations such as Lighthouse Mosque, ONYX/Anti-Police Terror Project, Justice for Alan Blueford, and the Dan Siegel for Mayor Campaign, Oakland Privacy stopped the DAC's expansion beyond the Port of Oakland. After that success, Oakland Privacy achieved another vital victory in their effort to protect the privacy of Oakland residents by helping develop and pass Ordinance No. 13349 C.M.S., which created a city privacy advisory commission. 
The Privacy Advisory Commission, on which some Oakland Privacy members now serve, is charged with providing “advice to the City of Oakland on best practices to protect Oaklanders' privacy rights in connection with the City's purchase and use of surveillance equipment and other technology that collects or stores our data.” Today Oakland Privacy continues their work to protect the privacy rights of the people of Oakland. Recently, with the support of the Privacy Advisory Commission, they successfully persuaded the city council to order the Oakland Police Department to cut ties with the Department of Homeland Security's U.S. Immigration and Customs Enforcement Agency (ICE). With Congress accepting the erosion of privacy protections, and federal agencies seeking increasing powers to intrude on the privacy of US citizens and residents, we are at a critical time for local organizing around these fundamental rights. We asked Brian Hofer of Oakland Privacy and the City of Oakland Privacy Commission about their work to support neighboring communities. Can you give us some background about the Oakland Privacy Working Group? The founding members of the Oakland Privacy Working Group were the remaining Occupy players who were still around in town. Even though the main Occupy Movement had wound down, there were still a lot of activists around focusing on militarization and surveillance issues such as Urban Shield. So when the citywide surveillance network showed up on the Public Safety agenda in July of 2013, a whole bunch of alarm bells went off. The founding members got together and formed what they thought was just a temporary group to raise awareness and try to kill the Domain Awareness Center. How did you jump from reactive organizing to sustained offensive organizing? How do you preserve the urgency and maintain the participation rate? 
As the DAC proceeded, we basically spent all our resources on building a coalition and trying to get as many people as we could to slow the process down and derail it. That ultimately succeeded. We also got an unexpected side benefit. When the DAC was scaled back and the funding eliminated, on March 4th, 2014, they created an ad-hoc citizens commission to craft a privacy policy for the skeletal remains of the project that were left in place to address port infrastructure equipment. I participated on that committee, along with the members of the ACLU, EFF, and a good cross-mix of Oakland citizens. We crafted a policy, and along with that, we sent up a number of recommendations to the council for approval. One of those was the creation of a standing privacy commission. Oakland Privacy lobbied heavily for that along with other allies. We were also able to get members appointed to the commission. Since then, we've been able to bring this model to other entities throughout the Bay Area. It’s very hard for law enforcement, or anyone else, to say that transparency is not good, or that there shouldn't be any oversight, or that there shouldn't be public hearings. So we have these arguments that people would have a hard time defeating, and have been able to bring this approach throughout the entire Bay Area. They’re all in different stages of the project, but this public vetting process is in place throughout the greater Bay Area. Can you tell us about your current police oversight campaign? What we’re working on right now is a surveillance equipment ordinance. We have a dedicated core group of volunteers who watch the agenda items of legislative bodies across the Bay Area. If surveillance equipment pops up we can address it. We can go lobby elected officials and say, ‘we need to have a discussion about this.' The main project itself is codifying the process. 
The Domain Awareness Center was shoved down the public’s throat before they even knew what it was, and before we discussed privacy. We’re now flipping that on its head. We’re having the public conversation at the beginning, about whether this is appropriate or not for the community, and if so, how should it be regulated? What sort of use policy is going to govern it? We’re maintaining oversight and then also forcing law enforcement to come back after the fact and report on how they used it. They have to demonstrate the efficacy, maybe make amendments to the policy if there have been any violations, and actually demonstrate that the equipment is achieving its purpose. I suspect that some of this equipment is snake oil, and it’s not going to be able to stand up to scrutiny once it has to start demonstrating the hard numbers that should go with it. That ordinance project is in play with at least seven government entities in the Bay Area. You just mentioned that you work on legislation in so many different regions of the Bay Area: Palo Alto, Oakland, Berkeley. How do you manage the workload? I think Oakland Privacy just has the benefit of having a really amazing group of people, but the strong skill set that we’ve been implementing is coalition building. Yes, maybe sometimes we are kind of the spearpoints, but we’re obviously way too small to have a dramatic impact by ourselves. So, if we’re in Santa Clara, we’re trying to get South Bay folks to work with us. If we’re in Berkeley, we have to get groups that are already present in Berkeley. One of the most valuable things we’ve been doing is bringing in constituents from each specific district when we go to talk to a council member or a board of supervisors. One of the things they’re not shy about is saying ‘why are you guys talking to us, you’re an outsider,’ if it’s a non-Oakland entity. They want to know what their own constituents think. 
Something we’ve really been focusing on lately is making sure we have people in those specific jurisdictions to come to those meetings with us, people who share the same concerns that we have and are paying attention to the same agenda items. We’re lucky that in California, under the Brown Act, most of these things have to be posted publicly. Most of them require the legislative body to enter into a contract, to accept funds, or to purchase equipment. So, that gives us a bite at the apple. We have an opportunity to show up and oppose (or at least comment on) items that concern us, and to try to get a seat at the table and have an impact. With social media, you can quickly raise awareness and get online and ask people to show up, and more often than not they do. I think once you show people how to do that, just as far as agenda watching and public records requests, which go along with it, other people can replicate the model. How would one successfully be able to monitor all the agendas for the various city meetings? Lately, we’ve been talking to a lot of allies about helping with this, simply for their own knowledge, because they’re missing items, and later they hear it from us or EFF. It actually doesn’t take too much time. It’s unfortunately manual labor, but we just go to the agendas, whether it’s a public safety committee or a finance committee, and just pull up the agenda, which is often posted a week or two ahead of time. Look for items of interest. We’ve got seven or eight folks in the Oakland Privacy Working Group that do that for the Greater Bay Area on a regular basis. Once we see something like a Stingray or License Plate Reader acquisition, we can alert folks in that area that there’s something they want to talk to their council member about. Honestly, I spend maybe a half an hour a week. It’s actually not that time consuming once you know where the most likely committees are. 
Working so closely with legislative bodies, have there been any lessons learned that you would like to share? I think that the vast majority of elected leaders are completely uneducated about what this equipment can do, and why there are civil liberties concerns about it. They don’t know whether it works, and they just blindly approve these things. They’re on the consent calendar most of the time. Almost everything I’ve worked with has started on the consent calendar, meaning there’s no real debate, there’s no discussion. It’s just approved in a mass vote. Once we began to educate the council on the Domain Awareness Center, that’s when they started postponing votes. With Alameda County, we started raising concerns about the Stingray and how it can intercept content, and they postponed votes. I think with such a low approval of Congress, that trickles down to the local level. Most folks just miss this really golden opportunity to inform people who do really care about what their constituents think. It starts with educating them. You need to understand the equipment, do the research, get the public records you can into your possession, and make a coherent argument. They’re often responsive, and even if they’re not, they can at least tone down or narrow the use of the equipment to where it’s less alarming than it was before. If someone is concerned about surveillance and offensive tools that are being procured by their local law enforcement, what are some things that person can do beyond letter writing? Another tool that we use is public records requests. Almost every state has some sort of Sunshine or public records law. Some local entities also have Sunshine Ordinances. That’s what really broke the Domain Awareness story wide open. We had thousands of their internal documents, and we analyzed them and drew a lot of media attention to it. 
At the time, the Domain Awareness Center was being sold to us as this big crime-fighting tool, yet none of their internal documents talked about crime fighting at all, though they did specifically talk about targeting Occupy Oakland. So, we were able to use that to a huge advantage in showing the civil liberties concerns about that project. Again, when I say that elected leaders aren’t informed, they’re often not reading those big lengthy agenda packets where some of this material was submitted to them. The Hailstorm is an example where there has been material submitted lately, because it’s been required under some of the new judicial oversight or legislation that’s been passed, and they’re still not following that down the trail as far as they should. So, we keep pushing and keep educating them, to show them where the alarming parts are, and then you can incorporate that in your letters, or in your public comments, or in your letter to the editor, and it just makes your argument that much better. When you’re a small group or an activist or an individual, don’t be afraid to reach out to the professionals. Before Oakland Privacy got going at all, naturally we had to send out SOS calls to EFF, the National Lawyers Guild, ACLU, whoever we could find to help us slow it down. I remember being asked that question at a panel earlier. ‘When do you call in the muscle? At the finish line or the beginning?’ I think for small groups or individuals, call in the muscle when you can at the beginning. Get them to help you slow it down, to challenge it, to raise awareness, so that you can get your foot in the door and start educating people. That can give you time to submit public records requests and more time to build a coalition. It’s been a really beneficial relationship to have local small community groups that can show up and speak on every item, but also to have the big muscle behind you if you need it. 
It’s been a really good 1-2 combo here in the Bay Area, and there are groups like that all across the U.S. that you can get to help you. Though initially organized to stop one explicit threat to the privacy and civil rights of their neighbors, Oakland Privacy continues its efforts within Oakland while also supporting neighboring communities. Their model of coalition building, monitoring local council agendas, and mobilizing community support when necessary provides a strong example of how strategic planning and cooperation can prove fruitful in the face of what often feels like overwhelming obstacles and challenges.

Proposal to Restrict Technical Assistance Demands Before Secret Surveillance Court Raises More Questions About Section 702 (Do, 26 Okt 2017)
As we detailed yesterday, a bill introduced this week by Sens. Ron Wyden and Rand Paul would represent the most comprehensive reform so far of Section 702, the law that authorizes the government to engage in mass warrantless surveillance of the Internet. EFF supports the bill, known as the USA Rights Act, because it closes the backdoor search loophole and addresses other glaring problems with Section 702. But the bill also makes changes to lesser-known provisions of Section 702. One of these amendments raises its own questions about how the government has been enlisting private companies to provide access to our communications, including whether it has required circumvention of encryption as in the recent fight between Apple and the FBI. It may well also call into question the response EFF received from the government in FOIA litigation seeking records to determine whether such a case exists. Section 14 of the USA Rights Act restricts when the government can demand that an email or other electronic communications service provider render “technical assistance” to facilitate the government’s mass spying. The bill would require the government to show that providing such assistance would not be burdensome and also obtain an order from the Foreign Intelligence Surveillance Court (FISC) first. Under current law, government officials can require that providers give “all information, facilities, or assistance necessary” to the acquisition of targeted communications by simply telling them to do so. Under this regime, providers must comply or challenge the request before the FISC if the technical assistance the government wants is unreasonably burdensome or worse. 
EFF is not aware of an instance in which the FISC has compelled a company to provide technical help, such as decrypting communications, to assist the government in its spying efforts under Section 702. But the FISC operates in almost total secrecy, so those of us without security clearances ordinarily wouldn’t hear about it. We’ve long been concerned about this possibility. In 2016, we filed a Freedom of Information Act (FOIA) lawsuit seeking any FISC orders or other documents that would show that the government was demanding that companies take steps similar to the FBI’s efforts to force Apple to decrypt one of its iPhones in the San Bernardino case. In our FOIA case, the government has consistently said that the FISC has not issued any orders or opinions requiring that companies provide technical assistance. After we filed suit, the government agreed to conduct a second, more thorough search for those types of FISC orders. In a letter to EFF, the government wrote that it determined “that there were no cases brought before the Foreign Intelligence Surveillance Court (FISC) that would have resulted in responsive orders or opinions of the FISC.” But the fact that the USA Rights Act specifically restricts the government’s ability to demand technical assistance from service providers is concerning precisely because it suggests the government may have done so in the past or will do so in the future. Others have similarly speculated that Senator Wyden, who often raises public concerns about the government’s spying activities via necessarily cryptic letters or public statements, is using the bill to raise alarms about this issue. To be clear, we do not have reason to believe that the government misled EFF when it represented that the FISC had never issued any technical assistance orders. 
Because the statute presently allows the government to demand technical assistance without getting a court order and puts the burden on providers to challenge those demands, it’s possible that the secret surveillance court has never had the opportunity to deal with the issue. Thus, there may be no orders that would have come up in a search for records in response to our FOIA. That said, the USA Rights Act’s provision does highlight the possibility that the government uses its surveillance authority to require companies to modify their services or otherwise assist with its mass surveillance efforts. It also underscores how little we know about how the government uses Section 702 as a practical matter, which is a problem in and of itself. Related Cases:  Significant FISC Opinions

Epson is Using its eBay "Trusted Status" to Make Competing Ink Sellers Vanish (Mi, 25 Okt 2017)
It's been just over a year since HP got caught using dirty tricks to force its customers to use its official, high-priced ink, and now it's Epson's turn to get in on the act. Epson claims that ink cartridges that are compatible with its printers violate a nonspecific patent or patents in nonspecific ways, and on the strength of those vague assertions, they have convinced eBay to remove many third-party ink sellers' products, without any scrutiny by eBay. That's because Epson is part of eBay's VeRO program, through which trusted vendors can have listings removed without anyone checking to see whether they have a valid claim, contrary to eBay's normal procedure. As the company has said in another context, "eBay believes that removing listings based on allegations of infringement would be unfair to buyers and the accused sellers." Because Epson only applies VeRO to patent claims in the EU, Americans are not affected by these claims, but Europeans are. Our friends at the Open Rights Group have done outstanding work on this, and they make several excellent points in their analysis, showing that Epson is acting to hurt the resale market, not to assert patents against the manufacturers that are their competitors. If that were their goal, they'd target manufacturers and shut down sales at the source. Open Rights Group have asked the UK Intellectual Property Office to investigate Epson's business practices. They're also seeking contact with people who have "been affected by takedown claims relating to Epson compatible ink cartridges and patent claims" and welcome your email.

New DOJ Policy on Gag Orders Is Good, But the Courts Could Have Done Better (Mi, 25 Okt 2017)
The Department of Justice is making significant changes to its policy for seeking gag orders under Section 2705 of the Stored Communications Act. These orders routinely accompany search warrants, subpoenas, and other requests to service providers and prevent the companies from notifying users that their information has been obtained by the government. Last year Microsoft filed a lawsuit arguing that Section 2705 violates the First Amendment, and it appears that the DOJ made the policy change rather than risking a broad ruling that the law is unconstitutional. (That lawsuit should not be confused with a different case involving DOJ access to Microsoft user data stored in Ireland that will soon be heard by the Supreme Court.) Under the new policy, federal prosecutors must demonstrate an “appropriate factual basis” in order to apply for a gag order. Gags must also be limited to a duration of one year or less “barring exceptional circumstances.” By comparison, Microsoft’s complaint explained that it received over three thousand 2705 gag orders between 2014 and early 2016, two-thirds of which had no fixed end date. So the policy is an improvement. Microsoft deserves serious praise for pursuing the lawsuit. The government had little incentive to fix the problem outside of litigation, and Microsoft’s strong First Amendment arguments forced its hand. But we’re not ready to declare the policy an “unequivocal win,” the way Microsoft did. Above all, the government will still be able to obtain 2705 gag orders without satisfying the extremely high bar the First Amendment places on “prior restraints.” Under Supreme Court precedent, gag orders like these must be necessary to prevent imminent danger to a core government interest, and the requirement that the prosecutors merely demonstrate an “appropriate factual basis” doesn’t cut it. 
And while it’s certainly encouraging that the DOJ promises not to seek indefinite gags, courts should require much narrower tailoring of time limits on these orders. In addition, we’re naturally skeptical of this change coming in the form of an administrative policy that can be revoked whenever the DOJ sees fit. Microsoft won an important preliminary victory in February when a federal court in Seattle ruled that its First Amendment challenge survived a motion to dismiss. And just this year, Adobe and Facebook also brought successful challenges to Section 2705 gags. It appeared the tide was turning definitively against Section 2705, so we’d much rather have seen a binding court ruling or new legislation setting out tighter rules for these gag orders. Finally, on its face, the policy does not apply to outstanding gag orders, particularly those without a fixed end date. As we know from the closely related context of National Security Letters, indefinite gags may improperly prevent providers from informing their customers for many years. As with NSLs, therefore, we’ll continue to look for ways to enforce the First Amendment against overbroad gag orders. Related Cases:  Microsoft v. Department of Justice

No Warrantless Searching of Our Emails, Chats, and Browser Data (Mi, 25 Okt 2017)
Congress is poised to vote on extending or reforming NSA surveillance powers in the coming weeks, and one issue has risen to the forefront of the fight: backdoor searches. These are searches in which FBI, CIA, and NSA agents search through the communications of Americans collected by the NSA without a warrant. This practice violates the Fourth Amendment. But the government argues that since the NSA originally collected the communications under statutory surveillance powers, the government doesn’t need a warrant to search through them later. This is a “backdoor” around the Constitutional rights that protect our digital communications. But we have a chance to shut and lock that backdoor, so that government agents don't access the communications of Americans without proving probable cause to a judge. The USA Liberty Act introduced this month is considered the most viable NSA reform package, and privacy champions on the Hill were able to insert some safeguards against warrantless search into the initial draft. FBI agents who know about a crime and are searching someone’s communications to obtain evidence and build up a case will have to go to a judge and get a warrant before accessing those communications. That’s a good step. But it isn’t the full reform we need. That’s because the USA Liberty Act won’t extend the warrant protections to NSA or CIA agents, who we know routinely search this vast database of communications. If the FBI is merely poking around the database trying to look for criminal activity but isn’t investigating a specific crime, they won’t be required to get a warrant. And “foreign intelligence gathering” —a notoriously broad and vague term in the government’s parlance— will also be exempt from this warrant requirement. Accessing American communications should require a warrant from a judge. The reform in the USA Liberty Act is an effort to move in that direction, but it leaves a policy that’s open to abuse. 
Under the current legislative draft, NSA agents can still read emails of Americans and pass “tips” to domestic law enforcement, all without judicial oversight. EFF is asking members, friends, and concerned citizens to raise their voices over this issue. Please call your members of Congress and tell them that we won’t tolerate exceptions to our Fourth Amendment rights. We have shown many times over the last few years that calls can make a huge difference. And this is the moment: the Judiciary Committee in the House is considering revisions to the bill right now. This is the time to put pressure on the House if we want to see the backdoor search loophole shut. Visit EndtheBackdoor.com to speak out. Want to learn more about the reforms proposed in the USA Liberty Act? Read our analysis. Related Cases:  Wikimedia v. NSA Jewel v. NSA First Unitarian Church of Los Angeles v. NSA

The USA Rights Act Protects Us From NSA Spying (Mi, 25 Okt 2017)
A new bill introduced today in the Senate provides necessary protections from NSA surveillance programs. The USA Rights Act, introduced by Senators Ron Wyden (D-Ore.), Rand Paul (R-Ky.), and eleven other Senators, would provide meaningful reforms to one of the government’s most powerful surveillance tools. It fixes the “backdoor search loophole,” which now allows warrantless searches of the NSA-collected contents of Americans’ communications. It extends broad oversight powers to an independent agency. It guarantees the end of a controversial type of data search (called “about” searches) that the NSA suspended earlier this year. It improves judicial oversight of the government’s surveillance regime. It provides better transparency and requires stricter reporting. Representatives Zoe Lofgren (D-Cal.) and Ted Poe (R-Tex.) also introduced companion legislation today in the House of Representatives. EFF supports the USA Rights Act, and we urge Congress to enact it. Plainly, the introduced legislation is a lighthouse—a beacon that sheds new light on the government’s opaque surveillance regime, hopefully guiding future legislation on similar issues. At the heart of the USA Rights Act is the reform of Section 702 of the FISA Amendments Act of 2008, a law set to expire at the end of this year. Section 702 allows the NSA to collect the communications of foreign individuals not living in the United States. These collections are done ostensibly in the name of foreign intelligence and national security. But Section 702 also sweeps up a vast number of communications of countless Americans. Those communications are then stored in a database that can be searched by the NSA and other intelligence agencies, including the FBI, without obtaining a warrant. 
Those are called “backdoor” searches, because they evade ordinary Fourth Amendment protection of the privacy of Americans’ communications. EFF is fighting in court to prove that this entire system of NSA surveillance is unconstitutional. The USA Rights Act closes the backdoor search loophole. Government agents searching Section 702-collected data for information on a U.S. person, or a person inside the U.S., would need to acquire a warrant first. The bill’s warrant exception for “emergency situations” would require subsequent judicial oversight. EFF welcomes this immediate and plain-language change to how government agents access Americans’ communications. The USA Rights Act also guarantees the end of “about” searches. Under this practice, the NSA collected—and the NSA and other government agencies searched—communications that were “about” a targeted individual, but not “to” or “from” them. This practice swept up the communications of many people who were not targets of NSA surveillance. Though the NSA earlier this year announced the suspension of “about” collection, the NSA might reverse course. The USA Rights Act ensures the NSA cannot reinstitute this practice. This is a reassuring move. The bill also bolsters the mechanisms for Section 702 oversight. Currently, Section 702 is subject to insufficient government oversight. For example, intelligence officials have gotten away with stonewalling questions from Congress, and evading queries from the court that approves warrants under Section 702—the Foreign Intelligence Surveillance Court (FISC). To address this problem, the bill would improve judicial oversight of Section 702. First, it would make it easier for individuals to bring constitutional lawsuits challenging the program by addressing a legal doctrine called “standing.” Second, it would ensure that criminal defendants are notified when the government uses Section 702-derived data as evidence against them. 
The bill also would expand the opportunities for the FISC’s official amicus curiae to participate in FISC proceedings. In 2015, Congress established this amicus as a way to ensure that the FISC did not make Section 702 decisions based solely on the views of the government. Also, the bill extends new powers and authority to the Privacy and Civil Liberties Oversight Board, an independent agency established by Congress. Under the bill, the Board will be able to receive and investigate all whistleblower complaints made through approved government channels. The Board will gain the independent power to subpoena individuals, removing the current requirement that the Attorney General approve such requests. The Board’s non-chair members will become salaried employees. And the Board receives an expanded mandate to review all foreign intelligence activities. These are just some of the specific improvements written in the USA Rights Act. Overall, the bill provides better reporting, transparency, protections, and oversight. It also prohibits the collection of purely domestic communications, and creates new checks and balances in the appointment of judges to the FISC and the FISA Court of Review. Finally, the authors of the USA Rights Act understand that surveillance oversight must be an ongoing discussion. The USA Rights Act thus calls for a four-year sunset. The USA Rights Act provides meaningful reform to Section 702 and would advance the civil liberties guaranteed by the Constitution. We welcome and support this bill. Related Cases:  Jewel v. NSA

What if You Had to Worry About a Lawsuit Every Time You Linked to an Image Online? (Mi, 25 Okt 2017)
A photographer and a photo agency are teaming up to restart a legal war against online linking in the United States. When Internet users browse websites containing images, those images often are retrieved from third parties, rather than the author of the website. Sometimes, unbeknownst to the website author, the linked image infringes someone else’s copyright. For more than a decade, courts have held that the linker isn’t responsible for that infringement unless they do something else to encourage it, beyond linking. Liability rests with the entity that hosts the image in the first place—not someone who simply links to it, probably has no idea that it’s infringing, and isn’t ultimately in control of what content the server will provide when a browser contacts it. Justin Goldman, backed by Getty Images, wants to change that. They’ve accused online publications, including Breitbart, Time, and the Boston Globe, of copyright infringement for publishing articles that link to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage. Goldman and Getty say those stories infringe Goldman’s copyright. This claim is dangerous to online expression, and we've filed an amicus brief asking a federal district court to grant the defendants’ request to end the case as a matter of law. For more than a decade, courts have recognized that claims like Goldman’s and Getty’s are at odds with how the Internet works. When users visit a website, their computers send a request to that website’s address for a text file written in “Hyper-Text Markup Language” (HTML). That HTML text file includes, among other things, words to be displayed and web addresses of additional content such as images. HTML files are text only and don’t contain images—they refer to images according to their web address via in-line linking. 
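The in-line linking mechanics described above can be seen in a short, self-contained sketch (the page markup and the image host here are hypothetical, invented for illustration): parsing an article's HTML shows that the page carries only the image's web address, so the image bytes must come from whichever third-party server that address names, never from the article's own server.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical article page: the HTML contains only text plus a *reference*
# to an image hosted elsewhere (an in-line link), never the image bytes.
PAGE_HTML = (
    '<html><body><p>Story about the photo...</p>'
    '<img src="https://images.example-host.com/media/photo.jpg" alt="photo">'
    '</body></html>'
)

class ImgSrcCollector(HTMLParser):
    """Collect the host named in every <img> tag's src attribute."""
    def __init__(self):
        super().__init__()
        self.image_hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "")
            self.image_hosts.append(urlparse(src).netloc)

collector = ImgSrcCollector()
collector.feed(PAGE_HTML)

# The article's own server never transmits the image; a browser must make a
# separate request to this third-party host to fetch and display it.
print(collector.image_hosts)  # ['images.example-host.com']
```

This is the intuition behind the "server test" discussed below: the linking page only tells the browser where to go, while the image is "served up" by whoever controls the host in the `src` address.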
The server at the linked web address may transmit an image in response to such a request, but the original website does not. The leading case is Perfect 10 v. Google, in which adult entertainment publisher Perfect 10 sued Google over its Image Search service, arguing that Google should be held liable for any copyright infringement that occurred on sites to which Google linked. The Ninth Circuit Court of Appeals correctly disagreed, ruling that because Google’s computers didn’t store the photographic images, the search engine company didn’t possess a copy of the images and therefore did not transmit or communicate them for the purposes of the Copyright Act. This approach is known as the “server test” because it looks to who actually houses the work on its server and controls whether it will “serve up” the infringing content. The rule established that the principal responsibility for any infringement lies with the entity that actually communicates the work to the world, rather than the myriad entities that simply tell browsers where to go to request access to an image file. Linking is an essential tool for free expression and innovation. E-commerce sites can employ embedded links enabling consumers to comparison shop. Companies, schools, and libraries can use links to educate and empower users. Newspapers and bloggers embed the Twitter posts of President Donald Trump in their stories. An art teacher can embed images of famous works on her web page for students to learn about particular art styles. These are all normal, everyday activities that Goldman and Getty would argue are infringement, tying websites into a legal knot and degrading users’ ability to learn and innovate. We hope the court sees through this dangerous attempt to undermine the in-line linking system that benefits millions of Internet users every day. Related Cases:  Perfect 10 v. Google
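The mechanics behind the server test can be made concrete with a small sketch. Using Python’s standard-library HTML parser, the code below shows that a page’s HTML contains only an image’s *address*, not the image itself, so the bytes are served by whichever third-party host the address points to. The hostnames (`news-site.example`, `photos.example`) and the page content are hypothetical, purely for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical article page: the HTML is plain text that merely *refers*
# to an image by address; it contains no image bytes itself.
PAGE = """
<html><body>
  <p>Game coverage hosted on news-site.example ...</p>
  <img src="https://photos.example/tom-brady.jpg" alt="embedded tweet photo">
</body></html>
"""

class ImgCollector(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs is a list of (name, value) pairs
            self.srcs += [value for name, value in attrs if name == "src"]

collector = ImgCollector()
collector.feed(PAGE)

# Under the "server test", responsibility tracks whoever operates the host
# that actually serves the image bytes -- here photos.example, not the
# news site whose HTML merely points the browser there.
for src in collector.srcs:
    print(urlparse(src).netloc)  # prints "photos.example"
```

The browser, not the linking site, contacts `photos.example` to fetch the image, which is why the linking site never possesses or transmits a copy.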

EFF and ACLU Ask Appeals Court to Find Section 702 Surveillance Unconstitutional (Tue, 24 Oct 2017)
As Congress considers reforming Section 702, the NSA’s warrantless surveillance authority, EFF and ACLU are asking a federal court of appeals in New York to find this surveillance unconstitutional. Section 702 allows the government to collect billions of electronic communications—including those of Americans—and to use these communications in criminal investigations, all without a warrant. Our amicus brief in United States v. Hasbajrami argues that this practice represents an end run around the Fourth Amendment, which protects the privacy of e-mail and other electronic communications. Agron Hasbajrami is a U.S. resident who was arrested at JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Although the government used Section 702 to build its case against Hasbajrami, it withheld this fact from his lawyers. Only after the Snowden revelations (and after conviction) did the government inform a handful of defendants, including Hasbajrami, that they had been subject to warrantless surveillance. Hasbajrami is now in front of the Second Circuit Court of Appeals, which will be the second appeals court to review the legality of Section 702 surveillance after the Ninth Circuit’s misguided decision in United States v. Mohamud last year. These cases demonstrate that the government engages in a kind of do-si-do to avoid getting a warrant to spy on Americans. Section 702 requires only that the government “target” foreigners located outside the U.S., people who generally lack Fourth Amendment rights. But nothing in Section 702 stops the government from eavesdropping on the communications between these targets and Americans, so long as it doesn’t intentionally target specific Americans.
As a result, millions of Americans’ communications are “incidentally” swept up under Section 702 and then placed into databases that are accessible to law enforcement agencies like the FBI. The FBI then routinely searches those databases during criminal investigations, a practice known as a “backdoor search.” It’s worth underscoring how unusual this all is: under any other circumstances, if the government set out to investigate an American and to conduct surveillance of their email, it would need a warrant, regardless of who the American was talking to. Only Section 702 allows it to sidestep that process, and the volatile combination of incidental collection and backdoor searches makes available a vast array of private communications that are supposed to be protected by the Fourth Amendment. The government has invented a series of baseless justifications for zeroing out Americans’ privacy: It has claimed that eavesdropping on a conversation is equivalent to having an informant report what the conversation was about. Of course, if that were true, no warrant would ever be required for wiretapping. Refining that argument slightly, it next claimed that because its targets have no Fourth Amendment rights, it is entitled to incidentally “overhear” communications involving Americans who otherwise have Fourth Amendment protection. But again, the Supreme Court has made clear that wiretapping is extremely dangerous and must be closely supervised by a court to avoid sweeping in innocent bystanders. Under Section 702, however, the Foreign Intelligence Surveillance Court does not approve targets, nor does it ensure that incidental collection is kept to the minimum required by the Supreme Court. Finally, the government has argued that the Fourth Amendment’s warrant requirement simply doesn’t apply to Section 702 because communications are acquired for foreign intelligence purposes rather than ordinary law enforcement.
But the Supreme Court has looked skeptically on similar arguments in the past, and in any event, Section 702 surveillance is routinely used for criminal investigation and prosecution, as the Hasbajrami case and others show. As a result, our brief argues that the Second Circuit should find that “the procedures that governed the surveillance of Mr. Hasbajrami were constitutionally unreasonable, and thus violated the Fourth Amendment, because they permitted agents to freely use and search for the communications of Americans obtained without a warrant. Because the procedures failed to require individualized judicial approval of any kind—even after the fact, and even when the government sought to use or query the communications of a known U.S. person—the Court can and should find them defective.” Meanwhile, Section 702 is set to expire at the end of the year. There are already several bills that would reauthorize the law. The first such bill, the USA Liberty Act from several representatives on the House Judiciary Committee, attempts to address problems like backdoor searches but still falls far short. This is a crucial time for lawmakers to hear from their constituents, so please join us in calling for true privacy reform. Related Cases:  Jewel v. NSA

Public Money, Public Code: Show Your Support For Free Software in Europe (Tue, 24 Oct 2017)
The global movement for open access to publicly-funded research stems from the sensible proposition that if the government has used taxpayers' money to fund research, the publication of the results of that research should be freely-licensed. Exactly the same rationale underpins the argument that software code that the government has funded to be written should be made available as Free and Open Source Software (FOSS). Public Money, Public Code is a campaign of the Free Software Foundation Europe (FSFE) that seeks to transform that ideal into European law. An open letter at the center of the campaign reads in part: Public bodies are financed through taxes. They must make sure they spend funds in the most efficient way possible. If it is public money, it should be public code as well! That is why we, the undersigned, call our representatives to: “Implement legislation requiring that publicly financed software developed for public sector must be made publicly available under a Free and Open Source Software licence.” The campaign has already collected over 13,000 signatures to the open letter, which has already been delivered to candidates for the German Federal election. But that's not the end of it—the FSFE also plans to resubmit the letter during other European elections, including the 2018 election in Italy, culminating in a big handover for the 2019 election for the European Parliament. So there is plenty of time to add your voice to those who have already expressed their support. In the United States, under a Federal Source Code Policy that was introduced in 2016, agencies are required to release at least 20 percent of code developed by government employees and contractors as FOSS. But this isn't enough, particularly when you consider that 100% of code written by government employees is, by law, already in the public domain and should be available to the public for free. 
We therefore recommend that the next revision of the Federal Source Code Policy reflect that, by creating an “open-by-default” rule in place of the current 20 percent rule. Since 2013 we have also supported a proposed Open Access law called the Fair Access to Science and Technology Research Act (S.1701, H.R.3427), or FASTR, that would require every federal agency that spends more than $100 million on grants for research to adopt an open access policy. On both sides of the Atlantic, the advantages of releasing publicly-funded code under a FOSS license are the same. For example, it saves money by allowing code to be reused in multiple public or private projects, it makes government more accountable to the people by allowing them to review the software used by public agencies, and it stimulates collaboration and innovation. If you are European or have European friends or colleagues, we recommend that you review the FSFE's open letter and add your endorsement if you agree with it as strongly as we do. EFF is proud to participate in Open Access Week. 

FBI Director Wray is Wrong About Section 702 Surveillance (Tue, 24 Oct 2017)
Newly-minted FBI Director Christopher Wray offered up several justifications for the continued, warrantless government search of American communications. He’s wrong on all counts.

In a presentation hosted by The Heritage Foundation, Wray warned of a metaphorical policy “wall” that, more than 15 years ago, stood between the U.S. government’s multiple intelligence-gathering agencies. That wall prevented quick data sharing, he said. It prevented quick “dot-connecting” to match threats to actors, he said. And, he said, it partly prevented the U.S. from stopping the September 11 attacks. “When people, now, sit back and say, ‘Three thousand people died on 9/11, how could the U.S. government let this happen?’” Wray said. “And one of the answers is, well, they had this wall.”

Wray is concerned with the potential expiration of one of the government’s most powerful surveillance tools. It’s called Section 702 of the FISA Amendments Act and it allows the NSA to collect emails, browser history and chat logs of Americans. Section 702 also allows other agencies, like the FBI, to search through that data without a warrant. Those searches are called “backdoor searches.” Congress is considering bills that would limit backdoor searches—including one bill that we have analyzed—and Wray is against that. Section 702, Wray claimed, doesn’t need limitations; any new limits would be, as he called them, a “self-inflicted wound.” According to Wray, Section 702 is Constitutional, has broad government oversight, and keeps Americans safe. Let’s see where he’s wrong.

Constitutionality

“Section 702 is Constitutional, lawful, [and] consistent with the Fourth Amendment,” Director Wray said. “Every court to consider the 702 program, including the Ninth Circuit, has found that.” The chasm between Wray’s words and his interpretation is enormous. Have courts “considered” Section 702, as Wray described?
Yes. Have any decided Section 702’s constitutionality? Absolutely not. U.S. courts have delivered opinions in lawsuits involving data collected under Section 702, but no court has delivered an opinion specifically on the constitutionality of Section 702. It’s an issue that EFF is currently fighting, in our years-long lawsuit Jewel v. NSA. When Wray mentions the Ninth Circuit, he is likely referencing a 2016 decision by the U.S. Court of Appeals for the Ninth Circuit. In its opinion in United States v. Mohamud, the appeals court ruled that, based on the very specific evidence of the lawsuit, data collected under Section 702 did not violate a U.S. person’s Fourth Amendment rights. But the judge explicitly wrote that this lawsuit did not involve some of the more “complex statutory and constitutional issues” potentially raised by Section 702. Notably, the judge wrote that the Mohamud case did not involve “the retention and querying of incidentally collected communications.” That’s exactly what we mean when we talk about “backdoor searches.” Wray is mischaracterizing the court’s opinion. He is wrong.

Government Oversight

“[Section 702] is subject to rigorous oversight,” Wray said. “Oversight, by not just one, not just two, but all three branches of government.” Wray’s comments again are disingenuous. U.S. Senators have tried to get clear answers from intelligence agency directors about Section 702 collection. Many times, they have been stonewalled. Senator Ron Wyden (D-Oregon) asked then-Director of National Intelligence James Clapper: “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?” “No, sir,” Clapper said. “Not wittingly. There are cases where they could inadvertently perhaps collect, but not wittingly.” Months later, former NSA contractor Edward Snowden confirmed that the NSA does indeed collect data on Americans. Clapper clarified his statement: he gave the “least untruthful” answer he could.
If intelligence agencies, and their directors, cannot provide honest answers about Section 702, then meaningful Congressional oversight is a myth. As for judicial oversight, the court that oversees Section 702 surveillance—known as the Foreign Intelligence Surveillance Court—has rebuked the NSA in multiple opinions. A chart of Section 702 compliance violations, with accompanying court opinions, can be found here. While Section 702 is subject to government oversight, it doesn’t look like the NSA pays much attention. Finally, there can be no meaningful public oversight so long as we are kept in the dark. FISC opinions are not, by default, made public. Revelations to the press are denied. Even negotiations over upcoming bills happen behind closed doors.

American Safety

The safety and well-being of Americans is paramount, and tools that help provide that safety are clearly important. But in his remarks, Wray relied on familiar scare tactics to create political leverage. Unwilling to explain Section 702 success stories, Wray instead relied on the hypothetical. He asked: What if? He conjured hypothetical mass shootings and lone gunmen. He employed the idea of a stranger taking pictures of a bridge at night; another buying suspicious supplies at a hardware store. He imagined a high schooler reporting worrying behavior of an ex-boyfriend. He invoked the specters of would-be victims. In all these situations, Wray’s position was clear: Section 702 prevents this chaos. Do not challenge it, he begged. “Any restriction on our ability to access the information that’s already Constitutionally collected in our databases, I just think is a really tragic and needless restriction,” Wray said. “And I beg the country not to go there again. I think we will regret it and I just am hoping that it doesn’t take another attack for people to realize that.” The U.S.
government does not publicly provide data to assert its claim that Section 702 keeps Americans safe, claiming that such disclosures would compromise intelligence gathering. This is understandable. Wray’s suggestion of “another attack” is not. It suggests fear will help steer Americans towards the right decision. Fear drove McCarthyism. Fear drove Japanese American internment. Fear drove the Chinese Exclusion Act and it helped drive the Patriot Act. Do not let fear drive us from our rights. Section 702 needs review, and many parts of it—including the backdoor search—do not measure up to Wray’s justifications. If the government can prove that warrantless search of American communications keeps Americans safe, why does Wray rely on hypotheticals? If you care about ending the backdoor search loophole, call your representatives today. Related Cases:  Jewel v. NSA

How Silicon Valley’s Dirty Tricks Helped Stall Broadband Privacy in California (Tue, 24 Oct 2017)
Across the country, state lawmakers are fighting to restore the Internet privacy rights of their constituents that Congress and the President misguidedly repealed earlier this year. The facts and public opinion are on their side, but the recent battle to pass California’s broadband privacy bill, A.B. 375, suggests that they will face a massive misinformation campaign launched by the telecom lobby and, sadly, joined by major tech companies. Big Telco’s opposition was hardly surprising. It was, after all, their lobbying efforts in Washington D.C. that repealed the privacy obligations they had to their customers. But it’s disappointing that after mostly staying out of the debate, Google and Facebook joined in opposing the restoration of broadband privacy for Californians despite the bill doing nothing about their core business models (the bill was explicitly about restoring ISP privacy rules). Through their proxy, the Internet Association (which also represents companies like Airbnb, Amazon, Etsy, Expedia, LinkedIn, Netflix, Twitter, Yelp, and Zynga, among others), Google and Facebook locked arms with AT&T, Verizon, and Comcast to oppose this critical legislation. What is worse, they didn’t just oppose the bill, but lent their support to a host of misleading scare tactics. How do we know? Because we were on the ground in Sacramento in September to witness every last-minute dirty trick to stop A.B. 375 from moving forward. But there is one positive outcome: ISP and Silicon Valley lobbyists have played their hand. When these tactics are deployed at the last minute by an army of lobbyists, false information is extremely hard for citizens and consumer groups, who lack special access to legislators, to counter. But over time legislators (and their constituents) learn the truth – and we’ll make sure they will remember it when this legislation comes back around in 2018.
People have not forgotten they had privacy rights that were repealed this year. The repeal is, in fact, one of this Congress’s most unpopular moves, opposed by voters regardless of political party affiliation. Undoubtedly, the companies and their proxies will recycle what worked in California in other states as legislatures move closer to passing their own bills. To inoculate against misinformation, here is a breakdown of the three most pervasive myths we saw in the final hours. Let’s not let our lawmakers get fooled again.

Read the Bill: the Definitions Are Rooted in Longstanding Telecom Law

Lobbyists often calculate that some lawmakers are not going to closely read a bill and that these policymakers will instead rely on the word of “industry experts” without checking their claims. In California, the opposition lobby used this tactic and began claiming that “Broadband Internet Access Service” (the technical term for an ISP that sells broadband service) was inadequately defined and could burden all kinds of companies that are not ISPs. Technology giants like Google and Facebook, using the Internet Association as their proxy, echoed the false claim, providing the air of legitimacy that added to the intended confusion. In reality, there was nothing vague or unclear about this definition in A.B. 375. The language in the California bill was copied almost verbatim from the long-standing definition under Federal Communications Commission rules. You can see for yourself in this side-by-side comparison. And the bill’s author, Assemblymember Ed Chau, went one step further to explicitly state which entities would not be covered by the bill: “‘Broadband Internet access service provider’ does not include a premises operator, including a coffee shop, bookstore, airline, private end-user network, or other business that acquires BIAS from a BIAS provider to enable patrons to access the Internet from its respective establishment.” The language couldn’t be clearer.
But repeat a false claim enough times from enough paid lobbyists and legislators start to question themselves.

No, Broadband Privacy Protections Don’t Help Terrorists and Nazis

One of the most offensive aspects of the misinformation campaign was the claim that restoring our privacy rights, protections that have been on the books for communications providers for years, would help extremism. Here is the excerpt from an anonymous and fact-free document the industry put directly into the hands of state senators to stall the bill: “The bill would bar ISPs from sharing potentially identifiable information with law enforcement in many circumstances. For example, a threat to conduct a terror attack could not be shared (unless it was to protect the ISP, its users, or other ISPs from fraudulent, abusive, or unlawful use of the ISP's service). AND the bill instructs that all such exceptions are to be construed narrowly.” In addition to national security scaremongering, the industry put out a second document that attempted to play off fears emerging from the recent Charlottesville attack by white supremacists: “This would mean that ISPs who inadvertently learned of a rightwing extremist or other violent threat to the public at large could not share that information with law enforcement without customer approval. Even IP address of bad actor [sic] could not be shared.” There is absolutely nothing true about this statement. A.B. 375 specifically said that an ISP can disclose information without customer approval for any “fraudulent, abusive, or unlawful use of the service.” More importantly, it also included what is often referred to as a “catchall provision” by allowing ISPs to disclose information “as otherwise required or authorized by law.” The catchall provision is key, since there are already laws on the books allowing services to provide information to the police in emergency situations.
For example, the Stored Communications Act spells out the rules under which ISPs are, and are not, allowed to disclose content to law enforcement. The California Electronic Communications Privacy Act (CalECPA), passed in 2015, allows ISPs to disclose information to law enforcement as long as it doesn’t run afoul of state or federal law, and allows law enforcement to obtain this information without a warrant in specific emergency situations. Facebook and Google presumably know this, because they supported CalECPA when it was in the legislature. Comcast, AT&T, and Verizon know it too.

The Great, Fake Pop-up Scare

In materials like this advertisement, the opposition lobby claimed that A.B. 375 would result in a deluge of pop-ups that consumers would have to click through, and that in turn this inundation would create a sort of privacy fatigue. Consumers would stop caring, and cybersecurity would suffer. We’ve debunked most of this tale in a separate post, but let’s address the issue of pop-ups. The bill did require that ISPs get your permission (also known as opt-in consent) before monetizing your information that includes the following: (1) financial information; (2) health information; (3) information pertaining to children; (4) Social Security numbers; (5) precise geolocation information; (6) content of communications; and (7)(A) Internet Web site browsing history, application usage history, and the functional equivalents of either. But it did not mandate that people constantly receive pop-ups to obtain that consent. In fact, once you said no, they couldn’t keep asking you over and over again without violating this law and likely laws that regulate fraud and deceptive acts by businesses. However, if the ISP changed the terms of your agreement, they would have to ask your permission again. Think of it like renting an apartment. If your landlord was going to change your lease agreement, you’d want to know and you’d want to make sure you agreed to any amendments.
Being notified of these changes isn’t annoying, it is expected. The only thing that would be annoying is if your landlord kept pestering you to agree to changes you don’t want and did not take no for an answer. The same applies to ISPs: people are a lot more concerned about ISPs trying to sneak through new invasions of privacy than the alerts they get about those changes.

Internet Users Will Need to Mobilize to Regain Our Privacy Rights in 2018

It’s easy to see how lawmakers could be duped in the sleepless, high-speed, waning hours of the legislative session, especially when the information comes from sources that have historically been credible. In 2018, we plan to make sure that every legislator who was bamboozled by companies like Google, Facebook, Comcast, and AT&T is given the facts. We are confident that lawmakers in states around the nation will continue to push for consumer privacy, filling the gaps created by the Federal Communications Commission as it rolls back network neutrality and privacy protections, and by AT&T’s efforts in the courts to eliminate the Federal Trade Commission’s authority to oversee telephone companies. EFF will continue to support state efforts to respond, including dispelling the myths spread by privacy opponents. And we’ll need your help to make sure our legislatures respond to the demands of a vast majority of the public and side with Internet users—not the companies that seek to exploit them.
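The opt-in mechanics described above (ask once, honor a “no” without re-prompting, and re-ask only when the terms actually change) amount to very simple state-keeping, which is why the pop-up deluge claim doesn’t hold up. The sketch below is purely illustrative; the class and names are our own, not anything drawn from the bill’s text:

```python
# Illustrative sketch of opt-in consent tracking: one prompt per version
# of the terms, and a recorded "no" is never re-asked for that version.

class ConsentTracker:
    def __init__(self):
        self._answers = {}  # terms_version -> True (granted) / False (refused)

    def needs_prompt(self, terms_version):
        # Prompt only if this version of the terms has never been answered.
        return terms_version not in self._answers

    def record(self, terms_version, granted):
        self._answers[terms_version] = granted

    def may_use_data(self, terms_version):
        # No answer on file means no consent.
        return self._answers.get(terms_version, False)

tracker = ConsentTracker()
assert tracker.needs_prompt("2017-terms")       # first ask is allowed
tracker.record("2017-terms", granted=False)     # customer says no
assert not tracker.needs_prompt("2017-terms")   # no repeated pop-ups
assert not tracker.may_use_data("2017-terms")   # and no monetizing the data
assert tracker.needs_prompt("2018-terms")       # changed terms: ask again
```

One prompt per change in terms, like one signature per lease amendment: nothing about that requires a constant stream of pop-ups.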

Portugal Bans Use of DRM to Limit Access to Public Domain Works (Mon, 23 Oct 2017)
At EFF, we've become all too accustomed to bad news on copyright coming out of Europe, so it's refreshing to hear that Portugal has recently passed a law on copyright that helps to strike a fairer balance between users and copyright holders on DRM. The law doesn't abolish legal protection for DRM altogether—unfortunately, that wouldn't be possible for Portugal to do unilaterally, because it would be inconsistent with European Union law and with the WIPO Copyright Treaty to which the EU is a signatory. However, Law No. 36/2017 of June 2, 2017, which entered into force on June 3, 2017, does grant some important new exceptions to the law's anti-circumvention provisions, exceptions that make it easier for users to exercise their rights to access content without being treated as criminals. The amendments to Articles 217 and 221 of Portugal's Code of Copyright and Related Rights do three things. First, they provide that the anti-circumvention ban doesn't apply to circumvention of DRM in order to enjoy the normal exercise of copyright limitations and exceptions that are provided by Portuguese law. Although Portugal doesn't have a generalized fair use exception, the more specific copyright exceptions in Articles 75(2), 81, 152(4) and 189(1) of its law do include some key fair uses, including reproduction for private use, for news reporting, by libraries and archives, in teaching and education, in quotation, for persons with disabilities, and for digitizing orphan works. The circumvention of DRM in order to exercise these user rights is now legally protected. Second and perhaps even more significantly, the law prohibits the application of DRM to certain categories of works in the first place. These are works in the public domain (including new editions of works already in the public domain), and works published or financed by the government.
This provision alone will be a boon for libraries, archives, and for those with disabilities, ensuring that they never again have to worry about being unable to access or preserve works that ought to be free for everyone to use. The application of DRM to such works will now be an offence under the law, and if DRM has been applied to such works nevertheless, it will be permitted for a user to circumvent it. Third, the law also permits DRM to be circumvented where it was applied without the authorization of the copyright holder. From now on, if a licensee of a copyright work wishes to apply DRM to it when it is distributed in a new format or over a new streaming service, the onus will be on them to ask the copyright owner's permission first. If they don't do that, then it won't be an offence for its customers to bypass the DRM in order to obtain unimpeded access to the work, as its copyright owner may well have intended. If there's a shortcoming to the law, it's that it doesn't include any new exceptions to the ban on creating or distributing (or as lawmakers ludicrously call it, "trafficking in") anti-circumvention devices.  This means that although users are now authorized to bypass DRM in more cases than before, they're on their own when it comes to accomplishing this. The amendments ought to have established clear exceptions authorizing the development and distribution of circumvention tools that have lawful uses, rather than leaving users to gain access to such tools through legally murky channels. Overall though, these amendments go to show just how much flexibility countries have to craft laws on DRM that strike a fairer balance between users and copyright holders—even if, like Portugal, those countries have international obligations that require them to have anti-circumvention laws. 
We applaud Portugal for recognizing the harmful effects that DRM has on access to knowledge and information, and we hope that these amendments will provide a model for other countries wishing to make a similar stand for users' rights.

An Over-The-Top Approach to Internet Regulation in Developing Countries (Mon, 23 Oct 2017)
Increased smartphone usage and availability of wireless broadband have propelled the use of Internet-based platforms and services that often compete with similar services based on older technologies. For example, services like Facebook, Skype and WhatsApp that offer voice or video calls over the Internet compete with traditional SMS and voice calls over telecom networks. Such platforms have gained in popularity particularly in developing countries because calling over the Internet is far cheaper than making calls on telecom networks. Online video streaming and TV services like Netflix similarly compete with traditional broadcasters and network providers. These online applications and services are transforming traditional sectors and changing the economic landscape of the markets. The increasing popularity of such apps and services, often referred to by telecommunications regulators as "Over-the-top" or OTT services, brings new regulatory challenges for governments. Historically, most of these services have not required a licence or been required to pay any licensing fee. As the use of such services picks up in developing countries, governments are rushing to create rules that would subject OTT providers to local taxation, security, and content regulation obligations—often under pressure from telco incumbents who are seeking protection from change and competition.

Taxing Online Platforms

In August 2017, the Indonesian government via the Ministry of Communication and Informatics (MCI) unveiled a liability framework for OTT providers [doc]. The sweeping regulations cover a whole slew of companies including SMS, voice call, and email services, chatting and instant messaging platforms, financial and commercial transaction service providers, search engines, social network and online media delivery networks, and companies that store and mine online data.
The regulation, which is currently under review, makes it mandatory for offshore businesses to establish a "permanent establishment," either through fixed local premises or by employing locals in their operations in Indonesia. Transnational companies are also required to have an agreement with an Indonesian network provider, and to use local IP numbers and national payment gateways for their services. Considering current trade negotiations aimed at outlawing data localization, these operational obligations for OTTs cement the view that the Indonesian government is attempting to create a local territorial nexus for online transactions and activities, allowing them to be taxed and controlled. The draft MCI regulations also require online platforms to create a "censor mechanism" [sic] to filter and block "negative" content, including terrorism, pornography, and radical propaganda. While e-commerce and marketplace platforms have enjoyed immunity from content-related obligations in Indonesia, the new regulation effectively dismantles this safe harbor framework. Worryingly, the regulation outlines a system of sanctions under which the government can order telecommunication operators in Indonesia to use bandwidth management measures to take action against companies that violate the rules. Bandwidth management refers to the process by which telecommunication operators manage traffic on their networks, and can include traffic engineering measures such as limiting or throttling service traffic or providing priority access for certain services within certain periods. Such regulations would therefore likely violate net neutrality, and it is also unclear how this bandwidth management would be implemented. For example, the Ministry has not clarified what safeguards would prevent telecommunications providers from voluntarily conducting bandwidth management, without any formal notice, whenever they determine that a company has not complied with the law.
Soft-Pedaling Censorship

Similar efforts to regulate online platforms are underway in Thailand. The National Broadcasting and Telecommunications Commission (NBTC) has committed to creating a "level playing field" between OTT service providers and the traditional broadcasting and telecommunications industries. In April 2017, it suggested introducing bandwidth fees for online content providers, and has also proposed bringing OTT service providers under an operating license framework, taxing them for transactions by local merchants, and making them liable for illegal content. In July 2017, the Thai government issued an ultimatum to OTT services to register with the national telecom regulator or face sanctions, such as bans on advertising, that would threaten revenue growth. The Thai regulator is exploring a "complaints-based" framework of regulation and has drawn up a control list of the top 100 content-creating companies that are required to establish local offices and register as entities in Thailand. Allegedly, the efforts to regulate OTT providers are driven by the dramatic rise in the revenues they generate. A study conducted by the NBTC found that free OTT services earned combined advertising revenue of 2.16 billion Thai baht in 2016, 70% of which stemmed from YouTube. Accordingly, the general policy recipe outlined by the regulator is aimed at increasing the taxes collected from online platforms. Efforts to create a "level playing field" could also be interpreted as measures to empower the regulator to more easily monitor and censor content that the government is finding difficult to regulate. The Thai government has been unsuccessfully trying to pressure online intermediaries to remove allegedly illegal speech, including by proposing to shut down sites for non-compliance with takedown requests.
The proposals to regulate OTTs can be seen as a backhanded move to give the regulator the authority to demand the removal of content the military-run government considers illegal without waiting for a court order. Parallel to the efforts to regulate OTTs, the National Reform Steering Assembly has introduced an 84-page social media censorship proposal. If approved, the rules would require fingerprint and facial scanning just to top up a prepaid plan, in addition to existing mandatory SIM card registration and the linking of mobiles to national identities. Commentators say the proposed rules are similar to those in use in China and Iran. In India, regulators are considering proposals to place OTT providers under a telecom licensing-style regulatory framework. The telecom regulator has been organizing consultations on the issue since March 2015; however, its stance on the matter is not clear. Reports suggest that regulating OTT may be a non-issue for the regulator in view of the possibility that carriers may offer voice services through apps in the future. However, telecom and network providers that stand to benefit from OTT regulation are pushing for interconnection agreements. The Department of Telecom (DoT) is reported to be working on a regulatory framework for services like WhatsApp, Facebook, Skype, and WeChat that would subject them to obligations similar to those outlined for telecom service providers. The phenomenon of regulating OTTs is not limited to Asia. In Latin America, several countries including Uruguay, Costa Rica, Colombia, Argentina, and Brazil are considering legislative changes to enable the taxing of OTT players. In Argentina, the government has issued a set of principles for telecommunications regulation that create registration obligations for Internet intermediaries.
Ahead of the presidential elections in 2018, and with mounting opposition to his regime, Zimbabwean President Robert Mugabe has created a Cyber Security, Threat Detection, and Mitigation Ministry to rein in threats emanating from social media. The government is also pressing ahead with a Computer and Cyber Crimes Bill, comprehensive legislation that would allow the police to intercept data, seize electronic equipment, and arrest people on loosely defined charges of “insurgency” and “terrorism.” Under increasing pressure to rein in the use of online platforms, the regime has taken several measures to curtail the ability of activists and the opposition to organize themselves, including raising prices on cellphone data and cutting off access to the Internet. Earlier this month, the Cybersecurity Ministry issued an order requiring all WhatsApp groups to be registered and the administrator of each group to have government-level clearance. The rules also make membership in a group that lacks the necessary clearance or a licensed administrator a criminal offense. As the order clarifies, members of unqualified groups will be "jointly and severally liable" for belonging to a group not registered with the cyber security ministry. Update: this order has since been revealed as a hoax. The move to regulate WhatsApp is especially significant given that the messaging service is the default window to the Internet for most Zimbabweans. In 2010, fewer than 5 percent of Zimbabweans had access to the Internet; by early 2016, nearly 50 percent did, with most people connecting through their cell phones. A report by Zimbabwe’s telecoms regulatory body shows that the number of people using WhatsApp for voice calls has been on the rise. The government's tough stance on the messaging platform has digital rights activists worried that the regulation will have a chilling effect on freedom of expression.

Towards An International Framework for Regulating OTTs?
So-called OTT applications and services are the most visible part of the Internet for ordinary users. The rules and liability created for these applications and services affect freedom of expression, net neutrality, consumer rights, and innovation. Discussions and rules on OTT regulation are therefore, at their core, a debate about how the Internet should be regulated. Recognizing the global nature of online platforms, the International Telecommunications Union (ITU) has stepped in to explore a global multilateral framework for OTT services and applications. The telecom arm of the ITU, whose primary function is to develop and coordinate voluntary international standards known as ITU-T Recommendations, has established a study group on public policy issues related to the Internet. The technical study group's mandate includes weighing in on several Internet-related technical and economic issues, including "charging and accounting/settlement mechanisms" and "relevant aspects of IP peering". Last year, the study group adopted text encouraging governments to develop measures to strike an "effective balance" between OTT communications services and traditional communications services, in order to ensure a "level playing field", e.g., with respect to licensing, pricing and charging, universal service, quality of service, security and data protection, interconnection and interoperability, legal interception, taxation, and consumer protection. In May 2017, the ITU Council Working Group on International Internet-related Public Policy Issues (CWG-Internet) launched an open online and physical consultation on OTTs. The working group will evaluate the opportunities and implications associated with OTTs, including policy and regulatory matters. It is considering regulatory approaches for OTTs that ensure the security, safety, and privacy of the consumer, and will work towards developing model partnership agreements for cooperation at the local and international level.
The physical consultation took place in September and received input from a wide range of stakeholders. During the World Telecommunications Development Conference (WTDC), the main conference of the ITU’s Development sector (ITU-D), which took place in Argentina in October 2017, several governments sought to expand the ITU's Internet public policy mandate. As we approach the ITU’s 2018 Plenipotentiary Conference, or “Plenipot,” we can expect conversations on regulatory frameworks to escalate in the ITU. However, developing rules within the multilateral framework of the ITU may not be the most appropriate way forward. As Public Knowledge notes, the structure of the ITU renders it vulnerable to harmful types of politicization, as states and regional coalitions seek to leverage the forum to grab greater control over Internet policy and standards development. Unlike the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force (IETF), or the Internet Governance Forum (IGF), the ITU is not a multistakeholder community. The only relevant actors at the ITU are Member States, and although private industry and civil society may contribute to technical work, they can only participate as nonvoting sector members. With its structural lack of transparency and openness, there is plenty of opportunity for ITU public policy processes to be co-opted by member states to validate problematic policy or standards proposals. In an increasingly digital world where transnational global corporations shape content and speech, governments are at an inflection point in their policy choices for regulating online platforms. In seeking to create a "level playing field" between OTT providers and legacy media and network providers, governments may end up introducing rigid frameworks that stymie innovation and competition or cause irreversible consumer harms.
There may be valid public interest reasons to regulate OTTs, such as ensuring their compliance with privacy standards and net neutrality rules. But such regulations should be targeted. Imposing a strict and unyielding regulatory framework based on telecommunications regulation and licensing goes further than this, and risks becoming a vehicle to protect legacy telcos and to enact content censorship.

Australian Government Wants to Give Satire The Boot (Fri, 20 Oct 2017)
The National Symbols Officer of Australia recently wrote to Juice Media, producers of Rap News and Honest Government Adverts, suggesting that its “use” of Australia’s coat of arms violated various Australian laws. This threat came despite the fact that Juice Media’s videos are clearly satire and no reasonable viewer could mistake them for official publications. Indeed, the coat of arms that appeared in the Honest Government Adverts series does not even spell “Australian” correctly. It is unfortunate that the Australian government cannot distinguish between impersonation and satire. But it is especially worrying because the government has proposed legislation that would impose jail terms for impersonation of a government agency. Some laws against impersonating government officials can be appropriate (Australia, like the U.S., is seeing telephone scams from fraudsters claiming to be tax officials). But the proposed legislation in Australia lacks sufficient safeguards. Moreover, the recent letter to Juice Media shows that the government may lack the judgment needed to apply the law fairly. In a submission to Parliament, Australian Lawyers for Human Rights explains that the proposed legislation is too broad. For example, the provision that imposes a two-year sentence for impersonation of a government agency does not require any intent to deceive. Similarly, it does not require that any actual harm was caused by the impersonation. Thus, the law could sweep in conduct outside the kind of fraud that motivates the bill. The proposed legislation does include an exemption for “conduct engaged in solely for genuine satirical, academic or artistic purposes.” But, as critics have noted, this gives the government leeway to attack satire that it does not consider “genuine.” Similarly, the limitation that conduct be “solely” for the purpose of satire could chill speech.
Is a video produced for satirical purposes unprotected because it was also created for the purpose of supporting advertising revenue? Government lawyers failing to understand satire is hardly unique to Australia. In 2005, a lawyer representing President Bush wrote to The Onion claiming that the satirical site was violating the law with its use of the presidential seal. The Onion responded that it was “inconceivable” that anyone would understand its use of the seal to be anything but parody. The White House wisely elected not to pursue the matter further. If it had, it likely would have lost on First Amendment grounds. Australia, however, does not have a First Amendment (or even a written bill of rights), so civil libertarians there are rightly concerned that the proposed law against impersonation could be used to attack political commentary. We hope the Australian government either kills the bill or amends the law to include both a requirement of intent to deceive and a more robust exemption for satire. In its own style, Juice Media has responded to the proposed legislation with an “honest” government advert.

[Video: ‘Australien Government’ coat of arms, Juice Media, CC BY-NC-SA 3.0 AU]