Deeplinks

Silicon Valley Should Just Say No to Saudi (Fri, 22 Sep 2017)
American companies face a difficult tradeoff when dealing with government requests, but they should just say no to Saudi Arabia, which is using social media companies to do its dirty work in censoring Qatari media. Over the past few weeks, both Medium and Snap have caved to Saudi demands to geoblock journalistic content in the kingdom.

The history of Silicon Valley companies’ compliance with requests from foreign governments is a sad one, and one that has undoubtedly led to more censorship around the world. While groups like EFF have been successful at pushing companies toward more transparency and at pushing back against domestic censorship in the United States, it seems that companies are unwilling or unable to see why protecting freedom of expression on their platforms abroad is important.

After Yahoo’s compliance with a user data request from the Chinese government in the early 2000s resulted in the imprisonment of two Chinese citizens, the digital rights community began to pressure companies to use more scrutiny when dealing with orders from foreign governments. The early work of scholars such as Rebecca MacKinnon led to widespread awareness amongst civil society groups and the eventual creation of the Global Network Initiative, which created standards guiding companies’ compliance with foreign requests. A push from advocacy groups resulted in Google issuing its first transparency report in 2010, with other companies following the Silicon Valley giant’s lead. Today—thanks to tireless advocacy and projects like EFF’s Who Has Your Back report—dozens of companies issue their own reports.

Transparency is vital. It helps users to understand who the censors are, and to make informed decisions about what platforms they use. But, as it turns out, transparency does not necessarily lead to less censorship.

Corporate complicity

The Kingdom of Saudi Arabia is one of the world’s most prolific censors, attacking everything from advertisements and album covers to journalistic publications. The government—an absolute monarchy—has in recent years implemented far-reaching surveillance, arrested bloggers and dissidents for their online speech, and allegedly deployed an online “army” against Al Jazeera and its supporters. Even before recent events, the country was known as the Arab world’s leader in Internet censorship, aggressively blocking a wide array of content from its citizens.

American companies—including Facebook and Google—have at times in the past voluntarily complied with content restriction demands from Saudi Arabia, though we know little about their context. Now, in the midst of Saudi Arabia’s sustained attack on Al Jazeera (and its host country, Qatar), the government is ramping up its takedown requests. In particular, the government of Saudi Arabia is going after the press, and disappointingly, Silicon Valley companies seem all too eager to comply.

In late June, Medium complied with requests from the government to restrict access to content from two publications: Qatar-backed Al Araby Al Jadeed (“The New Arab”) and The New Khaliji News. In the interest of transparency, the company sent both requests to Lumen. Medium has faced government censorship before: in 2016, the Malaysian government blocked the popular blogging platform, while Egypt included the site in a long list of banned publications earlier this year. By complying with the orders of the Saudi government, Medium is less likely to face a full ban in the country.
This week, Snap disappointed free expression advocates by joining the list of companies willing to team up with Saudi Arabia against Qatar and its media outlets. The social media giant pulled the Al Jazeera Discover Publisher Channel from Saudi Arabia late last week. A company spokesperson told Reuters: “We make an effort to comply with local laws in the countries where we operate.”

Corporate responsibility

As we’ve argued in the past, companies should limit their compliance with foreign governments that are not democratic and where they do not have employees or other assets on the ground. By censoring at the behest of a government like Saudi Arabia’s, Medium and Snap have chosen to side with the Saudi regime in a dangerous political game—and by censoring the press, they have demonstrated a stunning lack of commitment to freedom of expression. While other companies like Facebook and Twitter may have set the precedent, it’s not one that other companies should be proud to follow.

We urge Medium and Snap to reconsider their decisions, and we urge other companies to strengthen their commitment to freedom of expression by refusing to bow to demands from authoritarian governments when they’re not legally bound to comply.

Appeals Court Rules Against Warrantless Cell-site Simulator Surveillance (Thu, 21 Sep 2017)
Law enforcement officers in Washington, D.C. violated the Fourth Amendment when they used a cell-site simulator to locate a suspect without a warrant, a D.C. appeals court ruled on Thursday. The court thus found that the resulting evidence should have been excluded from trial and overturned the defendant’s convictions. EFF joined the ACLU in filing an amicus brief, arguing that the use of a cell-site simulator without a warrant constituted an illegal search. We applaud the court’s decision in applying long-established Fourth Amendment principles to the digital age.

Cell-site simulators (also known as “IMSI catchers” and “Stingrays”) are devices that emulate cell towers in order to gain information from a caller’s phone, such as location information. Police have acted with unusual secrecy regarding this technology, including taking extraordinary steps to ensure that its use does not appear in court filings and is not released through public records requests. Concerns over this secrecy and the accompanying privacy invasions have led to multiple lawsuits and legal challenges, as well as legislation.

The new decision in Prince Jones v. U.S. is the latest to find that police are violating our rights when using this sophisticated spying technology without a warrant. Jones was accused of sexual assault and burglary. Much of the evidence collected against him was derived from cell-site simulators targeting his phone. The court determined that the use of a cell-site simulator to track and locate Jones was in fact a “search,” despite claims to the contrary from the prosecution. As the court wrote:

The cell-site simulator employed in this case gave the government a powerful person-locating capability that private actors do not have and that, as explained above, the government itself had previously lacked—a capability only superficially analogous to the visual tracking of a suspect. And the simulator's operation involved exploitation of a security flaw in a device that most people now feel obligated to carry with them at all times. Allowing the government to deploy such a powerful tool without judicial oversight would surely “shrink the realm of guaranteed privacy” far below that which “existed when the Fourth Amendment was adopted.” … It would also place an individual in the difficult position either of accepting the risk that at any moment his or her cellphone could be converted into a tracking device or of forgoing “necessary use of” the cellphone… We thus conclude that under ordinary circumstances, the use of a cell-site simulator to locate a person through his or her cellphone invades the person's actual, legitimate, and reasonable expectation of privacy in his or her location information and is a search.

The decision should serve as yet another warning to law enforcement that new technologies do not mean investigators can bypass the Constitution. If police want data from our devices, they should come back with a warrant.

Appeals Court Limits Ability of Patent Trolls to File Suit in Far-Flung Districts (Thu, 21 Sep 2017)
In a closely watched case, the Court of Appeals for the Federal Circuit has issued an order that should see many more patent cases leaving the Eastern District of Texas. The order in In re Cray, together with the Supreme Court’s recent decision in TC Heartland v. Kraft Foods, should make it much more difficult for patent owners to pick and choose among various courts in the country. In particular, it should drastically limit the ability of patent trolls to file in their preferred venue: the Eastern District of Texas.

“Venue” is a legal doctrine that relates to where cases can be heard. Prior to 1990, the Supreme Court had long held that in patent cases, the statute found at 28 U.S.C. § 1400 controlled where a patent case could be filed. This statute says that venue in patent cases is proper either (1) where the defendant “resides” or (2) where the defendant has “committed acts of infringement and has a regular and established place of business.” However, in 1990 in a case called VE Holding, the Federal Circuit held that a small technical amendment to another statute—28 U.S.C. § 1391—abrogated this long line of cases. VE Holding, together with another case called Beverly Hills Fan, essentially meant that companies that sold products nationwide could be haled into any court in the country on charges of patent infringement, regardless of how tenuous the connection to that forum.

In May 2017, the Supreme Court reaffirmed that the more specific statute, 28 U.S.C. § 1400, controls where a patent case can be filed. TC Heartland ruled that the term “resides” referred to a historical meaning, and was limited to the state of the defendant’s incorporation. However, TC Heartland did not discuss what was meant by the second prong of the venue statute, i.e., when defendants could be considered to have a “regular and established place of business.”

In light of TC Heartland, many patent owners shifted their arguments and pointed to a “regular and established place of business” in a district as the basis for bringing suit there. Because that term had not been applied for some time, courts have varied in determining what, exactly, constitutes a “regular and established place of business.” One decision, Raytheon Co. v. Cray, Inc., written by Judge Gilstrap (a judge who at one point had ~25% of all patent cases in the entire country before him) appeared to take a broad view of what it meant to have a “regular and established place of business.” Judge Gilstrap held that “a fixed physical location in the district is not a prerequisite to proper venue.” More troubling, Judge Gilstrap announced his own four-factor “test” that created greater possibilities that venue would be proper in the Eastern District.

The Federal Circuit has now rejected both that test and Judge Gilstrap’s finding that a physical location in the district is not necessary. The Federal Circuit specifically noted that the venue statute “cannot be read to refer merely to a virtual space or to electronic communications from one person to another.” Importantly, the Federal Circuit also held that it is not enough that an employee may live in the district. What is important is whether the alleged infringer has itself (as opposed to the employee) established a place of business in the district. The Federal Circuit did stress, however, that every case should be judged on its own facts. Based on the facts of Cray’s relationship to the district, the Federal Circuit ordered Judge Gilstrap to transfer the case out of the Eastern District.
This is a good ruling for many defendants who may find themselves sued in the Eastern District or any other district with which they have only a loose connection. When patent owners can drag defendants into court in far-flung corners of the country, it can cause significant harm, especially for those on the receiving end of a frivolous lawsuit. Patent owners can pick a forum that is less inclined to award fees, keep costs down, or stay cases. As a result, it is often cheaper to settle even a frivolous case than to fight. Between TC Heartland and now In re Cray, the ability of patent trolls to extort settlements based on the cost of litigation rather than the merits has been curtailed.

Related Cases: TC Heartland v. Kraft Foods

.cat Domain a Casualty in Catalonian Independence Crackdown (Thu, 21 Sep 2017)
On October 1, a referendum will be held on whether Catalonia, an autonomous region in the northeast of Spain, should declare itself to be an independent country. The Spanish government has ruled the referendum illegal, and is taking action on a number of fronts to shut it down and to censor communications promoting it. One of its latest moves in this campaign was a Tuesday police raid on the offices of puntCAT, the domain registry that operates the .cat top-level domain, resulting in the seizure of computers, the arrest of its head of IT for sedition, and the deletion of domains promoting the October 1 referendum, such as refoct1.cat (that website is now available at an alternate URL).

The .cat top-level domain was one of the earliest new top-level domains approved by ICANN, in 2004, and is operated by a non-governmental, non-profit organization for the promotion of Catalan language and culture. Despite the seizure of computers at the puntCAT offices, because the operations of the domain registry are handled by an external provider, .cat domains not connected with the October 1 referendum (including eff.cat, EFF's little-known Catalan language website) have not been affected.

We have deep concerns about the use of the domain name system to censor content in general, even when such seizures are authorized by a court, as happened here. And there are two particular factors that compound those concerns in this case. First, the content in question here is essentially political speech, which the European Court of Human Rights has ruled deserves a higher level of protection than some other forms of speech. Even though the speech concerns a referendum that has been ruled illegal, the speech does not in itself pose any imminent threat to life or limb. The second factor that especially concerns us here is that the seizure took place with only 10 days remaining until the scheduled referendum, making it unlikely that the legality of the domains' seizure could be judicially reviewed before the referendum is scheduled to take place. The fact that such mechanisms of legal review would not be timely accessible to the Catalan independence movement, and that the censorship of speech would therefore be de facto unreviewable, should have been another reason for the Spanish authorities to exercise restraint in this case.

Whether it's allegations of sedition or any other form of unlawful or controversial speech, domain name intermediaries should not be held responsible for the content of websites that use their domains. If such content is unlawful, a court order directed to the publisher or host of that content is the appropriate way for authorities to deal with that illegality, rather than the blanket removal of entire domains from the Internet. The seizure of .cat domains is a worrying signal that the Spanish government places its own interests in quelling the Catalonian independence movement above the human rights of its citizens to access a free and open Internet, and we join ordinary Catalonians in condemning it.

Apple does right by users and advertisers are displeased (Thu, 21 Sep 2017)
With the new Safari 11 update, Apple takes an important step to protect your privacy, specifically how your browsing habits are tracked and shared with parties other than the sites you visit. In response, Apple is being criticized by the advertising industry for "destroying the Internet's economic model." While the advertising industry is trying to shift the conversation to what it calls the economic model of the Internet, the conversation must instead focus on the indiscriminate tracking of users and the violation of their privacy.

When you browse the web, you might think that your information only lives in the service you choose to visit. However, many sites load elements that share your data with third parties. First-party cookies are set by the domain you are visiting, allowing sites to recognize you from your previous visits but not to track you across other sites. For example, if you first visit examplemedia.com and then socialmedia.com, each visit would be known only to the site concerned. In contrast, third-party cookies are set by domains other than the one you are visiting, and were created to circumvent the original design of cookies. In this case, when you visit examplemedia.com and it loads tracker.socialmedia.com as well, socialmedia.com would be able to track you on all the sites you visit where its tracker is loaded.

Websites commonly use third-party tracking to allow analytics services, data brokerages, and advertising companies to set unique cookies. This data is aggregated into individual profiles and fed into a real-time auction process where companies bid for the right to serve an ad to a user when they visit a page. This mechanism can be used for general behavioral advertising but also for "retargeting." In the latter case, the vendor of a product viewed on one site buys the chance to target the user later with ads for the same product on other sites around the web.

As a user, you should be able to expect that you will be treated with respect and that your personal browsing habits will be protected. When websites share your behavior without your knowledge, that trust is broken.

Safari has blocked third-party cookies by default since Safari 5.1, released in 2010, a policy that has been key to Apple's emerging identity as a defender of user privacy. Safari distinguished these seedy cookies from those placed on our machines by first parties - sites we visit intentionally. From 2011 onwards, advertising companies have been devising ways to circumvent these protections. One of the biggest retargeters, Criteo, even acquired a patent on a technique to subvert this protection.[1] Criteo, however, was not the first company to circumvent Safari's user protection. In 2012, Google paid 22.5 million dollars to settle an action by the FTC after it used another workaround to track Safari users with cookies from the DoubleClick Ad Network. Safari had an exception to the third-party ban for submission forms where the user entered data deliberately (e.g. to sign up). Google exploited this loophole to set a unique cookie when Safari users visited sites participating in Google's advertising network.

The new Safari update, with Intelligent Tracking Prevention, closes loopholes around third-party cookie-blocking by using machine learning to distinguish the sites a user has a relationship with from those they don't, and treating the cookies differently based on that. When you visit a site, any cookies that are set can be used in a third-party context for twenty-four hours.
During the first twenty-four hours the third-party cookies can be used to track the user, but afterward they can be used only to log in, not to track. This means that sites you visit regularly are not significantly affected. The companies this will hit hardest are ad companies unconnected with any major publisher. (A toy sketch of this twenty-four-hour rule appears at the end of this post.)

At EFF we understand the need for sites to build a successful business model, but this should not come at the expense of people's privacy. This is why we launched initiatives like the EFF DNT Policy and tools like Privacy Badger. These initiatives and tools target tracking, not advertising. Rather than attacking Apple for serving its users, the advertising industry should treat this as an opportunity to change direction and develop advertising models that respect (rather than exploit) users.

Apple has been a powerful force for user privacy on a mass scale in recent years, as reflected by its support for encryption, the intelligent processing of user data on device rather than in the cloud, and limitations on ad tracking on mobile and desktop. By some estimates, Apple handles 30% of all pages on mobile. Safari's innovations are not the silver bullet that will stop all tracking, but by stepping up to protect its users' privacy Apple has set a challenge for other browser developers. When the user's privacy interests conflict with the business models of the advertising technology complex, is it possible to be neutral? We hope that Mozilla, Microsoft and Google will follow Apple, Brave and Opera's lead.

[1] In order to present themselves as a first party, Criteo had their host website include code on the internal links of the site to redirect when clicked. So if you click on a link to jackets in a clothes store, your click brings you for an instant to Criteo before forwarding you on to your intended destination. This trick makes Criteo appear as a first party to your browser, and they pop up a notification stating that by clicking on the page you consent to them storing a cookie. Once Safari had accepted a first-party cookie from a site, that site was allowed to set cookies even when it was a third party. So now they can retarget you elsewhere. Other companies (AdRoll, for example) used the same trick.
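To make the twenty-four-hour rule concrete, here is a minimal sketch in Python. It is a toy model of the behavior described above, with class and method names of our own invention; Apple's actual Intelligent Tracking Prevention applies machine learning and further rules not modeled here.

```python
from datetime import datetime, timedelta

# Toy model of the twenty-four-hour rule described above. Names are
# our own; this is not Apple's implementation.
TRACKING_WINDOW = timedelta(hours=24)

class ToyCookiePolicy:
    def __init__(self):
        self.last_first_party_visit = {}  # domain -> time of last direct visit

    def record_first_party_visit(self, domain):
        self.last_first_party_visit[domain] = datetime.utcnow()

    def third_party_use_allowed(self, domain, purpose):
        """purpose is 'tracking' or 'login' in this simplified model."""
        last_visit = self.last_first_party_visit.get(domain)
        if last_visit is None:
            return False  # never visited directly: no third-party use at all
        if datetime.utcnow() - last_visit <= TRACKING_WINDOW:
            return True   # within 24 hours: full third-party use
        return purpose == "login"  # afterward: login only, no tracking

policy = ToyCookiePolicy()
policy.record_first_party_visit("socialmedia.com")
print(policy.third_party_use_allowed("socialmedia.com", "tracking"))    # True
print(policy.third_party_use_allowed("tracker-network.com", "tracking"))  # False
```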

Attack on CCleaner Highlights the Importance of Securing Downloads and Maintaining User Trust (Tue, 19 Sep 2017)
Some of the most worrying kinds of attacks are ones that exploit users’ trust in the systems and software they use every day. Yesterday, Cisco’s Talos security team uncovered just that kind of attack in the computer cleanup software CCleaner. Download servers at Avast, the company that owns CCleaner, had been compromised to distribute malware inside CCleaner 5.33 updates for at least a month. Avast estimates that over 2 million users downloaded the affected update. Even worse, CCleaner’s popularity with journalists and human rights activists means that particularly vulnerable users are almost certainly among that number. Avast has advised CCleaner Windows users to update their software immediately.

This is often called a “supply chain” attack, referring to all the steps software takes to get from its developers to its users. As more and more users get better at bread-and-butter personal security like enabling two-factor authentication and detecting phishing, malicious hackers are forced to stop targeting users and move “up” the supply chain to the companies and developers that make software. This means that developers need to get into the practice of “distrusting” their own infrastructure to ensure safer software releases with reproducible builds, allowing third parties to double-check whether released binary and source packages correspond. The goal should be to secure internal development and release infrastructure to the point that no hijacking, even by a malicious actor inside the company, can slip through unnoticed.

The harms of this hack extend far beyond the 2 million users who were directly affected. Supply chain attacks undermine users’ trust in official sources, and take advantage of the security safeguards that users and developers rely on. Software updates like the one Avast released for CCleaner are typically signed with the developer’s un-spoof-able cryptographic key. But the hackers appear to have penetrated Avast’s download servers before the software update was signed, essentially hijacking Avast’s update distribution process and punishing users for the security best practice of updating their software.

Despite observations that these kinds of attacks are on the rise, the reality is that they remain extremely rare when compared to other kinds of attacks users might encounter. This and other supply chain attacks should not deter users from updating their software. Like any security decision, this is a trade-off: for every attack that might take advantage of the supply chain, there are one hundred attacks that will take advantage of users not updating their software. For users, sticking with trusted, official software sources and updating your software whenever prompted remains the best way to protect yourself from software attacks. For developers and software companies, the attack on CCleaner is a reminder of the importance of securing every link of the download supply chain.
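To see why the point in the supply chain where signing happens matters, here is a minimal sketch of client-side update verification, assuming an Ed25519 release key and the PyCA cryptography library; the key bytes and file names are placeholders of our own. A check like this protects users from tampering that happens after signing, which is exactly why a compromise upstream of the signing step, as reportedly happened here, is so damaging: the malicious update arrives bearing a valid signature.

```python
# A minimal sketch of verifying a signed update before installing it.
# The public key below is a placeholder; a real updater would pin the
# vendor's actual published release key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

VENDOR_PUBLIC_KEY = bytes.fromhex(
    "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
)  # placeholder value for illustration only

def verify_update(update_bytes: bytes, signature: bytes) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    try:
        public_key.verify(signature, update_bytes)
        return True   # signature checks out against the pinned key
    except InvalidSignature:
        return False  # tampered with after signing: refuse to install

# Hypothetical usage:
# with open("update.bin", "rb") as f, open("update.sig", "rb") as s:
#     assert verify_update(f.read(), s.read())
```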

Live Blog: Senate Commerce Committee Discusses SESTA (Tue, 19 Sep 2017)
10:00 a.m.: In closing the hearing, Sen. Dan Sullivan speaks passionately about the need for the Department of Justice to invest more resources in prosecuting sex traffickers. Ms. Slater of the Internet Association echoes Sen. Sullivan, arguing that the Justice Department should have more resources to prosecute sex trafficking cases. We could not agree more. Creating more liability for web platforms is, at best, a distraction. Experts in trafficking argue that, at worst, SESTA would do more harm than good. Freedom Network USA, the largest network of anti-trafficking advocate organizations in the country, expresses grave concerns about lawmakers unwittingly compromising the very tools law enforcement needs to find traffickers (PDF): "Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances."

Thank you for following our live blog. Please take a moment to write to your members of Congress and ask them to defend the online communities that matter to you.

Take Action: Tell Congress: Stop SESTA.

____

9:37 a.m.: "We have tried to listen to the industry," Sen. Blumenthal claims. But listening to major Internet industry players is not enough. It's essential that lawmakers talk to the marginalized communities that would be silenced under SESTA. It's essential that lawmakers talk to community-based or nonprofit platforms that will be most hurt by the increased liability, platforms like Wikipedia and the Internet Archive. In a letter to the Committee, the Wikimedia Foundation says point blank that Wikipedia would not exist without Section 230.

In writing off small startups as "outliers," Blumenthal misunderstands something essential about the Internet: that any platform can compete. Liability protections in Section 230 have led to the explosion of successful Internet businesses. Blumenthal claims that SESTA will "raise the bar" in encouraging web platforms to adopt better measures for filtering content, but he's mistaken. The developments in content filtering that SESTA's proponents celebrate would not have taken place without the protections in Section 230. There is no such thing as a perfect filter. Under SESTA, platforms would have little choice but to rely far too heavily on filters, clamping down on legitimate speech in the process.

____

9:24 a.m.: Prof. Goldman argues that adding enforcement of state criminal law as an exception to Section 230 would effectively balkanize the Internet. One state would have the ability to affect the entire Internet, so long as it could convince a judge that a state law targets sex trafficking. Goldman has written extensively on the problems that would arise from excluding state law from 230 immunity.

____

9:09 a.m.: The committee's discussion about expanding federal criminal liability for "facilitating" sex trafficking (by amending 18 U.S.C. § 1591) misses an important point: under SESTA, platforms would be liable not if they knew sex trafficking was happening on their sites, but if they should have known (this is the "reckless disregard" standard set in 1591).

____

9:00 a.m.: Xavier Becerra is correct that Section 230 blocks state criminal prosecutions against platforms for illegal user-generated content (but not federal prosecutions). However, state prosecutors are not prevented from going after the traffickers themselves.
As California AG, he should do that. In Kristen DiAngelo's letter to the Commerce Committee, she discusses her organization's exasperation at trying to work with California law enforcement to prosecute traffickers. That should be an Attorney General's first priority, not prosecuting web platforms that don't break the law themselves.

____

8:55 a.m.: Yiota Souras from NCMEC says that there should be a legal barrier to entering the online ads marketplace. There already is one: Congress passed the SAVE Act in 2015 to create express liability for platforms that knowingly advertise sex trafficking ads. Souras says that there needs to be more community intervention in the lives of children before they end up in online sex ads. We couldn't agree more.

____

8:40 a.m.: When Abigail Slater of the Internet Association speaks to platforms' ability to filter content related to trafficking, she's talking about large web companies. Smaller platforms would be most at risk under SESTA: it would be very difficult for them to absorb the huge increase in legal exposure for user-generated content that SESTA would create.

____

8:32 a.m.: Yiota Souras is confusing the issue. Victims of sex trafficking today can hold a platform liable in civil court for ads their traffickers posted when there is evidence that the platform had a direct hand in creating the illegal content. And victims can directly sue their traffickers without bumping into Section 230.

____

8:25 a.m.: Professor Eric Goldman is now testifying on the importance of Section 230: SESTA would reinstate the moderation dilemma that Section 230 eliminated. Because of Section 230, online services today voluntarily take many steps to suppress socially harmful content (including false and malicious content, sexual material, and other lawful but unwanted content) without fearing liability for whatever they miss. Post-SESTA, some services will conclude that they cannot achieve this high level of accuracy, or that moderation procedures would make it impossible to serve their community. In those cases, the services will reduce or eliminate their current moderation efforts.

Proponents of SESTA have tried to get around this dilemma by overstating the effectiveness of automated content filtering. In doing so, they really miss the point of filtering technologies. Automated filters can be very useful as an aid to human review, but they're not appropriate as the final arbiters of free expression online. Over-reliance on them will almost certainly result in silencing marginalized voices, including those of trafficking victims themselves.

____

8:15 a.m.: Contrary to what Xavier Becerra suggested, we're not opposed to amending statutes in general. But Section 230 has struck a reasonable policy balance, enabling culpable platforms to be held liable while allowing free speech and innovation to thrive online. Amending it is unnecessary and dangerous.

____

8:12 a.m.: Ms. Yvonne Ambrose, the mother of a trafficking victim, is now speaking on the horrors her daughter went through. It's specifically because of the horror of trafficking that Congress must be wary of bills that would do more harm than good. To quote anti-trafficking advocate (and herself a trafficking victim) Kristen DiAngelo (PDF), "SESTA would do nothing to decrease sex trafficking; in fact, it would have the opposite effect. [...]
When trafficking victims are pushed off of online platforms and onto the streets, we become invisible to the outside world as well as to law enforcement, thus putting us in more danger of violence."

In DiAngelo's letter, she tells the horrific story of a trafficking victim who was forced by her pimp to work the street when the FBI shut down a website where sex workers advertised:

Since she was new to the street, sexual predators considered her fair game. Her first night out, she was robbed and raped at gunpoint, and when she returned to the hotel room without her money, her pimp beat her. Over the next seven months, she was arrested seven times for loitering with the intent to commit prostitution and once for prostitution, all while she was being trafficked.

Freedom Network USA, the largest network of anti-trafficking service providers in the country, expresses grave concerns about any proposal that would shift more liability to web platforms (PDF): "The current legal framework encourages websites to report cases of possible trafficking to law enforcement. Responsible website administrators can, and do, provide important data and information to support criminal investigations. Reforming the CDA to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking."

____

8:05 a.m.: Sen. Wyden is right. Section 230 made the Internet a platform for free speech. It should remain intact. Wyden makes it clear that, by design, Section 230 does nothing to protect web platforms from prosecution for violations of federal criminal law. It also does nothing to shield platforms' users themselves from liability for their own actions in either state or federal court. Wyden speaks passionately on the need for resources to fight sex traffickers online. Reminder: SESTA would do nothing to fight traffickers.

____

7:57 a.m.: Sen. Blumenthal is wrong. Section 230 does not provide blanket immunity to platforms for civil claims. Platforms that have a direct hand in posting illegal sex trafficking ads can be held liable in civil court. SESTA is not narrowly targeted. It would open up online platforms to a "deluge" (Sen. Blumenthal's word) of state criminal prosecutions and federal and state civil claims based on user-generated content.

____

7:45 a.m.: Sen. Nelson asks: why aren't we doing everything we can to fight sex trafficking? We agree. That's why it's such a shame that Congress is putting its energy into enacting a measure that would not fight sex traffickers. In her letter to the Committee, anti-trafficking advocate (and herself a trafficking victim) Kristen DiAngelo outlines several proposals that Congress could take up to fight trafficking: for example, enacting protective measures to make it easier for sex workers to report traffickers. Undermining Section 230 is not the right response. It's a political bait-and-switch.

____

7:33 a.m.: The hearing is beginning now. You can watch it at the Commerce Committee website.

____

There's a bill in Congress that would be a disaster for free speech online. The Senate Committee on Commerce, Science, and Transportation is holding a hearing on that bill, and we'll be blogging about it as it happens. The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it's the wrong solution to a serious problem. The authors of SESTA say it's designed to fight sex trafficking, but the bill wouldn't punish traffickers. What it would do is threaten legitimate online speech.

Join us at 7:30 a.m.
Pacific time (10:30 Eastern) on Tuesday, right here and on the @EFFLive Twitter account. We'll let you know how to watch the hearing, and we'll share our thoughts on it as it happens. In the meantime, please take a moment to tell your members of Congress to stop SESTA.

Take Action: Tell Congress: Stop SESTA.

The Cybercrime Convention's New Protocol Needs to Uphold Human Rights (Tue, 19 Sep 2017)
As part of an ongoing attempt to help law enforcement obtain data across international borders, the Council of Europe’s Cybercrime Convention—finalized in the weeks following 9/11, and ratified by the United States and over 50 countries around the world—is back on the global lawmaking agenda. This time, the Council’s Cybercrime Convention Committee (T-CY) has initiated a process to draft a second additional protocol to the Convention—a new text which could allow direct foreign law enforcement access to data stored in other countries’ territories. EFF has joined EDRi and a number of other organizations in a letter to the Council of Europe, highlighting some anticipated concerns with the upcoming process and seeking to ensure civil society concerns are considered in the new protocol. This new protocol needs to preserve the Council of Europe’s stated aim to uphold human rights, and must not undermine privacy or the integrity of our communication networks.

How the Long Arm of the Law Reaches into Foreign Servers

Thanks to the internet, individuals and their data increasingly reside in different jurisdictions: your email might be stored on a Google server in the United States, while your shared Word documents might be stored by Microsoft in Ireland. Law enforcement agencies across the world have sought to gain access to this data, wherever it is held. That means police in one country frequently seek to extract personal, private data from servers in another.

Currently, the primary international mechanism for facilitating governmental cross-border data access is the Mutual Legal Assistance Treaty (MLAT) process, a series of treaties between two or more states that create a formal basis for cooperation between designated authorities of signatories. These treaties typically include some safeguards for privacy and due process, most often the safeguards of the country that hosts the data.

The MLAT regime includes steps to protect privacy and due process, but frustrated agencies have increasingly sought to bypass it, either by cross-border hacking or by leaning on large service providers in foreign jurisdictions to hand over data voluntarily. The legalities of cross-border hacking remain very murky, and its operation is the very opposite of transparent and proportionate. Meanwhile, voluntary cooperation between service providers and law enforcement occurs outside the MLAT process and without any clear accountability framework. The primary window of insight into its scope and operation is the annual transparency reports voluntarily issued by some companies such as Google and Twitter.

Hacking often blatantly ignores the laws and rights of a foreign state, but voluntary data handovers can be used to bypass domestic legal protections too. In Canada, for example, the right to privacy includes rigorous safeguards for online anonymity: private Internet companies are not permitted to identify customers without prior judicial authorization. By identifying often sensitive anonymous online activity directly through the voluntary cooperation of a foreign company not bound by Canadian privacy law, law enforcement agents can effectively bypass this domestic privacy standard.

Faster, but not Better: Bypassing MLAT

The MLAT regime has been criticized as slow and inefficient. Law enforcement officers have claimed that they have to wait anywhere from six to 10 months—the reported average time frame for receiving data through an MLAT request—for data necessary to their local investigations.
Much of this delay, however, is attributable to a lack of adequate resources, streamlining, and prioritization for the huge increase in MLAT requests for data held in the United States, plus the absence of adequate training for law enforcement officers seeking to rely on another state’s legal search and seizure powers. Instead of just working to make the MLAT process more effective, the T-CY committee is seeking to create a parallel mechanism for cross-border cooperation. While the process is still in its earliest stages, many are concerned that the resulting proposals will replicate many of the problems in the existing regime, while adding new ones.

What the New Protocol Might Contain

The Terms of Reference for the drafting of this new second protocol reveal some areas that may be included in the final proposal.

Simplified mechanisms for cross-border access

T-CY has flagged a number of new mechanisms it believes will streamline cross-border data access. The terms of reference mention a “simplified regime” for legal assistance with respect to subscriber data. Such a regime could be highly controversial if it compelled companies to identify anonymous online activity without prior judicial authorization. The terms of reference also envision the creation of “international production orders.” Presumably these would be orders issued by one court under its own standards, but that must be respected by Internet companies in other jurisdictions. Such mechanisms could be problematic where they do not respect the privacy and due process rights of both jurisdictions.

Direct cooperation

The terms of reference also call for "provisions allowing for direct cooperation with service providers in other jurisdictions with regard to requests for [i] subscriber information, [ii] preservation requests, and [iii] emergency requests." These mechanisms would be permissive, clearing the way for companies in one state to voluntarily cooperate with certain types of requests issued by another, even in the absence of any form of judicial authorization.

Each of the proposed direct cooperation mechanisms could be problematic. Preservation requests are not controversial per se. Companies often have standard retention periods for different types of data sets. Preservation orders are intended to extend these so that law enforcement has sufficient time to obtain proper legal authorization to access the preserved data. However, preservation should not be undertaken frivolously. It can carry an accompanying stigma, and it exposes affected individuals’ data to greater risk if a security breach occurs during the preservation period. This is why some jurisdictions require reasonable suspicion and court orders as prerequisites for preservation orders.

Direct voluntary cooperation on emergency matters is challenging as well. In such instances there is little time to engage the judicial apparatus, and most states recognize direct access to private customer data in emergency situations; even so, such access can be subject to controversial overreach. This potential for overreach, and even abuse, becomes far higher where there is a disconnect between standards in the requesting and responding jurisdictions.

Direct cooperation in identifying customers can be equally controversial. Anonymity is critical to privacy in digital contexts. Some data protection laws (such as Canada’s federal privacy law) prevent Internet companies from voluntarily providing subscriber data to law enforcement.
Safeguards

The terms of reference also envision the adoption of “safeguards.” The scope and nature of these will be critical. Indeed, one of the strongest criticisms of the original Cybercrime Convention has been its lack of specific protections and safeguards for privacy and other human rights. The EDRi letter calls for adherence to the Council of Europe’s data protection regime, Convention 108, as a minimum prerequisite to participation in the envisioned regime for cross-border access, which would provide some basis for shared privacy protection. The letter also calls for detailed statistical reporting and other safeguards.

What’s next?

On 18 September, the T-CY Bureau will meet with European Digital Rights (EDRi) to discuss the protocol. The first meeting of the Drafting Group will be held on 19 and 20 September. The draft protocol will be prepared and finalized by the T-CY in closed session.

Law enforcement agencies are granted extraordinary powers to invade privacy in order to investigate crime. This proposed second protocol to the Cybercrime Convention must ensure that the highest privacy standards and due process protections adopted by signatory states remain intact. We believe that the Council of Europe T-CY Committee—the Netherlands, Romania, Canada, the Dominican Republic, Estonia, Mauritius, Norway, Portugal, Sri Lanka, Switzerland, and Ukraine—should concentrate first on fixes to the existing MLAT process, and should ensure that this new initiative does not become an exercise in harmonization to the lowest common denominator of international privacy protection. We'll be keeping track of what happens next.

EFF to Court: The First Amendment Protects the Right to Record First Responders (Tue, 19 Sep 2017)
The First Amendment protects the right of members of the public to record first responders addressing medical emergencies, EFF argued in an amicus brief filed in the federal trial court for the Northern District of Texas. The case, Adelman v. DART, concerns the arrest of a Dallas freelance press photographer for criminal trespass after he took photos of a man receiving emergency treatment in a public area.

EFF’s amicus brief argues that people frequently use electronic devices to record and share photos and videos. This often includes newsworthy recordings of on-duty police officers and emergency medical services (EMS) personnel interacting with members of the public. These recordings have informed the public’s understanding of emergencies and first responder misconduct.

EFF’s brief was joined by a broad coalition of media organizations: the Freedom of the Press Foundation, the National Press Photographers Association, the PEN American Center, the Radio and Television Digital News Association, Reporters Without Borders, the Society of Professional Journalists, the Texas Association of Broadcasters, and the Texas Press Association. Our local counsel are Thomas Leatherbury and Marc Fuller of Vinson & Elkins L.L.P.

EFF’s new brief builds on our amicus brief filed last year before the Third Circuit Court of Appeals in Fields v. Philadelphia. There, we successfully argued that the First Amendment protects the right to use electronic devices to record on-duty police officers.

Adelman, a freelance journalist, has provided photographs to media outlets for nearly 30 years. He heard a call for paramedics to respond to a K2 overdose victim at a Dallas Area Rapid Transit (“DART”) station. When he arrived, he believed the incident might be of public interest and began photographing the scene. A DART police officer demanded that Adelman stop taking photos. Despite Adelman’s assertion that he was well within his constitutional rights, the DART officer, with approval from her supervisor, arrested Adelman for criminal trespass. Adelman sued the officer and DART. EFF’s amicus brief supports his motion for summary judgment.

Security Education: What's New on Surveillance Self-Defense (Mon, 18 Sep 2017)
Since 2014, our digital security guide, Surveillance Self-Defense (SSD), has taught thousands of Internet users how to protect themselves from surveillance, with practical tutorials and advice on the best tools and expert-approved best practices. After hearing growing concerns among activists following the 2016 US presidential election, we pledged to build, update, and expand SSD and our other security education materials to better advise people, both within and outside the United States, on how to protect their online digital privacy and security. While there’s still work to be done, here’s what we’ve been up to over the past several months.

SSD Guide Audit

SSD is consistently updated based on evolving technology, current events, and user feedback, but this year our SSD guides are going through a more in-depth technical and legal review to ensure they’re still relevant and up-to-date. We’ve also put our guides through a "simple English" review in order to make them more usable for digital security novices and veterans alike. We've worked to make them a little less jargon-filled and more straightforward. That helps everyone, whether English is their first language or not. It also makes translation and localization easier: that's important for us, as SSD is maintained in eleven languages.

Many of these changes are based on reader feedback. We'd like to thank everyone for all the messages you've sent and encourage you to continue providing notes and suggestions, which helps us preserve SSD as a reliable resource for people all over the world. Please keep in mind that some feedback may take longer to incorporate than others, so if you've made a substantive suggestion, we may still be working on it!

As of today, we’ve updated the following guides and documents:

Assessing your Risks

Formerly known as "Threat Modeling," our Assessing your Risks guide was updated to be less intimidating to those new to digital security. Threat modeling is the primary and most important thing we teach at our security trainings, and because it’s such a fundamental skill, we wanted to ensure all users were able to grasp the concept. This guide walks users through how to conduct their own personal threat modeling assessment. We hope users and trainers will find it useful.

SSD Glossary Updates

SSD hosts a glossary of technical terms that users may encounter when using the security guide. We’ve added new terms and intend to expand this resource over the coming months.

How to: Avoid Phishing Attacks

With new updates, this guide helps users identify phishing attacks when they encounter them and delves deeper into the types of phishing attacks that are out there. It also outlines five practical ways users can protect themselves against such attacks. One new tip we added suggests using a password manager with autofill. Password managers that auto-fill passwords keep track of which sites those passwords belong to. While it’s easy for a human to be tricked by fake login pages, password managers are not tricked in the same way (the short sketch at the end of this post illustrates why). Check out the guide for more details, and for other tips to help defend against phishing.

How to: Use Tor

We updated How to: Use Tor for Windows and How to: Use Tor for macOS, and added a new How to: Use Tor for Linux guide to SSD. These guides all include new screenshots and step-by-step instructions for how to install and use the Tor Browser—perfect for people who might need occasional anonymity and privacy when accessing websites.
How to: Install Tor Messenger (beta) for macOS

We've added two new guides on installing and using Tor Messenger for instant communications. In addition to going over the Tor network, which hides your location and can protect your anonymity, Tor Messenger ensures messages are sent strictly with Off-the-Record (OTR) encryption. This means your chats with friends will only be readable by them—not a third party or service provider. Finally, we believe Tor Messenger employs best practices in security where other XMPP messaging apps fall short. We plan to add installation guides for Windows and Linux in the future.

Other guides we've updated include circumventing online censorship, and using two-factor authentication.

What’s coming up?

Continuation of our audit: This audit is ongoing, so stay tuned for more security guide updates over the coming months, as well as new additions to the SSD glossary.

Translations: As we continue to audit the guides, we’ll be updating our translated content. If you’re interested in volunteering as a translator, check out EFF’s Volunteer page.

Training materials: Nothing gratifies us more than hearing that someone used SSD to teach a friend or family member how to make stronger passwords, or how to encrypt their devices. While SSD was originally intended to be a self-teaching resource, we're working towards expanding the guide with resources for users to lead their friends and neighbors in healthy security practices. We’re working hard to ensure this is done in coordination with the powerful efforts of similar initiatives, and we seek to support, complement, and add to that collective body of knowledge and practice. Thus we’ve interviewed dozens of US-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re also conducting frequent critical assessments of learners and trainers, with regular live-testing of our workshop content and user-testing evaluations of the SSD website. It’s been humbling to observe where beginners have difficulty learning concepts or tools, and to hear where trainers struggle using our materials. With their feedback fresh in mind, we continue to iterate on the materials and curriculum.

Over the next few months, we are rolling out new content for a teacher’s edition of SSD, intended for short awareness-raising sessions of one to four hours. If you’re interested in testing our early draft digital security educational materials and providing feedback on how they worked, please fill out this form by September 30. We can’t wait to share them with you.
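As promised above, here is a rough sketch of why autofill resists phishing. It is a toy illustration under our own assumptions, not any real password manager's code: the manager stores the exact origin each password belongs to and simply refuses to fill anywhere else, so a lookalike domain that fools a human gets nothing.

```python
# Toy model: autofill keyed on the exact origin. The domain and
# credentials here are invented for illustration.
from urllib.parse import urlparse

vault = {"https://examplebank.com": ("alice", "correct horse battery staple")}

def autofill(page_url):
    parsed = urlparse(page_url)
    origin = f"{parsed.scheme}://{parsed.hostname}"
    credentials = vault.get(origin)
    if credentials is None:
        # A human might miss the substituted character; exact matching doesn't.
        print(f"No saved login for {origin}; refusing to fill.")
        return None
    return credentials

autofill("https://examplebank.com/login")   # fills the saved credentials
autofill("https://examp1ebank.com/login")   # lookalike domain: refuses
```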

In A Win For Privacy, Uber Restores User Control Over Location-Sharing (Mon, 18 Sep 2017)
After Uber made an unfortunate change to its privacy settings last year, we are glad to see that the company has reverted to settings that empower its users to make choices about sharing their location information.

Last December, an Uber update restricted users' location-sharing choices to "Always" or "Never," removing the more fine-grained "While Using" setting. This meant that, if someone wanted to use Uber, they had to agree to share their location information with the app at all times or surrender usability. In particular, it meant that riders would be tracked for five minutes after being dropped off. Now, the "While Using" setting is back—and Uber says the post-ride tracking will end even for users who choose the "Always" setting.

We are glad to see Uber giving users more control over their location privacy, and hope it will stick this time. EFF recommends that all users manually check that their Uber location privacy setting is on "While Using" after they receive the update:

1. Open the Uber app, and press the three horizontal lines on the top left to open the sidebar.
2. Once the sidebar is open, press Settings.
3. Scroll to the bottom of the settings page to select Privacy Settings.
4. In your privacy settings, select Location.
5. In Location, check to see if it says "Always." If it does, click to change it.
6. Here, change your location setting to "While Using" or "Never". Note that "Never" will require you to manually enter your pickup address every time you call a ride.

Azure Confidential Computing Heralds the Next Generation of Encryption in the Cloud (Mon, 18 Sep 2017)
For years, EFF has commended companies that make cloud applications that encrypt data in transit. But soon, the new gold standard for cloud application encryption will be the cloud provider never having access to the user’s data—not even while performing computations on it.

Microsoft has become the first major cloud provider to offer developers the ability to build their applications on top of Intel’s Software Guard Extensions (SGX) technology, making Azure “the first SGX-capable servers in the public cloud.” Azure customers in Microsoft’s Early Access program can now begin to develop applications with the “confidential computing” technology.

Intel SGX uses protections baked into the hardware to ensure that data remains secure, even from the platform it’s running on. That means that an application that protects its secrets inside SGX is protecting them not just from other applications running on the system, but from the operating system, the hypervisor, and even Intel’s Management Engine, an extremely privileged coprocessor that we’ve previously warned about.

Cryptographic methods of computing on encrypted data are still an active body of research, with most methods still too inefficient or involving too much data leakage to see practical use in industry. Secure enclaves like SGX, also known as Trusted Execution Environments (TEEs), offer an alternative path for applications looking to compute over encrypted data. For example, a messaging service with a server that uses secure enclaves offers similar guarantees to end-to-end encrypted services. But whereas an end-to-end encrypted messaging service would have to use client-side search, or accept either side-channel leakage or inefficiency to implement server-side search, by using an enclave it can provide server-side search functionality with always-encrypted guarantees at little additional computational cost. The same is true for the classic challenge of changing the key under which a ciphertext is encrypted without access to the key, known as proxy re-encryption. Many problems that have challenged cryptographers for decades to find efficient, leakage-free solutions are solvable instead by a sufficiently robust secure enclave ecosystem.

While there is great potential here, SGX is still a relatively new technology, meaning that security vulnerabilities are still being discovered as more research is done. Memory corruption vulnerabilities within enclaves can be exploited by classic attack mechanisms like return-oriented programming (ROP). Various side-channel attacks have been discovered, some of which are mitigated by a growing host of protective techniques. Promisingly, Microsoft’s press release teases that they’re “working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.” This could indicate that they’re working on developing something like Sanctum, which isolates caches by trusted application, reducing a major side-channel attack surface. Until these issues are fully addressed, a dedicated attacker could recover some or all of the data protected by SGX, but it’s still a massive improvement over not using hardware protection at all. The technology underlying Azure Confidential Computing is not yet perfect, but it's efficient enough for practical usage, stops whole classes of attacks, and is available today.
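As a rough illustration of the server-side search example above, here is a toy model in Python. It stands in for code running inside an enclave and is not real SGX: the names are our own, the "boundary" is just a Python object, and real enclaves enforce isolation in hardware and prove their identity to clients through remote attestation.

```python
# Toy model of enclave-style server-side search: the host stores only
# ciphertext, while the key lives exclusively inside the "enclave".
from cryptography.fernet import Fernet

class ToyEnclave:
    """Stands in for code running inside SGX; the key never leaves it."""
    def __init__(self):
        self._key = Fernet(Fernet.generate_key())

    def encrypt(self, message: str) -> bytes:
        return self._key.encrypt(message.encode())

    def search(self, ciphertexts, term: str):
        # Decryption happens only inside the boundary; the host learns
        # nothing beyond which indices matched.
        return [i for i, ct in enumerate(ciphertexts)
                if term in self._key.decrypt(ct).decode()]

enclave = ToyEnclave()
store = [enclave.encrypt(m) for m in ["hello world", "meet at noon", "hello again"]]
print(enclave.search(store, "hello"))  # [0, 2]; the host never saw plaintext
```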
EFF applauds this giant step towards making encrypted applications in the cloud feasible, and we look forward to seeing cloud offerings from major providers like Amazon and Google follow suit. Secure enclaves have the potential to be a new frontier in offering users privacy in the cloud, and it will be exciting to see the applications that developers build now that this technology is becoming more widely available.

An open letter to the W3C Director, CEO, team and membership (Mo, 18 Sep 2017)
Dear Jeff, Tim, and colleagues,
In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties. When it became clear, following our formal objection, that the W3C's largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action. This covenant would allow the W3C's large corporate members to enforce their copyrights. Indeed, it kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise merely restricted their ability to use the W3C's DRM to shut down legitimate activities, like research and modifications, that required circumvention of DRM. It would signal to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between DRM as an acceptable, optional technology and DRM as an excuse to undermine legitimate research and innovation. More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video. Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions that was not scheduled to report until long after the EME ship had sailed — that would have still left researchers, governments, archives, and security experts unprotected. The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible. 
But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history. In our campaigning on this issue, we have spoken to many, many members' representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn't on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool's errand. We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures. So we'll keep fighting to keep the web free and open. We'll keep suing the US government to overturn the laws that make DRM so toxic, and we'll keep bringing that fight to the world's legislatures that are being misled by the US Trade Representative to instigate local equivalents to America's legal mistakes. We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them. We will defend those who are put in harm's way for blowing the whistle on defects in EME implementations. It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters.
Effective today, EFF is resigning from the W3C.
Thank you,
Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation

California Legislature Sells Out Our Data to ISPs (Sa, 16 Sep 2017)
In the dead of night, the California Legislature shelved legislation that would have protected every Internet user in the state from having their data collected and sold by ISPs without their permission. By failing to pass A.B. 375, the legislature demonstrated that it puts the profits of Verizon, AT&T, and Comcast over the privacy rights of its constituents. Earlier this year, the Republican majority in Congress repealed the strong privacy rules issued by the Federal Communications Commission in 2016, which required ISPs to get affirmative consent before selling our data. But while Congressional Democrats fought to protect our personal data, the Democratic-controlled California legislature did not follow suit. Instead, it kowtowed to an aggressive lobbying campaign from telecommunications corporations and Internet companies, one that included spurious claims and false social media advertisements about cybersecurity. “It is extremely disappointing that the California legislature failed to restore broadband privacy rights for residents in this state in response to the Trump Administration and Congressional efforts to roll back consumer protection,” EFF Legislative Counsel Ernesto Falcon said. “Californians will continue to be denied the legal right to say no to their cable or telephone company using their personal data for enhancing already high profits. Perhaps the legislature needs to spend more time talking to the 80% of voters that support the goal of A.B. 375 and less time with Comcast, AT&T, and Google's lobbyists in Sacramento.” All hope is not lost: the bill is only stalled for the rest of the year, and it can be raised again in 2018. A.B. 375 was introduced late in the session; that it made it so far in the process so quickly demonstrates that there are many legislators who are all-in on privacy. In January, EFF will build off this year's momentum with a renewed push to move A.B. 375 to the governor's desk. Mark your calendar and join us. 

One Last Chance for Police Transparency in California (Fr, 15 Sep 2017)
As the days wind down for the California legislature to pass bills, transparency advocates have seen landmark measures fall by the wayside. Without explanation, an Assembly committee shelved legislation that would have shined light on police use of surveillance technologies, including a requirement that police departments seek approval from their city councils. The legislature also gutted a key reform to the California Public Records Act (CPRA) that would’ve allowed courts to fine agencies that improperly thwart requests for government documents. But there is one last chance for California to improve the public’s right to access police records. S.B. 345 would require every law enforcement agency in the state to publish on its website all “current standards, policies, practices, operating procedures, and education and training materials” by January 1, 2019. The legislation would cover all materials that would be otherwise available through a CPRA request. S.B. 345 is now on Gov. Jerry Brown's desk, and he should sign it immediately. Take Action Tell Gov. Brown to sign S.B. 345 into law There are two main reasons EFF is supporting this bill. The first is obvious: in order to hold law enforcement accountable, we need to understand the rules that officers are operating under. For privacy advocates, access to materials about advanced surveillance technologies—such as automated license plate readers, facial recognition, drones, and social media monitoring—will lead to better and more informed debates over policy. The bill would also strengthen the broader police accountability movement by requiring the proactive release of policies and training materials about use of force, deaths in custody, body-worn cameras, and myriad other controversial police tactics and procedures. The second reason is more philosophical: we believe that rather than putting the onus on the public to always file formal records requests, government agencies should automatically upload their records to the Internet whenever possible. S.B. 345 creates openness by default for hundreds of agencies across the state. To think of it another way: S.B. 345 is akin to the legislature sending its own public records request to every law enforcement agency in the state. Unlike other measures EFF has supported this session, S.B. 345 has not drawn strong opposition from law enforcement. In fact, only the California State Sheriffs’ Association is in opposition, arguing that the bill could require the disclosure of potentially sensitive information. This is incorrect, since the bill would only require agencies to publish records that would already be available under the CPRA. The claim is further undercut by the fact that eight organizations representing law enforcement have come out in support of the bill, including the California Narcotics Officers Association and the Association of Deputy District Attorneys. The bill isn’t perfect. As written, the enforcement mechanisms are vague, and it’s unclear what kind of consequences, if any, agencies may face if they fail to post these records in a little more than a year. In addition, agencies may improperly withhold or redact policies, as is often the case with responses to traditional public records requests. Nevertheless, EFF believes that even the incremental measure contained in the bill will help pave the way for long-term transparency reforms. Join us in urging Gov. Jerry Brown to sign this important bill. 

We're Asking the Copyright Office to Protect Your Right To Remix, Study, and Tinker With Digital Devices and Media (Do, 14 Sep 2017)
Who controls your digital devices and media? If it's not you, why not? EFF has filed new petitions with the Copyright Office to give people in the United States protection against legal threats when they take control of their devices and media. We’re also seeking broader, better protection for security researchers and video creators against threats from Section 1201 of the Digital Millennium Copyright Act. DMCA 1201 is a deeply flawed and unconstitutional law. It bans “circumvention” of access controls on copyrighted works, including software, and bans making or distributing tools that circumvent such digital locks. In effect, it lets hardware and software makers, along with major entertainment companies, control how your digital devices are allowed to function and how you can use digital media. It creates legal risks for security researchers, repair shops, artists, and technology users. We’re fighting DMCA 1201 on many fronts, including a lawsuit to have the law struck down as unconstitutional. We’re also asking Congress to change the law. And every three years we petition the U.S. Copyright Office for temporary exemptions for some of the most important activities this law interferes with. This year, we’re asking the Copyright Office, along with the Librarian of Congress, to expand and simplify the exemptions they granted in 2015. We’re asking them to give legal protection to these activities:
· Repair, diagnosis, and tinkering with any software-enabled device, including “Internet of Things” devices, appliances, computers, peripherals, toys, vehicles, and environmental automation systems;
· Jailbreaking personal computing devices, including smartphones, tablets, smartwatches, and personal assistant devices like the Amazon Echo and the forthcoming Apple HomePod;
· Using excerpts from video discs or streaming video for criticism or commentary, without the narrow limitations on users (noncommercial vidders, documentary filmmakers, certain students) that the Copyright Office now imposes;
· Security research on software of all kinds, which can be found in consumer electronics, medical devices, vehicles, and more;
· Lawful uses of video encrypted using High-bandwidth Digital Content Protection (HDCP, which is applied to content sent over the HDMI cables used by home video equipment).
Over the next few months, we’ll be presenting evidence to the Copyright Office to support these exemptions. We’ll also be supporting other exemptions, including one for vehicle maintenance and repair that was proposed by the Auto Care Association and the Consumer Technology Association. And we’ll be helping you, digital device users, tinkerers, and creators, make your voice heard in Washington, D.C. on this issue.

Shrinking Transparency in the NAFTA and RCEP Negotiations (Do, 14 Sep 2017)
Provisions on digital trade are quietly being squared away in both major trade negotiations currently underway—the North American Free Trade Agreement (NAFTA) renegotiation and the Regional Comprehensive Economic Partnership (RCEP) trade talks. But due to the worst-ever standards of transparency in both of these negotiations, we don’t know which provisions are on the table, which have been agreed, and which remain unresolved. The risk is that important and contentious digital issues—such as rules on copyright or software source code—might become bargaining chips in negotiation over broader economic issues including wages, manufacturing and dispute resolution, and that we would be none the wiser until after the deals have been done. The danger of such bad compromises being made is especially acute because both of the deals are in trouble. Last month President Donald Trump targeted NAFTA, which includes Canada and Mexico, describing it in a tweet as "the worst trade deal ever made," which his administration "may have to terminate." At the conclusion of the second round of talks held last week in Mexico, the prospects of the agreement being concluded anytime soon seem dim. Even as a third round of talks is scheduled for Ottawa from September 23-27, 2017, concern about the agreement's future has prompted Mexico to step up efforts to boost commerce with Asia, South America and Europe. The same is true of the RCEP agreement, which is being spearheaded by the 10-member ASEAN bloc and its six FTA partners, and which was expected to be concluded by the end of this year. Ratification of RCEP this year now seems unlikely, as nations remain far from agreement on key areas. Reports suggest, however, that the negotiators are targeting the e-commerce chapter as a priority area for early agreement. So far, the text specific to e-commerce has not been made publicly available, and the leaked Terms of Reference for the Working Group on E-commerce (WGEC) are the only indication of which issues could make an appearance in the RCEP. We have previously reported that the e-commerce chapter was expected to be shorter and less detailed than the chapters on goods and services. However, the secrecy of trade negotiations makes it very difficult to accurately track developments or policy objectives that are being pushed through or prioritized in the negotiations.
Trade Negotiations Are Becoming Less Open, Not More
Far from adopting the enhanced measures of transparency and openness that EFF demanded and that U.S. Trade Representative Lighthizer promised to deliver, the NAFTA renegotiation process seems to be walking back from the minimal level of transparency that civil society fought hard for during the Trans-Pacific Partnership Agreement (TPP) talks. So far, the NAFTA process has had no open stakeholder meetings at its rounds to date. EFF has written a joint letter to negotiators [PDF], endorsed by groups including Access Now, Creative Commons, Derechos Digitales, and OpenMedia, demanding that they reinstate stakeholder meetings as an initial step in opening up the negotiations to greater public scrutiny. The openness of the RCEP negotiation process has also been degrading. At a public event held during the Auckland round, the Trade Minister from New Zealand and members of the Trade Negotiating Committee (TNC) fielded questions from stakeholders using social media, and the event was live-streamed. 
Organizers of earlier rounds of RCEP held in South Korea and Indonesia had facilitated formal and informal meetings between negotiators and civil society organisations (CSOs). But at recent rounds the opportunities for interaction between negotiators and stakeholders have dwindled. The hosting nations have also been much more restrained with their engagement and outreach. For example, at the Hyderabad round there was no press conference or official statement released by the government representatives or chapter negotiators.
A Broader Retreat from Stakeholder Inclusion?
This worrying retreat from democracy in trade negotiations mirrors a broader softening of support by governments for public participation in policy development. From a high point about a decade ago, when governments embraced a so-called “multi-stakeholder model” as the foundation of bodies such as the Internet Governance Forum (IGF), several countries that were previously supporters of this model seem to be much cooler towards it now. Consider the Xiamen Declaration, which was adopted by consensus at the 9th BRICS (Brazil, Russia, India, China, South Africa) summit in China this month. Unlike previous BRICS declarations, which supported a multi-stakeholder approach, the Xiamen declaration stresses the importance of state sovereignty throughout the document. This trend is not confined to the BRICS bloc. Western governments, too, are excluding civil society voices from policy development, even while they experiment with methods for engaging directly with large corporations. In January this year, Denmark, recognizing that technological issues are becoming matters of foreign policy, appointed a "digitisation ambassador" to engage with tech companies such as Google and Facebook. This is a poor substitute for a fully inclusive, balanced and accountable process that would also include Internet users and other civil society stakeholders. Given the complexity of trade negotiations and the fast-changing pace of the digital environment, negotiators are not always equipped to negotiate fair trade deals without the means of having a broader public discussion of the issues involved. In particular, including provisions related to the digital economy in trade agreements can result in a push to negotiate on issues before negotiators can form an understanding of the potential consequences. A wide and open consultative process, ensuring a more balanced view of the issues at stake, could help. The refusal of trade ministries such as the USTR to heed demands from EFF or from Congress to become more open and transparent suggests that it may be a long while before we see such an inclusive, balanced, and accountable process evolving out of trade negotiations as they exist now. But other venues for discussing digital trade, such as the IGF and the OECD, do exist today and could be used rather than rushing into closed-door norm-setting. One advantage of preferring these more flexible, soft-law mechanisms for developing norms on Internet-related issues is that they provide a venue for cooperation and policy coordination, without locking countries into a set of rules that may become outmoded as business models and technologies continue to evolve. This is not the model that NAFTA or RCEP negotiators have chosen, preferring to open the door to corporate lobbyists while keeping civil society locked out. This week’s letter to the trade ministries of the United States, Canada, and Mexico calls them out on this and asks them to do better. 
If you are in the United States, you can also join the call for better transparency in trade negotiations by asking your representative to support the Promoting Transparency in Trade Act.

EFF Asks Court: Can Prosecutors Hide Behind Trade Secret Privilege to Convict You? (Do, 14 Sep 2017)
California Appeals Court Urged to Allow Defense Review of DNA Matching Software
If a computer DNA matching program gives test results that implicate you in a crime, how do you know that the match is correct and not the result of a software bug? The Electronic Frontier Foundation (EFF) has urged a California appeals court to allow criminal defendants to review and evaluate the source code of forensic software programs used by the prosecution, in order to ensure that none of the wrong people end up behind bars, or worse, on death row. In this case, a defendant was linked to a series of rapes by a DNA matching software program called TrueAllele. The defendant wants to examine how TrueAllele takes in a DNA sample and analyzes potential matches, as part of his challenge to the prosecution’s evidence. However, prosecutors and the manufacturer of the TrueAllele software argue that the source code is a trade secret, and therefore should not be disclosed to anyone. “Errors and bugs in DNA matching software are a known problem,” said EFF Staff Attorney Stephanie Lacambra. “At least two other programs have been found to have serious errors that could lead to false convictions. Additionally, different products used by different police departments can provide drastically different results. If you want to make sure the right person is imprisoned—and not running free while someone innocent is convicted—we can’t have software programs’ source code hidden away from stringent examination.” The public has an overriding interest in ensuring the fair administration of justice, which favors public disclosure of evidence. However, in certain cases where public disclosure could be too financially damaging, the court could use a simple protective order so that only the defendant’s attorneys and experts are able to review the code. But even this level of secrecy should be the exception and not the rule. “Software errors are extremely common across all kinds of products,” said EFF Staff Attorney Kit Walsh. “We can’t have someone’s legal fate determined by a black box, with no opportunity to see if it’s working correctly.” For the full brief in California v. Johnson: https://www.eff.org/document/amicus-brief-california-v-johnson
Contact:
Stephanie Lacambra, Criminal Defense Staff Attorney, stephanie@eff.org
Kit Walsh, Staff Attorney, kit@eff.org
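To illustrate why source-level review matters, consider a toy example; this is not TrueAllele's actual method, which is precisely what the defense is asking to inspect. Under the standard Hardy-Weinberg assumption, the probability that a random person is homozygous for an allele with population frequency p is p squared, so a program that forgets to square p misstates the strength of the evidence by an order of magnitude:

```python
# Hypothetical illustration: the likelihood ratio for a single homozygous
# locus, comparing "the suspect is the source" to "a random person is."
def likelihood_ratio(p: float, buggy: bool = False) -> float:
    random_match_prob = p if buggy else p ** 2  # bug: forgot to square p
    return 1.0 / random_match_prob

print(likelihood_ratio(0.1))              # correct: 100.0
print(likelihood_ratio(0.1, buggy=True))  # buggy:    10.0
```

A one-character bug shifts the reported match statistic tenfold, and nothing visible in the courtroom would reveal it without access to the source code.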

With EFF’s Help, Small Business Stands Up To Patent Troll Electronic Communication Technologies, LLC (Do, 14 Sep 2017)
Since 1992, Fairytale Brownies has sold delicious brownies based on a secret family recipe. It’s a small business founded by two childhood friends who were quick to see the potential of the Internet and registered the domain www.brownies.com in 1995. Fairytale Brownies became an e-commerce website before the first dot-com boom and has remained in business ever since. But earlier this year, Fairytale Brownies received a surprising letter. The letter said its e-commerce website infringes U.S. Patent No. 9,373,261 (“the ’261 patent”). The ’261 patent is owned by Electronic Communication Technologies, LLC (“ECT”), a company that was previously known as Eclipse IP. We have written about it many times before. What is the technology claimed by the patent? Generally, ECT states that it patented “unique solutions to minimize hacker’s impacts when mimicking order confirmations and shipment notification emails.” From what we can tell, it claims to have invented the automated email, sent in response to an online order, that contains personally identifiable information (“PII”). ECT claims that including PII allows customers to know that the email is not a “phishing” email or “part of an email fraud system,” and as a consequence customers will know to trust the links in the email. There are a few more details about what exactly ECT claims to have invented and what it says infringes, which can be seen in the “chart” ECT provided to Fairytale Brownies. The chart points to claim 11 of the ’261 patent and describes how it believes Fairytale Brownies infringes the ’261 patent. In doing so, ECT tells Fairytale Brownies (and everyone else) what it thinks the ’261 patent allows it to claim, and at least some subset of what it claims to have “invented.” Fairytale Brownies disputes ECT’s reading of its claims, and disputes that it infringes. Regardless, as we point out in our letter on behalf of Fairytale Brownies, many, many companies engaged in the behavior ECT says it invented long before ECT ever applied for a patent. That patent, entitled “Secure notification messaging with user option to communicate with delivery or pickup representative,” issued in 2016, but ECT claims the technology was invented in 2003. But many other companies did exactly what Fairytale Brownies does, and we provided copies of order confirmation emails from Amazon.com that included the so-called “anti-phishing invention” from over two years before ECT’s “invention.” (We have redacted some information from those emails to protect privacy.) We included examples from many other companies that show the same thing, including emails from NewEgg, Crate & Barrel, and Old Navy. Indeed, this “invention” seems practically mundane, and nothing that should have been seen as “novel” in 2003. We also note in our letter that the ’261 patent is likely invalid under Alice. The Alice decision was the basis of a court ruling that invalidated claims from three of ECT’s other patents (this ruling issued when it was still known as Eclipse IP). We attach a motion that is currently pending in the Southern District of Florida that seeks a finding that claim 11 of the ’261 patent, the same claim that is asserted against Fairytale Brownies, is invalid for failing to claim patentable subject matter. Electronic Communication Technologies demanded Fairytale Brownies pay $35,000 to license the ’261 patent. We think that $1 would be too much. 
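To underline how mundane the claimed technique is, here is roughly what it takes to send a personalized order confirmation using Python's standard library. This is a hypothetical sketch: the names, addresses, SMTP relay, and order details are all invented.

```python
# Hypothetical sketch of the "anti-phishing" emails at issue: echoing
# details only the merchant would know (the buyer's name, order number,
# and the card's last four digits) is essentially the whole "invention."
import smtplib
from email.message import EmailMessage

def send_order_confirmation(name, to_addr, order_id, last4):
    msg = EmailMessage()
    msg["Subject"] = f"Your order {order_id} has shipped"
    msg["From"] = "orders@example.com"
    msg["To"] = to_addr
    msg.set_content(
        f"Hi {name},\n\n"
        f"Order {order_id}, paid with the card ending in {last4}, "
        f"is on its way.\n"
    )
    with smtplib.SMTP("smtp.example.com") as server:  # assumed mail relay
        server.send_message(msg)
```

E-commerce sites were sending essentially this kind of message well before 2003, which is why the prior-art order confirmation emails matter.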
For a small business, getting a letter out of the blue demanding payment for infringing patents you’ve never heard of, based on technology that forms a standard part of any e-commerce operation, can be a daunting experience. With EFF’s help, Fairytale Brownies is pushing back, and refusing to pay for something that was commonplace in e-commerce long before ECT claims to have invented it.

VICTORY: DOJ Backs Down from Facebook Gag Orders in Not-so-secret Investigation (Do, 14 Sep 2017)
The U.S. Department of Justice has come to the obvious conclusion that there’s no need to order Facebook to keep an investigation “secret” when it was never secret in the first place. While we applaud the government’s about-face, we question why it ever took such a ridiculous position at all. Earlier this summer, Facebook brought a First Amendment challenge to gag orders accompanying several warrants in an investigation in Washington, D.C. that Facebook argued was “known to the public.” In an amicus brief joined by three other civil liberties organizations, EFF explained to the D.C. Court of Appeals that gag orders are subject to a stringent constitutional test that they can rarely meet. We noted that the timing and circumstances of the warrants were strikingly similar to the high-profile investigations of the protests surrounding President Trump’s inauguration on January 20 (known as J20). Given these facts, we argued that there was no way the First Amendment could allow gag orders preventing Facebook from informing its users that the government had obtained their data. In a joint filing today, Facebook and the DOJ told the court that the gag orders were no longer necessary because the investigation had “progressed.” Of course, if the investigation in this case is about what we think it is—the January 20 protests in D.C., opposing the incoming Trump Administration—then it had “progressed” to the point where no gag orders were necessary even before the government applied for them. While we’re pleased that the government has come to its senses in this case, it routinely uses gag orders that go far beyond the very narrow circumstances allowed by the First Amendment. We’ve fought these unconstitutional prior restraints for years, and we’ll continue to do so at every opportunity. Read what we had to say about the government’s original position here.

Stop SESTA: Whose Voices Will SESTA Silence? (Mi, 13 Sep 2017)
Overreliance on Automated Filters Would Push Victims Off of the Internet
In all of the debate about the Stop Enabling Sex Traffickers Act (SESTA, S. 1693), there’s one question that’s received surprisingly little airplay: under SESTA, what would online platforms do in order to protect themselves from the increased liability for their users’ speech? With the threat of overwhelming criminal and civil liability hanging over their heads, Internet platforms would likely turn to automated filtering of users’ speech in a big way. That’s bad news because when platforms rely too heavily on automated filtering, it almost always results in some voices being silenced. And the most marginalized voices in society can be the first to disappear. Take Action Tell Congress: Stop SESTA.
Section 230 Built Internet Communities
The modern Internet is a complex system of online intermediaries—web hosting providers, social media platforms, news websites that host comments—all of which we use to speak out and communicate with each other. Those platforms are all enabled by Section 230, a law that protects platforms from some types of liability for their users’ speech. Without those protections, most online intermediaries would not exist in their current form; the risk of liability would simply be too high. Section 230 still allows authorities to prosecute platforms that break federal criminal law, but it keeps platforms from being punished for their customers’ actions in federal civil court or at the state level. This careful balance gives online platforms the freedom to set and enforce their own community standards while still allowing the government to hold platforms accountable for criminal behavior. SESTA would throw off that balance by shifting additional liability to intermediaries. Many online communities would have little choice but to mitigate that risk by investing heavily in policing their members’ speech. Or perhaps hire computers to police their members’ speech for them.
The Trouble with Bots
Massive cloud software company Oracle recently endorsed SESTA, but Oracle’s letter of support actually confirms one of the bill’s biggest problems—SESTA would effectively require Internet businesses to place more trust than ever before in automated filtering technologies to police their users’ activity. While automated filtering technologies have certainly improved since Section 230 passed in 1996, Oracle implies that bots can now filter out sex traffickers’ activity with near-perfect accuracy without causing any collateral damage. That’s simply not true. At best, automated filtering provides tools that can aid human moderators in finding content that may need further review. That review still requires human community managers. But many Internet companies (including most startups) would be unable to dedicate enough staff time to fully mitigate the risk of litigation under SESTA. So what will websites do if they don’t have enough human reviewers to match their growing user bases? It’s likely that they’ll tune their automated filters to err on the side of extreme caution—which means silencing legitimate voices. To see how that would happen, look at the recent controversy over Google’s PerspectiveAPI, a tool designed to measure the “toxicity” in online discussions. 
PerspectiveAPI flags statements like “I am a gay woman” or “I am a black man” as toxic because it fails to differentiate between Internet users talking about themselves and making statements about marginalized groups. It even flagged “I am a Jew” as more toxic than “I don’t like Jews.” See the problem? Now imagine a tool designed to filter out speech that advertises sex trafficking to comply with SESTA. From a technical perspective, creating such a tool that doesn’t also flag a victim of trafficking telling her story or trying to find help would be extremely difficult. (For that matter, so would training it to differentiate trafficking from consensual sex work.) If Google, the largest artificial intelligence (AI) company on the planet, can’t develop an algorithm that can reason about whether a simple statement is toxic, how likely is it that any company will be able to automatically and accurately detect sex trafficking advertisements? Despite all the progress we’ve made in analytics and AI since 1996, machines still have an incredibly difficult time understanding subtlety and context when it comes to human speech. Filtering algorithms can’t yet understand things like the motivation behind a post—a huge factor in detecting the difference between a post that actually advertises sex trafficking and a post that criticizes sex trafficking and provides support to victims. This is a classic example of the “nerd harder” problem, where policymakers believe that technology can advance to fit their specifications as soon as they pass a law requiring it to do so. They fail to recognize the inherent limits of automated filtering: bots are useful in some cases as an aid to human moderators, but they’ll never be appropriate as the unchecked gatekeeper to free expression. If we give them that position, then victims of sex trafficking may be the first people locked out. At the same time, it’s also extremely unlikely that filtering systems will actually be able to stop determined sex traffickers from posting. That’s because it’s not currently technologically possible to create an automated filtering system that can’t be fooled by a human. For example, say you have a filter that just looks for certain keywords or phrases. Sex traffickers will learn what words or phrases trigger the filter and avoid them by using other words in their place. Building a more complicated filter—say, by using advanced machine learning or AI techniques—won’t solve the problem either. That’s because all complex machine learning systems are susceptible to what are known as “adversarial inputs”—examples of data that look normal to a human, but which completely fool AI-based classification systems. For example, an AI-based filtering system that recognizes sex trafficking posts might look at such a post and classify it correctly—unless the sex trafficker adds some random-looking-yet-carefully-chosen characters to the post (maybe even a block of carefully constructed incomprehensible text at the end), in which case the filtering system will classify the post as having nothing to do with sex trafficking. If you’ve ever seen a spam email with a block of nonsense text at the bottom, then you’ve seen this tactic in action. 
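To see both failure modes in miniature, consider a deliberately naive keyword filter. This is a hypothetical sketch; the blocked terms and example posts are invented, but the asymmetry they illustrate applies to far more sophisticated systems:

```python
# Hypothetical sketch: blocked terms and example posts are invented.
BLOCKED_TERMS = {"escort", "trafficking"}

def flagged(post: str) -> bool:
    """Flag a post if any blocked term appears as a word."""
    return bool(BLOCKED_TERMS & set(post.lower().split()))

# False positive: a victim describing her own experience gets silenced.
print(flagged("i escaped trafficking and need help"))               # True

# False negative: trivial obfuscation slips past the same filter.
print(flagged("discreet e-s-c-o-r-t available, text for details"))  # False
```

Smarter classifiers shrink these error rates but never eliminate them, as the long-running spam arms race shows.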
Some spammers add blocks of text from books or articles to the bottom of their spam emails in order to fool spam filters into thinking the emails are legitimate. Research on solving this problem is ongoing, but slow. New developments in AI research will likely make filters a more effective aid to human review, but when freedom of expression is at stake, they’ll never supplant human moderators. In other words, not only would automated filters be ineffective at removing sex trafficking ads from the Internet, they would also almost certainly end up silencing the very victims lawmakers are trying to help.
Don’t Put Machines in Charge of Free Speech
One irony of SESTA supporters’ praise for automated filtering is that Section 230 made algorithmic filtering possible. In 1995, a New York court ruled that because the online service Prodigy engaged in some editing of its members’ posts, it could be held liable as a “publisher” for the posts that it didn’t filter. When Reps. Christopher Cox and Ron Wyden introduced the Internet Freedom and Family Empowerment Act (the bill that would evolve into Section 230), they did it partially to remove that legal disincentive for online platforms to enforce community standards. Without Section 230, platforms would never have invested in improving filtering technologies. However, automated filters simply cannot be trusted as the final arbiters of online speech. At best, they’re useful as an aid to human moderators, enforcing standards that are transparent to the user community. And the platforms using them must carefully balance enforcing standards with respecting users’ right to express themselves. Laws must protect that balance by shielding platforms from liability for their customers’ actions. Otherwise, marginalized voices can be the first ones pushed off the Internet. Take Action Tell Congress: Stop SESTA.

Three Lies Big Internet Providers Are Spreading to Kill the California Broadband Privacy Bill (Mi, 13 Sep 2017)
Now that California’s Broadband Privacy Bill, A.B. 375, is headed for a final vote in the California legislature, Comcast, Verizon, and all their allies are pulling out all the stops to try to convince state legislators to vote against the bill. Unfortunately, that includes telling legislators about made-up problems the bill will supposedly create, as well as tweeting out blatantly false statements and taking out online ads that spread lies about what A.B. 375 will really do. To set the record straight, here are three lies big Internet providers and their allies are spreading—and the truth about how A.B. 375 will protect your privacy and security online. TAKE ACTION TELL YOUR REPRESENTATIVES TO SUPPORT ONLINE PRIVACY.
Lie #1: A.B. 375 Will Prevent Internet Providers From Stopping Future Cyberattacks
In their opposition letter to legislators, big Internet providers and their allies claim that A.B. 375 “prevents Internet providers from using information they have long relied upon to prevent cybersecurity attacks.” That’s a lie. A.B. 375 explicitly says that Internet providers can use customers’ personal information (including things like IP addresses and traffic records) “to protect the rights or property of the BIAS provider, or to protect users of the BIAS and other BIAS providers from fraudulent, abusive, or unlawful use of the service.” In other words, A.B. 375 explicitly allows Internet providers to use the same information they’ve always used to detect intrusion attempts, stop cyber-attacks, and catch data breaches before they happen. And they can still work with other Internet providers to prevent attacks by sharing this vital security information, so long as they de-identify the data first by making sure it’s not linkable to an individual or device. The truth: A.B. 375 will have no impact on what Internet providers can do to protect their customers’ security. If big Internet providers really think otherwise, we challenge them to publicly explain how—because so far all they’ve done is spread FUD.
Lie #2: A.B. 375 Will Lead to Pop-Ups(?!)
In their letter to legislators, big Internet providers also claim that A.B. 375 would “lead to recurring pop-ups to consumers.” We’ve seen the same claim about pop-ups in an online ad circulated by opponents of A.B. 375. This claim is a lie too, and we have no idea how any rational person could read A.B. 375 and think “maybe that will mean more pop-ups.” The best we can come up with is that since A.B. 375 would require Internet providers to get your consent before sharing your data, maybe they think that if they constantly pester people with pop-ups, they’ll succeed in wearing people down until they give their consent. If that’s really what Comcast and Verizon are implying, then lawmakers should understand the claim for what it really is: a threat to hold consumers hostage in the fight for online privacy. As with Lie #1, if big Internet providers have a better explanation, we challenge them to provide it publicly. As an aside, it’s worth noting that, if anything, A.B. 375 will likely result in fewer pop-ups, not to mention fewer intrusive ads during your everyday browsing experience. That’s because A.B. 375 will prevent Internet providers from using your data to sell ads they target to you without your consent—which means they’ll be less likely to insert ads into your web browsing, as some Internet providers have done in the past.
Lie #3: A.B. 375 Will Expose You to Hackers
Not only are opponents of A.B. 
375 so desperate that they’re making stuff up (see Lie #2 above), they’re also trying to scare lawmakers into thinking that A.B. 375 will do the opposite of what it really does. In particular, they’re claiming that it will expose consumers to hackers. Of course, big Internet providers and their allies won’t explain how this would happen—even when we’ve asked them politely for a direct explanation. Let’s set the record straight. Contrary to the FUD Comcast, AT&T, Verizon, and their allies are spreading, A.B. 375 will make it less likely that your information can be targeted by privacy thieves, and will make it harder for hackers to target you online. As we explained back in March of 2017, before Congress killed the FCC’s privacy rules: In order for Internet providers to make money off your browsing history, they first have to collect that information—what sort of websites you’re browsing, metadata about whom you’re talking to, and maybe even what search terms you’re using. Internet providers will also need to store that information somewhere, in order to build up a targeted advertising profile of you… [But] Internet providers haven’t exactly been bastions of security when it comes to keeping information about their customers safe. Back in 2015, Comcast had to pay $33 million for unintentionally releasing information about customers who had paid Comcast to keep their phone numbers unlisted. “These customers ranged from domestic violence victims to law enforcement personnel”, many of whom had paid for their numbers to be unlisted to protect their safety. But Comcast screwed up, and their phone numbers were published anyway. And that was just a mistake on Comcast’s part, with a simple piece of data like phone numbers, [which wasn’t even triggered by an outside attack]. Imagine what could happen if hackers decided to [actively] target the treasure trove of personal information Internet providers start collecting. People’s personal browsing history and records of their location could easily become the target of foreign hackers who want to embarrass or blackmail politicians or celebrities. To make matters worse, FCC Chairman (and former Verizon lawyer) Ajit Pai recently halted the enforcement of a rule that would require Internet providers to “take reasonable measures to protect customer [personal information] from unauthorized use, disclosure, or access”—so Internet providers won’t be on the hook if their lax security exposes your data. With A.B. 375, the scenario described above is much less likely, because Internet providers won’t have as much incentive to collect your data in the first place. The logic is simple: no treasure trove of data, no target for hackers; no target for hackers, nothing for them to expose. But the benefits of A.B. 375 go beyond reducing the risk of identity theft to consumers. A.B. 375 will also help reduce consumers’ exposure to dangerous cyber-attacks. That’s because many of the ways big Internet providers want to monetize your data have a side-effect of reducing your security online, including:
· A standard called Explicit Trusted Proxies, proposed by Internet providers, which would allow your Internet provider to intercept your data, remove the encryption, read the data (and maybe even modify it), and then encrypt it again and send it on its way. The cybersecurity problem? 
According to a recent alert by US-CERT, an organization dedicated to computer security within the Department of Homeland Security, many of the systems designed to decrypt and then re-encrypt data actually end up weakening the security of the encryption, which exposes users to increased risk of cyber-attack. In fact, a recent study found that more than half of the connections that were intercepted (i.e. decrypted and re-encrypted) ended up with weaker encryption.
· Inserting ads into your browsing. Here we’re talking about your Internet provider placing additional ads in the webpages you view (beyond the ones that were already placed there by the publisher). Why is this dangerous? Because inserting new code into a webpage in an automated fashion could break the security of the existing code in that page. As security expert Dan Kaminsky put it, inserting ads could break “all sorts of stuff, in that you no longer know as a website developer precisely what code is running in browsers out there. You didn't send it, but your customers received it.” In other words, security features in sites and apps you use could be broken and hackers could take advantage of that—causing you to do anything from send your username and password to them (while thinking it was going to the genuine website) to install malware on your computer.
· Pre-installing spyware on your mobile phone. In the past, Internet providers have installed spyware like Carrier IQ on phones, claiming it was only to “improve wireless network and service performance.” So where’s the cybersecurity risk? As we’ve explained before, part of the problem with Carrier IQ was that it could be configured to record sensitive information into your phone’s system logs. But some apps transmit those logs off of your phone as part of standard debugging procedures, assuming there’s nothing sensitive in them. As a result, “keystrokes, text message content and other very sensitive information [was] in fact being transmitted from some phones on which Carrier IQ is installed to third parties.” Depending on how that information was transmitted, eavesdroppers could also intercept it—meaning hackers might be able to see your username or password, without having to do any real hacking.
The common thread in all three of these cybersecurity risks is that the strongest reason an Internet provider would have for introducing them is to make money by collecting your data, selling it, and using it to target ads at you. A.B. 375 would remove that motivation. If A.B. 375 passes, Internet providers won’t have any reason to weaken your security in order to collect your data or insert ads into your web browsing. Privacy and security are two sides of the same coin, and when you strengthen one you strengthen the other. That’s why we need to do everything we can to make sure A.B. 375 passes the California legislature. Please, if you live in California, call your state legislator today and tell them not to believe the lies Comcast, AT&T, Verizon, and their allies are spreading. Tell them to support A.B. 375. TAKE ACTION TELL YOUR REPRESENTATIVES TO SUPPORT ONLINE PRIVACY.
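On the interception point under Lie #3 above: one way to see what a decrypt-and-re-encrypt middlebox does to a connection is to check the TLS version and cipher your client actually negotiated. Here is a short sketch using only Python's standard library; the hostname is just an example:

```python
# Sketch: report the TLS parameters actually negotiated with a server.
# If a middlebox intercepts and re-encrypts the traffic, the values seen
# here reflect the middlebox's crypto, which the US-CERT alert cited
# above found is often weaker than what the origin server supports.
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            name, _proto, bits = tls.cipher()
            print(f"{host}: {tls.version()}, {name} ({bits}-bit)")

inspect_tls("www.eff.org")
```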

Data Protection Measure Removed from the California Values Act (Mi, 13 Sep 2017)
Shortly after the November election, human rights groups joined California Senate President Pro Tem Kevin de León in introducing a comprehensive bill to protect data collected by the government from being used for mass deportations and religious registries. S.B. 54, known as the California Values Act, also included a sweeping measure requiring all state agencies to reevaluate their privacy policies and to collect only the minimum amount of data they need to offer services. EFF was an early and strong supporter of the bill; our campaign generated more than 750 emails from our supporters. Over subsequent months, the bill was split into multiple pieces of legislation. The ban on using data to create religious registries was moved to S.B. 31, the California Religious Freedom Act, while the general protections morphed into a data privacy bill, S.B. 244, both authored by Sen. Ricardo Lara. S.B. 54 became an important set of measures designed to deal exclusively with immigrant rights by limiting California law enforcement’s cooperation with U.S. Immigration and Customs Enforcement and other immigration authorities. These provisions include stopping local law enforcement officials from inquiring about immigration status, keeping people in custody on immigration “holds,” and using immigration agents as interpreters—all measures that would help protect our immigrant family members, neighbors, coworkers, and friends from persecution. The bill originally would also have created a firewall between data collected by California law enforcement and federal immigration authorities. This piece was key to EFF’s support of the bill, but we’ve sadly seen it weakened over the course of the legislative session. First, law enforcement successfully pressured the author to write an exemption for the California Law Enforcement Telecommunications System (CLETS), allowing ICE access to large databases of criminal justice information. As we have reported, CLETS is frequently abused by law enforcement and the system has woefully insufficient oversight. In the latest batch of amendments negotiated by Governor Jerry Brown’s office and de León, the remaining database restrictions were eliminated. A new section was added that would require the California Attorney General to develop “guidance” on ensuring that police limit the availability of their databases, "to the fullest extent practicable and consistent with federal and state law," for purposes of immigration enforcement. Such guidance might be valuable to state and local police agencies that want to avoid sharing their databases with immigration enforcers. But if the legislation passes, state and local police would merely be “encouraged” to adopt the guidance. EFF long supported this bill because it contained a “database firewall.” But with that measure gone, the nexus with digital issues has largely evaporated. The optional compliance with guidance is not enough to warrant EFF’s continued support, and so we are moving to a neutral position on the legislation. This means we are closing our online email campaign. EFF does not oppose S.B. 54. In fact, our analysis of the bill identifies many ways in which the bill will protect the rights of the many thousands of immigrants living in our communities. A large coalition of human rights organizations continue to fight for its passage. If you would like to continue to support the current version of S.B. 54, you can visit the ICE Out of CA coalition website or one of the ACLU’s action pages. EFF continues to support S.B. 31 and S.B. 
244, the spin-off bills on religious registries and data privacy, respectively. They would advance digital rights. EFF is disappointed that the digital rights provisions of S.B. 54 were cut from the bill. By asserting a neutral position, we hope to encourage the legislature to enact the “database firewall” next session. 

EFF, ACLU Sue Over Warrantless Phone, Laptop Searches at U.S. Border (Mi, 13 Sep 2017)
Lawsuit on Behalf of 11 Travelers Challenges Unconstitutional Searches of Electronic Devices
Boston, Massachusetts—The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) sued the Department of Homeland Security (DHS) today on behalf of 11 travelers whose smartphones and laptops were searched without warrants at the U.S. border. The plaintiffs in the case are 10 U.S. citizens and one lawful permanent resident who hail from seven states and come from a variety of backgrounds. The lawsuit challenges the government’s fast-growing practice of searching travelers’ electronic devices without a warrant. It seeks to establish that the government must have a warrant based on probable cause to suspect a violation of immigration or customs laws before conducting such searches. The plaintiffs include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. Several are Muslims or people of color. All were reentering the country from business or personal travel when border officers searched their devices. None were subsequently accused of any wrongdoing. Officers also confiscated and kept the devices of several plaintiffs for weeks or months—DHS has held one plaintiff’s device since January. EFF, ACLU, and the ACLU of Massachusetts are representing the 11 travelers. “People now store their whole lives, including extremely sensitive personal and business matters, on their phones, tablets, and laptops, and it’s reasonable for them to carry these with them when they travel. It’s high time that the courts require the government to stop treating the border as a place where they can end-run the Constitution,” said EFF Staff Attorney Sophia Cope. Plaintiff Diane Maye, a college professor and former U.S. Air Force officer, was detained for two hours at Miami International Airport when coming home from a vacation in Europe in June. “I felt humiliated and violated. I worried that border officers would read my email messages and texts, and look at my photos,” she said. “This was my life, and a border officer held it in the palm of his hand. I joined this lawsuit because I strongly believe the government shouldn’t have the unfettered power to invade your privacy.” Plaintiff Sidd Bikkannavar, an engineer for NASA’s Jet Propulsion Laboratory in California, was detained at the Houston airport on the way home from vacation in Chile. A U.S. Customs and Border Protection (CBP) officer demanded that he reveal the password for his phone. The officer returned the phone a half-hour later, saying that it had been searched using “algorithms.” Another plaintiff was subjected to violence. Akram Shibly, an independent filmmaker who lives in upstate New York, was crossing the U.S.-Canada border after a social outing in the Toronto area in January when a CBP officer ordered him to hand over his phone. CBP had just searched his phone three days earlier when he was returning from a work trip in Toronto, so Shibly declined. Officers then physically restrained him, with one choking him and another holding his legs, and took his phone from his pocket. They kept the phone, which was already unlocked, for over an hour before giving it back. “I joined this lawsuit so other people don’t have to go through what happened to me,” Shibly said. 
“Border agents should not be able to coerce people into providing access to their phones, physically or otherwise.”

The number of electronic device searches at the border began increasing in 2016 and has grown even more under the Trump administration. CBP officers conducted nearly 15,000 electronic device searches in the first half of fiscal year 2017, putting CBP on track to conduct more than three times as many searches as in fiscal year 2015 (8,503) and some 50 percent more than in fiscal year 2016 (19,033).

“The government cannot use the border as a dragnet to search through our private data,” said ACLU attorney Esha Bhandari. “Our electronic devices contain massive amounts of information that can paint a detailed picture of our personal lives, including emails, texts, contact lists, photos, work documents, and medical or financial records. The Fourth Amendment requires that the government get a warrant before it can search the contents of smartphones and laptops at the border.”

Below is a full list of the plaintiffs:

·      Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student.
·      Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients.
·      Sidd Bikkannavar is an optical engineer for NASA’s Jet Propulsion Laboratory in California.
·      Jeremy Dupin is a journalist living in Boston.
·      Aaron Gach is an artist living in California.
·      Isma’il Kushkush is a journalist living in Virginia.
·      Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida.
·      Zainab Merchant, from Florida, is a writer and a graduate student at Harvard University.
·      Akram Shibly is a filmmaker living in New York.
·      Matthew Wright is a computer programmer in Colorado.

The case, Alasaad v. Duke, was filed in the U.S. District Court for the District of Massachusetts.

For the complaint: https://www.eff.org/document/alasaad-v-duke-complaint
For more on this case and plaintiff profiles: https://www.eff.org/cases/alasaad-v-duke
For more on digital security at the border: https://www.eff.org/wp/digital-privacy-us-border-2017

Tags: Border Searches

Contact:
Sophia Cope, Staff Attorney, sophia@eff.org
Adam Schwartz, Senior Staff Attorney, adam@eff.org
Josh Bell, ACLU Media Strategist, media@aclu.org

California Broadband Privacy Bill Heads for Final Vote This Friday (Di, 12 Sep 2017)
Huge news for broadband privacy! A California bill that would restore many of the privacy protections that Congress stripped earlier this year is headed for a final vote this Friday. The bill, A.B. 375, had languished in the Senate Rules Committee due to the efforts of AT&T, Comcast, and Verizon to deny a vote. But constituents called and emailed their representatives, and reporters started asking questions. The overwhelming public support for privacy has so far counteracted the lobbying by telecommunications companies, which will spare no expense to keep the gift handed to them by Congress and the Trump administration. The legislation is now one step away from landing on the governor’s desk, with final votes set for September 15, the last day of session.

If you live in California, you have 72 hours remaining to call your state Assemblymember and state Senator and tell them to vote AYE on A.B. 375 this Friday.

The Battle Behind the Scenes

Despite widespread public support, A.B. 375 has faced significant procedural hurdles behind the scenes in Sacramento, in part because the bill was introduced late in the legislative cycle in response to the Congressional vote. The bill emerged victorious from two Senate committees with strong votes in July before getting stuck in the Senate Rules Committee in the weeks that followed. Recognizing that they would not beat this bill in a vote, the ISP industry players that opposed it (note that many ISPs in California support A.B. 375) opted to run out the clock. On Tuesday, however, Senate Rules Committee Chair and Senate Leader Kevin de Leon decided to move the bill for a vote, and his committee approved discharging the bill to the Senate floor. Following that step, the California Senate voted 25 to 13 to make it eligible for a final vote. These procedural steps are necessary because, under Proposition 54 (approved by voters in 2016), bills must be on public display for 72 hours before a final vote.

The bill’s final version continues to mirror the now-repealed FCC broadband privacy rule. As it stands, the bill would effectively return to California consumers control over the personal data their ISP obtains through providing broadband service. That means your browsing history, the applications you use, and your location when you use the Internet would be firmly controlled by you. If A.B. 375 is enacted, ISPs must obtain your permission before they can resell that data or share it with third parties for purposes beyond providing broadband access services. Furthermore, the bill expands consumer protection beyond the original FCC rule by banning pay-for-privacy practices, such as AT&T’s effort to charge people $30 more for broadband if they did not surrender their private information. While AT&T dropped that plan once the FCC began updating the privacy rules in 2015, the industry’s successful lobbying campaign to repeal the rule cleared the way to roll the scheme back out. If A.B. 375 becomes law, though, residents of this state will never face what was essentially a privacy tax.

The stage is set for a final vote this Friday, the last day of session for the California legislature, so that the bill can be sent to the Governor’s office this year. Speak up now to demand that the legislature put people’s privacy over ISP profits.

Take Action

Tell your representatives to support online privacy.

With iOS 11, More Options to Disable Touch ID Means Better Security (Di, 12 Sep 2017)
When iOS 11 is released to the public next week, it will bring a new feature with big benefits for user security. Last month, some vigilant Twitter users on the iOS 11 public beta discovered a new way to quickly disable Touch ID: just tap the power button five times. This is good news for users, particularly those who may be in unpredictable situations with physical security concerns that change over time.

The newly uncovered feature is simple. Tapping an iPhone’s power button rapidly five times brings up an option to dial 9-1-1. After a user reaches that emergency services screen, Touch ID is temporarily disabled until they enter a passcode. In other words, you can call emergency services without unlocking the phone—but then your fingerprint can’t unlock it. This is a big improvement on previously known and relatively clunky methods for disabling Touch ID, including restarting the phone, swiping a different finger five times to force a lock-out, or navigating through settings to disable it manually.

Not About Law Enforcement

While there is some speculation that this feature is intended to defeat law enforcement—with some going as far as to call it a “cop button”—it is, at its core, a common-sense security feature. The option to disable Touch ID quickly and inconspicuously is helpful for any user who needs more choices and flexibility in their physical security. Think about all the situations where a user might be worried that someone will unexpectedly force them to unlock their phone: a mugging, domestic abuse from a partner or parent, physical harassment or stalking, bullying. Even the fact that the feature is activated alongside an option to quickly call 9-1-1 links it to a whole range of emergency situations in which law enforcement is not already present.

Constitutional Bonus

Even though this new feature is not aimed at law enforcement, it brings a potentially unintended side effect: using Touch ID no longer means giving up your Fifth Amendment rights. The Fifth Amendment, of course, gives us the right to remain silent in interactions with the government. In legalese, we say that it provides us a right to be free from compelled self-incrimination. EFF has long argued that the Fifth Amendment protects us from having to turn over our passwords. But the government, and a number of digital law scholars such as EFF Special Counsel Marcia Hofmann, have suggested that our fingerprints may not have such protection, and some courts have agreed. With this new feature, we no longer have to choose between maintaining our Fifth Amendment right to refuse to unlock our phones and the convenience of Touch ID. We call on other manufacturers to follow Apple’s lead and implement this kind of design in their own devices.

FCC Chair’s “Chat” With Tech Execs Draws Protest (Mo, 11 Sep 2017)
This Tuesday, FCC Chairman Ajit Pai will visit the Bay Area, supposedly for a “fireside chat” with tech executives about bridging the digital divide for underserved communities. But Chairman Pai’s brief tenure to this point has been defined by actions that undermine digital rights, such as seeking to rescind the 2015 Open Internet Order, which protects net neutrality via light-touch regulations to ensure equal opportunity online.

In some respects, Chairman Pai’s stance should surprise no one. Before joining the FCC, he long worked as a lawyer advocating for the industry he is now charged with regulating. According to the New York Times:

Since Mr. Pai’s appointment in January by President Trump, their lobbyists have flooded the agency and the offices of Congress, pushing for an unwinding of rules that they say hamper their businesses…. Mr. Pai has been an active figure in the Trump administration’s quest to dismantle regulations. He froze a broadband subsidy program for low-income households, eased limits on television station mergers and eased caps on how much a company like AT&T or Comcast can charge another business to get online.

Pai’s appearance in San Francisco will prompt protest, as his proposal is overwhelmingly opposed by the public, including both Democrats and Republicans. Outside the location at which he’ll meet with tech executives, EFF and a number of allied organizations (including the Center for Media Justice, ACLU of Northern California, The Greenlining Institute, CREDO, 18 Million Rising, the Media Alliance, Tech Workers Coalition, and more) will host a rally to which all are welcome.

As explained by Tracy Rosenberg from the Media Alliance:

The open Internet has provided connection and community across boundaries and distance, allowed alternative music, art and information to find its audience, allowed small businesses and startups to find their customers and allowed activists to organize online to talk back to their government. We need to keep the Internet accessible, open and uncensored. Title II net neutrality regulates the Internet as what it is—a vital utility and a public good that belongs to all of us.

Describing the issue as ultimately implicating “our freedom to connect,” Cayden Mak from 18 Million Rising noted that “so many of our essential rights and freedoms are under attack right now… our free and open internet is one of them.”

The Center for Media Justice put it bluntly: “Our communities depend on a free and open internet to innovate, organize for racial justice, and communicate. With people of color, queer and trans folks, and other marginalized communities at risk, our fight for democracy depends on our ability to connect with one another without censorship or interference.”

The Internet has developed into a diverse and innovative platform thanks in large part to the requirement that Internet providers treat data equally, without discriminating between data from one source versus another. This neutrality has been a defining cornerstone of the Internet’s architecture since its early days. Both innovation and dissent rely on Internet users—not the company providing them bandwidth—being in control over what they read and say online. If those companies are allowed to play favorites, or to hold their customers hostage to demand tolls from those who want to reach them, opportunities for both job creation and meaningful dissent will predictably wither. We can’t let that happen, and neither can you.
Start now by raising your voice online to share your concerns with your members of Congress, then join us in the streets on Tuesday. If you’re looking for an ongoing way to make a difference, gather a handful of neighbors or friends who live in the same town and join the Electronic Frontier Alliance. The fight to save net neutrality will take all of us. 

Stop SESTA: Amendments to Federal Criminal Sex Trafficking Law Sweep Too Broadly (Sa, 09 Sep 2017)
EFF opposes the Senate’s Stop Enabling Sex Traffickers Act (S. 1693) (“SESTA”) and its House counterpart, the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865). Not only would both bills eviscerate the immunity from liability for user-generated content that Internet intermediaries have under Section 230, they would also amend the federal criminal sex trafficking statute to sweep in companies that may not even be aware of what their users are doing.

As we recently explained, Section 230 has always had an express exemption for federal criminal law, meaning that Internet intermediaries can be prosecuted in federal court. Thus, federal prosecutors have always been able to use the federal criminal sex trafficking statute (18 U.S.C. § 1591) to go after online platforms without running into Section 230 immunity. SESTA and its House counterpart would amend Section 1591 to expand federal criminal liability for Internet intermediaries—increasing the ways they may be on the hook for what are essentially the crimes of their users. Not only are these changes unnecessary in light of current law, such an expansion of intermediary liability would undermine the online free speech and innovation that all Internet users have come to expect and enjoy.

Congress Already Gave Federal Prosecutors the Ability to Target Culpable Online Platforms

With the SAVE Act of 2015, Congress amended Section 1591 to make “advertising” sex trafficking a crime. Congress intended to target both the pimps who post sex trafficking ads and the online platforms that host such ads—in particular, classified ad websites like Backpage.com. We have not yet seen a prosecution under the SAVE Act, but it has been reported that prosecutors have empaneled a federal grand jury in Arizona to investigate Backpage.com. Now Congress wants to further expand federal criminal liability under Section 1591 in dangerous ways—without proof that such an expansion is necessary.

Currently, given the 2015 amendments, Section 1591 can be read as prohibiting two main crimes.

First, it is a crime for a person or entity to “advertise” sex trafficking or to benefit financially from “participation in a venture” that has engaged in advertising sex trafficking, knowing that an ad reflects a sex trafficking situation. Added by the SAVE Act, this crime was intended to apply only to the culpable hosts of online sex trafficking ads—those individuals or companies who, in fact, know that the ads are for sex trafficking.

Second, it is a crime for a person or entity to engage in certain activities (other than advertising) related to sex trafficking or to benefit financially from “participation in a venture” that has engaged in certain activities related to sex trafficking. The statute lists the activities for which criminal liability attaches (specifically, if a person: recruits, entices, harbors, transports, provides, obtains, maintains, patronizes, or solicits). For this second set of crimes, the statute permits a lower standard for the defendant’s state of mind: liability attaches to a person or entity who engages in these activities, or benefits financially from “participation in a venture” that engages in these activities, knowing or in reckless disregard of the fact that sex trafficking is involved. Thus, individuals or companies need not, in fact, know that a “venture” involves sex trafficking. Rather, if they were aware of a risk of sex trafficking and were “reckless” in dismissing that risk, they would be criminally liable.
Congress assigned the higher “knowledge” standard to advertising sex trafficking in the SAVE Act in light of civil libertarians’ concerns that attaching criminal liability to advertising implicates First Amendment rights.

SESTA Would Dangerously Expand Federal Criminal Liability to Encompass Innocent Online Platforms

The Senate bill would amend Section 1591 by further defining “participation in a venture” to include any activity that “assists, supports, or facilitates” sex trafficking. Therefore, the Senate bill creates a third crime under Section 1591(a)(2): it is a crime for a person or entity to benefit financially from “participation in a venture” that has assisted, supported, or facilitated sex trafficking, knowing or in reckless disregard of the fact that sex trafficking is involved.

There are two problems with this amendment to Section 1591(a)(2). (The House bill has similar amendments.)

First, the words “assists, supports, or facilitates” are extremely vague and broad. Courts have interpreted “facilitate” in the criminal context simply to mean “to make easier or less difficult,” as in using a phone to help “facilitate” a drug deal. A huge swath of innocuous intermediary products and services would fall within these newly prohibited activities, given that online platforms by their very nature make communicating and publishing “easier or less difficult.”

Second, persons or entities would be criminally liable under the bill’s vague and broad terms even if they do not actually know that sex trafficking is happening—much less intend to assist in sex trafficking. This would expose innocent individuals and companies to federal criminal liability should their products or services be misused by sex traffickers.

This reasonable reading of SESTA carries dangerous implications for all Internet intermediaries, not just classified ad websites like Backpage.com, as well as for brick-and-mortar companies. Any company in the chain of online content distribution—whether ISPs, web hosting companies, websites, search engines, email and text messaging providers, or social media platforms—would be swept up by these amendments to Section 1591. All of these companies come into contact with user-generated content—whether ads, emails, text messages, or social media posts—some of which might involve sex trafficking. And all of these services can be said to “assist, support, or facilitate” sex trafficking. For example, should a messaging app be used by the perpetrators of sex trafficking to communicate with each other, a federal prosecutor could argue that such a service assisted, supported, or facilitated sex trafficking. Thus, all of these companies would be criminally liable under Section 1591 if a jury concludes—not that the companies actually knew their services were “facilitating” sex trafficking—but that they were “reckless” in dismissing a risk of sex trafficking that they were aware of in a particular case.

Additionally, the new federal criminal liability in Section 1591 created by SESTA would not be limited to online platforms, given that Section 1591 currently applies to “whoever” participates in a venture. Thus, on the face of the bill, any individual or company that “assists, supports, or facilitates” sex trafficking, in reckless disregard of the fact that sex trafficking is happening, is open to federal criminal liability.
While perhaps not Congress’ intent, this language could swallow up an endless list of companies who may not, in fact, be aware of what their customers are doing. For example, if a sex trafficker used a legitimate package delivery service or bank in the course of his illicit dealings, would those entities have “facilitated” sex trafficking? In summary, just because Internet intermediaries cannot invoke Section 230 immunity when faced with liability under federal criminal law, it does not follow that the federal criminal sex trafficking statute should be further amended—beyond what the SAVE Act did—to sweep in what may be innocent Internet intermediaries and hold them responsible for the sex trafficking crimes of their users. Section 1591—as amended two years ago—gives the U.S. Department of Justice more than enough leeway to prosecute culpable online platforms for their role in sex trafficking. Visit our STOP SESTA campaign page and tell Congress to reject S. 1693 and H.R. 1865!

California Legislature Defangs Transparency Bill (Fr, 08 Sep 2017)
The Electronic Frontier Foundation has pulled its support of a state bill to strengthen the California Public Records Act after the legislature gutted its most important reform: allowing courts to levy penalties against agencies that knowingly impede the public’s right to access information.

A.B. 1479 had received near unanimous support when it was passed by the state Assembly and through the committee process in the Senate. Nevertheless, the legislature passed up the opportunity to come together in favor of sunlight and instead reduced the bill to requiring agencies to appoint a “custodian of public records,” a practice already employed by many agencies, including most municipalities through their city clerks.

Here’s what we wrote in our letter announcing our new, neutral position:

The latest amendments to A.B. 1479 remove the provisions that would have allowed courts to levy fines against agencies that frustrate the public’s right to access records. Experience from other states has consistently shown that one way to meaningfully enforce these laws is by creating penalties for agencies that willfully disregard their legal duties. The remaining provisions in A.B. 1479 would change the CPRA only slightly by requiring agencies to designate a custodian of public records. We do not oppose this proposal if it passes, but we do not believe that any member of the legislature should count it as a victory for transparency. Naming a point person for public records is a light reform that presents little burden for agencies and may be redundant in many jurisdictions. In our experience with filing records requests in California, most agencies already do identify a FOIA contact. In cities, this role is already fulfilled by city clerks. But even this basic “custodian of records” provision has a five-year trial period under the legislation.

The extreme watering down of the bill illustrates how little value the legislature places on enforcing Californians’ constitutional right “to information concerning the conduct of the people’s business” and the requirement that “the writings of public officials and agencies shall be open to public scrutiny.” Therefore, we cannot give our imprimatur to such a failure of accountability. We believe our energy is better spent advocating for enforcement “teeth,” along with other efforts that would expand the types of records available to the public and adapt the CPRA to match the changing technologies used for maintaining government records.

We thank the bill’s author, Assemblymember Rob Bonta, for his courage in moving the legislation, and we urge him not to give up on this measure and to pursue it again next session.

Stop SESTA: Congress Doesn’t Understand How Section 230 Works (Do, 07 Sep 2017)
As Congress considers undercutting a key law that protects online free speech and innovation, sponsors of the bills don’t seem to understand how Section 230 (47 U.S.C. § 230) works. EFF opposes the Senate’s Stop Enabling Sex Traffickers Act (S. 1693) (“SESTA”) and its House counterpart, the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865). These bills would roll back Section 230, one of the most important laws protecting online free speech and innovation.

Section 230 generally immunizes Internet intermediaries from legal liability for hosting user-generated content. Many websites or services that we rely on host third-party content in some way—social media sites, photo and video-sharing apps, newspaper comment sections, and even community mailing lists. This content sometimes creates legal risk, as when users make defamatory statements about others. With Section 230, the actual speaker is at risk of liability—but not the website or blog. Without Section 230, these intermediaries would have an incentive to review every bit of content a user wanted to publish to make sure that the content would not be illegal or create a risk of legal liability—or to stop hosting user content altogether.

But according to one of SESTA’s sponsors, Sen. Rob Portman (R-Ohio), EFF and other groups’ concerns are overblown. In a recent floor speech, Sen. Portman said:

They have suggested that this bipartisan bill could impact mainstream websites and service providers—the good actors out there. That is false. Our bill does not amend, and thus preserves, the Communications Decency Act’s Good Samaritan provision. This provision protects good actors who proactively block and screen for offensive material and thus shields them from any frivolous lawsuits.

Sen. Portman is simply wrong that the bill would not impact “good” platforms. He’s also wrong about how Section 230’s “Good Samaritan” provision would continue to protect online platforms if SESTA were to become law, particularly because that provision is irrelevant to the massive potential criminal and civil liability that the bill would create for online platforms, including the good actors.

Section 230 Has Two Immunity Provisions: One Related to User-Generated Content and One Called the “Good Samaritan” Immunity

We want to be very clear here, because even courts get confused occasionally: Section 230 contains two separate immunities for online platforms.

The first immunity (Section 230(c)(1)) protects online platforms from liability for hosting user-generated content that others claim is unlawful. If Alice has a blog on WordPress, and Bob accuses Clyde of having said something terrible in the blog’s comments, Section 230(c)(1) ensures that neither Alice nor WordPress is liable for Bob’s statements about Clyde.

The second immunity (Section 230(c)(2)) protects online platforms from legal challenges brought by their own users when the platforms filter, remove, or otherwise edit those users’ content. In the context of the above example, Bob can’t sue Alice if she unilaterally takes down Bob’s comment about Clyde. This provision explicitly states that the immunity is premised on actions the platforms take in “good faith” to remove offensive content, even if that content may be protected by the First Amendment. This second provision is what Sen. Portman called the “Good Samaritan” provision. (Law professor Eric Goldman has a good explainer about Section 230(c)(2).)
When EFF and others talk about the importance of Section 230, we’re talking about the first immunity, Section 230(c)(1), which protects platforms in their role as hosts of user-generated content. As described above, Section 230(c)(1) generally prevents people who are legally wronged by user-generated content hosted on a platform (for example, defamed by a tweet) from suing the platform. Importantly, Section 230(c)(1) contains no “good faith” or “Good Samaritan” requirement. Rather, Section 230(c)(1) provides platforms with immunity based solely on how they function: if providers offer services that enable their users to post content, they are generally shielded from liability that may result from that content. Full stop. Platforms’ motives in creating or running their services are thus irrelevant to whether they receive Section 230(c)(1) immunity for user-generated content.

Section 230’s “Good Samaritan” Immunity Wouldn’t Protect Platforms From the Liability for User-Generated Content That SESTA Would Create

Sen. Portman’s comments suggest that the current proposals to amend Section 230 would not impact the law’s “Good Samaritan” provision found in Section 230(c)(2). That is debatable but beside the point, and it unnecessarily confuses the impact SESTA and its House counterpart would have on online free speech and innovation.

Sen. Portman’s comments are beside the point because SESTA would blow a hole in Section 230(c)(1)’s immunity by exposing online platforms to increased liability for user-generated content in two ways: 1) it would cease to protect platforms from prosecutions under state criminal law related to sex trafficking; and 2) it would cease to protect platforms from claims brought by private plaintiffs under both federal and state civil laws related to sex trafficking.

Section 230’s “Good Samaritan” immunity simply doesn’t apply to lawsuits claiming that user-generated content is illegal or harmful. Section 230(c)(2)’s “Good Samaritan” immunity is irrelevant for platforms seeking to defend themselves from the new claims based on user-generated content that SESTA would permit. In those newly possible criminal and civil cases, platforms would be unable to invoke Section 230(c)(1) as a defense, and Section 230(c)(2) would not apply. Sen. Portman’s comments thus betray a lack of understanding regarding how Section 230 protects online platforms.

Additionally, Sen. Portman implies that should SESTA become law, as long as platforms operate in “good faith,” they will not be liable should content related to sex trafficking appear on their sites. This is a dangerous misstatement of SESTA’s impact. Rather than recognizing that SESTA creates massive liability for all online platforms, Sen. Portman incorrectly implies that Section 230 currently separates good platforms from bad when it comes to which ones can be held liable for user-generated content.

Sen. Portman’s comments thus do little to alleviate the damaging consequences SESTA will have on online platforms. Moreover, they appear designed to mask some of the bill’s inherent flaws. For these reasons and others, visit our STOP SESTA campaign page and tell Congress to reject S. 1693 and H.R. 1865!

Defend Our Online Communities: Stop SESTA (Do, 07 Sep 2017)
A new bill is working its way through Congress that could be disastrous for free speech online. EFF is proud to be part of the coalition fighting back.

We all rely on online platforms to work, socialize, and learn. They’re where we go to make friends and share ideas with each other. But a bill in Congress could threaten these crucial online gathering places. The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it’s the wrong solution to a serious problem.

The Electronic Frontier Foundation, R Street Institute, and over a dozen fellow public interest organizations are joining forces to launch a new website highlighting the problems with SESTA. Together, we’re trying to send a clear message to Congress: Don’t endanger our online communities. Stop SESTA.

SESTA would weaken 47 U.S.C. § 230 (commonly known as “CDA 230” or simply “Section 230”), one of the most important laws protecting free expression online. Section 230 protects Internet intermediaries—individuals, companies, and organizations that provide a platform for others to share speech and content over the Internet. This includes social networks like Facebook, video platforms like YouTube, news sites, blogs, and other websites that allow comments. Section 230 says that an intermediary cannot be held legally responsible for content created by others (with a few exceptions). And that’s a good thing: it’s why we have flourishing online communities where users can comment and interact with one another without waiting for a moderator to review every post.

SESTA would change all of that. It would shift more blame for users’ speech to the web platforms themselves. Under SESTA, web communities would likely become much more restrictive in how they patrol and monitor users’ contributions. Some of the most vulnerable platforms would be the ones that operate on small budgets—sites like Wikipedia, the Internet Archive, and small WordPress blogs that play a crucial role in modern life but don’t have the massive budgets that Facebook and Twitter can spend on their own defense.

Experts in human trafficking say that SESTA is aiming at the wrong target. Alexandra Levy, adjunct professor of human trafficking and human markets at Notre Dame Law School, writes: “Section 230 doesn’t cause lawlessness. Rather, it creates a space in which many things — including lawless behavior — come to light. And it’s in that light that multitudes of organizations and people have taken proactive steps to usher victims to safety and apprehend their abusers.”

Please use our campaign site to tell your members of Congress: SESTA would strangle online communities. We need to stop it now.

Stop SESTA: Section 230 is Not Broken (Mi, 06 Sep 2017)
EFF opposes the Senate’s Stop Enabling Sex Traffickers Act (S. 1693) (“SESTA”) and its House counterpart, the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865), because they would open up liability for Internet intermediaries—the ISPs, web hosting companies, websites, and social media platforms that enable users to share and access content online—by amending Section 230’s immunity for user-generated content (47 U.S.C. § 230). While both bills have the laudable goal of curbing sex trafficking, including of minor children, they would greatly weaken Section 230’s protections for online free speech and innovation.

Proponents of SESTA and its House counterpart view Section 230 as a broken law that prevents victims of sex trafficking from seeking justice. But Section 230 is not broken. First, existing federal criminal law allows federal prosecutors to go after bad online platforms, like Backpage.com, that knowingly play a role in sex trafficking. Second, courts have allowed civil claims against online platforms—despite Section 230’s immunity—when a platform had a direct hand in creating the illegal user-generated content. Thus, before Congress fundamentally changes Section 230, lawmakers should ask whether these bills are necessary to begin with.

Why Section 230 Matters

Section 230 is the part of the Telecommunications Act of 1996 that provides broad immunity to Internet intermediaries from liability for the content that their users create or post (i.e., user-generated content or third-party content). Section 230 can be credited with creating today’s Internet—with its abundance of unique platforms and services that enable a vast array of user-generated content. Section 230 has provided the legal buffer online entrepreneurs need to experiment with new ways for users to connect online—and this is just as important for today’s popular platforms with billions of users as it is for startups.

Congress’ rationale for crafting Section 230 is just as applicable today as when the law was passed in 1996: if Internet intermediaries are not largely shielded from liability for content their users create or post—particularly given their huge numbers of users—existing companies risk being prosecuted or sued out of existence, and potential new companies may never enter the marketplace for fear of the same fate (or because venture capitalists fear it). This massive legal exposure would dramatically change the Internet as we know it: it would thwart not only innovation in online platforms and services, but free speech as well. As companies fold or fail to launch in the first place, the ability of all Internet users to speak online would be disrupted.

The companies that remain may act in ways that undermine the open Internet. They may act as gatekeepers, preventing whole accounts from being created in the first place and pre-screening content before it is even posted. Or they may over-censor already posted content pursuant to very strict terms of service, in order to avoid any user-generated content on their platforms and services that could get them into criminal or civil hot water. Again, this would be a disaster for online free speech. The current proposals to gut Section 230 raise the exact same problems that Congress dealt with in 1996.
By guarding online platforms from being held legally responsible for what thousands or millions or even billions of users might say online, Section 230 has protected online free speech and innovation for more than 20 years. But Congress did not create blanket immunity. Section 230 reflects a purposeful balance that permits Internet intermediaries to be on the hook for their users’ content in certain carefully considered circumstances, and the courts have expanded upon these rules.

Section 230 Does Not Bar Federal Prosecutors From Targeting Criminal Online Platforms

Section 230 has never provided immunity to Internet intermediaries for violations of federal criminal law—like the federal criminal sex trafficking statute (18 U.S.C. § 1591). In 2015, Congress passed the SAVE Act, which amended Section 1591 to expressly include “advertising” as a criminal action. Congress intended to go after websites that host ads knowing that such ads involve sex trafficking. If these companies violate federal criminal law, they can be criminally prosecuted in federal court alongside their users who are directly engaged in sex trafficking. In a parallel context, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

By contrast, Section 230 does provide immunity to Internet intermediaries from liability for user-generated content under state criminal law. Congress deliberately chose not to expose these companies to criminal prosecutions in 50 different states for content their users create or post. Congress fashioned this balance so that federal prosecutors could bring culpable companies to justice while still ensuring that free speech and innovation could thrive online. However, SESTA and its House counterpart would expose Internet intermediaries to liability under state criminal sex trafficking statutes. Although EFF understands the desire of state attorneys general to have more tools at their disposal to combat sex trafficking, such an amendment to Section 230 would upend the carefully crafted policy balance Congress embodied in Section 230.

More fundamentally, it cannot be said that Section 230’s current approach to criminal law has failed. A Senate investigation earlier this year and a recent Washington Post article both uncovered information suggesting that Backpage.com not only knew that its users were posting sex trafficking ads to its website, but that the company also took affirmative steps to help those ads get posted. Additionally, it has been reported that a federal grand jury has been empaneled in Arizona to investigate Backpage.com. Congress should wait and see what comes of these developments before it exposes Internet intermediaries to additional criminal liability.

Civil Litigants Are Not Always Without a Remedy Against Internet Intermediaries

Section 230 provides immunity to Internet intermediaries from liability for user-generated content under civil law—whether federal or state civil law. Again, Congress made this deliberate policy choice to protect online free speech and innovation.
Congress recognized that exposing companies to civil liability would put the Internet at risk even more than criminal liability because: 1) the standard of proof in criminal cases is “beyond a reasonable doubt,” whereas in civil cases it is merely “preponderance of the evidence,” making it more likely that a company will lose a civil case; and 2) criminal prosecutors, as agents of the government, tend to exercise more restraint in filing charges, whereas civil litigants often exercise less restraint in suing other private parties, making it more likely that a company will be sued in the first place for third-party content.

However, Section 230’s immunity against civil claims is not absolute. The courts have interpreted this civil immunity as creating a presumption that plaintiffs can rebut if they have evidence that an Internet intermediary did not simply host illegal user-generated content, but also had a direct hand in creating it. In a seminal 2008 decision, the U.S. Court of Appeals for the Ninth Circuit in Fair Housing Council v. Roommates.com held that a website that helped people find roommates violated fair housing laws by “inducing third parties to express illegal preferences.” The website had required users to answer profile questions related to personal characteristics that may not be used to discriminate in housing (e.g., gender, sexual orientation, and the presence of children in the home). Thus, the court held that the website lost Section 230 civil immunity because it was “directly involved with developing and enforcing a system that subjects subscribers to allegedly discriminatory housing practices.” Although EFF is concerned with some of the implications of the Roommates.com decision and its potential to chill online free speech and innovation, it is the law.

Thus, even without new legislation, victims of sex trafficking may bring civil cases against websites or other Internet intermediaries under the federal civil cause of action (18 U.S.C. § 1595), and overcome Section 230 civil immunity if they can show that the websites had a direct hand in creating ads for illegal sex. As mentioned above, a Senate investigation and a Washington Post article both strongly indicate that Backpage.com would not enjoy Section 230 civil immunity today.

SESTA and its House counterpart would expose Internet intermediaries to liability under federal and state civil sex trafficking laws. Removing Section 230’s rebuttable presumption of civil immunity would, as with the criminal amendments, disrupt the carefully crafted policy balance found in Section 230. Moreover, victims of sex trafficking can already bring civil suits against the pimps and “johns” who harmed them, as these cases against the direct perpetrators do not implicate Section 230.

Therefore, the bills’ amendments to Section 230 are not necessary—because Section 230 is not broken. Rather, Section 230 reflects a delicate policy balance that allows the most egregious online platforms to bear responsibility along with their users for illegal content, while generally preserving immunity so that free speech and innovation can thrive online. By dramatically increasing the legal exposure of Internet intermediaries for user-generated content, these bills pose a real risk to the Internet as we know it. Visit our STOP SESTA campaign page and tell Congress to reject S. 1693 and H.R. 1865!

The Privacy Countdown is On: California's Legislature Has Days to Decide to Protect Your Personal Data from Big Telecom (Mi, 06 Sep 2017)
California lawmakers have until Sept. 15 to decide whose side they’re on: broadband consumers like you, or giant cable and telephone companies like Comcast, AT&T, and Verizon. The matter at hand: A.B. 375, legislation from Assemblymember Ed Chau that would restore many of the privacy protections that Congress stripped earlier this year despite roaring opposition from constituents. The bill would require broadband providers to obtain your permission before selling your data, such as the websites you visit.

All things being equal, this should be a no-brainer for the Democratic-controlled California legislature. In Congress, the repeal was opposed unanimously by Democrats, who were joined by 15 Republicans. So what gives? Nor would Republican legislators benefit from opposing the bill: 75 percent of Republican voters believed the President should have vetoed the repeal. No reason remains to deny a vote on A.B. 375 other than to protect the interests of the nation’s largest ISPs, which deploy a small army of lobbyists to Sacramento. That’s why we need California Internet users to join us right now in calling and emailing legislators to demand action on A.B. 375.

Take Action

Tell your representatives to support online privacy.

Here Is Where Things Stand Today

In July, the California Senate committee that oversees utilities, as well as the Senate Judiciary Committee, approved the bill by wide margins after the author, Assemblymember Chau, promised to work with Senate leadership to amend the bill to mirror the FCC rules. Those amendments were effectively completed in late August. The strongest opposition to the bill continues to come from the vertically integrated ISPs, who threw everything but the kitchen sink at the bill and still lost big when it came down to committee votes.

It’s important to note that not all ISPs oppose A.B. 375. Supporters of the bill include large regional players like Sonic and local ISPs who genuinely support a law prohibiting the resale of their customers’ information without first obtaining their permission. Furthermore, the legislation has received the support of consumer privacy organizations across the board and wide support in the press. A recent national poll showed that 80 percent of voters, regardless of party affiliation, oppose ISPs selling their personal information without their permission.

But as of today, there is no indication that Senate leadership intends to allow the bill to move forward. It remains held in the Senate Rules Committee, which is led by Senate President pro Tem Kevin de Leon. While Sen. de Leon has given no public indication of whether he supports or opposes A.B. 375, his committee is the bottleneck, because the Senate cannot even vote on the bill until the committee moves it. We hope that the bill will be discharged soon so that California legislators can do what their constituents want and vote to protect our privacy. Without a vote, the major ISPs will take the victory they obtained from the Trump administration and Congress earlier this year—the repeal of federal broadband privacy rights under FCC rules—one step further by preventing the state legislature from restoring those rights in California.

Very little time remains before the clock runs out. Take action now to demand an up-or-down vote on A.B. 375.

Take Action

Tell your representatives to support online privacy.

EFF Calls on New York Court to Vacate Unconstitutional Injunction Against Offensive Speech (Fr, 01 Sep 2017)
A court’s order preliminarily enjoining a website from publishing certain images and statements about a former governmental official is an unconstitutional prior restraint and must be rescinded, EFF argued in an amicus brief filed yesterday in the New York state appellate court.

The case, Brummer v. Wey, is a dispute between Christopher Brummer, a Georgetown law professor and former presidential nominee to the Commodity Futures Trading Commission, and the online publication The Blot. Several articles published on The Blot were highly critical of Brummer’s actions as an appeals adjudicator of decisions of the Financial Industry Regulatory Authority, particularly those in which he affirmed the lifetime ban of two African American brokers. The articles, consistent with other content on The Blot, used highly charged and hyperbolic language, including characterizing Brummer’s actions as a “lynching” and posting images of Jim Crow-era lynchings.

The lawsuit is a bit of a procedural morass. Brummer sued Benjamin Wey, whom he apparently believes wrote the articles, for defamation and intentional infliction of emotional distress. Brummer then sought a preliminary injunction that would require the removal from The Blot of “photographs or other images and statements” “depicting or encouraging lynching” or “incitement of violence” against Brummer, and would further enjoin Wey from posting any images encouraging lynching “in association” with Brummer or saying “anything further concerning Professor Brummer on any traditional or online media.” In June 2017, the court entered a preliminary injunction that was even broader than what Brummer had requested, enjoining Wey from “posting any articles about the Plaintiff on The Blot for the duration of this action” and ordering the removal from The Blot of “all articles they have posted about or concerning Plaintiff.”

Wey promptly appealed the entry of the preliminary injunction and moved the appellate division for a stay of the injunction pending the appeal. A single justice of the appellate division granted an interim stay. But the full panel of the court revised the stay and reinstated the portions of the preliminary injunction that required Wey to “remove all photographs or other images and statements from websites under defendants’ control which depict or encourage lynching; encourage the incitement of violence; or that feature statements regarding plaintiff that, in conjunction with the threatening language and imagery with which these statements are associated, continue to incite violence against the plaintiff.” Wey is now seeking permission to appeal this new preliminary injunction to the state’s highest court. We filed our brief in support of that request.

There are many things obviously wrong with the preliminary injunction: it was entered without the slightest evidentiary support amidst numerous material evidentiary disputes; it focuses on preventing incitement to violence even though the complaint primarily pleads a defamation case; it accepts that the lynching photos are threatening to Brummer even though the articles accuse Brummer of lynching others; it does not specify exactly what statements are prohibited; and on and on.
But our amicus brief, like those we have recently filed in similar cases in Texas and the Seventh Circuit, focused on the fact that orders requiring the takedown of online content are always prior restraints and will be unconstitutional except in the rare situation in which the highly demanding prior restraint test is met:

The injunction here is an unconstitutional prior restraint; it prohibits speech before there has been a full and final adjudication that the speech is not constitutionally protected, or in fact that the plaintiff is entitled to any remedy. It cannot withstand the rigorous First Amendment scrutiny due such orders.

Indeed, it is highly doubtful this injunction could be justified after a final adjudication. The long-held rule is that “equity will not enjoin a libel.” Injuries to “personal or professional reputation”—the harm Justice Mendez sought to prevent in entering the original preliminary injunction—are addressed by damages remedies. The richness of the English language and the myriad ways of expressing any given thought make it impossible for a trial court to craft an injunction against defamatory or offensive speech that is both effective and does not also bar the publication of protected speech. Even a permanent injunction limited to the exact words found to be actionable in one context might prohibit speech that would not be actionable in another. That the injunction here is a preliminary one issued before a full adjudication on the merits makes the prior restraint even more offensive to the First Amendment. Finally, this Court should reject any suggestion that the advent of Internet publication somehow undermines bedrock First Amendment protections. As a result, it should not allow the injunction in this case to go into effect.

Prior restraints should be rare. But takedown orders such as this one seem to be happening with greater frequency. We expect the New York Court of Appeals to nullify this one and remind other courts that the First Amendment rarely allows for speech injunctions.

Thanks to law student Delbert Tran for helping with the brief and Andrew Read of Woods Lonergan & Read PLLC for acting as our local counsel.

Innovative Police Transparency Measure Dies in California (Fr, 01 Sep 2017)
We are deeply disappointed to learn that a powerful surveillance transparency reform bill in California died in the Assembly Appropriations Committee today. S.B. 21 sought to hold police departments accountable by giving the public a voice in how law enforcement acquires and deploys new surveillance systems. The bill would have required California sheriffs, district attorneys, and state law enforcement agencies to craft surveillance use policies and hold public meetings before they acquire or use new surveillance equipment and software, as well as to publish such policies online.

Dave Maass, EFF investigative researcher, stated:

“We are in a political climate in which advocates for immigrant rights, reproductive rights, racial justice, and other social justice issues are facing increased scrutiny and pressure. Many of these groups may rightly fear police surveillance tools that are designed to safeguard the public being turned against peaceful activists engaged in First Amendment protected speech and assembly. As surveillance tools become cheaper, more widely deployed, and more sophisticated, the public has a right to know and debate what tools are being purchased and used by local police. S.B. 21 was a powerful check on police surveillance abuses, and we are disappointed that the committee failed to advance the bill. This is a blow for the privacy and civil liberties of all Californians.”

We thank everyone who spoke out in support of this bill, especially Sen. Jerry Hill and Sen. Steven Bradford. Many groups fought for this bill, including Asian Law Alliance, California Attorneys for Criminal Justice, California Civil Liberties Advocacy, California Public Defenders Association, Conference of California Bar Associations, Council on American-Islamic Relations California, Media Alliance, Oakland Privacy, and the San Jose Peace & Justice Center. We look forward to working with the bill sponsors to reintroduce it in a future session.

Stupid Patent of the Month: JP Morgan Patents Interapp Permissions (Do, 31 Aug 2017)
We have often criticized the Patent Office for issuing broad software patents that cover obvious processes. Instead of promoting innovation in software, the patent system places landmines for developers who wish to use basic and fundamental tools. This month’s stupid patent, which covers user permissions for mobile applications, is a classic example.

On August 29, 2017, the Patent Office issued U.S. Patent No. 9,747,468 (the ’468 patent) to JP Morgan Chase Bank, titled “System and Method for Communication Among Mobile Applications.” The patent covers the simple idea of a user giving a mobile application permission to communicate with another application. This idea was obvious when JP Morgan applied for the patent in June 2013. Even worse, it had already been implemented by numerous mobile applications. The Patent Office handed out a broad software monopoly while ignoring both common sense and the real world.

The full text of Claim 1 of the ’468 patent is as follows:

A method for a first mobile application and a second mobile application on a mobile device to share information, comprising:
the first mobile application executed by a computer processor on a mobile device determining that the second mobile application is present on the mobile device;
receiving, from a user, permission for the first mobile application to access data from the second mobile application;
the first mobile application executed by the computer processor requesting data from the second mobile application; and
the first mobile application receiving the requested data from the second mobile application.

That’s it. The claim simply covers having an app check to see if another app is on the phone, getting the user’s permission to access data from the second app, then accessing that data.

The ’468 patent goes out of its way to make clear that this supposed invention can be practiced on any kind of mobile device. The specification helpfully explains that “the invention or portions of the system of the invention may be in the form of a ‘processing machine,’ such as a general purpose computer, for example.” The patent also emphasizes that the invention can be practiced on any kind of mobile operating system and using applications written in any programming language.

How was such a broad and obvious idea allowed to be patented? As we have explained many times before, the Patent Office seems to operate in an alternate universe where the only evidence of the state of the art in software is found in patents. Indeed, the examiner considered only patents and patent applications when reviewing JP Morgan’s application. It’s no wonder the office gets it so wrong.

What would the examiner have found if he had looked beyond patents? It’s true that in mid-2013, when the application was originally filed, mobile systems generally asked for permissions up front when installing applications rather than interposing more fine-grained requests. But having more specific requests was a straightforward security and user-interface decision, not an invention. Structures for inter-app communication and permissions had been discussed for years (such as here, here, and here). No person working in application development in 2013 would have looked at Claim 1 of the ’468 patent and thought it was non-obvious to a person of ordinary skill.

JP Morgan’s “invention” was not just obvious; it had been implemented in practice. At least some mobile applications already followed the basic system claimed by the ’468 patent.
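To see just how thin the claim is, here is a minimal sketch of the four claimed steps as an ordinary Android developer might write them (shown in Kotlin for brevity; the package name, content URI, and permission arrangement are illustrative assumptions, not details taken from the patent or any real app):

```kotlin
// Hypothetical sketch of Claim 1 of the '468 patent. All names are made up.
import android.content.Context
import android.content.pm.PackageManager
import android.net.Uri

// Assumed identifiers for the hypothetical "second mobile application."
const val OTHER_APP = "com.example.otherapp"
val SHARED_DATA_URI: Uri = Uri.parse("content://com.example.otherapp.provider/items")

// Step 1: determine that the second application is present on the device.
fun isOtherAppInstalled(context: Context): Boolean =
    try {
        context.packageManager.getPackageInfo(OTHER_APP, 0)
        true
    } catch (e: PackageManager.NameNotFoundException) {
        false
    }

// Step 2: user permission. On 2013-era Android this was granted at install
// time, via a <uses-permission> manifest entry matching the permission the
// second app's content provider declares.

// Steps 3 and 4: request data from the second application and receive it.
fun readSharedData(context: Context): List<String> {
    val rows = mutableListOf<String>()
    context.contentResolver.query(SHARED_DATA_URI, null, null, null, null)?.use { cursor ->
        while (cursor.moveToNext()) {
            rows.add(cursor.getString(0)) // read the first column of each row
        }
    }
    return rows
}
```

Everything in this sketch—checking for an installed package, gating access behind a declared permission, querying another app’s content provider—was standard platform practice well before June 2013.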
In early 2012, after Apple was criticized for allowing apps to access contact data on the iPhone, some apps began requesting user permission before accessing that data. Similarly, Twitter asked for user permission as early as 2011, including on “feature phones,” before allowing other apps access to its data. Since it didn’t consider any real-world software, the Patent Office missed these examples.

The Patent Office does a terrible job reviewing software patent applications. Meanwhile, some in the patent lobby are pushing to make it even easier to get broad and abstract software patents. We need real reform that reduces the flood of bad software patents that fuels patent trolling.

Electronic Frontier Foundation, ACLU Win Court Ruling That Police Can't Keep License Plate Data Secret (Do, 31 Aug 2017)
Police Have Collected Data on Millions of Law-Abiding Drivers Via License Readers

San Francisco, California—The Electronic Frontier Foundation (EFF) and the ACLU won a decision from the California Supreme Court that the license plate data of millions of law-abiding drivers, collected indiscriminately by police across the state, are not “investigative records” that law enforcement can keep secret. California’s highest court ruled that the collection of license plate data isn’t targeted at any particular crime, so the records can’t be considered part of a police investigation.

“This is a big win for transparency in California,” said Peter Bibring, director of police practices at the ACLU of Southern California, which joined EFF in a lawsuit over the records. “The Supreme Court recognized that California’s sweeping public records exemption for police investigations doesn’t cover mass collection of data by police, like the automated scanning of license plates in this case. The Court also recognized that mere speculation by police on the harms that might result from releasing information can’t defeat the public’s strong interest in understanding how police surveillance impacts privacy.”

The ruling sets a precedent that mass, indiscriminate data collection by the police can’t be withheld just because the information may contain some criminal data. This is important because police are increasingly using technology tools to surveil and collect data on citizens, whether it’s via body cameras, facial recognition cameras, or license plate readers. The court sent the case back to the trial court to determine whether the data can be made public in a redacted or anonymized form so drivers’ privacy is protected.

“The court recognized the huge privacy implications of this data collection,” said EFF Senior Staff Attorney Jennifer Lynch. “Location data like this, collected on innocent drivers, reveals sensitive information about where they have been and when, whether that’s their home, their doctor’s office, or their house of worship.”

Automated License Plate Readers, or ALPRs, are high-speed cameras mounted on light poles and police cars that continuously scan the plates of every passing car. They collect not only the license plate number but also the time, date, and location of each plate scanned, along with a photograph of the vehicle and sometimes its occupants. The Los Angeles Police Department (LAPD) and the Los Angeles County Sheriff's Department (LASD) collect, on average, three million plate scans every week and have amassed a database of half a billion records. EFF filed public records requests for a week’s worth of ALPR data from the agencies and, along with the ACLU of Southern California, sued after both agencies refused to release the records.

EFF and ACLU SoCal asked the state supreme court to overturn a lower court ruling in the case that said all license plate data—collected indiscriminately and without suspicion that the vehicle or driver was involved in a crime—could be withheld from disclosure as “records of law enforcement investigations.” EFF and ACLU SoCal argued the ruling was tantamount to saying all drivers in Los Angeles are under criminal investigation at all times. The ruling would also have set a dangerous precedent, allowing law enforcement agencies to withhold from the public all kinds of information gathered on innocent Californians merely by claiming it was collected for investigative purposes.
EFF and ACLU SoCal will continue fighting for transparency and privacy as the trial court considers how to provide public access to the records so this highly intrusive data collection can be scrutinized and better understood.

For the opinion: https://www.eff.org/document/aclu-v-la-superior-court-ca-supreme-court-opinion
For more on this case: https://www.eff.org/cases/automated-license-plate-readers-aclu-eff-v-lapd-lasd

Tags: Automated License Plate Readers (ALPRs)

Contact:
Jennifer Lynch, Senior Staff Attorney, jlynch@eff.org
David Colker, ACLU SoCal Press & Communications Strategist, DColker@aclusocal.org

Student Privacy Tips for Teachers (Do, 31 Aug 2017)
The new school year starts next week for most schools across the country. As part of the first line of defense in protecting student privacy, teachers need to be ready to spot the implications of new technology and advocate for their students' privacy rights. Our student privacy report offers recommendations for several stakeholder groups. In this post, we'll focus specifically on teachers.

Teachers play the role of intermediaries between students and the technology being deployed in classrooms. In addition to administering technology directly to students, teachers can integrate digital literacy and privacy education across their existing curricula.

Make digital literacy part of the curriculum. Ensure that students are learning basic digital privacy and security techniques while utilizing new ed tech tools, including creating strong passphrases for their online accounts (a minimal passphrase-generation example appears at the end of this post). Additionally, when applicable, convey that the data students submit as part of their educational activity (including, for example, search terms, browsing history, etc.) will be sent to another entity, and they should therefore exercise caution in sharing sensitive personal information.

Advocate for better training for teachers. Teachers’ own digital literacy and privacy training is often overlooked when new ed tech services are introduced to the classroom. The best way to sharpen your expertise and protect your students is to enhance your own professional privacy knowledge. Advocate for training within the school or district, or seek out support from external resources.

Get parental consent. Refrain from signing students up for services without getting explicit written consent from parents.

Pick ed tech tools carefully. Exercise caution when choosing what devices, platforms, services, or websites to use in the classroom. When tools are available for free on the web, for example, it can be tempting to adopt and use them in an ad hoc manner. However, each tool may pose different risks to students’ personal data. Instead, go through your school or district’s approval process, or seek additional opinions, before adopting new ed tech tools.

Find allies. If you are concerned about a particular technology and its privacy implications, find allies amongst your colleagues. Seek out other staff who share your concerns and coordinate with them to better advocate for student privacy across your school or district.

Want to learn more? Read our report Spying on Students: School-Issued Devices and Student Privacy for more recommendations, analysis of student privacy law, and case studies from across the country.
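On the passphrase point above, here is a minimal sketch of the kind of exercise a class could walk through, using Python's standard secrets module. The word list is a tiny illustrative stand-in; a real exercise would draw from a large list, such as EFF's dice-generated wordlists, so that each word adds real guessing difficulty:

```python
# Minimal passphrase generator for a classroom exercise. The word list
# below is a tiny illustrative stand-in; use a large list (e.g., EFF's
# dice-generated wordlists) in practice.
import secrets

WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple",
         "velvet", "canyon", "pixel", "harbor", "tundra", "quartz"]


def make_passphrase(num_words: int = 6) -> str:
    # secrets.choice draws from the OS's cryptographic random source,
    # unlike random.choice, which is not suitable for secrets.
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))


if __name__ == "__main__":
    print(make_passphrase())
```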

Judge Cracks Down on LinkedIn’s Shameful Abuse of Computer Break-In Law (Di, 29 Aug 2017)
Good news out of a court in San Francisco: a judge just issued an early ruling against LinkedIn’s abuse of the notorious Computer Fraud and Abuse Act (CFAA) to block a competing service from perfectly legal uses of publicly available data on its website. LinkedIn’s behavior is just the sort of bad development we expected after the United States Court of Appeals for the Ninth Circuit delivered two dangerously expansive interpretations of the CFAA last year—despite our warnings that the decisions would be easily misused.

The CFAA is a criminal law with serious penalties. It was passed in the 1980s with the aim of outlawing computer break-ins. Since then, it has metastasized in some jurisdictions into a tool for companies and websites to enforce their computer use policies, like terms of service (which no one reads) or corporate computer policies. Violating a computer use policy should by no stretch of the imagination count as a felony. But the Ninth Circuit’s two decisions—Facebook v. Power Ventures and U.S. v. Nosal—emboldened some companies, almost overnight, to amp up their CFAA threats against competitors.

Luckily, a court in San Francisco has called foul, questioning LinkedIn’s use of the CFAA to block access to public data. The decision is a victory—a step toward our mission of holding the Ninth Circuit to its word and limiting its two dangerous opinions to their “stark” facts. But the LinkedIn case is only in its very early stages, and the earlier bad case law is still on the books. The U.S. Supreme Court has the opportunity to change that, and we urge it to do so by granting certiorari in U.S. v. Nosal. The Court needs to step in and shut down abuse of this draconian and outdated law.

Background

The CFAA makes it illegal to engage in “unauthorized access” to a computer connected to the Internet, but the statute doesn’t tell us what “authorization” or “without authorization” means. This vague language might have seemed innocuous to some back in 1986, when the statute was passed, reportedly in response to the Matthew Broderick movie War Games. But in today’s networked world, where we all regularly connect to and use computers owned by others, this pre-Web law is causing serious problems.

If you’ve been following our blog, you’re familiar with Facebook v. Power Ventures and U.S. v. Nosal. Both cases adopted expansive readings of “unauthorized access”—and we warned the Ninth Circuit that they threatened to transform the CFAA into a mechanism for policing Internet use and criminalizing ordinary Internet behavior, like password sharing. Unfortunately, we were right. Within weeks after the decisions came out, LinkedIn started sending out cease and desist letters citing the bad case law—specifically Power Ventures—to companies it said were violating its prohibition on scraping.

One company LinkedIn targeted was hiQ Labs, which provides analysis of data on LinkedIn users’ publicly available profiles. LinkedIn had tolerated hiQ’s behavior for years, but after the Power Ventures decision, it apparently saw an opportunity to shut down a competing service. LinkedIn sent hiQ letters warning that any future access of its website, even the public portions, would be “without permission and without authorization” and thus a violation of the CFAA. Scraping publicly available data in violation of a company’s terms of use comes nowhere near Congress’s original intent of punishing those who break into protected computers to steal data or cause damage.
But companies like LinkedIn still send out threatening letters with bogus CFAA claims. These letters are all too often effective at scaring recipients into submission, given the CFAA’s notoriously severe penalties. Since demand letters are not generally public, we don’t know how many other companies are using the law to threaten competitors and stomp out innovation, but it’s unlikely that LinkedIn is alone in this strategy.

Luckily, in the face of LinkedIn’s threats, hiQ did something that a lot of other companies don’t have the resources or courage to do: it took LinkedIn’s claims straight to court. It asked the Northern District of California in San Francisco to rule that its automated access of publicly available data did not violate the CFAA, despite LinkedIn’s threats. hiQ also asked the court to prohibit LinkedIn from blocking its access to public profiles while the court considered the merits of its request.

hiQ v. LinkedIn: Preliminary Injunction Decision

Earlier this month, Judge Edward Chen granted hiQ’s request, enjoining LinkedIn from preventing or blocking hiQ’s access or use of public profiles, and ordering LinkedIn to withdraw its two cease and desist letters to hiQ. Although Judge Chen didn’t directly address the merits of the case, he expressed serious skepticism over LinkedIn’s CFAA claims, stating that “the Court is doubtful that the Computer Fraud and Abuse Act may be invoked by LinkedIn to punish hiQ for accessing publicly available data” and that the “broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.”

Judge Chen’s order is reassuring, and hopefully a harbinger of how courts going forward will react to efforts to use the CFAA to limit access to public data. He’s not the only judge who feels that companies are taking the CFAA too far. During a Ninth Circuit oral argument in a different case in July, Judge Susan Graber—one of the judges behind the Power Ventures decision—pushed back [at around 33:40] on Oracle’s argument that automated scraping was a CFAA violation.

It’s still discouraging to see LinkedIn actively advocate for such a shortsighted expansion of an already overly broad criminal law—an outcome that could land people in jail for innocuous conduct—rather than trying to compete to provide a better service. The CFAA’s exorbitant penalties have already caused great tragedies, including playing a role in the death of our friend, Internet activist Aaron Swartz. The Internet community should be trying to fix this broken law, not expand it. Opportunistic efforts to expand it are just plain shameful.

That’s why we’re asking the Supreme Court to step in and clarify that using a computer in a way that violates corporate policies, preferences, and expectations—as LinkedIn is claiming against hiQ—cannot be grounds for a CFAA violation. A clear, unequivocal ruling would go a long way toward stopping abusive efforts to use the CFAA to limit access to publicly available data or to enforce corporate policies. We hope the Supreme Court takes up the Nosal case. We should hear from the high court this fall. In the meantime, we hope LinkedIn takes Judge Chen’s recent ruling as a sign that it’s time to back away from its shameful abuse of the CFAA.

Related Cases: United States v. David Nosal; Facebook v. Power Ventures

Taking the Fight to the Appeals Court: Don’t Lock Laws Behind Paywalls (Di, 29 Aug 2017)
It’s almost too strange to believe, but a federal court ruled earlier this year that copyright can be used to control access to parts of our state and federal laws—forcing people to pay a fee or sign a contract to read and share them. On behalf of Public.Resource.Org, a nonprofit dedicated to improving public access to law, yesterday EFF challenged that ruling in the United States Court of Appeals for the District of Columbia Circuit.

Public.Resource.Org acquires and posts a wide variety of public documents, including regulations that have become law through what’s called “incorporation by reference.” That means they were initially created at private standards organizations before being adopted into law by cities, states, and federal agencies. By posting these documents online, Public Resource wants to make these requirements more available to the public that must abide by them. But six standards development organizations sued Public Resource, claiming that they hold copyright in the regulations and that Public Resource shouldn’t be allowed to post them at all.

Laws and regulations incorporated by reference include some of our most important protections for health, safety, and fairness. They include fire safety rules for buildings, rules that ensure safe consumer products, rules for energy-efficient buildings, and rules for designing fair and accurate standardized tests for students and employees. Once adopted by a legislature or agency, these rules are laws that can carry civil or criminal penalties. For example, a person was charged with manslaughter this year in connection with the deadly Ghost Ship fire in Oakland, California, for violating a fire code that became law through incorporation by reference.

According to the district court decision issued in February, the standards development organizations that convene the committees that write these codes and standards can continue to decide who can print them, who can access and post them online, and the price and conditions of that access. It’s as if a lobbyist who submitted a draft bill to Congress could charge fees for access to that bill after Congress and the president pass it into law.

Today, while most laws and regulations in the U.S. can be searched and read on the Web, laws incorporated by reference are locked behind paywalls, or cannot be found online at all. Many are available only in expensive printed books, or in a single office in Washington, D.C. that requires an appointment on several weeks’ notice. Public Resource’s website was designed to fill this gap, which is why it was targeted in a lawsuit.

In our opening brief, EFF, along with co-counsel at Fenwick & West and attorney David Halperin, argued that giving private organizations the power to limit access to the law violates the First Amendment’s guarantee of free speech and the due process protections of the Fifth and Fourteenth Amendments, and contradicts copyright law. We’re asking the appeals court to fix these errors and uphold the rights of everyone to know the law, and to share it.

Related Cases: Freeing the Law with Public.Resource.Org

India's Supreme Court Upholds Right to Privacy as a Fundamental Right—and It's About Time (Di, 29 Aug 2017)
Last week's unanimous judgment by the Supreme Court of India (SCI) in Justice K.S. Puttaswamy (Retd) vs Union of India is a resounding victory for privacy. The ruling is the outcome of a petition challenging the constitutional validity of the Indian biometric identity scheme Aadhaar. The judgment's ringing endorsement of the right to privacy as a fundamental right marks a watershed moment in the constitutional history of India. The one-page order signed by all nine judges declares:

The right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.

The right to privacy in India has developed through a series of decisions over the past 60 years. Inconsistency between two early judgments created a divergence of opinion on whether the right to privacy is a fundamental right. Last week's judgment reconciles those different interpretations to unequivocally declare that it is. Moreover, constitutional provisions must now be read and interpreted in a manner that enhances their conformity with international human rights instruments ratified by India. The judgment also concludes that privacy is a necessary condition for the meaningful exercise of other guaranteed freedoms.

The judgment, in which the judges state the reasons behind the one-page order, spans 547 pages and includes opinions from six judges, creating a legal framework for privacy protections in India. The opinions cover a wide range of issues in clarifying that privacy is a fundamental, inalienable right, intrinsic to human dignity and liberty.

The decision is especially timely given the rapid roll-out of Aadhaar. In fact, the privacy ruling arose from a pending challenge to India's biometric identity scheme. We have previously covered the privacy and surveillance risks associated with that scheme. Ambiguity about the nature and scope of privacy as a right in India allowed the government to collect and compile both demographic and biometric data of residents. The original justification for introducing Aadhaar was to ensure that government benefits reached the intended recipients. Following a rapid roll-out and expansion, it is the largest biometric database in the world, with over 1.25 billion Indians registered. The government's push for Aadhaar has led to its wide acceptance as proof of identity, and as an instrument for restructuring and facilitating government services.

The Two Cases That Cast Doubt on the Right to Privacy

In 2012, Justice K.S. Puttaswamy (Retired) filed a petition in the Supreme Court challenging the constitutionality of Aadhaar on the grounds that it violates the right to privacy. During the hearings, the Central government opposed the classification of privacy as a fundamental right. The government's opposition relied on two early decisions—M.P. Sharma vs Satish Chandra in 1954, and Kharak Singh vs State of Uttar Pradesh in 1962—which had held that privacy was not a fundamental right.

In M.P. Sharma, the bench held that the drafters of the Constitution did not intend to subject the power of search and seizure to a fundamental right of privacy. It reasoned that the Indian Constitution does not include any language similar to the Fourth Amendment of the US Constitution, and therefore questioned the existence of a protected right to privacy.
The Supreme Court made clear that M.P. Sharma did not decide other questions, such as “whether a constitutional right to privacy is protected by other provisions contained in the fundamental rights including among them, the right to life and personal liberty under Article 21.”

In Kharak Singh, the decision invalidated a police regulation that provided for nightly domiciliary visits, calling them an “unauthorized intrusion into a person’s home and a violation of ordered liberty.” However, it also upheld other clauses of the regulation on the ground that the right of privacy was not guaranteed under the Constitution, and hence Article 21 of the Indian Constitution (the right to life and personal liberty) had no application. Justice Subbarao's dissenting opinion clarified that, although the right to privacy was not expressly recognized as a fundamental right, it was an essential ingredient of personal liberty under Article 21. Over the next 40 years, the interpretation and scope of privacy as a right expanded, and it was accepted as constitutional in subsequent judgments.

During the hearings of the Aadhaar challenge, the Attorney-General (AG) representing the Union of India questioned the foundations of the right to privacy. The AG argued that the Constitution’s framers never intended to incorporate a right to privacy, and that therefore, to read such a right as intrinsic to the right to life and personal liberty under Article 21, or to the rights to various freedoms (such as the freedom of expression) guaranteed under Article 19, would amount to rewriting the Constitution. The government also pleaded that privacy was “too amorphous” for a precise definition and an elitist concept that should not be elevated to a fundamental right. The AG based his claims on the M.P. Sharma and Kharak Singh judgments, arguing that since a larger bench had found privacy was not a fundamental right, subsequent smaller benches upholding the right were not controlling.

Sensing the need to reconcile the divergence of opinions on privacy, the Court referred this technical clarification on the constitutionality of the right to a larger bench. The bench would determine whether the reasoning applied in M.P. Sharma and Kharak Singh was correct and still relevant in the present day. The bench was set up not to look into the constitutional validity of Aadhaar, but to consider a much larger question: whether the right to privacy is a fundamental right that can be traced in the rights to life and personal liberty.

Aadhaar in Jeopardy? Not Quite Yet

Given the government's aggressive defense of Aadhaar, many human rights defenders feared the worst. The steady expansion of the scheme and the delay over the nine-judge bench being formed allowed Aadhaar to become an insidious part of Indian citizens' lives. Indeed, in many ways the delay has led to Aadhaar being linked to all manner of essential and nonessential services. In last week's 547-page judgment, the Court is clear about the fundamental right to privacy and has overruled these two past judgments insofar as their observations on privacy are concerned.

The constitutional framework for privacy clarified last week by the Court will breathe life into the Aadhaar hearings. While it awaited clarification on the right to privacy, the bench hearing the constitutional challenge to Aadhaar passed an interim order restricting compulsory linking of Aadhaar for benefits delivery. Last week's ruling ends the legal gridlock in the hearings on the validity of the scheme.
The identification database that Aadhaar builds will not be easy to reconcile with the framework for privacy drawn up in the judgments. Legal experts are of the opinion that, following the judgment, "it is amply clear that Aadhaar shall have to meet the challenge of privacy as a fundamental right." The Aadhaar hearings, which were cut short, are expected to resume before a smaller three- or five-judge bench later this month. Outside of the pending Aadhaar challenge, the ruling can also form the basis of new legal challenges to the architecture and implementation of Aadhaar. For example, with growing evidence that state governments are already using Aadhaar to build databases to profile citizens, the security of data and limitations on data convergence and profiling may be areas for future privacy-related challenges to Aadhaar.

Implications for Future Case and Statute Law

The lead judgment calls for the government to create a data protection regime to protect the privacy of the individual. It recommends a robust regime that balances individual interests and the legitimate concerns of the state. Justice Chandrachud notes, "Formulation of a regime for data protection is a complex exercise that needs to be undertaken by the state after a careful balancing of requirements of privacy coupled with other values which the protection of data subserves together with the legitimate concerns of the state." For example, the court observes, the "government could mine data to ensure resources reached intended beneficiaries." However, the bench restrains itself from providing guidance on these issues, confining its opinion to the clarification of the constitutionality of the right to privacy.

The judgment will also have ramifications for a number of contemporary issues pending before the Supreme Court. In particular, two proceedings—on Aadhaar and on WhatsApp-Facebook data sharing—will be test grounds for the application and contours of the right to privacy in India. For now, what is certain is that the right to privacy has been unequivocally articulated by the highest Court. There is much reason to celebrate this long-due victory for privacy rights in India. But it is only the first step, as the real test of the strength of the right will be in how it is understood and applied in subsequent challenges.

Twitter (and Others) Double Down on Advertising and Tracking (Mo, 28 Aug 2017)
In June, Twitter discontinued its support for Do Not Track (DNT), the privacy-protective browser signal it had honored since 2012. EFF argued that Twitter should reconsider this decision, but that call has gone unheeded. In response, EFF’s Privacy Badger has new features to mitigate user tracking both on twitter.com and when you encounter Twitter content and widgets elsewhere on the web. (More technical details are covered in the accompanying technical post.) How did we get here, and what can we do about it?

Assembling a Data Dragnet

In 2012 Twitter began to use tracking data to personalize content recommendations to its users, such as accounts they should follow. Twitter collected this data via the Tweet buttons and widgets integrated on sites all over the web. These widgets can set cookies on users' browsers and tell Twitter where the user goes online. This use of social sharing buttons and embedded content to track users is a common practice—Facebook, Google Plus, LinkedIn, and other social media networks do it as well. From 2013 onwards, Twitter also made deals with ad companies that had data about user browsing activity on sites where Twitter had no foothold.

In contrast to its competitors, Twitter once offered users an easy opt-out from tracking: if users enabled the DNT signal in their browser, their browsing history would not be collected. This was a welcome move in 2012, when much of the advertising industry was contesting the definition of "tracking". Rather than support DNT, advertisers pushed their own "opt-out" process, AdChoices, which is difficult to enable and ineffective in practical terms. Apart from a myriad of technical flaws, AdChoices merely exempts users from being shown behavioral ads, not from the data collection behind the ad targeting. This is what Twitter has now bought into.[1]

The Logic of the Next Investor Call

Twitter claims it dumped DNT because "an industry-standard approach to Do Not Track did not materialize", but a better explanation may be the pressure on it to increase revenue. DNT users were being shown less lucrative ads targeted on context rather than behavior. This reduced ad revenue from the tens of millions of users who had the setting enabled. By making an opt-out more cumbersome, Twitter will bring some of those users back inside the behavioral targeting corral. The day after dumping DNT, Twitter declared its intention to "double down" on adtech. Our response at EFF and Privacy Badger is that we're all-in on user protection.

Trackers: Unknown Stalkers and Household Brands

Large-scale tracking is conducted by two types of companies. First are the so-called "third parties": sites that we never visit intentionally, but which sit silently on web pages as an external resource, used to add functionality or to enable tracking by ad networks and data brokers. But the biggest profilers of users are the basic, go-to services that we all use online. These companies leverage the trust that users place in them as first parties to track even more than third-party trackers can. Google, Facebook, and Twitter are the sites with the longest reach, trailed by Oath (formerly Yahoo and AOL).
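It is worth pausing on how technically simple the opt-out Twitter abandoned actually is. DNT is a single HTTP request header, DNT: 1, sent with every request by a browser that has the setting enabled. A minimal sketch of a server that honors it, using only Python's standard library (real trackers are far more elaborate; this illustrates the general mechanism, not Twitter's code):

```python
# Minimal sketch of a server honoring the DNT request header: if the
# browser sends "DNT: 1", no tracking cookie is set.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        cookies = self.headers.get("Cookie") or ""
        if self.headers.get("DNT") != "1" and "id=" not in cookies:
            # No opt-out signal: assign a unique ID that links future visits.
            self.send_header("Set-Cookie", f"id={uuid.uuid4()}")
        self.end_headers()
        self.wfile.write(b"ok\n")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```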
First-party giants like Google, Facebook, and Twitter represent a different challenge because we visit their sites willingly. Each visit is logged on their servers, and this log information is supplemented by data from the advertising, analytics services, and social media widgets they provide as a third party to other sites. Many of us log in to these sites to access email or messages, upload data, or get personalized content, and the login allows these companies to identify us across our different devices: the computer at work, the laptop at home, the phone, the tablet. Privacy Badger and other tracker blockers can keep you off the third parties’ radar by blocking their resources, but they can’t erase the logs from the sites you visit willingly or the linkages enabled by the login process.

Setting Red Lines for First Parties

These household brands shouldn’t have a blank check to monitor us everywhere just because we log in to them for specific, narrow purposes. That’s why Privacy Badger blocks them as a third party when they plant widgets on other websites. If you are not willingly accessing their services, why should they know where you are? And even when we visit sites like Twitter and Google, there should be limits to the data they can collect about us. Outbound links are a good example. When we use these platforms to discover people or websites, the logs already reveal a great deal about our inclinations and interests. But platforms can also track the links we click to leave their sites (a minimal sketch of how this works appears at the end of this post). This is neither acceptable nor necessary. So Privacy Badger will be preventing outbound link tracking on Twitter right away, and on other sites in the future.

US Citizens: Second-Class Privacy Protection?

If tech companies want us to trust their tools with the most sensitive matters of our lives, they should offer a universal privacy opt-out for those who want it. But experience shows that companies only change their practices in the face of major and sustained public pressure or the threat of political action. In the European Union, data collection is already regulated, and a new, tougher regime—the General Data Protection Regulation—will come into effect in May 2018. U.S. companies will have to comply with the regulation for their EU users. There will then be two classes of privacy protection in the world, the EU and everywhere else, and U.S. users will be stuck in coach. Evidence of this was spelled out in Twitter’s announcement of its new policy: "We do not store web page visit data for users who are in the European Union and EFTA States."[2] The online advertising industry in the U.S. trumpets self-regulation, but it hasn't shown itself worthy of it. We need to turn up the pressure.

[1] According to TRUSTe, only 0.00015% of the users who see the “Ad Choices” icon use it. https://www.economist.com/news/special-report/21615871-everything-people-do-online-avidly-followed-advertisers-and-third-party
[2] Twitter confirms that this includes data accessible via its widgets and embedded content; whether it extends to data acquired from other trackers is unclear.
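As promised above, here is a minimal sketch of the outbound-link-tracking pattern: the platform rewrites links so they pass through a logging redirector, which records the click and then forwards the browser on. This illustrates the general technique only and is not Twitter's implementation (Twitter's t.co link wrapper is the best-known real-world example):

```python
# Minimal sketch of a click-logging redirector: links on the platform
# point here (e.g., /?url=https://example.com); the server logs the
# click, then forwards the browser with an HTTP 302 redirect.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        dest = query.get("url", ["/"])[0]
        # The tracking step: the platform learns who clicked what.
        print(f"click logged: {self.client_address[0]} -> {dest}")
        self.send_response(302)
        self.send_header("Location", dest)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8001), Redirector).serve_forever()
```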

Student Privacy Tips for Students (Fr, 25 Aug 2017)
Students: As you get ready to go back to school, add "review your student privacy rights" to your back-to-school to-do list, right next to ordering books and buying supplies. Exciting new technology in the classroom can also mean privacy violations, including the chance that your personal devices and online accounts may be demanded for searches by school personnel. Our student privacy report offers recommendations for several stakeholder groups. In this post, we'll focus specifically on students.

Given that the integration of technology in education affects their data personally, it’s vital that students are especially attentive to what’s being integrated into their curriculum. Below, we provide a few recommendations for students to act to preserve their personal data privacy:

Determine if there are privacy settings you can control directly in the device or application.

Try to ascertain the privacy practices of the ed tech providers your school uses. Avoid sharing sensitive personal information (which could include, for example, search terms and browser history) if it will be transmitted back to the provider.

If you’re concerned by the usage of a certain service and find it intrusive, talk to your parents and explain why you find it concerning.

Ask to opt out or use an alternative technology when you do not feel comfortable with the policies of certain vendors.

Share your privacy concerns with school administrators. It may work best to gather a few like-minded students and have a joint meeting where everyone shares their concerns and asks the school administrator(s) for further guidance.

Want to learn more? Read our report Spying on Students: School-Issued Devices and Student Privacy for more recommendations, analysis of student privacy law, and case studies from across the country.

EFF urges stronger oversight of DOJ’s digital search of J20 protestor website (Do, 24 Aug 2017)
District of Columbia Superior Court Judge Robert Morin ruled today that DreamHost must comply with federal prosecutors’ narrowed warrant seeking communications and records about an Inauguration Day protest website, disruptj20.org. But prosecutors will have to present the court with a “minimization plan” that includes the names of all government investigators who will have access to the data and a list of all the methods they will use to search the evidence. This is an important step in ensuring judicial oversight of the government’s digital search.

While we are glad to see that the judge is taking steps to oversee the government's “narrowed” search, EFF has long warned against the problems with the two-step approach of overseizure of digital information followed by a search of that information for evidence responsive to the warrant. Because of the vast troves of data the government gains access to in these cases, it risks executing a general search, the very danger the Fourth Amendment is meant to guard against. As the en banc Ninth Circuit warned in 2010: “The process of segregating electronic data that is seizable from that which is not must not become a vehicle for the government to gain access to data which it has no probable cause to collect.”

Unfortunately, that may well happen if the warrant is enforced in its current iteration. The revised warrant still seeks all “contents of e-mail accounts that are within the @disruptj20.org domain,” regardless of the account holders' participation or involvement in the January 20th protest. To date, the government has not publicly contended that any of the specific @disruptj20.org email addresses belong to anyone who has been accused of a specific crime during the January 20th protest. Overseizure is especially troubling where, as in this case, First Amendment protected activity and speech is being threatened and chilled by the prospect of government intrusion. Our civil liberties should not be circumvented in the digital space just because the law has failed to keep up with the nuances of technology.

As in other cases involving digital searches, EFF believes the government's access to the data should be limited in advance to ensure that it complies with the Fourth Amendment. This could include the use of a neutral third party or special master entrusted with the task of parsing through the evidence and turning over to law enforcement only what is relevant, keeping the government away from user data it has no probable cause to collect. Similarly, the government could be required to articulate ex ante search protocols that outline specific limiting factors (for example: account names or handles, date and time ranges, keywords, file type or size) that can be subject to judicial review. Without these safeguards in place, the fear of unchecked government intrusion may chill individuals from across the ideological spectrum from engaging in the very public discourse that the Bill of Rights is intended to protect.
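To make the "search protocol" idea concrete, here is a hedged sketch of the kind of ex ante limits a court could impose: only records matching court-approved limiting factors ever reach investigators. The accounts and keywords here are hypothetical, invented purely for illustration; the date window mirrors the temporal limit in the narrowed warrant:

```python
# Hypothetical sketch of an ex ante search protocol: only records that
# match court-approved limiting factors (accounts, a date window,
# keywords) are passed to investigators; everything else stays sealed.
from datetime import datetime

APPROVED_ACCOUNTS = {"organizer@disruptj20.org"}        # hypothetical
WINDOW = (datetime(2016, 7, 1), datetime(2017, 1, 20))  # warrant's temporal limit
KEYWORDS = {"permit", "march route"}                    # hypothetical


def responsive(record: dict) -> bool:
    in_window = WINDOW[0] <= record["date"] <= WINDOW[1]
    return (record["account"] in APPROVED_ACCOUNTS
            and in_window
            and any(k in record["body"].lower() for k in KEYWORDS))


records = [
    {"account": "organizer@disruptj20.org", "date": datetime(2016, 12, 1),
     "body": "Draft march route for January."},
    {"account": "visitor@example.org", "date": datetime(2016, 12, 1),
     "body": "Just checking out the site."},
]
print([r for r in records if responsive(r)])  # only the first record survives
```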

10+ Years of Activists Silenced: Internet Intermediaries’ Long History of Censorship (Do, 24 Aug 2017)
Recent decisions by technology companies, especially “upstream” infrastructure technology companies, to drop neo-Nazis as customers have captured public attention—and for good reason. The content being blocked is vile and horrific, there is growing concern about hate groups across the country, and the nation is focused on issues of racism and protest. But this is a dangerous moment for Internet expression and the power of private platforms that host much of the speech on the Internet. People cheering for companies that have censored content in recent weeks may soon find the same tactic used against causes they love. We must be careful about what we are asking these companies to do and carefully review the processes they use to do it. A look at examples that EFF has handled over the past 10+ years can help demonstrate why we are so concerned.

Complaints to “Upstream” Speech Intermediaries

This isn’t just a “slippery slope” fear about potential future harm. Complaints to various kinds of intermediaries have been occurring for over a decade. It’s clear that Internet technology companies—especially those further “upstream,” like domain name registrars—are simply not equipped or competent to distinguish between good complaints and bad in the U.S., much less around the world. They also have no strong mechanisms for allowing due process or correcting mistakes. Instead they merely react to where the pressure is greatest or where their business interests lie. Here are just a few cases EFF has handled or helped with from the last decade where complaints went “upstream” to website hosts and DNS providers, impacting activist groups specifically. And this is not to mention the many times direct user platforms like Facebook and Twitter have censored content from artists, activists, and others.

The U.S. Chamber of Commerce sent a complaint about a parody website created by activist group The Yes Men not merely to its hosting service, May First/People Link, but to that service’s upstream ISP, Hurricane Electric. When May First/People Link resisted Hurricane Electric’s demands to remove the parody site, Hurricane Electric shut down May First/People Link’s connection entirely, temporarily taking offline hundreds of "innocent bystander" websites as collateral damage.

Shell Oil sent a takedown notice to the ISP of activist group Oil Change International after it launched a campaign aimed at Shell’s sponsorship of New Orleans Jazz Fest. The ISP removed the site, abruptly halting the campaign.

Unhappy with a single document published on the giant website Cryptome.org, Microsoft sent complaints to Cryptome’s domain name registrar and web hosting provider, Network Solutions. As a result, Network Solutions pulled the plug on the entire Cryptome website—full of legal content—because it was not technically capable of targeting and removing the single document. The site was not restored until wide outcry in the blogosphere forced Microsoft to retract its takedown request.

Threats to the domain host of a critic of South African diamond conglomerate De Beers resulted in the temporary takedown of a New York Times spoof website that included, in part, a critical fake ad announcing that diamond purchases "will enable us to donate a prosthetic for an African whose hand was lost in diamond conflicts."
Swiss bank Julius Baer pressured the domain name registrar for Wikileaks.org to lock the domain name after the organization posted documents demonstrating financial wrongdoing, and then obtained a court ruling confirming the censorship. In response to legal briefs by EFF and others objecting to this tactic, the district court dissolved the order, leading Julius Baer to dismiss its case.

Media giant ABC sent a cease and desist letter on behalf of KSFO-AM radio in San Francisco to the webhost of the blog www.spockosbrain.com, after that site criticized the offensive and violent rhetoric on the radio station aimed at Congresswoman Nancy Pelosi and then-Senator Barack Obama.

You’ll notice that the complainers in these cases are powerful corporations. That’s not a coincidence. Large companies have the time, money, and scary lawyers to pressure intermediaries to do their bidding—something smaller communities rarely have.

When Governments Get Involved

The story gets much more frightening when governments enter the conversation. All of the major technology companies publish transparency reports documenting the many efforts made by governments around the world to require the companies to take down their customers’ speech.[1] China ties the domain name system to tracking systems and censorship. Russia-backed groups flag Ukrainian speech, Chinese groups flag Tibetan speech, Israeli groups flag Palestinian speech, just to name a few. Every state has some reason to try to bend the core intermediaries to its agenda, which is why EFF, along with a number of international organizations, created the Manila Principles to set out the basic rules for intermediaries to follow when responding to these governmental pressures. Those concerned about the position of the current U.S. government with regard to Black Lives Matter, Antifa groups, and similar left-leaning communities should take note: efforts to urge the current U.S. government to treat them as hate groups have already begun.

The Risks of Embracing Censorship

Will the Internet remain a place where small, marginalized voices get heard? For every tech CEO now worried about neo-Nazis, there are hundreds of decisions made to silence voices outside of public scrutiny, with no transparency into decision-making or easy ways to get mistakes corrected. We understand the impulse to cheer any decision to stand up against horrific speech, but if we embrace “upstream” intermediary censorship, it may very well come back to haunt us.

[1] January-June 2016, worldwide requests: Facebook: 9,666; Google: 6,552

Will TPP-11 Nations Escape the Copyright Trap? (Mi, 23 Aug 2017)
TPP-11 nations have the historic opportunity to rein in excessive copyright term extension.

Latest reports confirm that the Trans-Pacific Partnership (TPP) is being revived. The agreement had been shelved following the withdrawal of the U.S. from the negotiation process. Over the past year, countries eager to keep the pact alive have continued dialogue and rallied support from less enthusiastic members to move forward with the agreement without the U.S. A revised framework is expected to be proposed for approval at the Asia-Pacific Economic Cooperation (APEC) TPP-11 Ministerial Meeting in November. We had previously reported that the remaining eleven nations (TPP-11) had launched a process to assess options and build consensus on how the agreement should be brought into force. A recent statement by New Zealand's Prime Minister suggests that countries favor an approach that replicates TPP provisions with a minimal number of changes.

The revival of the trade bloc comes at a critical juncture. Two trade agreements—the U.S.-led North American Free Trade Agreement (NAFTA) and the China-led Regional Comprehensive Economic Partnership (RCEP)—are racing to establish rules to control data flows and the digital economy. The TPP without the U.S. offers an alternative to both of these treaties under negotiation. A speedy ratification would ensure TPP-11 nations' leadership in setting digital rules, and not opening up a lengthy renegotiation process would be essential to avoid delays in completing the treaty. This could explain why nations opted for a minimal-changes approach.

Although pushing through with provisions that countries have reached consensus on sounds reasonable, attempting to revive TPP without addressing the trade-offs made for access to U.S. markets, which are no longer beneficial to negotiating countries, makes little sense. Avoiding renegotiation or reopening of TPP will lead to enactment of its flawed and untested provisions, with far-reaching ramifications for innovation, creativity, and culture. Foremost amongst these is the TPP provision on copyright term extension.

Copyright Term Extension

One of the most controversial provisions included in the TPP negotiations would have increased the copyright term length for six of the signatory countries. The international standard term of copyright set by the Berne Convention is the life of the author plus an additional 50 years. This standard term is followed by more than half of the TPP countries, including Canada, Japan, Malaysia, New Zealand, Brunei, and Vietnam. Under the TPP terms, all these countries would be required to extend copyright to a minimum term of the life of the author plus 70 years, mirroring the terms of the controversial US Sonny Bono Copyright Term Extension Act, or the “Mickey Mouse Act”. Such copyright term extensions, and their retrospective application to published works, are pushed through by major record and movie industry groups such as the MPAA and RIAA, which stand to benefit from them the most.

The common justification for such lengthy terms is to create an economic incentive for creators. However, extending copyright terms creates little additional income for creators. An empirical study calculating the copyright term that incentivizes the most new works found that the optimum is about 14 years. The world's leading economists agree that such a long copyright term makes no sense.
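One way to see why the extra 20 years adds so little incentive: money earned that far in the future is worth almost nothing today. A back-of-the-envelope sketch, assuming constant annual royalties and a 7% discount rate (both assumptions are ours, purely for illustration):

```python
# Back-of-the-envelope present value of constant annual royalties,
# discounted at 7%. The extra value of years 51-70 (what a life+70 term
# adds over life+50, measured from publication for simplicity) is tiny.
def present_value(start_year: int, end_year: int, royalty: float = 1.0,
                  rate: float = 0.07) -> float:
    return sum(royalty / (1 + rate) ** t for t in range(start_year, end_year + 1))


first_50 = present_value(1, 50)
extra_20 = present_value(51, 70)
print(f"years 1-50:  {first_50:.2f}")   # about 13.8
print(f"years 51-70: {extra_20:.2f}")   # about 0.36
print(f"share added by the extension: {extra_20 / (first_50 + extra_20):.1%}")
```

Under these illustrative assumptions, the 20 extra years add roughly 2.5% to the work's present value at creation, which is a negligible incentive to set against decades of lost public domain.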
Studies in countries that have extended copyright terms have concluded that extension ultimately increases costs for consumers, as additional royalties are sent out of the country. Excessively long copyright terms have often kept scholars from publishing, or even obtaining access to, material of real historical or cultural significance. Term extensions also significantly impact creativity, as new works don't enter the public domain for long periods. In other words, copyright term extensions create limited gains in terms of fuelling innovation but severely restrict access to cultural heritage.

Term extension has been included in several bilateral and plurilateral trade agreements because it is easier to persuade countries to accept ever-longer terms as a trade-off for concessions offered in other areas during the negotiations. With the U.S. out of the picture, however, the TPP-11 have the opportunity to stop the export of legislative frameworks that create a system in which works cannot be freely built upon.

One good reason to reject the extension of copyright in published works from the present life plus 50 years to life plus 70 is the cost of doing so. New Zealand has estimated that extending from 50 to 70 years would cost it around $55 million per year on average. The costs for all 11 nations would be much higher, and unnecessary in the absence of the U.S. forcing the mandatory inclusion of the extension. If TPP-11 goes forward and copyright term extension is excluded from the agreement, then countries like Japan, Malaysia, New Zealand, and Vietnam that have already enacted or are considering domestic legislation to implement the original TPP should ensure that this legislation also excludes the copyright term extension provisions.

Bringing TPP back from the dead will be easier said than done, and there are more problems with this idea than we’ve explored in this post. But even if it does happen, the exclusion of copyright term extension from a resuscitated TPP should be a no-brainer. No matter which way the agreement goes, member nations should ensure that there is no more theft from their rich and diverse public domain. That's the message we're giving to the TPP-11 nations in the letter we're sending to the TPP-11 Ministerial Group today. You can read a copy of it below.

It’s Time to Strengthen California’s Public Records Law (Mi, 23 Aug 2017)
Update: On Sept. 7, 2017, we changed our position from supporting the bill to neutral. Read our update here.

In 2015, the Center for Public Integrity undertook a major investigation aimed at grading all 50 states on their transparency and accountability. California received an abysmal ‘F’ rating in the category focusing on public access to information. That is unacceptable.

Take Action: Tell California Senate Appropriations Committee Chair Ricardo Lara to support A.B. 1479

Transparency advocates have complained for years about the enforcement measures in the California Public Records Act (CPRA). There is no appeal process when an agency rejects or ignores a records request. The burden is on the requester to go to court to fight for the documents. While the agency may have to pick up the requester’s legal bills, there is no penalty for agencies that willfully, knowingly, and without any good reason violate the law. The nation’s most populous state and the sixth largest economy in the world should be setting an example, rather than lagging behind the many states—such as North Dakota and New Mexico—that penalize agencies that improperly handle or reject requests for public records.

A.B. 1479 is a bill by Assemblymember Rob Bonta that aims to strengthen California’s public records law by creating a financial penalty for government agencies that improperly withhold public records, assess outrageous fees to produce those records, or unreasonably delay their release. Its key provision states:

(3) (A) If a court finds by preponderance of the evidence that an agency, knowingly and willfully without substantial justification, failed to respond to a request for records as required pursuant to subdivision (c) of Section 6253, improperly withheld a public record from a member of the public that was clearly subject to public disclosure, unreasonably delayed providing the contents of a record subject to disclosure in whole or in part, or improperly assessed a fee upon a requester that exceeded the direct cost of duplication, or otherwise did not act in good faith to comply with this chapter, the court may assess a civil penalty against the agency in an amount not less than one thousand dollars ($1,000) nor more than five thousand dollars ($5,000), which shall be awarded to the requester. In an action alleging multiple violations the court may assess a penalty for each violation, however the total amount assessed shall not exceed five thousand dollars ($5,000).

In other words: if the agency’s response to your CPRA request meets the above conditions and you file and win a lawsuit, the court can fine the agency between $1,000 and $5,000, with the money going to the requester. However, there are limits. While the court can assess a fine per violation, cumulatively the fines cannot exceed $5,000. In addition, a court is prohibited from assessing a fine if the agency withheld the records because of an “ambiguous or unsettled question” of law.

The bill would also require government agencies to designate an employee as the “custodian of records”: the point person for all public records requests and related questions. This is already a common practice across the state. Despite claims by opponents, it would not require any new hiring, since the title could be given to the staff currently charged with handling public records requests, such as public information officers, city clerks, and legal counsel.
Too often, government agencies ignore or reject records requests, knowing that it is unlikely the requester will file a lawsuit. After all, agencies have full-time legal departments and millions set aside for legal settlements, while requesters have no financial incentive at all to pursue litigation. This penalty provision will put pressure on public agencies to err on the side of disclosing public records rather than withholding them. By going beyond the current regime of attorney fee awards for a requester who wins in court, it will reduce the personal risk that Californians incur in pursuing litigation over government transparency.

To alleviate concerns that the bill may result in a flood of public records lawsuits, the bill has a built-in sunset date of 2023, essentially creating a five-year pilot program. And although opponents claim that the bill will hurt their budgets, it’s important to note that agencies that follow the law can avoid this penalty completely.

The California Assembly passed the bill with a near-unanimous vote, and a bipartisan majority in the Senate Judiciary Committee approved it. A.B. 1479 is now before the Senate Appropriations Committee, where it is up to the chair—Sen. Ricardo Lara—to bring it up for a full vote. We call on Sen. Lara and the Senate Appropriations Committee to pass A.B. 1479. It’s time we gave the California Public Records Act some teeth.

DOJ Backs Down From Overbroad J20 Warrant. But Problems Still Remain (Mi, 23 Aug 2017)
The government has backed down significantly in its fight with DreamHost over information related to the J20 protests. Late on Tuesday, DOJ filed a reply in its much publicized (and much criticized) attempt to get the hosting provider to turn over a large amount of data about a website it was hosting, disruptj20.org—a site that was dedicated to organizing and planning protests in Washington, D.C. on the day of President Trump's inauguration. In the brief, DOJ substantially reduces the amount of information it is seeking. It also specifically excludes some information from its demand, including some of the most obvious examples of overreach.

DOJ initially demanded that DreamHost turn over nearly 1.3 million IP addresses of visitors to the site. Millions of visitors—activists, reporters, or anyone who just wanted to check out the site—would have had records of their visits turned over to the government. The warrant also sought production of all emails associated with the account and unpublished content, like draft blog posts and photos. The new warrant parameters exclude most visitor logs from the demand, set a temporal limit for records from July 1, 2016 to January 20, 2017, and withdraw the demand for unpublished content.

This was a sensible response on DOJ's part—both legally and politically. But the new warrant is not without its flaws. First, it's not clear from either the warrant itself or the facts of the case whether DOJ is ordering DreamHost to turn over information on one account or multiple accounts. At a minimum, DOJ should be required to specify which accounts are subject to the order. More fundamentally, DOJ is still investigating a website that was dedicated to organizing and planning political dissent and protest. That is activity at the heart of the First Amendment's protection. If, as DOJ claims, it has no interest in encroaching on protected political activity and organizing, then it should allow a third party—like a judge, a special master, or a taint team—to review the information produced by DreamHost before it is turned over to the government. Anything less threatens to cast a further shadow on the legitimacy of this investigation.

The hearing in this case is scheduled for Thursday, August 24, 2017, at 10 a.m.
Courtroom 315 — Chief Judge Robert E. Morin
Superior Court of the District of Columbia
500 Indiana Ave NW, Washington D.C., 20001

Update: The judge has ordered DreamHost to comply with DOJ's narrowed warrant.

Washington State Tries to Crack Down on Cyberbullying — But Routine Criticism Is Blocked Instead (Tue, 22 Aug 2017)
The scourge of online harassment can scare many people away from expressing their opinions online. It's a problem that calls for sophisticated, multi-layered solutions. But a law in Washington state is demonstrating how some approaches to the issue can go terribly wrong, potentially blocking the routine criticism of politicians and others that is an integral part of a functioning democracy. EFF and the American Civil Liberties Union of Washington have filed an amicus brief in a new federal case against this law, urging the judge to recognize the critical constitutional questions it raises.

At EFF, we've been watching Washington's cyberstalking law for a long time. Among its provisions, it prohibits broadly defined "electronic communications" intended to "embarrass" someone that are made anonymously or repeatedly, or that include an obscenity. But a big part of political activism is naming and shaming people who you think should do the right thing; it's a powerful tool for getting officials to do their jobs. It doesn't take long to think of activities this law could criminalize: a politician publishing lists of questionable decisions made by an election challenger; a series of newspaper editorials arguing that a city official should be scorned for misconduct; or an activist posting multiple videos of a lawmaker doing something unsavory. All of this is important speech protected by the First Amendment, and no state law should be allowed to undermine these rights.

EFF and ACLU-WA's amicus brief asks the judge to issue a preliminary injunction blocking enforcement of this unconstitutional law. We hope the judge does the right thing here, as this is a prime example of how a well-meaning law can have terrible unintended consequences.

EFF thanks our local counsel, Judith A. Endejan of Garvey Schubert Barer, for her help in filing this brief.

As First NAFTA Round Opens in Secrecy, Digital Rights Groups Fear Another TPP (Fri, 18 Aug 2017)
The opening round of a series of negotiations over a proposed revised North American Free Trade Agreement (NAFTA) began this week in Washington, D.C. between trade representatives from the United States, Canada, and Mexico. Already it is clear that the office of the U.S. Trade Representative (USTR) has ignored our specific recommendations (to say nothing of USTR Robert Lighthizer's personal promises) about making the negotiations more open and transparent. Once again, following the failed model of the Trans-Pacific Partnership (TPP), the USTR will keep the negotiating texts secret, and in an actual regression from the TPP, it will hold no public stakeholder events alongside the first round. This may or may not set a precedent for future rounds, which will rotate among the three countries every few weeks, with a scheduled end date of mid-2018.

Although EFF has been keeping an open mind about the agreement until we have a better idea of what it will contain, the secrecy of the first negotiation round augurs poorly for what is to come. Already, the usual copyright lobbyists have descended upon the negotiations, sending a letter to the USTR this week that directly opposes the inclusion of a "fair use" copyright exception in the agreement, as EFF had suggested. This "creative industry" letter states, in relevant part:

    The three-step test strikes the appropriate balance in copyright, and any language mandating broader exceptions and limitations only serves as a vehicle to introduce uncertainty into copyright law, distort markets and weaken the rights of the small and medium businesses and creators we represent. For that reason, we strongly urge USTR to not include “balance” language similar to what appeared in the TPP or any reference to vague, open-ended limitations.

But more than two dozen public interest groups, including EFF, Creative Commons, Public Knowledge, Public Citizen, and OpenMedia, have written a letter of our own, in which we counter this argument and raise some key concerns of our own. Aside from the fact that we have been shamefully shut out of the negotiations, without any opportunity to see the texts being negotiated on our behalf, our letter also warns against the inclusion of one-sided copyright and digital trade provisions in NAFTA, such as those that had previously been part of the failed TPP:

    We also share concerns about the suitability of trade mechanisms to create prescriptive policies that govern Internet use, cultural sharing and innovation. In general, developments in technology happen quickly, and trade processes that do not keep pace with technological and social advancement may inhibit each of our respective governments from making necessary and appropriate changes to related rules, especially with regard to intellectual property regulations that impact our rights to culture and free expression.

The letter will be delivered to the trade ministries of the three countries today. You can read it in full below.