Deeplinks

From Canada to Argentina, Security Researchers Have Rights—Our New Report (Wed, 17 Oct 2018)
EFF is introducing a new Coders' Rights project to connect the work of security research with the fundamental rights of its practitioners throughout the Americas. The project seeks to support the right of free expression that lies at the heart of researchers' creation and use of computer code to examine computer systems, and to relay their discoveries among their peers and to the wider public.

To kick off the project, EFF published a whitepaper today, “Protecting Security Researchers' Rights in the Americas” (PDF), to provide the legal and policy basis for our work, outlining human rights standards that lawmakers, judges, and, most particularly, the Inter-American Commission on Human Rights should use to protect the fundamental rights of security researchers.

We started this project because hackers and security researchers have never been more important to the security of the Internet. By identifying and disclosing vulnerabilities, hackers are able to improve security for every user who depends on information systems for their daily life and work. Computer security researchers work, often independently of large public and private institutions, to analyze, explore, and fix the vulnerabilities that are scattered across the digital landscape. While most of this work is done unobtrusively by consultants or employees, some of it is done in the public interest—which earns researchers headlines and plaudits, but can also attract civil or criminal suits. They can be targeted and threatened with laws intended to prevent malicious intrusion, even when their own work is anything but malicious. The result is that security researchers work in an environment of legal uncertainty, even as their job becomes more vital to the orderly functioning of society.

Drawing on rights recognized by the American Convention on Human Rights, and examples from North and South American jurisprudence, this paper analyzes what rights security researchers have, how those rights are expressed in the Americas’ unique arrangement of human rights instruments, and how we might best interpret the requirements of human rights law—including rights of privacy, free expression, and due process—when applied to the domain of computer security research and its practitioners. In cooperation with technical and legal experts across the continent, we explain that:

- Computer programming is expressive activity protected by the American Convention on Human Rights. We explain how free expression lies at the heart of researchers’ creation and use of computer code to examine computer systems and to relay their discoveries among their peers and to the wider public.
- Courts and the law should guarantee that the creation, possession, or distribution of tools related to cybersecurity is protected by Article 13 of the American Convention on Human Rights as a legitimate act of free expression, and should not be criminalized or otherwise restricted. These tools are critical to the practice of defensive security and have legitimate, socially desirable uses, such as identifying and testing practical vulnerabilities.
- Lawmakers and judges should discourage the use of criminal law as a response to behavior by security researchers which, while technically in violation of a computer crime law, is socially beneficial. Cybercrime laws should include malicious intent and actual damage in their definitions of criminal liability.
- The “Terms of service” (ToS) of private entities have created inappropriate and dangerous criminal liability for researchers by redefining “unauthorized access” in the United States. In Latin America, under the Legality Principle, ToS provisions cannot be used to meet the vague and ambiguous standards established in criminal provisions (for example, "without authorization"). Criminal liability cannot be based on how private companies would like their services to be used. On the contrary, criminal liability must be based on laws that describe, in a precise manner, which conduct is forbidden and which is punishable.
- Penalties for crimes committed with computers should, at a minimum, be no higher than penalties for analogous crimes committed without computers. Criminal punishments should be proportionate to the crime, especially when a cybercrime causes little harm or is comparable to a minor traditional infraction.
- Proactive actions that will secure the free flow of information in the security research community are needed.

We’d like to thank EFF Senior Staff Attorney Nate Cardozo, Deputy Executive Director and General Counsel Kurt Opsahl, International Rights Director Katitza Rodríguez, Staff Attorney Jamie Lee Williams, as well as consultant Ramiro Ugarte and Tamir Israel, Staff Attorney at the Canadian Internet Policy and Public Interest Clinic at the Centre for Law, Technology and Society at the University of Ottawa, for their assistance in researching and writing this paper.

What To Do If Your Account Was Caught in the Facebook Breach (Wed, 17 Oct 2018)
Keeping up with Facebook privacy scandals is basically a full-time job these days. Two weeks ago, the company announced a massive breach with scant details. Then, this past Friday, Facebook released more information, revising earlier estimates about the number of affected users and outlining exactly what types of user data were accessed. Here are the key details you need to know, as well as recommendations about what to do if your account was affected.

30 Million Accounts Affected

The number of users whose access tokens were stolen is lower than Facebook originally estimated. When Facebook first announced this incident, it stated that attackers may have been able to steal access tokens—digital “keys” that control your login information and keep you logged in—from 50 to 90 million accounts. Since then, further investigation has revised that number down to 30 million accounts.

The attackers were able to access an incredibly broad array of information from those accounts. The 30 million compromised accounts fall into three main categories. For 15 million users, attackers accessed names and phone numbers, emails, or both (depending on what people had listed). For 14 million users, attackers accessed those two sets of information as well as extensive profile details including:

- Username
- Gender
- Locale/language
- Relationship status
- Religion
- Hometown
- Self-reported current city
- Birthdate
- Device types used to access Facebook
- Education
- Work
- The last 10 places they checked into or were tagged in
- Website
- People or Pages they follow
- Their 15 most recent searches

For the remaining 1 million users whose access tokens were stolen, attackers did not access any information.

Facebook is in the process of sending messages to affected users. In the meantime, you can also check Facebook’s Help Center to find out if your account was among the 30 million compromised—and if it was, which of the three rough groups above it fell into. Information about your account will be at the bottom in the box titled “Is my Facebook account impacted by this security issue?”

What Should You Do If Your Account Was Hit?

The most worrying potential outcome of this hack for most people is what someone might be able to do with this mountain of sensitive personal information. In particular, adversaries could use this information to turbocharge their efforts to break into other accounts, particularly by using phishing messages or exploiting legitimate account recovery flows. With that in mind, the best thing to do is stay on top of some digital security basics: look out for common signs of phishing, keep your software updated, consider using a password manager, and avoid using easy-to-guess security questions that rely on personal information.

The difference between a clumsy, obviously fake phishing email and a frighteningly convincing phishing email is personal information. The information that attackers stole from Facebook is essentially a database connecting millions of people’s contact information to their personal information, which amounts to a treasure trove for phishers and scammers. Details about your hometown, education, and places you recently checked in, for example, could allow scammers to craft emails impersonating your college, your employer, or even an old friend. In addition, the combination of email addresses and personal details could help someone break into one of your accounts on another service.

All a would-be hacker needs to do is impersonate you and pretend to be locked out of your account—usually starting with the “Forgot your password?” option you see on log-in pages. Because so many services across the web still have insecure methods of account recovery like security questions, information like birthdate, hometown, and alternate contact methods like phone numbers could give hackers more than enough to break into weakly protected accounts. Facebook stated that it has not seen evidence of this kind of information being used “in the wild” for phishing attempts or account recovery break-ins.

Facebook has also assured users that no credit card information or actual passwords were stolen (which means you don’t need to change those), but for many that is cold comfort. Credit card numbers and passwords can be changed, but the deeply private insights revealed by your 15 most recent searches or 10 most recent locations cannot be so easily reset.

What Do We Still Need To Know?

Because it is cooperating with the FBI, Facebook cannot discuss any findings about the hackers’ identity or motivations. However, from Facebook’s more detailed description of how the attack was carried out, it’s clear that the attackers were determined and coordinated enough to find an obscure, complex vulnerability in Facebook’s code. It’s also clear that they had the resources necessary to automatically exfiltrate data on a large scale. We still don’t know what exactly the hackers were after: were they targeting particular individuals or groups, or did they just want to gather as much information as possible? It’s also unclear if the attackers abused the platform in ways beyond what Facebook has reported, or used the particular vulnerability behind this attack to launch other, more subtle attacks that Facebook has not yet found.

There is only so much individual users can do to protect themselves from this kind of attack and its aftermath. Ultimately, it is Facebook’s and other companies’ responsibility not only to protect against these kinds of attacks, but also to avoid retaining and making vulnerable so much personal information in the first place.

Lawsuit Seeking to Unmask Contributors to ‘Shitty Media Men’ List Would Violate Anonymous Speakers’ First Amendment Rights (Tue, 16 Oct 2018)
A lawsuit filed in New York federal court last week against the creator of the “Shitty Media Men” list and its anonymous contributors exemplifies how individuals often misuse the court system to unmask anonymous speakers and chill their speech. That’s why we’re watching this case closely, and we’re prepared to advocate for the First Amendment rights of the list’s anonymous contributors. On paper, the lawsuit is a defamation case brought by the writer Stephen Elliott, who was named on the list. The Shitty Media Men list was a Google spreadsheet shared via link and made editable by anyone, making it particularly easy for anonymous speakers to share their experiences with men identified on the list. But a review of the complaint suggests that the lawsuit is focused more broadly on retaliating against the list’s creator, Moira Donegan, and publicly identifying those who contributed to it. For example, after naming several anonymous defendants as Jane Does, the complaint stresses that “Plaintiff will know, through initial discovery, the names, email addresses, pseudonyms and/or ‘Internet handles’ used by Jane Doe Defendants to create the List, enter information into the List, circulate the List, and otherwise publish information in the List or publicize the List.” In other words, Elliott wants to obtain identifying information about anyone and everyone who contributed to, distributed, or called attention to the list, not just those who provided information about him specifically.

The First Amendment, however, protects anonymous speakers like the contributors to the Shitty Media Men list, who were trying to raise awareness about what they see as a pervasive problem: predatory men in media. As the Supreme Court has ruled, anonymity is a historic and essential way of speaking on matters of public concern—it is a “shield against the tyranny of the majority.” Anonymity is particularly critical for people who need to communicate honestly and openly without fear of retribution. People rely on anonymity in a variety of contexts, including reporting harassment, violence, and other abusive behavior they’ve experienced or witnessed. This was the exact purpose behind the Shitty Media Men list. Donegan, after learning she would be identified as the creator of the list, came forward and wrote that she “wanted to create a place for women to share their stories of harassment and assault without being needlessly discredited or judged. The hope was to create an alternate avenue to report this kind of behavior and warn others without fear of retaliation.” It’s easy to understand why contributors to the list chose to remain anonymous, and they very likely would not have provided the information had they not been able to do so. By threatening that anonymity, lawsuits like this one risk discouraging anyone in the future from creating similar tools that share information and warn people about violence, abuse, and harassment. To be clear, our courts do allow plaintiffs to pierce anonymity if they can show a need to do so in order to pursue legitimate claims. That does not seem to be the case here, because the claims against Donegan appear to be without merit.
Given that she initially created the spreadsheet as a platform to allow others to provide information, Donegan is likely immune from suit under Section 230, the federal law that protects creators of online forums like the “Shitty Media Men” list from being treated as the publisher of the information added by other users, here the list’s contributors. And even if Donegan did in fact create the content about Elliott, she could still argue that the First Amendment requires that he show that the allegations were not only false but also made with actual malice. EFF has long fought for robust protections for anonymous online speakers, representing speakers in court cases and also pushing courts to adopt broad protections for them. Given the potential dangers to anonymous contributors to this list and the thin allegations in the complaint, we hope the court hearing the lawsuit quickly dismisses the case and protects the First Amendment rights of the speakers who provided information to it. We also applaud Google, which has said that it will fight any subpoenas seeking information on its users who contributed to the list.  EFF will continue to monitor the case and seek to advocate for the First Amendment rights of those who contributed to the list should it become necessary. If you contributed to the list and are concerned about being identified or otherwise have questions, contact us at info@eff.org. As with all inquiries about legal assistance from EFF, the attorney/client privilege applies, even if we can’t take your case.

Federal Circuit (Finally) Makes Briefs Immediately Available to the Public (Tue, 16 Oct 2018)
In a victory for transparency, the Federal Circuit has changed its policies to give the public immediate access to briefs. Previously, the court had marked submitted briefs as “tendered” and withheld them from the public pending review by the Clerk’s Office. That process sometimes took a number of days. EFF wrote a letter [PDF] asking the court to make briefs available as soon as they are filed. The court has published new procedures [PDF] that will allow immediate access to submitted briefs. Regular readers might note that this is the second time we have announced this modest victory. Unfortunately, our earlier blog post was wrong and arose out of a miscommunication with the court (the Clerk’s Office informed us of our mistake and we corrected that post). This time, the new policy clearly provides for briefs to be immediately available to the public. The announcement states: The revised procedure will allow for the immediate filing and public availability of all electronically-filed briefs and appendices. … As of December 1, 2018, when a party files a brief or appendix with the court, the document will immediately appear on the public docket as filed, with a notation of pending compliance review. In our letter to the Federal Circuit, we had explained that the public’s right of access to courts includes a right to timely access. The Federal Circuit is the federal court of appeal that hears appeals in patent cases from all across the country, and many of its cases are of interest to the public at large. We are glad that the court will now give the press and the public immediate access to filed briefs. Overall, the Federal Circuit has a good record on transparency. The court has issued rulings making it clear that it will only allow material to be sealed for good reason. The court’s rules of practice require parties to file a separate motion if they want to seal more than 15 consecutive words in a motion or a brief. The Federal Circuit’s new filing policy brings its docketing practices in line with this record of transparency and promotes timely access to court records.

Ten Legislative Victories You Helped Us Win in California (Tue, 16 Oct 2018)
Your strong support helped us persuade California’s lawmakers to do the right thing on many important technology bills debated on the chamber floors this year. With your help, EFF won an unprecedented number of victories, supporting good bills and stopping those that would have hurt innovation and digital freedoms. Here’s a list of victories you helped us get the legislature to pass and the governor to sign, through your direct participation in our advocacy campaigns and your other contributions to support our work. Net Neutrality for California Our biggest win of the year, the quest to pass California’s net neutrality law and set a gold standard for the whole country, was hard-fought. S.B. 822 prevents Internet service providers not only from blocking or interfering with traffic, but also from prioritizing their own services in ways that discriminate. California made a bold declaration to support the nation’s strongest protections of a free and open Internet. As the state fights for the ability to enact its law—following an ill-conceived legal challenge from the Trump administration—you can continue to let lawmakers know that you support its principles. Increased Transparency into Local Law Enforcement Policies Transparency is the foundation of trust. Thanks to the passage of S.B. 978, California police departments and sheriff’s offices will now be required to post their policies and training materials online, starting in January 2020. The California Commission on Peace Officer Standards and Training will be required to make its vast catalog of trainings available as well. This will encourage better and more open relationships between law enforcement agencies and the communities they serve. Increasing public access to police materials about training and procedures benefits everyone by making it easier to understand what to expect from a police encounter. It also helps ensure that communities have a better grasp of new police surveillance technologies, including body cameras and drones. Public Access to Footage from Police Body Cameras Cameras worn by police officers are increasingly common. While intended to promote police accountability, unregulated body cams can instead become high-tech police snooping devices. Some police departments have withheld recordings of high-profile police use of force against civilians, even when communities demand release. Prior to this bill’s introduction, Los Angeles, for example, had a policy that didn’t allow for any kind of public access at all. The public now has the right to access those recordings. A.B. 748 ensures that starting July 1, 2019, you will have the right to access this important transparency resource. EFF sent a letter stating its support for this law, which makes it more likely that body-worn cameras will be used as a tool for holding officers accountable, rather than a tool of police surveillance against the public. Privacy Protections for Cannabis Users As the legal marijuana market develops in California, it is critical that the state protects the data privacy rights of cannabis users. A.B. 2402 is a step in the right direction, providing modest but vital privacy measures. A.B. 2402 stops cannabis distributors from sharing the personal information of their customers without their consent, granting cannabis users an important data privacy right. The bill also prohibits dispensaries from discriminating against a customer who chooses to withhold that consent.
As more vendors use technology such as apps and websites to market marijuana, the breadth of their data collection continues to grow. News reports have found that dispensaries are scanning and retaining driver license data, as well as requiring names and phone numbers before purchases. This new law ensures that users can deny consent to having their personal information shared with other companies, without penalty. Better DNA Privacy for Youths DNA information reveals a tremendous amount about a person – their medical conditions, their ancestry, and many other immutable traits – and handing over a sample to law enforcement has long-lasting consequences. Unfortunately, at least one police agency has demanded DNA from youths in circumstances that are confusing and coercive. A.B. 1584 makes sure that before this happens, kids will have an adult in the room to explain the implications of handing a DNA sample over to law enforcement. Once this law takes effect in January 2019, law enforcement officials must have the consent of a parent, guardian, or attorney, in addition to consent from the minor, to collect a DNA sample. EFF wrote a letter supporting this bill as a vital protection for California’s youths, particularly in light of press reports about police demanding DNA from young people without a clear reason. In one case, police approached kids coming back from a basketball game at a rec center and had them sign forms “consenting” to cheek swabs. A.B. 1584 adds sensible privacy protections for children, to ensure that they fully understand how police may use these DNA samples. It also guarantees that, if the sample doesn’t implicate them in a crime, it will be deleted from the system promptly. Guaranteed Internet Access for Kids in Foster Care and Juvenile Detention Internet access is vital to succeeding in today’s world. With your support, we persuaded lawmakers to recognize how important it is for some of California’s most vulnerable young people—those involved in the child welfare and juvenile justice systems—to be able to access the Internet, as a way to further their education. A.B. 2448 guarantees that access. EFF testified before a Senate committee to advocate for the 2017 version of this bill, which the governor vetoed with the condition that he would sign a narrower text. The second version, however, passed muster with Gov. Brown. Throughout the process, EFF launched email campaigns and enlisted the help of tech companies, including Facebook, to lend their support to the effort. This law affirms that some of the state’s most at-risk young people have access to all the resources the Internet has to offer. And it shows the country that if California can promise Internet access to disadvantaged youth, then other states can, too. Better Privacy Protections for ID Scanning Getting your ID card checked at a bar? The bouncer may be extracting digital information from your ID, and the bar may then be sharing that information with others. California law limits bars from sharing information they collected through swiping your ID, but some companies and police departments believed they could bypass those safeguards as long as IDs were “scanned” rather than “swiped.” A.B. 2769 closes this loophole. It makes sure that you have the same protections against having your information shared without your consent whether the bouncer checking you out is swiping your card or scanning it. EFF sent a letter in support of this bill to the governor.
People shouldn’t lose the right to consent to data sharing simply because the place they go chooses a different method of checking their identification. Thankfully, the governor signed this common-sense bill. Open Access to Government-funded Research A.B. 2192 was a huge victory for open access to knowledge in the state of California. It gives everyone access to research that’s been funded by the government within a year of its publication date. EFF went to Sacramento to testify in support of this bill. We also wrote to explain that it would have at most a negligible financial impact on the state budget to require researchers to make their reports open to the public. This prompted lawmakers to reconsider the bill after previously setting it aside. A.B. 2192 is a good first step. EFF would like to see other states adopt similar measures. We also want California to take further strides to make research available to other researchers looking to advance their work, and to the general public. No Government Committee Deciding What is “Fake News” Fighting “fake news” has become a priority for a lot of lawmakers, but S.B. 1424, a bill EFF opposed, was not the way to do it. The bill would have set up a state advisory committee to recommend ways to “mitigate” the spread of “fake news.” That would have created an excessive risk of new laws that restrict the First Amendment rights of Californians. EFF sent a letter to the governor, outlining our concerns about having the government be the arbiter of what is true and what isn’t. This is an especially difficult task when censors examine complex speech, such as parody and satire. Gov. Brown vetoed this bill, ultimately concluding that it was not needed. “As evidenced by the numerous studies by academic and policy groups on the spread of false information, the creation of a statutory advisory group to examine this issue is not necessary,” he wrote. Helped Craft a Better Bot-Labeling Law California's new bot-labeling bill, S.B. 1001, initially included overbroad language that would have swept up bots used for ordinary and protected speech activities. Early drafts of the bill would have regulated accounts used for poetry, political speech, or satire. The original bill also created a takedown system that could have been used to censor or discredit important voices, like civil rights leaders or activists. EFF worked with the bill's sponsor, Senator Robert Hertzberg, to remove the dangerous language and think through the original bill's unintended negative consequences. We thank the California legislature for hearing our concerns and amending this bill. On to 2019! You spoke, and California’s legislature and governor listened. In 2018, we made great progress for digital liberty. With your help, we look forward to more successes in 2019. Thank you!

New Witness Panel Tells Congress How to Protect Consumer Data Privacy (Thu, 11 Oct 2018)
Yesterday’s Senate Commerce Committee hearing on consumer data privacy was a welcome improvement. The last time the Committee convened around this topic, all of the witnesses were industry and corporate representatives. This time, we were happy to see witnesses from consumer advocacy groups and the European Union, who argued for robust consumer privacy laws on this side of the Atlantic.

The Dangers of Rolling Back State Privacy Protections

Last time, the panel of industry witnesses (Amazon, Apple, AT&T, Charter, Google, and Twitter) all testified in favor of a federal law to preempt state data privacy laws, such as California’s new Consumer Privacy Act (CCPA). This time was different. Chairman Thune kicked off the hearing by reminding the Committee of the importance of hearing from independent stakeholders and experts. We were also glad to hear Chairman Thune say that industry self-regulation is not enough to protect consumer privacy, and that new standards are needed.

The first witness forcefully argued that strong consumer privacy laws do not hurt business. Alastair Mactaggart, who helped pass the CCPA, reminded the Committee that he is a businessman with several successful companies operating in the Bay Area alongside the tech giants. He argued that the CCPA is not anti-business. Indeed, the fact that no major tech companies have made plans to pull out of Europe after the watershed GDPR went into effect earlier this year is proof that business can co-exist with robust privacy protections. The CCPA empowers the California Attorney General to enact—and change—regulations to address evolving tech and other issues. Mactaggart argued that this flexibility is designed to ensure that future innovators can enter the market and compete with the existing giants, while also ensuring that the giants cannot exploit an overlooked loophole in the law. While we have concerns about the CCPA that the California legislature must fix in 2019, we also look forward to participating in the Attorney General’s process to help make new rules as strong as possible.

The President and CEO of the Center for Democracy & Technology, Nuala O’Connor, acknowledged that some businesses want a single federal data privacy law that preempts all state data privacy laws, to avoid the challenges of complying with a patchwork of state laws. O’Connor cautioned the Committee that the “price of pre-emption would be very, very high”—meaning any federal law that shuts down state laws must provide gold-standard privacy protection. A single weak federal privacy law will be worse for consumers than a patchwork of robust state laws. As explained by Laura Moy, Executive Director and Adjunct Professor of Law at the Georgetown Law Center on Privacy & Technology, a federal law should be a floor, not a ceiling. As we’ve said before, current state laws in Vermont and Illinois, in addition to California, have already created strong protections for user privacy, with more states to follow. If Congress enacts weaker federal data privacy legislation that blocks such stronger state laws, the result will be a massive step backward for user privacy.

Asking The Right Questions

We were heartened that several Senators understood the complexity of creating a strong, comprehensive federal consumer privacy framework, and are asking the right questions. In his opening statement, Senator Markey stated that a new law must include, at minimum, “Knowledge, Notice, and No”: Knowledge of what data is being collected, Notice of how that data is being used, and the ability to say “No.” This is a great starting point, and we look forward to seeing his draft of consumer protection legislation. Senator Duckworth asked the witnesses if it is too soon to know if existing laws and regulations are working, and wanted to know how Congress should assess the impact on consumer privacy. These are hard questions, but the right ones.

In the hearing with company representatives two weeks ago, Senator Schatz questioned whether companies were coming to Congress simply to block state privacy laws, and raised the prospect of creating an actual federal privacy regulator with broad authority. This time, Senator Schatz again accused some of the companies of trying to “do the minimum” for their consumers, focusing his questions on adequate and robust enforcement. While all the witnesses agreed that robust rulemaking from the FTC is necessary, it is not clear that the current enforcement or penalty structure is where it needs to be. O’Connor said that only 60 employees at the FTC are tasked with enforcing consumer privacy for all of the United States, which is not nearly enough. Senator Schatz also called for stiffer financial penalties, as under the GDPR, explaining that even a $22.5 million fine is only a few hours of revenue for Google.

Right to be Let Alone

Dr. Andrea Jelinek, Chair of the European Data Protection Board, reminded the Committee of the writings of U.S. Supreme Court Justice Louis Brandeis. Long before he was on the Court, Brandeis wrote in the Harvard Law Review in 1890, “Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual … the right ‘to be let alone’ … Numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’” Technology has changed and continues to change, but the right of an individual to privacy and to be let alone has not. Congress should continue to allow the states to protect their citizens, even as it discusses how to build a stronger national framework that supports these efforts.

The Google+ Bug Is More About The Cover-Up Than The Crime (Thu, 11 Oct 2018)
Earlier this week, Google dropped a bombshell: in March, the company discovered a “bug” in its Google+ API that allowed third-party apps to access private data from its millions of users. The company confirmed that up to 500,000 people were “potentially affected.” Google’s mishandling of data was bad. But its mishandling of the aftermath was worse. Google should have told the public as soon as it knew something was wrong, giving users a chance to protect themselves and policymakers a chance to react. Instead, amidst a torrent of outrage over the Facebook-Cambridge Analytica scandal, Google decided to hide its mistakes from the public for over half a year.

What Happened?

The story behind Google’s latest snafu bears a strong resemblance to the design flaw that allowed Cambridge Analytica to harvest millions of users’ private Facebook data. According to a Google blog post, an internal review discovered a bug in one of the ways that third-party apps could access data about a user and their friends. Quoting from the post:

Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API. The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public.

It’s important to note that Google “found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.” Nevertheless, potential exposure of user data on such a large scale is more than enough to cause concern. A full list of the vulnerable data points is available here, and you can update the privacy settings on your own account here.

What would this bug look like in practice? Suppose Alice is friends with Bob on Google+. Alice has shared personal information with her friends, including her occupation, relationship status, and email. Then, her friend Bob decides to connect to a third-party app. He is prompted to give that app access to his own data, plus “public information” about his friends, and he clicks “ok.” Before March, the app would have been granted access to all the details—not marked public—that Alice had shared with Bob. Similar to Facebook’s Cambridge Analytica scandal, a bad API made it possible for third parties to access private data about people who never had a chance to consent.

Google also announced in the same post that it would begin phasing out the consumer version of Google+, heading for a complete shutdown in August 2019. The company cited “low usage” of the service. This bug’s discovery may have been the final nail in the social network’s coffin.

Should You Be Concerned?

We know very little about whose data was taken, if any, or by whom, so it’s hard to say. For many people, the data affected by the bug may not be very revealing. However, when combined with other information, it could expose some people to serious risks. Email addresses, for example, are used to log in to most services around the web. Since many of those services still have insecure methods of account recovery, information like birthdays, location history, occupations, and other personal details could give hackers more than enough to break into weakly protected accounts. And a database of millions of email addresses linked to personal information would be a treasure trove for phishers and scammers.
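To make the Alice-and-Bob scenario concrete, here is a minimal sketch of the kind of server-side filtering that appears to have gone wrong. The field names, data structures, and logic below are invented for illustration; this is not Google's actual API code.

```python
# Hypothetical illustration of the reported Google+ API flaw. A third-party
# app authorized by Bob asks for profile fields of Bob's friend Alice.
# All field names and structures here are invented for this sketch.

ALICE_PROFILE = {
    "name":         {"value": "Alice", "visibility": "public"},
    "occupation":   {"value": "Nurse", "visibility": "friends"},
    "relationship": {"value": "Single", "visibility": "friends"},
    "email":        {"value": "alice@example.com", "visibility": "friends"},
}

def fields_for_app(profile):
    """Correct behavior: an app acting on Bob's behalf sees only the
    fields Alice herself marked as public."""
    return {name: field["value"] for name, field in profile.items()
            if field["visibility"] == "public"}

def fields_for_app_buggy(profile, authorizing_user_is_friend):
    """The reported bug, roughly: the filter keyed off what the authorizing
    user (Bob) could see, rather than what Alice marked as public."""
    return {name: field["value"] for name, field in profile.items()
            if field["visibility"] == "public"
            or (authorizing_user_is_friend and field["visibility"] == "friends")}

print(fields_for_app(ALICE_PROFILE))
# {'name': 'Alice'}
print(fields_for_app_buggy(ALICE_PROFILE, authorizing_user_is_friend=True))
# {'name': 'Alice', 'occupation': 'Nurse', 'relationship': 'Single',
#  'email': 'alice@example.com'}
```

The point of the sketch is that Alice never consented to an app seeing her friends-only fields; under the buggy check, Bob's authorization was effectively treated as if it were hers.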
Beyond phishing, the combination of real names, gender identity, relationship status, and occupation with residence information could pose serious risks to certain individuals and communities. Survivors of domestic violence or victims of targeted harassment may be comfortable sharing their residence with trusted friends, but not the public at large. A breach of these data could also harm undocumented migrants, or LGBTQ people living in countries where their relationships are illegal.

Based on our reading of Google’s announcement, there’s no way to know how many people were affected. Since Google deletes API logs after two weeks, the company was only able to audit API activity for the two weeks leading up to the bug’s discovery. Google has said that “up to 500,000” accounts might have been affected, but that’s apparently based on an audit of a single two-week slice of time. The company hasn’t revealed when exactly the vulnerability was introduced. Even worse, many of the people affected may not even know they have a Google+ account. Since the platform’s launch in 2011, Google has aggressively pushed users to sign up for Google+, and sometimes even required a Google+ account to use other Google services like Gmail and YouTube. Contrary to all the jokes about its low adoption, this bug shows that Google+ accounts have still represented a weak link in unwitting users’ online security and privacy.

It’s Not The Crime, It’s The Cover-Up

Google never should have put its users at risk. But once it realized its mistake, there was only one correct choice: fix the bug and tell its users immediately. Instead, Google chose to keep the vulnerability secret, perhaps waiting for the backlash against Facebook to blow over. Google wrote a pitch when it was supposed to write an apology. The blog post announcing the breach is confusing, cluttered, and riddled with bizarre doublespeak. It introduces “Project Strobe,” and is subtitled “Protecting your data...” as if screwing up an API and hiding it for months were somehow a bold step forward for consumer privacy. In a section headed “There are significant challenges in creating and maintaining a successful Google+ product that meets consumers’ expectations,” the company describes the breach, then gives a roundabout, legalistic excuse for not telling the public about it sooner. Finally, the post describes improvements to Google Account’s privacy permissions interface and Gmail’s and Android’s API policies, which, while nice, are unrelated to the breach in question. Overall, the disclosure does not give the impression of a contrite company that has learned its lesson. Users don’t need to know the ins and outs of Google’s UX process; they need to be convinced that this won’t happen again.

Public trust in Silicon Valley is at an all-time low, and politicians are in a fervor, throwing around dangerously irresponsible ideas that threaten free expression on the Internet. In this climate, Google needs to be as transparent and trustworthy as possible. Instead, incidents like this hurt its users and violate their privacy and security expectations.

When Police Misuse Their Power to Control News Coverage, They Shouldn’t Be Allowed To Use Probable Cause As a Shield Against Claims of First Amendment Violations (Thu, 11 Oct 2018)
Journalists face increasingly hostile conditions covering public protests, presidential rallies, corruption, and police brutality in the course of work as watchdogs over government power. A case before the U.S. Supreme Court threatens press freedoms even further by potentially giving the government freer rein to arrest media people in retaliation for publishing stories or gathering news the government doesn’t like. EFF joined the National Press Photographers Association and 30 other media and nonprofit free speech organizations in urging the court to allow lawsuits by individuals who show they were arrested in retaliation for exercising their rights under the First Amendment—for example, in the case of the news media by newsgathering, interviewing protestors, recording events—even if the police had probable cause for the arrests. Instead of foreclosing such lawsuits, we urged the court to adopt a procedure whereby when there’s an allegation of First Amendment retaliation, the burden shifts to police to show not only the presence of probable cause, but that they would have made the arrests anyway, regardless of the targets’ First Amendment activities. EFF and its partners filed a brief with the Supreme Court October 9, 2018. The court’s decision in this case may well have far-reaching implications for all First Amendment rights, including freedom of the press. Examples abound of journalists and news photographers being arrested while doing their jobs, swept up by police as they try to cover violent demonstrations and confrontations with law enforcement—where press scrutiny is most needed. Last year 34 journalists were arrested while seeking to document or report news. Nine journalists covering violent protests around President Trump’s inauguration were arrested. Police arrested reporters covering the Black Lives Matter protests in Ferguson, Missouri. Ninety journalists were arrested covering Occupy Wall Street protests between 2011 and 2012. Arrests designed to simply halt or to punish one’s speech are common: police haul journalists and photographers into wagons amid protests, or while they’re videotaping police, or for persistently asking questions of a public servant. A tenacious reporter in West Virginia was arrested in the state capitol building for shouting questions to the Secretary of Health and Human Services as he walked through a public hallway. The journalist was charged with disrupting a government process, but, as is typical, the charge was dropped after prosecutors found no crime had been committed. This “catch and release” technique is not unusual, and it disrupts news gathering and chills the media from doing its job. The case at issue before the Supreme Court, Nieves v. Bartlett, doesn’t involve the press, but the potential impact on First Amendment rights broadly and press freedoms, in particular, is clear. The lawsuit involves an Alaska man who sued police for false arrest and imprisonment, and retaliatory arrest, alleging he was arrested for disorderly conduct in retaliation for his refusal to speak with a police officer. The U.S. Court of Appeals for the Ninth Circuit upheld the dismissal of all but the retaliatory arrest charge. The court said that while there was probable cause for the arrest, that didn’t preclude the man from pursuing his claim that his arrest was in retaliation for exercising his First Amendment right to not speak to police. This was the right decision, and we urge the Supreme Court to uphold it. 
For the brief: https://www.eff.org/document/nieves-v-bartlett-amicus-brief

For more on EFF’s work supporting the First Amendment right to record the police:
https://www.eff.org/deeplinks/2015/04/want-record-cops-know-your-rights
https://www.eff.org/press/releases/recording-police-protected-first-amendment-eff-tells-court
https://www.eff.org/deeplinks/2017/09/eff-court-first-amendment-protects-right-record-first-responders
https://www.eff.org/cases/fields-v-city-philadelphia

Related Cases: Fields v. City of Philadelphia

EU Internet Censorship Will Censor the Whole World's Internet (Wed, 10 Oct 2018)
As the EU advances the new Copyright Directive towards becoming law in its 28 member-states, it's important to realise that the EU's plan will end up censoring the Internet for everyone, not just Europeans.

A quick refresher: Under Article 13 of the new Copyright Directive, anyone who operates a (sufficiently large) platform where people can post works that might be copyrighted (like text, pictures, videos, code, games, audio, etc.) will have to crowdsource a database of "copyrighted works" that users aren't allowed to post, and block anything that seems to match one of the database entries.

These blacklist databases will be open to all comers (after all, anyone can create a copyrighted work): that means that billions of people around the world will be able to submit anything to the blacklists, without having to prove that they hold the copyright to their submissions (or, for that matter, that their submissions are copyrighted). The Directive does not specify any punishment for making false claims to a copyright, and a platform that decided to block someone for making repeated fake claims would run the risk of being liable to the abuser if a user posts a work to which the abuser does own the rights.

The major targets of this censorship plan are the social media platforms, and it's the "social" that should give us all pause. That's because the currency of social media is social interaction between users. I post something, you reply, a third person chimes in, I reply again, and so on. Now, let's take a hypothetical Twitter discussion between three users: Alice (an American), Bob (a Bulgarian) and Carol (a Canadian).

Alice posts a picture of a political march: thousands of protesters and counterprotesters, waving signs. As is common around the world, these signs include copyrighted images, whose use is allowed under US "fair use" rules that permit parody. Because Twitter enables users to communicate significant amounts of user-generated content, it will fall within the ambit of Article 13.

Bob lives in Bulgaria, an EU member-state whose copyright law does not permit parody. He might want to reply to Alice with a quote from the Bulgarian dissident Georgi Markov, whose works were translated into English in the late 1970s and are still in copyright.

Carol, a Canadian who met Bob and Alice through their shared love of Doctor Who, decides to post a witty meme from "The Mark of the Rani," a 1985 episode in which Colin Baker travels back to witness the Luddite protests of the 19th Century.

Alice, Bob and Carol are all expressing themselves through use of copyrighted cultural works, in ways that might not be lawful in the EU's most speech-restrictive copyright jurisdictions. But because (under today's system) the platform is typically only required to respond to copyright complaints when a rightsholder objects to the use, everyone can see everyone else's posts and carry on a discussion using tools and modes that have become the norm in all our modern, digital discourse.

But once Article 13 is in effect, Twitter faces an impossible conundrum. The Article 13 filter will be tripped by Alice's lulzy protest signs, by Bob's political quotes, and by Carol's Doctor Who meme, but suppose that Twitter is only required to block Bob from seeing these infringing materials. Should Twitter hide Alice and Carol's messages from Bob? If Bob's quote is censored in Bulgaria, should Twitter go ahead and show it to Alice and Carol (but hide it from Bob, who posted it)?
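A crude sketch of the per-view decision a platform would face under such a filter may help show why this is unworkable. The claims database, jurisdiction rules, and matching logic below are entirely hypothetical, not a description of any real system:

```python
# Hypothetical sketch of an Article 13-style filter deciding whether a viewer
# may see a post that matches a claimed work. Every entry here is invented.

CLAIMS_DB = {
    # claimed work -> countries where an exception (parody, quotation,
    # fair use/dealing) is assumed to apply in this toy example
    "protest-sign-image": {"US"},
    "georgi-markov-quote": set(),
    "doctor-who-still": {"US", "CA"},
}

def may_display(matched_work, viewer_country):
    """Block unless the viewer's jurisdiction has an applicable exception."""
    if matched_work not in CLAIMS_DB:
        return True  # no claim on file for this work
    return viewer_country in CLAIMS_DB[matched_work]

# Alice's photo of the march, as seen from the three friends' countries:
for country in ("US", "BG", "CA"):
    print(country, may_display("protest-sign-image", country))
# US True / BG False / CA False: the same lawful post appears and disappears
# depending on where the platform guesses each reader is sitting.
```

Even this toy version has to guess each reader's location and maintain a per-country map of copyright exceptions for every claimed work, and it says nothing about handling false or abusive claims.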
The questions multiply. What about when Bob travels outside of the EU and looks back on his timeline? Or when Alice goes to visit Bob in Bulgaria for a Doctor Who convention and tries to call up the thread? Bear in mind that there's no way to be certain where a user is visiting from, either. The dangerous but simple option is to subject all Twitter messages to European copyright censorship, a disaster for online speech. And it’s not just Twitter, of course: any platform with EU users will have to solve this problem. Google, Facebook, Linkedin, Instagram, Tiktok, Snapchat, Flickr, Tumblr -- every network will have to contend with this.

With Article 13, the EU would create a system where copyright complainants get a huge stick to beat the internet with, where people who abuse this power face no penalties, and where platforms that err on the side of free speech will get that stick right in the face. As the EU's censorship plan works its way through the next steps on the way to becoming binding across the EU, the whole world has a stake -- but only a handful of appointed negotiators get a say. If you are a European, the rest of the world would be very grateful indeed if you would take a moment to contact your MEP and urge them to protect us all in the new Copyright Directive.

(Image: The World Flag, CC-BY-SA)

Chicago Should Reject a Proposal for Private-Sector Face Surveillance (Tue, 09 Oct 2018)
A proposed amendment to the Chicago municipal code would allow businesses to use face surveillance systems that could invade biometric and location privacy, and violate a pioneering state privacy law adopted by Illinois a decade ago. EFF joined a letter with several allied privacy organizations explaining our concerns, which include issues with both the proposed law and the invasive technology it would irresponsibly expand.

At its core, facial recognition technology is an extraordinary menace to our digital liberties. Unchecked, the expanding proliferation of surveillance cameras, coupled with constant improvements in facial recognition technology, can create a surveillance infrastructure that the government and big companies can use to track everywhere we go in public places, including who we are with and what we are doing. This system will deter law-abiding people from exercising their First Amendment rights in public places. Given continued inaccuracies in facial recognition systems, many people will be falsely identified as dangerous or wanted on warrants, which will subject them to unwanted—and often dangerous—interactions with law enforcement. This system will disparately burden people of color, who suffer a higher “false positive” rate due to additional flaws in these emerging systems. In short, police should not be using facial recognition technology at all. Nor should businesses that wire their surveillance cameras into police spying networks.

Moreover, the Chicago ordinance would violate the Illinois Biometric Information Privacy Act (BIPA). This state law, adopted by Illinois in 2008, is a groundbreaking measure that set a national standard. It requires companies to gain informed, opt-in consent from any individual before collecting biometric information from that person, or disclosing it to a third party. It also requires companies to store biometric information securely, sets a three-year limit on retaining information before it must be deleted, and empowers individuals whose rights are violated to enforce its provisions in court.

Having overcome several previous attempts to rescind or water down its requirements at the state level, BIPA now faces a new threat in a recently proposed municipal amendment in Chicago. The proposal to add a section on “Face Geometry Data” to the city’s municipal code would allow businesses to use controversial and discriminatory face surveillance systems pursuant to licensing agreements with the Chicago Police Department. As the letter we joined makes clear, the proposal suffers from numerous defects.

For example, the proposal does not effectively limit authorized uses. While it prohibits “commercial uses” of biometric information, it authorizes “security purposes.” That distinction is meaningless in the context of predictable commercial security efforts, like for-profit mining and deployment of face recognition data to prevent shoplifting. The attempt to differentiate permissible from impermissible uses also rings hollow because the proposal in no way restricts how biometric data can be shared with other companies, who might not be subject to Chicago’s municipal regulation.

Contradicting the consent required by Illinois BIPA, the Chicago ordinance would allow businesses to collect biometric information from customers and visitors without their consent, by merely posting signs giving patrons notice about some—but not all—of their surveillance practices. In particular, the required notice would not need to address corporate use of biometric information beyond in-store collection. It would also fail to inform customers who are visually impaired.

The Chicago proposal also invites misuse by the police department, which would face no reporting requirements. Transparency is critical, especially given Chicago’s unfortunate history of racial profiling and other police misconduct (which includes unconstitutionally detaining suspects without access to counsel, and torturing hundreds of African-American suspects into false confessions). Even in cities with fewer historical problems, police secrecy is incompatible with the trend elsewhere across the country towards greater transparency and accountability in local policing.

Also, despite the documented susceptibility of face recognition systems to discrimination and bias, the Chicago ordinance would not require any documentation of, for instance, how often biometric information collected from businesses may be used to inaccurately identify a supposed criminal suspect. And it would violate BIPA’s requirements for data retention limits and secure data storage.

We oppose the proposed municipal code amendment in Chicago. We hope you will join us in encouraging the city’s policymakers to reject the proposal. It would violate existing and well-established state law. More importantly, businesses working hand-in-glove with police surveillance centers should not be imposing facial recognition on their patrons—especially under an ordinance as unprotective as the one proposed in Chicago.

What's Next For Europe's Internet Censorship Plan? (Mon, 08 Oct 2018)
Last month, a key European vote brought the EU much closer to a system of universal mass censorship and surveillance, in the name of defending copyright. Members of the EU Parliament voted to advance the new Copyright Directive, even though it contained two extreme and unworkable clauses: Article 13 ("Censorship Machines"), which would filter everything everyone posts to online platforms to see if it matches a crowdsourced database of "copyrighted works" that anyone could add anything to; and Article 11 ("The Link Tax"), a ban on quoting more than one word from an article when linking to it, unless you are using a platform that has paid for a linking license. The link tax provision allows, but does not require, member states to create exceptions and limitations to protect online speech.

With the vote out of the way, the next step is the "trilogues." These closed-door meetings are held between representatives from European national governments, the European Commission, and the European Parliament. This is the last time the language of the Directive can be substantially altered without a (rare) second Parliamentary debate. Normally the trilogues are completely opaque. But Julia Reda, the German MEP who has led the principled opposition to Articles 11 and 13, has committed to publishing all of the negotiating documents from the Trilogues as they take place (Reda is relying on a recent European Court of Justice ruling that upheld the right of the public to know what's going on in the trilogues).

This is an incredibly important moment. The trilogues are not held in secret because the negotiators are sure that you'll be delighted with the outcome and don't want to spoil the surprise. They're meetings where well-organised, powerful corporate lobbyists' voices are heard and the public is unable to speak. By making these documents public, Reda is changing the way European law is made, and not a moment too soon.

Articles 11 and 13 are so defective as to be unsalvageable; when they are challenged in the European Court of Justice, they may well be struck down. In the meantime, the trilogues — if they do their job right — must struggle to clarify their terms so that some of their potential for abuse and their unnavigable ambiguity is resolved. The trilogues have it in their power to expand on the Directive's hollow feints toward due process and proportionality and produce real, concrete protections that will minimise the damage this terrible law wreaks while we work to have it invalidated by the courts.

Existing copyright filters (like YouTube's ContentID system) are set up to block people who attract too many copyright complaints, but what about people who make false copyright claims? The platforms must be allowed to terminate access to the copyright filter system for those who repeatedly make false or inaccurate claims about which copyright works are theirs. A public record of which rightsholders demanded which takedowns would be vital for transparency and oversight, but could only work if implemented as a mandatory, EU-level requirement.

On links, the existing Article 11 language does not define when quotation amounts to a use that must be licensed, though proponents have argued that quoting more than a single word requires a license. The Trilogues could resolve that ambiguity by carving out a clear safe-harbor for users, and ensure that there’s a consistent set of Europe-wide exceptions and limitations to news media’s new pseudo-copyright that ensures they don’t overreach with their power.
The Trilogues must safeguard against dominant players (Google, Facebook, the news giants) creating licensing agreements that exclude everyone else. News sites should be permitted to opt out of requiring a license for inbound links (so that other services could confidently link to them without fear of being sued), but these opt-outs must be all-or-nothing, applying to all services, so that the law doesn't add to Google's market power by allowing it to negotiate an exclusive exemption from the link tax, while smaller competitors are saddled with license fees. The Trilogues must establish a clear definition of "noncommercial, personal linking," clarifying whether making links in a personal capacity from a for-profit blogging or social media platform requires a license, and establishing that (for example) a personal blog with ads or affiliate links to recoup hosting costs is "noncommercial." These patches are the minimum steps that the Trilogues must take to make the Directive clear enough to understand and obey. They won't make the Directive fit for purpose – merely coherent enough to understand. Implementing these patches would at least demonstrate that the negotiators understand the magnitude of the damage the Directive will cause to the Internet. From what we've gathered in whispers and hints, the leaders of the Trilogues recognise that these Articles are the most politically contentious of the Directive — but those negotiators think these glaring, foundational flaws can be finessed in a few weeks, with a few closed-door meetings. We're sceptical, but at least there's a chance that we'll see what is going on. We'll be watching for Reda's publication of the negotiating documents and analysing them as they appear. In the meantime, you can and should talk to your MEP about talking to your country's trilogue reps about softening the blow that the new Copyright Directive is set to deliver to our internet.

Victory! Dangerous Elements Removed From California’s Bot-Labeling Bill (Sa, 06 Okt 2018)
Governor Jerry Brown recently signed S.B. 1001, a new law requiring all “bots” used for purposes of influencing a commercial transaction or a vote in an election to be labeled. The bill, introduced by Senator Robert Hertzberg, originally included a provision that would have been abused as a censorship tool, and would have threatened online anonymity and resulted in the takedown of lawful human speech. EFF urged the California legislature to amend the bill and worked with Senator Hertzberg's office to ensure that the bill’s dangerous elements were removed. We’re happy to report that the bill Governor Brown signed last week was free of the problematic language. This is a crucial victory. S.B. 1001 is the first bill of its kind, and it will likely serve as a model for other states. Here’s where we think the bill went right. First, the original bill targeted all bots, regardless of what a bot was being used for or whether it was causing any harm to society. This would have swept up one-off bots used for parodies or art projects—a far cry from the armies of Russian bots that plagued social media prior to the 2016 election or spambots deployed at scale used for fraud or commercial gain. It’s important to remember that bots often represent the speech of real people, processed through a computer program. The human speech underlying bots is protected by the First Amendment, and such a broadly reaching bill raised serious First Amendment concerns. An across-the-board bot-labeling mandate would also predictably lead to demands for verification of whether individual accounts were controlled by an actual person, which would result in piercing anonymity. Luckily, S.B. 1001 was amended to target the harmful bots that prompted the legislation—bots used surreptitiously in an attempt to influence commercial transactions or how people vote in elections. Second, S.B. 1001’s definition of “bot”—“an automated online account where all or substantially all of the actions or posts of that account are not the result of a person”—ensures that use of simple technological tools like vacation responders and scheduled tweets won’t be unintentionally impacted. The definition was previously limited to online accounts automated or designed to mimic an account of a natural person, which would have applied to parody accounts that didn’t even involve automation, but not auto-generated posts from fake organizational accounts. This was fixed. Third, earlier versions of the bill required that platforms create a notice and takedown system for suspected bots that would have predictably caused innocent human users to have their accounts labeled as bots or deleted altogether. The provision, inspired by the notoriously problematic DMCA takedown system, required platforms to determine within 72 hours for any reported account whether to remove the account or label it as a bot. On its face, this may sound like a positive step in improving public discourse, but years of attempts at content moderation by large platforms show that things inevitably go wrong in a panoply of ways. As a preliminary matter, it is not always easy to determine whether an account is controlled by a bot, a human, or a “centaur” (i.e., a human-machine team). Platforms can try to guess based on the account’s IP addresses, mouse pointer movement, or keystroke timing, but these techniques are imperfect. They could, for example, sweep in individuals using VPNs or Tor for privacy. 
And accounts of those with special accessibility needs who use speech-to-text input could be mislabeled by a mouse or keyboard heuristic. This is not far-fetched: bots are getting increasingly good at sneaking their way through Turing tests. And particularly given the short turnaround time, platforms would have had little incentive to make sure to always get it right—to ensure that a human reviewed and verified every decision their systems made to take down or label an account—when simply taking an account offline would have fulfilled any and all legal obligations. What's more, any such system—just like the DMCA—would be abused to censor speech. Those seeking to censor legitimate speech have become experts at figuring out precisely how to use platforms' policies to silence or otherwise discredit their opponents on social media platforms. The targets of this sort of abuse have been the sorts of voices the supporters of S.B. 1001 would likely want to protect—including Muslim civil rights leaders, pro-democracy activists in Vietnam, and Black Lives Matter activists whose posts were censored due to efforts by white supremacists. It is naive to think that online trolls wouldn't figure out how to game S.B. 1001's system as well. The takedown regime would also have been hard to enforce in practice without unmasking anonymous human speakers. While merely labeling an account as a bot does not pierce anonymity, platforms might have required identity verification in order for a human to challenge their decisions about whether to take down an account or label it as a bot. Finally, as enacted, S.B. 1001 targets large platforms—those with 10 million or more unique monthly United States visitors. The problems this new law aims to solve are caused by bots deployed at scale on large platforms, and limiting the law to large platforms ensures that it will not unduly burden small businesses or community-run forums. As with any legislation—and particularly with legislation involving technology—to avoid unintended negative consequences, it is important that policy makers take the time to think about the specific harms they seek to address and tailor legislation accordingly. We thank the California legislature for hearing our concerns and doing that with S.B. 1001.

New North American Trade Deal Has Bad News for Canadian Copyright (Fr, 05 Okt 2018)
Earlier this week, the U.S. Trade Representative announced a replacement deal for the North American Free Trade Agreement, the nearly 25-year-old trade deal between the U.S., Mexico, and Canada. Amid the long list of tariff-free products and restriction-free cheese names [PDF] in the new trade deal, called simply the United States-Mexico-Canada Agreement, or USMCA, there's a whole section called "intellectual property," full of new mandates on what the signatories must do with regard to copyrights, patents, and trademarks. One big change is that all three countries in the agreement will have to have a minimum copyright term of the life of the creator plus 70 years. For works not tied to the life of a natural person, the copyright term must be at least 75 years. Those minimums won't affect the U.S., which already has terms of life plus 70 years and 95 years, respectively; or Mexico, which has even longer terms. But it will be a big, and unhelpful, change for Canada. The copyright "floor" that's being imposed on Canada equals the U.S. copyright term, one that's already too long. Multiple U.S. copyright term extensions have crippled the public domain. Most recently, the 1998 Copyright Term Extension Act kept works from as early as 1923 locked up under copyright, their commercial potential exhausted and their owners largely unfindable, for the past twenty years. In the United States, we are just now on the verge of growing our public domain again. Works published in 1923, which have been held in a copyright stasis, will become public domain on January 1, 2019, with later works to follow. The U.S. has a chance to finally return to a place with a healthy and growing chunk of public domain works. That allows for collaborative innovations like Wikipedia, and for preservation of our cultural heritage. Now Canada will find itself taking the same slower route to opening up formerly copyrighted material for general use. Monopolizing works for most of a century after the life of the author is bad policy. It harms creativity, making it expensive and risky for authors, filmmakers and other creative types to make new works from what came before. Copyright should be a balance between making sure creators get rewarded, and that the public can access the works. Never-ending terms don't square with the constitutional purpose of copyright, which is to allow exclusive rights for limited times, in order to "promote the progress" of learning and the arts with new creative works. When it comes to copyright, national legislatures around the world are too often swayed by big content owners. But if these bad policy decisions are baked into trade agreements, the situation will get even worse. At least when elected lawmakers blow it by carving out a bad deal for the public, voters can hold them accountable. Deciding copyright policy through opaque trade deals means that voters in democracies around the world will have much less power to correct their governments. It's exactly the wrong direction to go, especially when one considers that the U.S. is finally on the verge of re-building its public domain.
Right Policy, Wrong Method
Canadians will pay a price for this trade deal. Works that should have slipped into the public domain, lowering their price, now will stay locked up by copyright owners. Not everything in the USMCA is as obviously wrongheaded as the policies on copyright terms, though. For instance, it extends a provision of U.S. 
law that we strongly support—Section 230, which grants immunity to platforms for most of their users' speech. We've always supported Section 230 as a strong pro-free-speech law, which prevents Internet platforms from being taken offline or bankrupted because of the bad actions of a few users. We've advocated for protecting it in Congress, and we're suing to invalidate FOSTA, which undermines Section 230 in an unconstitutional way. Unfortunately, FOSTA has also undermined the Section 230 provisions in this new agreement, as the language of USMCA now accepts that intermediary liability laws in signatory nations can have exceptions like FOSTA. The United States has been pushing for stronger intermediary protection in free trade agreements for years, but this time its negotiators had to weaken their own position in order to accommodate the short-sighted passing of FOSTA back home. Good idea or not, however, trade deals just aren't the right place to make society-wide decisions about non-trade issues like copyright terms, and how the Internet should function. Citizens should be able to give input, and hold their lawmakers accountable when bad policies get passed. None of that happened here. Differences in laws aren't a bug; they're a feature. Countries should be able to design copyright laws and exceptions based on what works for them, and learn from others when they see systems that work—not have unsatisfactory solutions pushed upon them through trade deals.

EFF To Texas AG: Epson Tricked Its Customers With a Dangerous Fake Update (Fr, 05 Okt 2018)
If you've ever bought an inkjet printer, you know just how much the manufacturers charge for ink (more than vintage Champagne!) and you may also know that you can avoid those sky-high prices by buying third-party inks, or refilled cartridges, or kits to refill your own cartridges. The major printer manufacturers have never liked this very much, and they've invented a whole playbook to force you to arrange your affairs to suit their shareholders rather than your own needs, from copyright and patent lawsuits to technological countermeasures that try to imbue printers with the ability to reject ink unless it comes straight from the manufacturer. But in the age of the Internet, it's possible for savvy users to search for printers that will accept cheaper ink. A little bit of research before you buy can save you a lot of money later on. Printer companies know that openly warring with their customers is a bad look, which is why they've invented a new, even sleazier tactic for locking their customers into pricey ink: they trick their customers. Back in 2016, printing giant HP sent a deceitful, malicious update to millions of OfficeJet and OfficeJet Pro printers that disguised itself as a "security update." Users who trusted HP and applied the update discovered to their chagrin that the update didn't improve their printers' security: rather, the updated printers had acquired the ability to reject cheaper ink, forcing the printer owners to throw away their third-party and refilled ink cartridges and buy new ones. Now, Epson has followed suit: in late 2016 or early 2017, Epson started sending deceptive updates to many of its printers. Just like HP, Epson disguised these updates as routine software improvements, when really they were poison pills, designed to downgrade printers so they could only work with Epson's expensive ink systems. EFF found out about this thanks to an eagle-eyed supporter in Texas, and we've taken the step of alerting the Texas Attorney General's office about the many Texas statutes Epson's behavior may violate. If you're in another state and had a similar experience with your Epson printer, please get in touch. With these shenanigans, Epson and HP aren't just engaged in a garden-variety ripoff. Teaching Internet users to mistrust software updates is a dangerous business. In recent years, some of the Internet's most important services have been brought to their knees by malicious software running on compromised home devices. Compromises to your home devices don't just endanger the public Internet, either: once your printer is infected, it can be turned against you, used to steal data from the documents you print, to probe the devices on your local network, and to attack those devices and send the data stolen from them to a criminal's computer. It's bad enough that Epson and HP have pursued their profits through these deceptive and illegitimate means, but what's even worse is that in so doing, they have actively poisoned the cybersecurity well. That's why their misconduct is all of our concern. We'll keep you updated on our dealings with the Texas AG, and don't forget, we want to hear from Epson customers in other states so we can get AGs across America on the case. Related Cases:  Lexmark v. Static Control Case Archive

Privacy Badger Now Fights More Sneaky Google Tracking (Fr, 05 Okt 2018)
With its latest update, Privacy Badger now fights "link tracking" in a number of Google products. Link tracking allows a company to follow you whenever you click on a link to leave its website. Earlier this year, EFF rolled out a Privacy Badger update targeting Facebook's use of this practice. As it turns out, Google performs the same style of tracking, both in web search and, more concerning, in spaces for private conversation like Hangouts and comments on Google Docs. From now on, Privacy Badger will protect you from Google's use of link tracking in all of these domains.
Google Link Tracking in Search, Hangouts, and Docs
This update targets link tracking in three different products: Google web search, Hangouts, and the Docs suite (which includes Google Docs, Google Sheets, and Google Slides). In each place, Google uses a variation of the same technique to track the links you click on.
Google Web Search
After you perform a web search, Google presents you with a list of results. On quick inspection, the links in the search results seem normal: hovering over a link to EFF's website shows that the URL underneath does, in fact, point to https://www.eff.org. But once you click on the link, the page will fire off a request to google.com, letting the company know where you're coming from and where you're going. This way, Google tracks not only what you search for, but which links you actually click on. Google uses different techniques in different browsers to make this type of tracking possible. In Chrome, its approach is fairly straightforward. The company uses the new HTML "ping" attribute, which is designed to perform exactly this kind of tracking. When you click on a link with a "ping" tag, your browser makes two requests: one to the website you want to go to, and another (in the background) to Google, containing the link you clicked and extra, encoded information about the context of the page.
A search result in Chrome (top) and its source code, including the tracking "ping" attribute (bottom).
In Firefox, things are more complicated. Hyperlinks there look normal at first. Hovering over them doesn't change anything, and there's no obvious "ping" attribute. But as soon as you click on a link, you'll notice that the URL shown in the bottom left corner of the browser – the one you're about to navigate to – has changed into a Google link.
Watch the URL in the lower left hand corner: before clicking, it looks normal, but after pressing the mouse button down, it's swapped out for a Google link shim.
How did that happen? For each link, Google has set a piece of JavaScript code to execute, in the background, on "mousedown"—the instant your mouse button is pressed on the link (but before you release the click). This code replaces the normal URL with a link shim that redirects you through Google on the way to your destination. Since your browser doesn't navigate away from the search page until you release the mouse button, the code has more than enough time to slide a tracking link right under your nose.
In the background, JavaScript changes the link the instant that you click on it.
Google Hangouts and the Google Docs Suite
In Hangouts and the Docs suite, the tracking is less sophisticated, but just as effective. Try sending a link to one of your friends in a Hangouts chat. 
Although the message might look like an innocuous URL, you can hover over the hyperlink to reveal that it's actually a link shim in disguise. The same thing happens with links in comments on Google Docs, Google Sheets, and Google Slides. That means Google will track whether and when your friend, family member, or co-worker clicks on the link that you sent them. These tracking links are easy to spot, if you know where to look. Simply hover over one and you'll find that it's not quite what you expect.
Hovering over the link in a Hangouts window (right) reveals that it actually points to a Google link shim (bottom).
These link shims may be more nefarious than their web search counterparts. When you use Google search, you're engaging in a kind of dialog with the company. Many users understand, even if they don't like it, that Google provides search results in exchange for ad impressions and collects a good deal of information as part of the bargain. But when you use Hangouts to chat with a friend, it feels more private. Google provides the chat platform, but it doesn't serve ads there, and it shouldn't have any business reading your messages. Knowing that the company is tracking the links you share, both when you send them and when they're clicked, might make you think twice about how you communicate.
Privacy Badger to the Rescue!
The latest version of Privacy Badger blocks link tracking on www.google.com, in Hangouts windows on mail.google.com and hangouts.google.com, and in comments on docs.google.com. This update expands on our previous efforts to block link tracking on Twitter and Facebook. And of course, Privacy Badger's main job continues to be stopping Google, Facebook, and other third parties from tracking you around the web. We will continue investigating the ways that Facebook, Google, Twitter, and others track you, and we'll keep teaching Privacy Badger new ways to fight back. In the meantime, if you're a developer and would like to help, check us out on GitHub. And if you haven't yet, be sure to install Privacy Badger!
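To make the link-shim pattern more concrete, here is a minimal sketch of how one of these tracking redirects can be unwrapped. This is not how Privacy Badger itself works; the /url path and the q and url parameter names are illustrative assumptions based on the common format of Google redirect links, not details taken from the extension's code.

```python
from urllib.parse import urlparse, parse_qs

def unwrap_link_shim(url: str) -> str:
    """Return the destination hidden inside a Google-style link shim.

    Assumes the shim looks like https://www.google.com/url?q=<destination>;
    if the URL doesn't match that pattern, it is returned unchanged.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Simplified host check for the purposes of this sketch.
    if host.endswith("google.com") and parsed.path == "/url":
        params = parse_qs(parsed.query)
        for key in ("q", "url"):  # both parameter names show up in practice
            if key in params:
                return params[key][0]
    return url

# Hovering over a Hangouts link might reveal a shim like this one:
shim = "https://www.google.com/url?q=https://www.eff.org/privacybadger"
print(unwrap_link_shim(shim))  # -> https://www.eff.org/privacybadger
```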

There are Many Problems With Mobile Privacy but the Presidential Alert Isn’t One of Them (Fr, 05 Okt 2018)
On Wednesday, most cell phones in the US received a jarring alert at the same time. This was a test of the Wireless Emergency Alert (WEA) system, also commonly known as the Presidential Alert. This is an unblockable nationwide alert system which is operated by the Federal Emergency Management Agency (*not* the President, as the name might suggest) to warn people of a catastrophic event such as a nuclear strike or nationwide terrorist attack. The test appears to have been mostly successful, and having a nationwide emergency alert system certainly doesn't seem like a bad idea; but Wednesday's test has also generated concern. One of the most shared tweets came from antivirus founder John McAfee.
A tweet by McAfee claiming that the Presidential Alert is tracking Americans through a non-existent E911 chip.
While there are legitimate concerns about the misuse of the WEA system and myriad privacy concerns with cellular phones and infrastructure (including the Enhanced 911, or E911, system), the tweet by McAfee gets it wrong.
How the WEA System Works
The Wireless Emergency Alert system is the same system used to send AMBER Alerts, Severe Weather Notifications, and Presidential Alerts to mobile devices. It works by sending an emergency message to every phone provider in the US, which then pushes the messages to every cell tower in the affected area. (In the case of a Presidential Alert, the entire country.) The cell towers then broadcast the message to every connected phone. This is a one-way broadcast that will go to every cell phone in the designated area, though in practice not every cell phone will receive the message. McAfee's tweet gets two key things wrong about this system: there is no such thing as an E911 chip, and the system does not give "them" the information he claims. In fact, the Presidential Alert does not have any way to send data about your phone back to the mobile carrier, though your phone is sending data to mobile carriers all the time for other reasons.
Privacy Issues with Enhanced 911
This isn't to say that there aren't serious privacy issues with the E911 system. The E911 system was developed by the FCC in the early 2000s after concerns that the increased use of cellular telephones would make it harder for emergency responders to locate a person in crisis. With a landline, first responders could simply go to the billing location for the phone, but a mobile caller could be miles from their home, even in another state. The E911 standard requires that a mobile device be able to send its location, with a high degree of accuracy, to emergency responders in response to a 911 call. While this is a good idea in the event of an actual crisis, law enforcement agencies have taken advantage of this technology to locate and track people in real time. EFF has argued that this was not the intended use of this system and that such use requires a warrant. What's more, the mobile phone system itself has a huge number of privacy issues: from mobile malware which can control your camera and read encrypted data, to Cell-Site Simulators which can pinpoint a phone's exact location, to the "Upstream" surveillance program exposed by Edward Snowden, to privacy issues in the SS7 system that connects mobile phone networks to each other. There are myriad privacy issues with mobile devices that we should be deeply concerned about, but the Wireless Emergency Alert system is not one of them. 
A tweet from Snowden about the "Upstream" surveillance program.
There are legitimate concerns about the misuse of the wireless emergency alert system as well. There could be a false alarm issued through the system, sparking unnecessary panic, as happened in Hawaii earlier this year. For many, the idea that a president could use the WEA to push an unblockable message to their phones is deeply disturbing, and it has sparked concerns that the system could be used to spread unblockable propaganda. Unlike other emergency alerts, the Presidential Alert can't be turned off in phone software, by law. Fortunately for us, activating the WEA system is more complicated than, say, sending a tweet. To send out a Presidential Alert, the president would have to, at the very least, convince someone in charge of the WEA system at FEMA to send such a message, and FEMA staffers may be reluctant to send out a non-emergency message, which could decrease the effectiveness of future emergency alerts. As with any new system that is theoretically a good idea, we must remain vigilant that it is not misused. There are many legitimate reasons to be concerned about cellular privacy. It's important that we keep an eye on the real threats and not get distracted by wild conspiracy theories. Related Cases: Carpenter v. United States

New York City Home-Sharing Ordinance Could Create Privacy Nightmare (Mi, 03 Okt 2018)
Many cities across the country are struggling with issues surrounding short-term vacation rentals and how they affect the availability and price of housing for local residents. However, New York City's latest ordinance aimed at regulating home-sharing platforms is an extraordinary governmental overreach with invasive privacy ramifications, and EFF is fighting back. The Short-Term Residential Ordinance is aimed at blocking the operation of illegal hotels. It compels services like Airbnb and HomeAway to disclose the names, addresses, phone numbers, email addresses, and other personal and financial information of all short-term hosts, along with the number of days their home was rented. That's not just once, that's monthly, in perpetuity. Taken all together, this sensitive information can reveal patterns of home life and vacations, among other private details. Additionally, there are no safeguards in the law for protecting all of this data, and there are no limitations on how it can be used. And the ordinance is for all hosts who use home-sharing platforms, not just the ones who break the law. Airbnb and HomeAway recently filed lawsuits against the city of New York, asking that officials be enjoined from enforcing the ordinance. This week, EFF filed an amicus brief in support of that request for a permanent injunction, arguing that the data collection violates the protections set out by Congress in the Stored Communications Act and is an unconstitutional warrantless search on the government's behalf. We all have a Fourth Amendment right to protect our private lives—particularly our home lives—and requiring businesses to release this data to the city violates that right. With this ordinance, New York tried to circumvent the constitutional issue by mandating that the home-sharing platforms obtain hosts' consent to release their data to the city. But you can't use a Terms of Service to get people to sign away their constitutional rights. Companies draft terms of service to govern how their platforms may be used; these are rules about the relationship between you and your ISP—not you and your government. It's essential that sensitive information is not disclosed to the government without any allegation of wrongdoing. We hope the court agrees.

Briefing Thursday: EFF’s Eva Galperin and Lookout Discuss, Demo Cybersecurity Attacks On Democracy (Mi, 03 Okt 2018)
Elections Security and Political Campaigns Are Vulnerable
Washington, D.C.—On Thursday, Oct. 4, at 2 pm, Electronic Frontier Foundation (EFF) Director of Cybersecurity Eva Galperin will speak at a special session for congressional staffers and representatives about how malware and spyware targeted at mobile devices is being used against dissidents, activists, journalists, and others to disrupt democracy. Galperin's work at EFF includes uncovering a malware espionage campaign that targeted people in the U.S. and across the globe, and publishing research on malware in Syria, Vietnam, Kazakhstan, and Lebanon. Thursday's briefing will examine how mobile devices—treasure troves of information about our text messages, locations, work documents, and phone calls—are becoming primary espionage targets. Political campaigns, dissidents, journalists, and activists around the globe are under attack by foreign-government spearphishing campaigns. Galperin and security firm Lookout will present scenarios and demos on how cyberattacks occur and can be mitigated.
What: A Non-Partisan Threat: Cybersecurity and Its Impact on Democracy special session
When: Thursday, Oct. 4, 2 pm
Where: Rayburn House Office Building, Room 2226, 45 Independence Ave SW, Washington, D.C.
For more on state-sponsored malware: https://www.eff.org/issues/state-sponsored-malware

Lifting the Cloak of Secrecy From NYPD Surveillance Technology (Mi, 03 Okt 2018)
For too long, the New York Police Department has secretly deployed cutting-edge spy tech, without notice to the public. Many of these snooping devices invade our privacy, deter our free speech, and disparately burden minority and immigrant communities. Fortunately, a proposed ordinance ("the POST Act") would lift the cloak of secrecy, and help the people of New York City better control police surveillance technology.
Why New York Needs the POST Act
For decades, the NYPD has committed to righting a legacy of unwarranted surveillance. Yet court proceedings continue to find the Department's surveillance practices in violation of political, religious, and other fundamental freedoms. Against this troubling historical backdrop, images from more than eight thousand public and privately owned surveillance cameras feed into the Department's Lower Manhattan Security Coordination Center (LMSCC) each day. In the words of Police Commissioner James O'Neill, "that's the world we're living in now. Any street, any incident in New York City, you get to—most of the time—that gets captured on video surveillance". In addition to these panopticon levels of video footage, NYPD watch officers and analysts—working alongside "Stakeholder" representatives including Goldman Sachs, JP Morgan Chase, and the Federal Reserve—monitor a treasure trove of data collected and analyzed through ShotSpotter microphones, face recognition technology, license plate readers, and more. How the NYPD disseminates the information collected by this surveillance technology—as well as spy tech used by detectives and officers throughout the city—is largely a mystery to New York residents and lawmakers. Lawmakers must ensure that the NYPD delivers public safety without violating New Yorkers' rights to privacy and association. However, decades of federal grants from the U.S. Department of Homeland Security—which oversees the principal agencies involved with immigration enforcement—have resulted in the NYPD's development of an arsenal of surveillance technology with far too little oversight from elected officials and their constituents. NYPD surveillance technologies expose how New Yorkers live their daily lives, including visiting health care providers, debt counselors, support groups, lovers, or potential employers. All of this sensitive personal information might be shared, stolen, or misused. Still more questions are raised by the NYPD's x-ray vans and Cell Site Simulators—devices designed for military use against enemy combatants and brought home for municipal policing. The Department now deploys these technologies and more in relative secrecy. EFF has long raised these concerns, along with New York-based civil society organizations including NYCLU, CAIR-NY, and the Brennan Center for Justice. Our concerns were affirmed, for example, when it was revealed that the NYPD had used Cell Site Simulators more than one thousand times between 2008 and 2015 without a written policy, and without obtaining judicial warrants.
How the POST Act Will Protect New Yorkers
One lawmaker who knows this reality well is City Council member Vanessa Gibson. From 2014 through 2017, Gibson chaired the Council's Public Safety Committee. As Chair, Gibson was responsible for oversight of the operations and budget for multiple departments responsible for residents' safety, including the NYPD, the Civilian Complaint Review Board, and the district attorney's office. 
Joined by several of her colleagues, Council Member Gibson has introduced legislation aimed at assuring that lawmakers and the public are informed about NYPD acquisition and use of technology with the potential to threaten both privacy and security. Commonly referred to as the Public Oversight of Surveillance Technology (POST) Act, the proposed legislation would require the NYPD to report and evaluate all surveillance technologies it intends to acquire or use. The report and associated use policy would include a description of the equipment, its capabilities, guidelines for use, and security measures designed to protect any data it collects. In 2015, the President's Task Force on 21st Century Policing concluded: "Law enforcement agencies should establish a culture of transparency and accountability in order to build public trust and legitimacy. This will help ensure decision making is understood and in accord with stated policy." In keeping with the task force's recommendation, the POST Act would require the NYPD to publish each policy for a new surveillance technology for a 45-day public comment period. Only after this notice and comment period could the NYPD Commissioner—having taken the comments into account—provide a final version of the surveillance impact and use policy to the City Council, the mayor, and the public. Berkeley, Nashville, Oakland, and Seattle are among a growing roster of cities across the United States that have already enacted similar ordinances. However, unlike the legislation these cities have adopted, the POST Act stops short of empowering City Council members to decide whether to approve or deny spy tech acquisition. Nor does the POST Act provide the New York City Council with any power to order the NYPD to cease use of equipment that it used in violation of the published policy. Instead, the Inspector General for the NYPD would be responsible for auditing the surveillance impact and use policy to ensure compliance.
Help Pass the POST Act
With an annual budget of over $5 billion, and more personnel than half of the world's militaries, the NYPD has a duty to protect the people of New York City. That role as guardian is one of great responsibility, requiring the trust and cooperation of the public it is sworn to serve. The POST Act—following in the steps of successful measures across the nation—is a common-sense step, promising transparency and opening the door to accountability. If you live in New York, please urge your City Council member to support this important local legislation. Electronic Frontier Alliance members across the US are working to enact similar legislation within their communities. To find an EFA-allied group in your area, please visit eff.org/efa-allies.

McSweeney’s and EFF Team Up for “The End of Trust” (Di, 02 Okt 2018)
For 20 years, McSweeney's has been the first name (or last name, actually) in emerging short fiction. But this November, McSweeney's will debut the first all-non-fiction issue of Timothy McSweeney's Quarterly Concern: "The End of Trust" (Issue 54) is a collection of essays and interviews focusing on issues related to technology and privacy compiled with the help of the Electronic Frontier Foundation. Here's how the editors describe the issue: In this era of constant low-level distrust—of our tech companies and our peers, of our justice system and our democracy—we can't be sure who's watching us, what they know, and how they'll use it. Our personal data is at risk from doxxing, government tracking, Equifax hacks, and corporate data mining. We wade through unprecedented levels of disinformation and deception. Unsure of how our culture of surveillance is affecting the moral development of a generation coming of age online, we continue to opt in. Across more than 350 pages of essays, debates, interviews, graphs, and manifestos from over thirty writers and with special advisor Electronic Frontier Foundation, this monumental collection asks whether we've reached the end of trust, and whether we even care. When McSweeney's editors approached EFF earlier this year about the project, we jumped at the opportunity. The collection features writing by EFF's team, including Executive Director Cindy Cohn, Education and Design Lead Soraya Okuda, Special Advisor Cory Doctorow, board member Bruce Schneier and myself, exploring issues related to surveillance, freedom of information, and encryption. We also recruited some of our favorite thinkers on digital rights to contribute to the collection: anthropologist Gabriella Coleman contemplates anonymity; Edward Snowden explains blockchain; journalist Julia Angwin and Pioneer Award-winning artist Trevor Paglen discuss the intersections of their work; Pioneer Award winner Malkia Cyril discusses the historical surveillance of black bodies; and Ken Montenegro and Hamid Khan of Stop LAPD Spying debate author and intelligence contractor Myke Cole on the question of whether there's a way law enforcement can use surveillance responsibly. We've read and reviewed every piece, and without spoiling anything, we can say that it's smart, thought-provoking, entertaining, and altogether freakin' awesome. What's even better is that McSweeney's has agreed that the content should be available to be freely shared under a Creative Commons license. You'll be able to download that from us when the quarterly launches on Nov. 20, but we highly recommend getting your hands on a print copy to keep as an analog artifact of the strange and changing times we live in. You can preorder The End of Trust (McSweeney's Issue 54) right now through McSweeney's site. All images in this post are courtesy of McSweeney's and shareable under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.

EFF's DEF CON 26 T-Shirt Puzzle (Di, 02 Okt 2018)
In August, EFF unveiled our ninth limited edition DEF CON exclusive member t-shirt. Like previous years, the design of this year's shirt was inspired by the conference's theme, 1983. That number isn't just the year before 1984. It was also the year a brilliant artist named Keith Haring had his work featured in the Whitney Biennial and in the video for Madonna's Like a Virgin. Haring is one of the most impactful visual artists of the 1980s — he defined the look and the politics of that period for many people. Our shirt design is an homage to Haring and his playful, yet sometimes sinister, view of human nature. As thanks for supporting our work, we included a secret puzzle hidden within the shirt's design for our members to solve during the conference. Read on for a walkthrough of the puzzle, or try your hand at solving it! (Warning: spoilers ahead!) If one follows the white arrows throughout the design, an aspiring puzzle-solver can assemble this string: a6qtybsy6kuhudkc. This isn't ciphertext as one might expect; it's the address of a Tor onion service, as hinted at by the slightly out-of-place onion on the shirt. If you're following along and don't have Tor set up, our Surveillance Self-Defense project can help you with guides for macOS, Windows, and Linux. Visiting a6qtybsy6kuhudkc.onion reveals the next part of the puzzle. There's quite a bit of information on the page, though the only immediately apparent component is the iconic image from WarGames, the classic film released in 1983. There are a few hidden hints on the page to assist members with extracting secret information from the image. We created two paths to arrive at the most critical hint. First is the aria-label on the image: "if visual puzzles aren't your thing, check out imsai8080.wav". Visiting a6qtybsy6kuhudkc.onion/imsai8080.wav allows you to download an audio file which plays back several DTMF tones. The file can be parsed to text using a number of online tools or by skilled listeners, giving the string 116104101321039710910132105115321141011003210810197115. These are ASCII codepoints for text, but directly converting this string results in the corrupted "theA'Ǝ�isA��*�s". Some familiarity with ASCII helps with cleaning up the string by adding additional zeroes: "116 104 101 032 103 097 109 101 032 105 115 032 114 101 100 032 108 101 097 115", which converts to "the game is red leas". This is still a bit garbled, but it's not a huge jump to realize that the last word is "least". The second path to a slightly different version of this hint starts with a comment in the HTML of the site. The string "lbh'er oevtug. ybbx pybfryl" is a ROT13-encoded message which decodes to "you're bright, look closely". There are a number of ways to proceed from this hint; a simple one involves using image editing software. In GIMP, for example, setting the paint bucket tool to fill at a threshold of 0 and using it on the upper part of the image reveals the left image, and maximizing the brightness and contrast shows the right. Left with this hint, the image on the page, and perhaps a question about the "random" black pixels at the top of the modified image, an avid puzzler turns to steganography, often referred to as simply "stego". Steganography is the practice of hiding a piece of data inside another piece of data; say, a secret message within an innocent-looking file. It is not to be confused with encryption, which provides stronger guarantees of secrecy: encryption ensures that certain messages can only be read by certain parties. 
Stego only obfuscates information and can prevent onlookers from noticing secret information being passed around. The two can even be used in tandem: stego can be used to hide the existence of an encrypted message! "Red least significant bit" refers to the steganographic method used to hide the final flag in the image. Broadly, the method uses the last bit of the red value in an image to store some data. This data can be anything so long as the image it is stored in has enough pixels to contain the binary conversion; in the case of this puzzle, the least significant bits of the red values contain a string. There are a number of ways to approach extracting this string, including manually converting the pixels in the "revealed" image to binary, but a less-straining method is writing a script to analyze the image itself. The Ruby script we used to encode and decode this message can be found here, if you want to try running it on the image yourself! The binary string stored in the image is: 010100100100010101010000010011110101001001010100001000000100000101000010010011000100010101000001010100100100001101001000010001010101001000111000001100110010000001010100010011110010000001000010010011110100111101010100010010 Converting this string to ASCII reveals the solution: REPORT ABLEARCHER83 TO BOOTH ... a reference to an infamous military-exercise-turned-war-scare in 1983, which included a simulation of a DEF CON 1 nuclear alert. The realism of the simulation led to a period of conflict escalation which almost culminated in nuclear war! Shout out to the runner-up team, pictured here with one of the puzzle creators, Ken Ricketts (left, @kenricketts) and Justin Collins (right, @presidentbeef). They solved just a couple of hours after the winning team! Congratulations to the winning team: @aaronsteimle, @_pseudoku, @zevlag, and @0xCryptoK. Their members also won our 2013, 2015, and 2017 DEF CON shirt challenges, in addition to solving the DEF CON badge challenge for four years in a row. We'd like to extend a special thank-you to everyone who tried their hand at the EFF puzzle, and to the hundreds of individuals who donated to EFF in Las Vegas this year. Seeing the fun that players have solving the puzzle is one of the highlights of our DEF CON experience. We're looking forward to next year, and are glad you decided that the winning move was to play. Enjoyed the puzzle? Support EFF!
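For readers following along without the original Ruby tooling, here is a rough Python sketch of the decoding steps described above. The helper names are ours, the Pillow-based reader assumes the bits were written most-significant-bit first in left-to-right, top-to-bottom pixel order, and the file name in the final comment is hypothetical; the bit excerpt at the end is just the first six bytes of the string shown above.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def decode_codepoints(padded: str) -> str:
    """Turn space-separated decimal ASCII codepoints into text."""
    return "".join(chr(int(code)) for code in padded.split())

def bits_to_text(bits: str) -> str:
    """Pack a string of '0'/'1' characters into 8-bit ASCII characters,
    dropping any incomplete trailing byte."""
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

def extract_red_lsb(path: str, num_bits: int) -> str:
    """Collect the least significant bit of each pixel's red channel,
    scanning left to right, top to bottom (an assumed bit order)."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    bits = []
    for y in range(height):
        for x in range(width):
            r, _, _ = img.getpixel((x, y))
            bits.append(str(r & 1))
            if len(bits) == num_bits:
                return "".join(bits)
    return "".join(bits)

# The cleaned-up DTMF hint from earlier in the walkthrough:
hint = "116 104 101 032 103 097 109 101 032 105 115 032 114 101 100 032 108 101 097 115"
print(decode_codepoints(hint))  # -> "the game is red leas"

# The first six bytes of the bit string recovered from the image:
print(bits_to_text("010100100100010101010000010011110101001001010100"))  # -> "REPORT"

# Something like bits_to_text(extract_red_lsb("shirt_puzzle.png", 224))
# would recover the full flag from the image itself.
```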

California’s Net Neutrality Law: What’s Happened, What’s Next (Di, 02 Okt 2018)
Over the weekend, Gov. Jerry Brown signed S.B. 822, which guarantees strong net neutrality protections for citizens of California. Within hours, however, the federal government announced its intention to sue California for stepping in where the feds have abdicated responsibility. What happens next is going to be full of procedural issues and technicalities. First, some background. Millions of Americans opposed the Federal Communications Commission's (FCC) decision to abandon its authority to enforce net neutrality, competition, and privacy under its Restoring Internet Freedom Order. An expected response to a widely condemned federal act is for the states to take action themselves. And they did. To date, 30 state legislatures have introduced bills that would require their Internet Service Providers (ISPs) to maintain net neutrality as a matter of law. Four of those states (Washington, Oregon, Vermont, and now California) have passed laws with strong bipartisan majorities, and more are promising to follow suit in 2019. Six state governors (Montana, New York, New Jersey, Hawaii, Vermont, and Rhode Island), led by Gov. Steve Bullock of Montana, have issued Executive Orders declaring that the state's government will not do business with ISPs that violate net neutrality. California's law represents a tremendous victory, by real people, over a few giant corporations. ISPs such as AT&T, Comcast, and Verizon poured millions into the fight in Sacramento, and, for a moment, it even looked like the bill had been eviscerated. But Californians who believe in a free and open Internet spoke out and strong net neutrality protections were restored. And they kept speaking out, getting the bill passed in the State Senate, the State Assembly, and, finally, getting it signed by the governor. Now the FCC and Department of Justice (DoJ) have stepped in to quell the rebellion. Just hours after California's Governor Brown signed into law S.B. 822, the DoJ and FCC filed their lawsuit to block its implementation.
Can States Protect Net Neutrality? The D.C. Circuit May Decide
If you read the fine print of the DoJ and FCC arguments, the thrust against California is not whether states can pass net neutrality laws but rather whether a pending lawsuit in the D.C. Circuit (to which the California Attorney General is a party), which challenges the FCC's 2017 repeal of net neutrality protections, should be given priority in deciding state power. Their motion for a preliminary injunction—a legal hold on S.B. 822—explicitly states that the "legal validity cannot be adjudicated in this Court." In other words, this lawsuit cannot decide the legality of California's law, at least until the case in the D.C. Circuit is decided. If that argument prevails, it simply means that California's law is temporarily paused until the D.C. Circuit issues its opinion.
The Department of Justice Drinks the FCC's Kool-aid
The more substantive argument is this: Based on the same factually wrong history of ISP regulation that the FCC along with cable and telephone lobbyists like to cite, the DoJ insists that the FCC has authority to abandon its oversight role but simultaneously prevent states from filling that vacuum. EFF and other legal experts disagree. For one thing, the power to preempt is related to the power to regulate in the absence of an express statement by Congress. The federal Communications Act says very little about preemption and a whole lot about states' rights. 
By abandoning its authority over Internet service providers, the FCC also abandoned any power to preempt state laws. We look forward to explaining as much to a judge, now that the fight in California has moved from the court of public opinion to an actual court. We are actively participating in the D.C. Circuit case as well, where we assert that the FCC's understanding of broadband companies is so factually flawed, and its consideration of the implications for online speech and innovation so absent, that the agency must be reversed. Unless the House of Representatives joins the Senate to overturn the FCC this year, the outcome of these cases will likely determine the fate of net neutrality protections, competition policy, and privacy protections for the ISP industry.

The Devil Is in The Details Of Project Verify’s Goal To Eliminate Passwords (Mo, 01 Okt 2018)
A coalition of the four largest U.S. wireless providers calling itself the Mobile Authentication Taskforce recently announced an initiative named Project Verify. This project would let users log in to apps and websites with their phone instead of a password, or serve as an alternative to multi-factor authentication methods such as SMS or hardware tokens. Any work to find a more secure and user-friendly solution than passwords is worthwhile. However, the devil is always in the details—and this project is the work of many devils we already know well. The companies behind this initiative are the same ones responsible for the infrastructure behind security failures like SIM-swapping attacks, neutrality failures like unadvertised throttling, and privacy failures like supercookies and NSA surveillance. Research on moving user-friendly security and authentication forward must be open and vendor- and platform-neutral, not tied to any one product, platform, or industry group. It must allow users to take control of our identities, not leave them in the hands of the very same ISP companies that have repeatedly subverted our trust.
The Good
As the Taskforce lays out in their teaser video, the challenge of using passwords securely is a direct factor in a huge number of account breaches. EFF has long recommended password managers as a way to create and manage strong passwords. Some providers have begun offering Single Sign-On, or SSO, which serves as an alternative to keeping track of multiple passwords. When you see options to "Sign in with Facebook" or "Sign in with Google" on other websites, that's an example of SSO. A recent Facebook breach points to the pitfalls of an SSO system that is not well implemented, or not published and developed openly for community auditing, but on the whole this method can be a big win for usable security. Project Verify appears to fall under this category. With Single Sign-On, you authenticate once to the SSO provider: a corporate server, a site using a standard like OpenID, or, in the case of Project Verify, your mobile phone provider. When you then log in to a separate site or app, it can request authentication from the SSO provider instead of asking you to register with a new username and password. You may then have to approve that login or registration with the SSO provider, sometimes using multi-factor authentication. Project Verify also offers its own multi-factor authentication functionality, offering a replacement for other methods like SMS or email verification which, the teaser video notes correctly, have their own weaknesses.
Privacy Concerns: Phone Numbers and IP Addresses
From EFF's own Privacy Badger to Tor for Android to Safari's Tracking Protection feature on iOS, users have more options than ever before to enhance their privacy when they go online with their mobile devices. They shouldn't have to compromise that privacy in order to secure their accounts. Stronger alternatives to SMS and email are available now: two-factor authentication through the U2F standard or a Time-based One-Time Password (TOTP) each offer superior security. Neither one is perfect on its own—both suffer from accessibility concerns, and TOTP can be abused by advanced phishing attacks. However, neither of these standards compromises the user's privacy. One of the few things we know about the details of Project Verify is that users will be identified using a combination of five data points: phone number, account tenure, phone account type, SIM card details, and IP address. 
Two of these, phone number and IP address, raise particular concerns. Tying accounts to phone numbers has generated a growing list of problems for users in recent years, including but not at all limited to the weakness of SMS verification mentioned above. An increasingly common scam involves criminals contacting providers with the name and phone number of an account they hope to hack into and claiming they either have a new phone or have lost their SIM card. When the provider sends or gives them the new SIM and deactivates the real user's original card, the hacker is then able to use SMS-based multi-factor authentication and/or account reset tools to take over the user's accounts. The use of phone numbers for verification can cause other sorts of problems when a phone is lost, a phone number is changed, or an employee changes jobs but a service used for work requires SMS verification. In the case of a data breach, a personal phone number included in the data can expose a user to scams, harassment, or further hacking attempts. In the U.S., social security numbers have already shown us what can happen when an assigned, nearly impossible-to-change number morphs into an essential identifier and a target for identity thieves. Our mobile phone numbers are going down the same road as our social security numbers, with the added problem that they were never private in the first place. Let's break that link, not strengthen it. Further, the use of IP addresses could reveal quite a bit to wireless providers or even site operators, even if you are using privacy-protective measures like Tor or a mobile VPN. Tor users in particular should steer well clear of Project Verify's service for this reason. For Project Verify to work, your logins to third-party apps and websites must talk to your wireless provider, whether or not you're logging in over a VPN, Tor, a local wifi network, or even using a separate device altogether. With ISPs such as those in the Mobile Authentication Taskforce given free rein to track and sell users' usage data, it is extremely dangerous to give them even more visibility into users' logins on or off their network. The Project Verify site states, "The platform will only share consumers' data with their consent." However, this still leaves a lot of wiggle room for carriers. Will consent be obtained through explicit and granular opt-in Project Verify functions, or will this be one of the many forms of consent buried in the user's subscriber agreement with no clear avenue for opt-out? Users should not have to worry about their data being collected by a third party simply to enable a more secure means of managing logins. Ironically, we can't verify much about the project. What we know is that it's asking us to allow the same mobile carriers responsible for enormous, and intentional, privacy failures to become the gatekeepers of identity authentication in an attempt to combat a real problem with a solution that's both concerning and conveniently beneficial to them—which, if history is any indication, is a verifiably bad idea. 
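For contrast with carrier-mediated identification, here is a minimal sketch of the TOTP scheme mentioned above (RFC 6238): the six-digit code is derived entirely on the user's device from a shared secret and the current time, so no phone number, SIM details, or IP address enter the calculation. The example secret below is made up.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    message = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A made-up base32 secret, like the one a site shows when you enroll:
print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code that changes every 30 seconds
```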

EFF Urges Ninth Circuit to Let Criminal Defense Teams Vet Forensic Software (Mo, 01 Okt 2018)
You shouldn’t be convicted by secret evidence in a functional democracy. So when the government uses forensic software to investigate and build its case in a criminal prosecution, it should not hide that technological evidence from the defense. In an amicus brief filed today EFF urged the Ninth Circuit Court of Appeals to allow criminal defendants to review and evaluate the source code and developmental materials of forensic software programs used by the prosecution, to help prevent the wrong people ending up behind bars, or worse, on death row. The Constitution requires that defendants be given the opportunity to review, analyze, and confront the prosecution’s evidence. But in the information age, prosecutors are increasingly relying on evidence produced by proprietary forensic software programs–marketed and distributed by private companies to law enforcement–to establish key elements of their case, while still seeking to keep the source code that determines the outputs of that forensic technology a secret. This gamesmanship undermines the public’s trust in the integrity and fairness of the criminal justice system. We are told simply to take the government’s word for it that the software does what it is supposed to do. Ostensibly, the secrecy around proprietary forensic software is meant to prevent competitors from learning the trade secrets of the original program vendor, but it also prevents defendants and the public from discovering flaws that could send innocent people to prison or execution. Time and again, when forensic software is subjected to independent review, errors and inconsistencies are discovered that call into question its viability and suitability for use in the criminal justice system. Forensic software has no special immunity from the bugs and mistakes that plague software in other fields, something that has been amply demonstrated with errors discovered in, for instance, the software used for DNA analysis and breathalyzer tests. A commercial interest in maintaining a trade secret shouldn’t override a defendant’s rights of due process and to confront the evidence against them, nor should it override the public’s interest in knowing that justice is being done. Companies that go into the business of providing forensic tools to law enforcement cannot reasonably expect that they will be able to maintain secrecy over how those tools function. Besides, if a case presents itself where there is a legitimate reason to avoid public disclosure, the court can always issue a ‘protective order’ limiting disclosure to the defense team. This is routine in commercial litigation, even between direct competitors who actually have an incentive to commercialize the trade secrets they might learn. In this case, a defendant was linked to a particular IP address and the government contends that it was able to identify and isolate that IP address as the sole source for a download of contraband material from within a peer-to-peer network using a secret forensic software program. But the defense must be allowed to review the forensic software’s source code, developmental materials, and the underlying assumptions embedded within them, in order to understand and meaningfully confront the prosecution’s contention. What if the forensic software misidentified the computer that it downloaded the contraband from? Or what if the software suggests that the entire file was downloaded from a single source, when in fact it was downloaded from multiple sources, each of which was incomplete? 
These are questions that the defense cannot answer until they have had a chance to review the software relied upon by the prosecution. That is why EFF urges the Ninth Circuit to reconsider this case en banc, and to determine that the prosecution can’t hide the forensic software that it uses from the defense and the public. For the full amicus brief see U.S. v. Joseph Nguyen EFF amicus. Related Cases:  California v. Johnson

Victory! New California Law Requires Police Policy Transparency (Mo, 01 Okt 2018)
The people of California will now have more insight into how their local law enforcement agencies operate. California Gov. Jerry Brown signed S.B. 978, which requires local police departments to publish their “training, policies, practices, and operating procedures” on their websites starting in January 2020. That opens up access to this information to anyone, not only journalists or activists with the time, money, or knowledge to request them. S.B. 978, introduced by Sen. Steven Bradford, has long had EFF’s support because it helps inform everyone about how police officers are trained. Law enforcement agencies are adopting new policies about new policing technologies all the time, and the community benefits from understanding them. Newer surveillance technologies such as body-worn cameras, biometric scanners, drones, and automatic license plate readers have drawn significant public interest and concern. Posting policies and procedures online ensures that law enforcement agencies are more transparent about what they’re doing. Doing so also helps educate the public about what to expect and how to behave during police encounters. EFF asked you for your support in getting this simple transparency measure passed, in addition to sending our own letter to the governor. Governor Brown in 2017 vetoed a similar bill, which we also supported along with many civil liberties advocates and law enforcement associations. S.B. 978 was narrowed at the governor’s request, and your support helped it pass.  We applaud Governor Brown for signing this bill and improving the transparency and accountability between law enforcement agencies and all Californians. Good relationships are built on trust and communication. Making it possible for everyone to see and understand the policies underpinning police procedure leads to greater understanding and better relations.

Victory! Gov. Brown Expands Access to the Internet for Youth in Juvenile Detention and Foster Care (Mo, 01 Okt 2018)
California Gov. Jerry Brown has signed a bill into law that opens up the Internet for youth in state care. With A.B. 2448, California now requires that all youth in juvenile hall be granted access to the Internet for educational purposes. Meanwhile, youth in foster care are also ensured access to the Internet for social and recreational activities. The success of this bill should be credited to its author, Assemblymember Mike Gipson, and its organizational sponsor, the Youth Law Center. Together they first introduced the measure in 2017 as A.B. 811, which was passed by strong majorities in the legislature, only to die by the governor's veto pen. This time around, Gov. Brown signed the new bill, which was narrowed at his request. EFF joined the effort early: we ran email actions and testified in favor of A.B. 2448 before a Senate committee. We also helped rally tech companies such as Facebook to support the bill. Ultimately, this is an enormous victory on behalf of at-risk youth who don't have the ability to vote or travel to Sacramento to argue their case. We hope to see this trend grow across the country, with other states and juvenile detention systems taking note that if California can bring the Internet to disadvantaged youth, so can they. Again, we thank Assemblymember Gipson and the Youth Law Center for championing this bill, and we look forward to collaborating with them on future efforts.

New Witness and New Experts Bolster Our Jewel Case As We Fight Government’s Latest Attempt to Derail Lawsuit Challenging Unconstitutional NSA Spying (Mo, 01 Okt 2018)
EFF has presented its full evidentiary case that the five ordinary Americans who are plaintiffs in Jewel v. NSA were among the hundreds of millions of nonsuspect Americans whose communications and communications records have been touched by the government’s mass surveillance regimes. This presentation includes a new whistleblower and three additional expert witnesses—Professor Matthew Blaze, Dr. Brian Reid, and former Chief Technologist at the Federal Trade Commission Ashkan Soltani—along with AT&T documents and witnesses we first revealed in 2006. We also marshalled key portions of the now massive amount of public admissions by the U.S. government and the most recent example of public judicial review in the Big Brother Watch case in Europe. The goal is to convince a federal judge that the NSA’s current claims of secrecy should not prevent American courts from publicly evaluating the legality of how these surveillance schemes impact millions of innocent Americans. The government now admits that its mass surveillance schemes tap into the very backbone of the Internet. The government admits that for many years, it collected our telephone and Internet records in bulk. It admits that these schemes swept in billions of communications from hundreds of millions of ordinary Americans. The government’s argument, in their latest attempt to dismiss the case, is that public courts should be denied any ability to protect Americans’ rights under established law because—despite admitting that it has subjected tens of millions of us to its surveillance dragnet and the obvious implications of the structure it has created—it is a state secret which tens of millions of us have been impacted by it. This “secret” means that no one has legal “standing” to challenge the scope of the surveillance and its impact on nonsuspect Americans. Really. Our new whistleblower Phillip Long is a former senior AT&T technician who corroborates our evidence that, starting in 2001 AT&T required its technicians to route Internet traffic from throughout California through the San Francisco AT&T facility where the surveillance equipment was installed. Our three new expert witnesses explain that, given how Internet traffic is routed, our case obviously meets the legal standard for standing—that it is more likely than not that our five plaintiffs had at least one of their communications subjected to this surveillance infrastructure over the past 17 years. First, in an affidavit submitted to the judge, Long explains that in the mid-2000s he was ordered to divert a large amount of domestic Internet traffic to an AT&T facility at 611 Folsom Street in San Francisco, without any technological or business justification. We know from our first whistleblower Mark Klein that inside the Folsom Street facility a device called a “fiberoptic splitter” made a complete copy of the “peering” internet traffic that AT&T receives—email, web  browsing requests, and other electronic communications sent to or from the customers of AT&T’s Internet service from people who use another Internet service provider. The copied Internet traffic was diverted into a room, 641A, which is controlled by the NSA. This copying and diversion alone, regardless of what happens later, constitutes a violation of the federal Wiretap Act and the Stored Communications Act. In his previous rulings, the judge in Jewel has disregarded—improperly, we believe—Klein’s testimony and the authenticated internal AT&T documents he provided which describe the surveillance networking. 
The judge said that Klein alone can’t establish with certainty what the purpose of the room is or what data was being processed. In our latest filing we explain that this ruling was incorrect. The legal and Constitutional violation of your rights occurs when your communications are “intercepted” by the surveillance infrastructure, which happens at the point of copying outside the room. Now we have corroborating evidence of the massive scope of that diversion from Long. Long was responsible for setting up, connecting, and maintaining Internet circuits, including connecting customers to AT&T’s Internet backbone circuits. Long said that not only was he instructed to reroute Internet backbone connections for numerous cities in California to Folsom Street, he was also told to bring fiber optic cable connected to equipment there and leave the terminating end in front of Room 641A instead of plugging it into another piece of equipment, as was standard practice. Later he connected a terminal jack to that end, and another cable then ran from that jack into Room 641A. Second, we present three new expert witnesses, Professor Matt Blaze, Dr. Brian Reid, and former FTC Chief Technologist Ashkan Soltani, who are experts in telecommunications, data networking, cybersecurity, and privacy. They place the evidence in context and show how the AT&T documents and witness statements are corroborated by the government’s own statements about the surveillance and the secret FISA court descriptions of it. The experts conclude that it’s unfathomable that at least one of the plaintiffs’ communications did not travel through AT&T’s Folsom Street facility. For example, cybersecurity expert Ashkan Soltani said in order to ensure that email is not lost, Internet providers of services like Gmail and Yahoo mail have systems that copy, split up, and move users’ communications between data centers around the world in little pieces they call “shards.” “If the NSA or other outsiders intercepted a single shard, they could glean significant information about the communications, including an entire email or chat. Even if a shard did not contain a complete communication, interception of multiple shards would allow the entire communication to be reconstituted,” Soltani wrote in an affidavit to the court. Professor Blaze and Dr. Reid speak both from the public evidence and from their direct experience and knowledge of AT&T’s Internet network. Professor Blaze states that, “[i]t is highly likely that the communications of all plaintiffs passed through the link connected to the splitter (and thus the splitter itself) that Klein describes.” Dr. Reid confirms that given the volatile nature of Internet routing, “it is unfathomable . . . that in 17 years, at least one of plaintiffs’ communications did not travel via the peering points at AT&T’s 611 Folsom Street Facility, a major Internet peering point.” We have been steadfast in the face of government shenanigans to bring the NSA to account for mass surveillance of our emails, phone call information, and other communications. We’ve had our case dismissed but we fought the decision and it was reversed on appeal. We’ve overcome multiple delays. We’ve forced the NSA to produce evidence to the judge about whether our plaintiffs were subjected to mass, warrantless surveillance. 
And earlier this year, the former NSA director finally submitted a 193-page declaration in response to our questions, in addition to producing thousands of pages of other evidence concerning the NSA's spying program for the court to review. No case challenging NSA surveillance has ever pushed this far. The government wants the court, and the American people, to believe the remote possibility that the NSA's surveillance program may have magically excluded every single communication or communication record of our plaintiffs. This is ludicrous and wrong on the law. We need only show that it is more likely than not that a piece of their communications was captured by the programs to have standing. We don't need to identify any specific communication, or show what happened to it, or whether the NSA looked at it. The law is very clear on that. The American people deserve, at a minimum, a public court decision about whether we are allowed to have a private conversation and private associations in the digital age. We deserve a voice in whether our networks are tapped and watched, regardless of the reason. But this blanket secrecy argument remains our biggest barrier in the U.S., even as the European Court determined that it could evaluate the U.K.'s version of this Internet backbone wiretapping program in a public process. With a new whistleblower and experts, we have submitted strong and incontrovertible public evidence that our plaintiffs have standing. We've urged the court to let our plaintiffs, who in a sense are standing up for all Americans, have their day in a public, adversarial American court. Related Cases: Jewel v. NSA

Election Security Remains Just as Vulnerable as in 2016 (Sa, 29 Sep 2018)
The ability to vote for local, state, and federal representatives is the cornerstone of democracy in America. With mid-term congressional elections looming in early November, many voices have raised concerns that the voting infrastructure used by states across the Union might be suspect, unreliable, or potentially vulnerable to attacks. As Congress considers measures critical to consumer rights and the functioning of technology (net neutrality, data privacy, biometric identification, and surveillance), ensuring the integrity of elections has emerged as a matter of crucial importance. The right to vote may not be guaranteed for many people across the country. Historically, access to the ballot has been hard fought, from the Revolution and the Civil War to the movement for civil rights that compelled the Voting Rights Act (VRA). But restrictions on voting rights have proliferated since the Supreme Court struck down the VRA's pre-clearance provisions in 2013. Coupled with those procedural impediments to voting, unresolved problems continue to plague the security of the technology that many voting precincts use in elections. With mid-term elections in just two months, Secretaries of State should be pressed to do their jobs and increase security before voters cast their ballots. An individual's experience at the ballot box varies widely across the country. States administer local and national elections, and individual precincts may provide a variety of different ways to vote depending on state rules and funding. In states like Oregon, every eligible voter is mailed a ballot, which they are encouraged to return. In the District of Columbia, voters can choose between casting their vote on a paper ballot that is read by an optical scanner or voting at an electronic voting machine. But in Georgia, Louisiana, Delaware, New Jersey, and South Carolina, voters can only use an electronic voting machine. That may seem problematic in the abstract, but in these states voters never even receive a receipt that allows them—or election auditors—to check to make sure the machine is calibrated correctly and recorded the right vote. And once votes are cast, states use different infrastructure to tally and analyze the vote and decide the election. Investigations into the 2016 Presidential election and Russian interference show that foreign governments and malicious online actors are probing many vulnerabilities within U.S. elections. Whether their goals are to shape the outcome, or simply to cause turmoil, they warrant attention and a serious attempt to secure the election from interference.

Security Flaws in Voting Machines
The evidence is clear: there are numerous ways to exploit and tamper with voting machines currently in use in the United States. At this year's DEF CON, an annual security research conference, researchers evaluated a voting machine that's used in 18 different states. They demonstrated how easy it is to gain administrative access, which lets someone change settings—or even the ballot—in under two minutes.
The researchers concluded that, because it takes the average voter about 6 minutes to cast a vote, "This indicates one could realistically hack a voting machine in the polling place on Election Day within the time it takes to vote." Another participant turned the voting machine into a jukebox in just a few hours. DEF CON's "Voting Village" includes electronic voting machines marketed to state election officials in the mid-2000s and in use today, which the organizers were able to buy on eBay. They also tested ballot counting equipment used across the country, finding that: "A voting tabulator that is currently used in 23 states is vulnerable to be remotely hacked via a network attack. Because the device in question is a high-speed unit designed to process a high volume of ballots for an entire county, hacking just one of these machines could enable an attacker to flip the Electoral College and determine the outcome of a presidential election." Voting equipment manufacturers have tried to undermine the credibility of these election security researchers, even suggesting that some may be foreign spies. Fortunately, Senators have rebuked this attack on security researchers investigating electronic voting machines, saying, "We are disheartened that ES&S chose to dismiss these demonstrations as unrealistic and that your company is not supportive of independent testing." A congressional working group corroborated security researchers' findings last January. The working group concluded that election infrastructure is largely insecure across the country. One of the group's main recommendations was to replace outdated voting machines with paper ballots, which are regarded as the most secure way to cast a vote. The group outlined a number of further security concerns affecting voting machines, including:
- 42 states using machines 10 years or older, which are susceptible to vote flipping (recording the opposite candidate) and crashing;
- At least ten states using machines that provide no paper record or receipt for voters to ensure that the right choices were cast;
- Machines running unsupported software, like Windows XP and Windows 2000 (now discontinued);
- Machines including software or hardware that allows for an internet connection, even if it is not used by poll workers;
- And machines using removable memory cards or USB ports, which allow an attacker to physically access the machine and run the risk of memory cards being programmed incorrectly by third-party companies.
And the National Academies of Sciences and Engineering just released a report recommending that "Every effort should be made to use human-readable paper ballots in the 2018 federal election. All local, state, and federal elections should be conducted using human-readable paper ballots by the 2020 presidential election."

Auditing Voting Results
In order to know if a polling station's or even a state's votes are accurate and there hasn't been a security breach, states must check the vote. Paper ballots provide a record that auditors can check against, and risk-limiting audits provide an easy procedure for states to verify vote tallies. Risk-limiting audits are designed so that state election auditors hand-count a small percentage of the total votes cast, with the percentage changing based on how close an election is predicted to be. For the 2016 Presidential election, researcher Ron Rivest calculated that Michigan, if it had risk-limiting audits, could have counted just 11% of the ballots and achieved a 95% chance of spotting an incorrect result.
Texas and Missouri, with their wider margins, could have counted 700 ballots and 10 ballots, respectively, to achieve the same confidence. Currently, 33 states require post-election audits, but many election experts note that the methods used are not sufficient to actually determine whether a vote has been tampered with. Only New Mexico and Colorado use risk-limiting audits, but Rhode Island will begin using them in this year's midterms. Other states should consider matching these best practices to increase the reliability of their elections.

Attacks on State Infrastructure
The investigation into Russian-backed hacking and interference with the 2016 Presidential election shows just how susceptible state election systems (not just the voting equipment) are to outside interference. Special Counsel Robert Mueller's indictment of 12 Russian intelligence operatives details how Russian GRU officers researched state election board domains, hacked into the website of one of the state election boards and stole information about 500,000 voters, and also hacked into computers of a company that supplied software used to verify voter registration information. The congressional report on election security also found that: "The Russian government directed efforts to target voting systems in 21 states prior to the 2016 election. Although there is no evidence of the attacks altering the vote count, Kremlin hackers were able to breach at least two states' voter registration databases. Russia's appetite for undermining confidence in western democratic institutions – by disenfranchising voters or calling into question the integrity of election administration by altering voter information – is only growing stronger." Some states have done very little to actually secure their citizens' data and votes. A federal lawsuit is now challenging Georgia's security practices, including allowing the records of more than 6 million registered Georgia voters, password files, and encryption keys to be accessed online by anyone with the right website address. The District Judge found that there is not enough time to enforce the use of paper ballots without causing a disruption to the November election, but argued that the case should proceed to briefing because of "democracy's critical need for transparent, fair, accurate and verifiable election processes that guarantee each citizen's fundamental right to cast an accountable vote."

Politics of Do-Nothing
With the overwhelming evidence that elections in the United States are insecure, one large question remains: Why hasn't Congress stepped in? There have been efforts over the past two years to increase funding for election security and even mandate paper ballots and risk-limiting audits, the industry gold standard. A bi-partisan group of Senators put forward the Secure Elections Act, which originally included provisions tying funding to paper ballots and risk-limiting audits, before being watered down and eventually postponed. Senator Wyden (D-OR) has also authored the Protecting American Votes and Elections Act, which requires all states to implement these recommendations from security researchers. But special interests continue to get in the way. First, electronic voting machine manufacturers continue to lobby Congress and state governments to encourage the use of their machines, even for models which pose significant security concerns by failing to provide paper records.
When Congress begins to move forward on election security provisions, states then push back, saying that the proposals are not only burdensome but exceed federal authority in elections. In response to the proposed Secure Elections Act, states lobbied so vigorously that the White House intervened and killed the bill because it violated the "principles of Federalism." It is disturbing (as well as potentially self-defeating) that some of the Secretaries of State blocking federal efforts to secure elections are now running for higher office. Congress has allocated $380 million for states to apply for grants to improve their election infrastructure, but states did not begin receiving the funds until this summer, calling into question whether states will actually be able to purchase new equipment, update software, and redesign election website security as their grant proposals lay out. President Trump recently signed an executive order creating a system to impose sanctions, at his discretion, on anyone who interferes with U.S. elections, which may be the strongest deterrent in preventing foreign interference in this year's midterms. But Senators on both sides of the aisle say that this order targeting individuals is not enough; instead, such sanctions should be mandatory and Congress should continue to push for legislation to "beef up our election security and prevent future attacks."

You Should Still Vote
The beauty of locally administered elections is that people have the ability to demand that local officials protect the vote and enact much-called-for security reform. In each state, you can push for your Secretary of State and state legislature to enact risk-limiting audits. In states with no paper trail, you can push for paper ballots or, at the least, machines that provide receipts. And if you support a federal baseline to ensure the minimum security requirements suggested by researchers (paper ballots and risk-limiting audits), tell your Congressperson. Voting may be a constitutional right, but it's time for Congress and state governments to make it a funding necessity.
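The statistical intuition behind the risk-limiting audits discussed above is easy to demonstrate. The Monte Carlo sketch below is only an illustration with made-up numbers, not the formal procedure real audits follow: it estimates how often a modest random hand count of paper ballots would contradict a falsely reported outcome when the true winner actually received 52% of the vote.

```python
import random

def chance_sample_contradicts_flip(true_share: float, sample_size: int,
                                   trials: int = 10_000) -> float:
    """Estimate how often a random hand-counted sample shows the true winner
    ahead, contradicting a reported outcome that declared the other candidate
    the victor. true_share is the true winner's actual fraction of all ballots."""
    contradictions = 0
    for _ in range(trials):
        winner_votes = sum(random.random() < true_share for _ in range(sample_size))
        if winner_votes > sample_size / 2:   # sample disagrees with the reported result
            contradictions += 1
    return contradictions / trials

# Illustrative only: with a true 52/48 split, a few thousand randomly drawn
# paper ballots are very likely to expose a flipped outcome.
for n in (100, 1000, 3000):
    print(n, chance_sample_contradicts_flip(0.52, n))
```

Real risk-limiting audits use carefully derived stopping rules so the sample size adapts to the reported margin, but the underlying idea is the same: paper ballots plus a random hand count make a falsified tally statistically very hard to hide.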

Victory! Gov. Brown Signs Bill Adding Sensible Requirements for DNA Collection From Minors (Fr, 28 Sep 2018)
California's kids now have common-sense protections against unwarranted DNA collection. Gov. Jerry Brown this week signed A.B. 1584, a new law requiring law enforcement to get either judicial approval or permission from both the minor and a parent, legal guardian, or attorney before collecting a DNA sample from the minor. EFF has supported the bill, introduced earlier this year by Assemblymember Lorena Gonzalez Fletcher, from the beginning. DNA can reveal an extraordinary amount of private information about a person, from familial relationships to medical history to predisposition for disease. Children should not be exposed to this kind of privacy invasion without strict guidelines and the advice and consent of a parent, legal guardian, or attorney. Kids need to have an adult present who represents their interests and can help them understand both their rights and the lifelong implications of handing one’s sensitive genetic material over to law enforcement. This law will make sure that happens. Thanks to A.B. 1584, police will now have to obtain a court order, search warrant, or the consent of both the minor and their parent, legal guardian, or attorney before collecting DNA. They will also have to automatically expunge any voluntary sample collected from a minor within two years, if the sample doesn't implicate that minor as a suspect for a criminal offense. Law enforcement must also give kids a form requesting to have their DNA sample expunged and make reasonable efforts to comply promptly with such requests. This law was necessary to close a loophole in an existing law that attempted to limit the circumstances under which law enforcement could collect DNA from kids. Unlike A.B. 1584, however, that law only applied to DNA seized for inclusion in statewide or federal DNA databases. As Kelly Davis of the Voice of San Diego reported, police in San Diego realized they could get around the existing protections by storing DNA locally, and then instituted a policy of collecting samples from kids for "investigative purposes" by obtaining minors' "consent" but without any parental notice or approval. In at least one case, which has given rise to an ACLU lawsuit, police stopped a group of kids who were walking through a park after leaving a basketball game at a rec center and asked each to sign a form “consenting” to a cheek swab. We applaud Gov. Brown's decision to protect the rights of California's youth by signing A.B. 1584 into law.

Facebook Data Breach Affects At Least 50 Million Users (Fr, 28 Sep 2018)
If you found yourself logged out of Facebook this morning, you were in good company. Facebook forced more than 90 million Facebook users to log out and back into their accounts Friday morning in response to a massive data breach. According to Facebook's announcement, it detected earlier this week that attackers had hacked a feature of Facebook that could allow them to take over at least 50 million user accounts. At this point, information is scant: Facebook does not know who's behind the attacks or where they are from, and the estimate of compromised accounts could rise as the company's investigation continues. It is also unclear to what extent user data was accessed and accounts were misused. What is clear is that the attack—like many security exploits—took advantage of the interaction of several parts of Facebook's code. At the center of this is the "View As" feature, which you can use to see how your profile appears to another user or to the public. (Facebook has temporarily disabled the feature as a precaution while it investigates further.) Facebook tracked this hack to a change it made to its video uploading feature over a year ago, in July 2017, and how that change affected View As. The change allowed hackers to steal Facebook "access tokens." An access token is a kind of "key" that controls your login information and keeps you logged in. It's the reason you don't have to log into your account every time you use the app or go to the website. Apparently, the View As feature inadvertently exposed access tokens for users who were "subject to" View As. That means that, if Alice used the View As feature to see what her profile would look like to Bob, then Bob's account might have been compromised in this attack. This morning, in addition to resetting the access tokens and thus logging out the 50 million accounts that Facebook knows were affected, Facebook has also reset access tokens for another 40 million accounts that have been the subject of any View As look-up in the past year. This breach comes on the heels of an entirely different mishandling of information by Facebook, and in the aftermath of Facebook's Cambridge Analytica scandal. Days after the Cambridge Analytica news broke, Facebook CEO and founder Mark Zuckerberg told users, "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you." Yup.
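As a rough way to picture what "resetting access tokens" means, here is a generic, hypothetical sketch of a bearer-token session store; it is not Facebook's actual design, and all names in it are made up. Issuing a token logs a user in, and revoking every token tied to an affected account is what a mass forced log-out amounts to.

```python
import secrets
import time

# Hypothetical in-memory token store (token -> user, issue time); a generic
# illustration of the "access token" concept, not any company's real code.
TOKENS: dict[str, tuple[str, float]] = {}

def issue_token(user_id: str) -> str:
    """Log a user in by issuing a random bearer token that keeps the session alive."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (user_id, time.time())
    return token

def is_logged_in(token: str) -> bool:
    """A request is authenticated as long as its token is still in the store."""
    return token in TOKENS

def reset_tokens_for(user_ids: set[str]) -> int:
    """Invalidate every outstanding token for the affected accounts, forcing
    those users (and anyone holding a stolen token) to log in again."""
    revoked = [t for t, (uid, _) in TOKENS.items() if uid in user_ids]
    for t in revoked:
        del TOKENS[t]
    return len(revoked)

# Usage: once "alice" is swept into a reset, a stolen copy of her token is useless.
alice_token = issue_token("alice")
reset_tokens_for({"alice"})
print(is_logged_in(alice_token))   # False
```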

Copyright and Speech Should Not Be Treated Like Traffic Tickets (Fr, 28 Sep 2018)
While there may not be consensus on what they are, there is a shared belief that U.S. copyright law has some serious problems. But the CASE Act, which aims to treat copyright claims like traffic tickets, is not the answer. On Thursday, September 27, the House Judiciary Committee held a hearing on the CASE Act (H.R. 3945). The CASE Act would create a "small claims" system for copyright, but not within the courts. Instead, cases would be heard by "Claims Officers" at the Copyright Office in Washington, D.C. And the Copyright Office has a history of presuming the interests of copyright holders are more valid than other legal rights and policy concerns, including the free expression values protected by fair use. Basically every concern we had about the CASE Act last year remains: Turning over quasi-judicial power, which would include issuing damages awards of up to $15,000 per work infringed or $30,000 per proceeding, and agreements which boil down to binding injunctions, to a body with this history is unwise. In addition to the problem of turning the Copyright Office into a quasi-court with jurisdiction over everyone in the U.S., CASE would invite gamesmanship and abuse, magnify the existing problem of copyright's unpredictable civil penalties, and would put this new group in charge of punishing DMCA abuse, while also limiting the effectiveness of the DMCA's deterrence factor. Photographers have a legitimate concern about their work being taken and used whole without proper payment. But copyright claims should not be bulk-processed like traffic tickets—especially not when statutory damages under the CASE Act are so much higher than in traffic court, requiring no proof of actual harm. And especially not when the case won't be heard by an actual judge, one whose job description doesn't place copyright at the center of the legal universe. During the hearing, proponents of the bill constantly pointed to the bill's "opt-out" mechanism as the be-all, end-all answer to this problem. That argument very much misses the point. Proposed changes to CASE would add a second notice to be served to someone being accused of infringement under the new regime. That means the first notice to opt out would look like spam and the second would look like a legal summons, which people don't traditionally have the option of opting out of. The average person, faced with being served in the same way they would be for a real lawsuit, is not going to understand that they can opt out of this system. When people have enough trouble understanding how to challenge false DMCA notices, how are they going to know how to respond to a confusing summons from Washington, D.C.? Making it easier to collect damages on copyright claims invites abuse. It invites filing as many copyright claims as one can against whomever is least likely to opt out and most likely to be able to pay. And any proposal to limit the number of claims filed by someone doesn't fix this problem or truly help the people whose work keeps getting taken and infringed upon. One of the proposed changes is to put a cap on the number of claims that can be made—trolls can have hundreds of "clients" or many corporate identities. The ornaments that have been proposed as "fixes" for this bill only reveal the scale of the trolling problem the bill would cause, and highlight the flaws at its very core. Other cosmetic changes to the bill, including increasing penalties for "bad faith" actions, don't change the fundamentally rotten core of the bill.
All judicial process depends on, and assumes, that parties will act in good faith, but mere requirements to play nicely have never stood in the way of those (like copyright trolls) who are determined to game the system. At the very least, a truly neutral magistrate with no organizational commitment to one side of the dispute is needed to give effect to a rule against “bad faith” claims. Both some members of the House Judiciary Committee and entertainment industry witnesses during the hearing seem convinced that copyright trolls and the average small user who does not understand this process are “hypothetical.” Neither of these things is hypothetical. Lawsuits against individual Internet users alleging copyright infringement over BitTorrent networks—one of the most prolific types of copyright trolling—are just under half of all copyright lawsuits in the U.S.  The plaintiffs in these cases pursue landlords and nursing home operators, elderly people with little or no knowledge of the Internet, and deployed military personnel. As is often the case in situations like these, the people hurt will not be major companies, but small businesses and individuals. Throughout the hearing we also heard, consistently, that there had been a lot of negotiation on CASE, that the “discussion draft” represented a lot of concessions from the content side to the Internet side. The fact that the discussion draft is “better” does not mean it is “good.” Getting large Internet companies to agree to the bill—companies who can identify trolls and know to opt-out—does not mean a bill that protects the average Internet user from abusive litigation. There is a fundamental problem with making it so easy to file these kinds of complaints in a quasi-judicial system with no space for robust appeal. Copyright law fundamentally impacts freedom of expression. It can’t be treated with less care than a traffic ticket.

Stupid Patent of the Month: Trolling Virtual Reality (Fr, 28 Sep 2018)
This month’s stupid patent describes an invention that will be familiar to many readers: a virtual reality (VR) system where participants can interact with a virtual world and each other. US Patent No. 6,409,599 is titled “Interactive virtual reality performance theater entertainment system.” Does the ’599 patent belong to the true inventors of VR? No. The patent itself acknowledges that VR already existed when the application was filed in mid-1999. Rather, it claims minor tweaks to existing VR systems such as having participants see pre-recorded videos. In our view, these tweaks were not new when the patent application was filed. Even if they were, minor additions to existing technology should not be enough for a patent. The ’599 patent is owned by a company called Virtual Immersion Technologies, LLC. This company appears to have no other business except patent assertion. So far, it has filed 21 patent lawsuits, targeting a variety of companies ranging from small VR startups to large defense companies. It has brought infringement claims against VR porn, social VR systems, and VR laboratories. Virtual reality was not new in mid-1999. The only supposedly new features of the ’599 patent are providing a live or prerecorded video of a live performer and enabling audio communication between the performer and a participant. Similar technology was infamously predicted in the Star Wars Holiday Special of 1978. In this sense, the patent is reminiscent of patents that take the form: “X, but on the Internet.” Here, the patent essentially claims video teleconferencing, but in virtual reality. Claim 1 of the ’599 patent is almost 200 words long, but is packed with the kind of mundane details and faux-complexity typical of software patents. For example, the claim runs through various “input devices” and “output devices” assigned to the “performer” and “participant.” But any VR system connecting two people will have such things. How else are the users supposed to communicate? Telepathy? Like many software patents, the ’599 patent describes the “invention” at an absurdly high, and unhelpful, level of abstraction. Any specific language in the patent is hedged to the point that it becomes meaningless. The “input devices” might be things like a “keypad or cyberglove,” but can also be any device that “communicate[s] with the computer through a variety of hardware and software means.” In other words, the “input device” can be almost any device at all. The patent suggests that the “underlying control programs and device drivers” can be written in “in many different types of programming languages.” Similarly, the “network communication functions” can be accomplished by any “protocols or means which may currently exist or exist in the future.” The overall message: build yourself a VR system from scratch and risk infringing. RPX filed an inter partes review petition arguing that claims of the ’599 patent were obvious at the time of the application. The petition argues, persuasively in our view, that earlier publications describe the supposed invention claimed by the ’599 patent. The inter partes review proceeding has since settled, but any defendant sued by Virtual Immersion Technologies, LLC can raise the same prior art (and more) in their defense. Unfortunately, it is very expensive to defend a patent suit and this means defendants are pressured to settle even when the case is weak. The ’599 patent highlights many of the weaknesses of the patent system, especially with respect to software patents. 
First, the Patent Office failed to find prior art. Second, the patent claims are vague and the patent isn't tied to any concrete implementation. Finally, the patent ended up being used to sue real companies that employ people and make things.

You Gave Facebook Your Number For Security. They Used It For Ads. (Fr, 28 Sep 2018)
Add “a phone number I never gave Facebook for targeted advertising” to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes—or that users never provided at all—for targeted advertising. A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook’s latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and “shadow” contact information. Two-Factor Authentication Is Not The Problem First, when a user gives Facebook their number for security purposes—to set up 2FA, or to receive alerts about new logins to their account—that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.) But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It’s not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users’ information and violated their reasonable security and privacy expectations. There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a “second factor” code when you log in. Other types of 2FA—like authenticator apps and hardware tokens—do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies—Google notable among them—also still follow that outdated practice. Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook's repeated claims that we have “complete control” over our own information, but has also seriously damaged users’ trust in a foundational security practice. Until Facebook and other companies do better, users who need privacy and security most—especially those for whom using an authenticator app or hardware key is not feasible—will be forced into a corner. Shadow Contact Information Second, Facebook is also grabbing your contact information from your friends. Kash Hill of Gizmodo provides an example: ...if User A, whom we’ll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we’ll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call “shadow contact information,” about a month later. This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books. Even worse, none of this is accessible or transparent to users. You can’t find such “shadow” contact information in the “contact and basic info” section of your profile; users in Europe can’t even get their hands on it despite explicit requirements under the GDPR that a company give users a “right to know” what information it has on them. 
As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and “shadow” contact data from non-essential use would be a good start.
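The "shadow contact information" flow Hill describes can be pictured with a small, purely hypothetical sketch (the names, numbers, and function names below are invented for illustration and are not Facebook's code): one user's uploaded address book is enough to tie a phone number to another person who never handed it over, and an advertiser's number list can then be matched against it.

```python
# Hypothetical illustration of "shadow" contact data, not any company's real code.
shadow_contacts: dict[str, set[str]] = {}   # phone number -> accounts it now points to

def upload_address_book(uploader: str, contacts: dict[str, str]) -> None:
    """Ingest an uploaded address book, mapping each contact's number to their name
    (standing in here for that person's account)."""
    for contact_name, number in contacts.items():
        shadow_contacts.setdefault(number, set()).add(contact_name)

def match_ad_audience(numbers: list[str]) -> set[str]:
    """Resolve an advertiser's list of phone numbers to matching accounts."""
    matched: set[str] = set()
    for number in numbers:
        matched |= shadow_contacts.get(number, set())
    return matched

# Anna shares her contacts; Ben never gave the service his number himself,
# yet an advertiser holding that number can now reach his account.
upload_address_book("Anna", {"Ben": "+1-555-0100"})
print(match_ad_audience(["+1-555-0100"]))   # {'Ben'}
```

Because the link is created by someone else's upload, the person being targeted never sees it and has nothing to delete in their own settings, which is exactly the transparency problem described above.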

Vermont’s New Data Privacy Law (Fr, 28 Sep 2018)
Data brokers intrude on the privacy of millions of people by harvesting and monetizing their personal information without their knowledge or consent. Worse, many data brokers fail to securely store this sensitive information, predictably leading to data breaches (like Equifax) that put millions of people at risk of identity theft, stalking, and other harms for years to come. Earlier this year, Vermont responded with a new law that begins the process of regulating data brokers. It demonstrates the many opportunities for state legislators to take the lead in protecting data privacy. It also shows why Congress must not enact a weak data privacy law that preempts stronger state data privacy laws. What Vermont’s Law Does Vermont’s new data privacy law seeks to protect consumers from data brokers through four important mechanisms. Transparency. Data brokers must annually register with the state. When doing so, they must disclose whether consumers may opt-out of data collection, retention, or sale, and if so, how they may do so. A data broker must also disclose whether it has a process to credential its purchasers, and its number of security breaches. Duty to secure data. Data brokers must adopt comprehensive data security programs with administrative, technical, and physical safeguards. No fraudulent collection. Data brokers may not collect personal information by fraudulent means, or for the purpose of harassment or discrimination. Free credit freezes. Credit freezes are an important way for consumers to protect themselves from the fallout of a data breach. Many businesses will not extend credit absent a report from a credit reporting agency, and a credit freeze bars these agencies from issuing a report until a consumer lifts the freeze when they actually want credit. Vermont already empowered consumers to use credit freezes to protect themselves from credit fraud. The new Vermont law bars credit agencies from charging consumers fees for this protection. What Vermont Should Do Next Vermont’s legislators must not rest on their laurels. Rather, they should consider three sets of improvements to their state’s data privacy laws. “First party” data miners. The new Vermont law defines a “data broker” as a business that collects and sells personal information from consumers with whom the broker has no direct relationship. Thus, the Vermont law begins to address “third-party” data mining (that is, data mining by companies that have no direct relationship with consumers). But it does not address “first-party” data mining (that is, data mining by companies that do have a direct relationship with consumers). For example, the Vermont law does not cover a social media platform like Facebook, or a retailer like Walmart, when those companies gather information about how consumers interact with their own websites. The Vermont Attorney General is now holding hearings regarding whether Vermont should next regulate first-party data mining (among other things). We hope Vermont will find smart, appropriately tailored ways to do so. More rules for data brokers. Vermont should do more to protect consumers from data brokers. As EFF has explained, new laws should: (i) impose on data brokers a fiduciary duty towards the consumers whose data they harvest and monetize; (ii) establish a government office to assist the victims of data breaches; and (iii) ensure that victims of data breaches can seek compensation for their non-financial injuries, and not just their financial injuries. 
EFF also supports a consumer's "right to know" what personal information a data broker has gathered about them, how the broker obtained it, and to whom the broker sold it. Such legislation must be carefully tailored to avoid undue burdens on free speech and innovation. Under the Vermont law, however, a consumer can only learn which data brokers are operating in the state, and a few general facts about those operations, but nothing about the harvesting of the consumer's own personal information. Further, the Vermont law does not require any form of consumer consent for data collection or sale. Rather, it only requires data brokers to publicly disclose whether there is a way for consumers to opt out, and if so, how. In some cases, data brokers should be required to obtain consent to collect or sell a consumer's personal information. For example, the new Vermont law defines "personal information" to include biometrics, and no one should be allowed to collect or sell someone else's biometrics without their informed, opt-in consent. Stronger enforcement. The new Vermont law provides that violations of the data security requirement and the ban on fraudulent acquisition are "unfair and deceptive acts" under existing state law. This means consumers can sue violators of these two new rules. This ability to bring a private cause of action is a powerful enforcement tool, because consumers don't have to wait for the government to hold a data broker accountable. Instead, they can do it themselves. Unfortunately, the same does not hold true for the new Vermont rule requiring transparency from data brokers. It should, and we urge Vermont to look for ways to give consumers a way to enforce the transparency rule as well. The Vermont Attorney General may enforce all of these rules, which is good. But it is no substitute for the empowerment of "private attorneys general" to enforce the law when an Attorney General cannot or will not do so. Note to Congress: Don't Get In the Way Vermont is helping lead a national movement for data privacy. It joins other states like California, which recently enacted its Consumer Privacy Act, and Illinois, which nearly a decade ago enacted its Biometric Information Privacy Act. EFF hopes more states will enact smart, tailored laws that protect the privacy of technology users, while steering clear of First Amendment concerns and undue burdens. State legislatures have long been known as "laboratories of democracy" and they are serving that role now. But some tech giants aren't happy about that, and they are trying to get Congress to pass a weak federal data privacy law that would foreclose state efforts. They are right about one thing: it would be great to have one nationwide set of protections – but not if those protections are illusory or inadequate. Over 90% of Americans feel like they have no control over their privacy. Congress should be working to give them that control, instead of letting the companies with the worst privacy track records dictate users' legal rights.

Platform Censorship: Lessons From the Copyright Wars (Mi, 26 Sep 2018)
There's a lot of talk these days about "content moderation." Policymakers, some public interest groups, and even some users are clamoring for intermediaries to do "more," to make the Internet more "civil," though there are wildly divergent views on what that "more" should be. Others vigorously oppose such moderation, arguing that encouraging the large platforms to assert an ever-greater role as Internet speech police will cause all kinds of collateral damage, particularly to already marginalized communities. Notably missing from most of these discussions is a sense of context. Fact is, there's another arena where intermediaries have been policing online speech for decades: copyright. Since at least 1998, online intermediaries in the US and abroad have taken down or filtered out billions of websites and links, often based on nothing more than mere allegations of infringement. Part of this is due to Section 512 of the Digital Millennium Copyright Act (DMCA), which protects service providers from monetary liability based on the allegedly infringing activities of third parties if they "expeditiously" remove content that a rightsholder has identified as infringing. But the DMCA's hair-trigger process did not satisfy many rightsholders, so large platforms, particularly Google, also adopted filtering mechanisms and other automated processes to take down content automatically, or prevent it from being uploaded in the first place. As the content moderation debates proceed, we at EFF are paying attention to what we've learned from two decades of practical experience with this closely analogous form of "moderation." Here are a few lessons that should inform any discussion of private censorship, whatever form it takes.

1. Mistakes will be made—lots of them
The DMCA's takedown system offers huge incentives to service providers that take down content when they get a notice of infringement. Given the incentives of the DMCA safe harbors, service providers will usually respond to a DMCA takedown notice by quickly removing the challenged content. Thus, by simply sending an email or filling out a web form, a copyright owner (or, for that matter, anyone who wishes to remove speech for whatever reason) can take content offline. Many takedowns target clearly infringing content. But there is ample evidence that rightsholders and others abuse this power on a regular basis—either deliberately or because they have not bothered to learn enough about copyright law to determine whether the content they object to is unlawful. At EFF, we've been documenting improper takedowns for many years, and highlight particularly egregious ones in our Takedown Hall of Shame. As we have already seen, content moderation practices are also rife with errors. This is unlikely to change, in part because:

2. Robots aren't the answer
Rightsholders and platforms looking to police infringement at scale often place their hopes in automated processes. Unfortunately, such processes regularly backfire. For example, YouTube's Content ID system works by having people upload their content into a database maintained by YouTube. New uploads are compared to what's in the database and when the algorithm detects a match, copyright holders are informed. They can then make a claim, forcing it to be taken down, or they can simply opt to make money from ads put on the video. But the system fails regularly. In 2015, for example, Sebastien Tomczak uploaded a ten-hour video of white noise.
A few years later, as a result of YouTube's Content ID system, a series of copyright claims were made against Tomczak's video. Five different claims were filed on sound that Tomczak created himself. Although the claimants didn't force Tomczak's video to be taken down, they all opted to monetize it instead. In other words, ads on the ten-hour video could generate revenue for those claiming copyright on the static. Third-party tools can be even more flawed. For example, a "content protection service" called Topple Track has been sending a slew of abusive takedown notices to have sites wrongly removed from Google search results. Topple Track has boasted that it was "one of the leading Google Trusted Copyright Program members." In practice, Topple Track's algorithms were so out of control that it sent improper notices targeting an EFF case page, the authorized music stores of both Beyonce and Bruno Mars, and a New Yorker article about patriotic songs. Topple Track even sent an improper notice targeting an article by a member of the European Parliament that was about improper automated copyright notices. So if a platform tells you that it's developing automated processes that will target only "bad" speech, at scale, don't believe them. (A simplified sketch of why this kind of automated matching misfires appears at the end of this post.) 3. Platforms must invest in transparency and robust, rapid appeals processes With the above in mind, every proposal and process for takedown should include a corollary plan for restoration. Here, too, copyright law and practice can be instructive. The DMCA has a counternotice provision, which allows a user who has been improperly accused of infringement to challenge the takedown and, if the sender doesn't go to court, the platform can restore the content without fear of liability. But the counternotice process is pretty flawed: it can be intimidating and confusing, it does little good where the content in question will be stale in two weeks, and platforms are often even slower to restore challenged material. One additional problem with counter-notices, particularly in the early days of the DMCA, was that users struggled to discover who was complaining, and the precise nature of the complaint. The number of requests, who is making them, and how absurd they can get has been highlighted in company transparency reports. Transparency reports can highlight extreme instances of abuse—such as in Automattic's Hall of Shame—and share aggregate numbers. The former is a reminder that there is no ceiling to how rightsholders can abuse the DMCA. The latter shows trends useful for policymaking. For example, Twitter's latest report shows a 38 percent uptick in takedowns since the last report and that 154,106 accounts have been affected by takedown notices. That data is valuable for evaluating the effect of the DMCA, and it is the same kind of data we need to see what effects "community standards" would have. Equally important is transparency about specific takedown demands, so users who are hit with those takedowns can understand who is complaining, and about what. For example, a remix artist might include multiple clips in a single video, believing they are protected fair uses. Knowing the nature of the complaint can help her revisit her fair use analysis and decide whether to fight back. If platforms are going to operate as speech police based on necessarily vague "community standards," they must ensure that users can understand what's being taken down, and why. They should do so on a broad scale by being open about their takedown processes and the results.
And then they should put in place clear, simple procedures for users to challenge takedowns that don't take weeks to complete. 4. Abuse should lead to real consequences Congress knew that Section 512's powerful incentives could result in lawful material being censored from the Internet without prior judicial scrutiny. To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances, including Section 512(f), which gives users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith. In practice, however, Section 512(f) has not done nearly enough to curb abuse. Part of the problem is that the Ninth Circuit Court of Appeals has suggested that the person whose speech was taken down must prove to a jury that the sender of the notice subjectively knew it was baseless—a standard that will be all but impossible for most to meet, particularly if they lack the deep pockets necessary to litigate the question. As one federal judge noted, the Ninth Circuit's "construction eviscerates § 512(f) and leaves it toothless against frivolous takedown notices." That subjective standard shields, for example, rightsholders who unreasonably believe that virtually all uses of copyrighted works must be licensed. If they are going to wield copyright law like a sword, they should at least be required to understand the weapon. "Voluntary" takedown systems could do better. Platforms should adopt policies to discourage users from abusing their community standards, especially where the abuse is obviously political (such as flagging a site simply because you disagree with the view expressed). 5. Speech regulators will never be satisfied with voluntary efforts Platforms may think that if they "voluntarily" embrace the role of speech police, governments and private groups will back off and they can escape regulation. As Professor Margot Kaminski observed in connection with the last major effort to push through new copyright enforcement mechanisms, voluntary efforts never satisfy the censors: Over the past two decades, the United States has established one of the harshest systems of copyright enforcement in the world. Our domestic copyright law has become broader (it covers more topics), deeper (it lasts for a longer time), and more severe (the punishments for infringement have been getting worse). … We guarantee large monetary awards against infringers, with no showing of actual harm. We effectively require websites to cooperate with rights-holders to take down material, without requiring proof that it's infringing in court. And our criminal copyright law has such a low threshold that it criminalizes the behavior of most people online, instead of targeting infringement on a true commercial scale. In addition, as noted, the large platforms adopted a number of mechanisms to make it easier for rightsholders to go after allegedly infringing activities. But none of these legal policing mechanisms have stopped major content holders from complaining, vociferously, that they need new ways to force Silicon Valley to be copyright police. Instead, so-called "voluntary" efforts end up serving as a basis for regulation. Witness, for example, the battle to require companies to adopt filtering technologies across the board in the EU, free speech concerns be damned. Sadly, the same is likely to be true for content moderation. Many countries already require platforms to police certain kinds of speech. In the US, the First Amendment and the safe harbor of CDA 230 largely prevent such requirements.
But recent legislation has started to chip away at Section 230, and many expect to see more efforts along those lines. As a result, today’s “best practices” may be tomorrow’s requirements. The content moderation debates are far from over. All involved in those discussions would do well to consider what we can learn from a related set of debates about the law and policies that, as a practical matter, have been responsible for the vast majority of online content takedowns, and still are.  
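To make the automated-matching failure mode concrete, here is a deliberately oversimplified sketch of database-driven content matching. This is not YouTube's Content ID algorithm; the chunk size, hashing scheme, and claim threshold are invented for illustration. The point is only that coarse, automated fingerprinting has no notion of authorship or fair use, so low-information audio such as static can trigger a "match" against material the uploader never copied.

```python
# Toy illustration (not YouTube's Content ID): coarse chunk-hash matching.
# The chunk size and threshold below are made-up values for demonstration.

def fingerprint(samples, chunk=4):
    """Reduce a sequence of audio samples to coarse per-chunk hashes (very lossy)."""
    return [hash(tuple(samples[i:i + chunk])) for i in range(0, len(samples), chunk)]

def match_score(upload, reference):
    """Fraction of the reference's chunk hashes that also appear in the upload."""
    upload_prints = set(fingerprint(upload))
    ref_prints = fingerprint(reference)
    return sum(1 for h in ref_prints if h in upload_prints) / len(ref_prints)

CLAIM_THRESHOLD = 0.6  # hypothetical cutoff for filing an automated claim

# Two distinct clips of heavily quantized noise share no authorship, yet they
# end up sharing every coarse chunk hash, so the matcher "detects" a match anyway.
upload = [0, 1, 0, 1, 1, 0, 1, 0] * 50
reference = [1, 0, 1, 0, 0, 1, 0, 1] * 50

if match_score(upload, reference) >= CLAIM_THRESHOLD:
    print("Automated claim filed (a false positive)")
```

A real matching system is far more sophisticated, but the structural problem is the same: the matcher only measures similarity, while questions of licensing, authorship, and fair use require judgment it cannot supply.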

California's Net Neutrality Bill Should Be Signed Into Law (Mi, 26 Sep 2018)
Millions of Californians are waiting for Gov. Jerry Brown to affirm their call for a free and open Internet. After the Federal Communications Commission repealed its 2015 Open Internet Order, states have had to step up to ensure that all traffic on the Internet is treated equally. Gov. Brown's signature would make California the fourth state to pass a law offering net neutrality protections to its residents. While EFF applauds the states that have taken steps to provide net neutrality protections, we believe California's is the strongest measure in the country. It goes beyond the basic protections against blocking and interference laid out in Washington and Oregon, ensuring that Internet service providers cannot circumvent net neutrality protections at any point in delivering service to consumers. The bill also goes further than other measures by prohibiting ISPs in California from using the practice of discriminatory zero-rating – that is, disadvantaging competing services or apps by exempting from data limits the ISP's own affiliated products or the products of companies that pay the Internet access provider for preferential treatment. It also does not allow ISPs to charge other companies for access to their customers, a practice that has been banned for decades. California's decision to fight this battle in the legislature also sets it apart from other states that have enshrined limited protections through a governor's executive order, which can only dictate ISPs' conduct in state contracts. Laws are also harder to reverse than executive orders, ensuring that these important consumer protections cannot be undone in the future with the flick of a pen. ISPs such as Verizon, AT&T, and Comcast objected to several protections in the state's bill, and nearly succeeded in stripping the measure of many of its protections. Outcry from Californians demanding a free and open Internet restored these protections, and ensured the bill passed with bipartisan majorities in both the state Assembly and Senate. As a result of that groundswell, the legislation on the governor's desk is the nation's most comprehensive, pro-consumer net neutrality measure. Net neutrality remains popular nationwide, and the fight to protect the open Internet continues in states across the country and in Washington, D.C. The victory that your voices won over industry money in California can be achieved again. We will continue to fight for a free and open Internet in the state of California and encourage other states to look at the bill as a template for their own net neutrality rules. Take Action Californians: tell the governor to sign the net neutrality bill

A Consumer Privacy Hearing With No One Representing Consumers (Mi, 26 Sep 2018)
The Senate Commerce Committee hearing on consumer privacy this morning was exactly what we and other privacy advocates expected: a chorus of big tech industry voices, with no one representing smaller companies or consumers themselves. In his opening remarks, Senator Thune acknowledged the “angst” caused by the Committee's decisions to convene an industry-only panel, and promised more hearings with more diverse voices. We look forward to a confirmed hearing date with a diverse panel of witnesses from academia, advocacy, and state consumer protection authorities. Today’s hearing included witnesses from AT&T, Apple, Amazon, Charter, Google, and Twitter. All of them confirmed their support for a federal law to preempt California’s Consumer Privacy Act. Many recited talking points about the workload required to comply with the “patchwork” of state laws that they anticipate. However, none were able to answer the question of why the U.S. shouldn’t adopt standards along the lines of the EU’s GDPR or California’s CCPA. None of this was surprising. The companies represented largely rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. In the midst of an otherwise disappointing hearing, some Senators took a strong line on privacy that we applaud. Senator Markey requested that companies discuss a strong, privacy-protective bill before considering preemption of California’s new law. Senator Schatz questioned whether companies were coming to Congress simply to block state privacy laws and raised the prospect of creating an actual federal privacy regulator with broad authority. And Senator Blumenthal pointed out that, while the company representatives present claimed that GDPR and the CCPA imposed unreasonable burdens, they all seemed to be successfully complying. Moving Forward These next hearings need to include witnesses who can not only represent users, small businesses, and other interests, but who can also speak to the far-reaching issues implicated by “consumer privacy.” The Senate should call up consumer privacy experts to testify on issues such as facial recognition, locational privacy, biometric data, Internet of Things devices, identity theft and financial fraud, and discriminatory advertising. Over 90% of Americans feel like they have no control over their privacy. Congress should be working to actively give them back their control, instead of letting the companies with the worst privacy track records dictate users’ legal rights. EFF will continue to oppose any federal legislation that weakens today’s hard-fought privacy protections. Current state laws across the country have already created strong protections for user privacy, with more likely on the horizon. If Congress enacts weaker federal data privacy legislation that blocks such stronger state laws, the result will be a massive step backward for user privacy.

Don’t Make the Register of Copyrights into a Presidential Pawn (Mi, 26 Sep 2018)
H.R. 1695 Would Turn an Essential, Non-political Job Into a Partisan Appointee If we’ve learned one thing from this year in American politics, it’s that presidential appointments can be a messy affair. Debates over appointees can become extremely polarized. It’s not surprising: it’s in the President’s best interests to choose a head of the Department of Justice or Education who will loyally carry out the administration’s agenda in those offices. But there’s one office that simply should not be politicized in that way: the Copyright Office. Unfortunately, some lawmakers are looking to turn the Register of Copyrights into a political appointee. The Register of Copyrights Selection and Accountability Act (H.R. 1695) passed the House of Representatives last year, and now, the Senate is looking to take the bill up. Under H.R. 1695, the Register of Copyrights would become a presidential appointee, just like the directors of Executive Branch departments. Naturally, the president would appoint a Register who shares their interpretation of copyright law and other policy stances, and the nomination could come with a highly partisan confirmation process in the Senate. The Copyright Office is at its best when it has no political agenda: it’s a huge mistake to turn the Office into another political bargaining chip. The Register of Copyrights has two important, apolitical jobs: registering copyrightable works and providing information on copyright law to the government. The Office serves officially as an advisor to Congress, much like the Congressional Research Service (both offices are part of the Library of Congress). It has never been the Register’s job to carry out the president’s agenda. That’s why the Copyright Office is situated in Congress, not in the Executive Branch. H.R. 1695 is only the latest step in a trend of increasing politicization in the Copyright Office over the past decade. Former register Maria Pallante infamously said that “Copyright is for the author first and the nation second,” a far cry from copyright’s Constitutional purpose of promoting the advancement of science and knowledge. Under Pallante, the Copyright Office also supported the Stop Online Piracy Act, the Internet blacklist bill that would have been a disaster for free expression online. More recently, after being targeted by an aggressive lobbying campaign by the MPAA, the Copyright Office worked to undermine the FCC’s plan to bring competition to the cable box market. These policy missteps might be just the beginning. Imagine how much worse they would be if the Register were simply another presidential appointee subject to political agendas that have nothing to do with objectively advising Congress. The Copyright Office carries significant sway over how people interact with copyrighted works—and, by extension, how we interact with technology. Beyond its traditional role of registering copyrights, the Office has a new job: granting the public permission to bypass digital locks for purposes like education and security research under Section 1201 of the Digital Millennium Copyright Act. And under a bill that’s likely to pass Congress this year, the Register will be given the authority to decide whether uses of certain historical sound recordings qualify as “noncommercial.” These jobs demand a Register who understands that the purpose of copyright—first, second, and always—is to serve the public. Don’t turn the Register into yet another political appointee.

FOSTA Case Update: Court Dismisses Lawsuit Without Ruling on Whether the Statute is Unconstitutional (Di, 25 Sep 2018)
A federal court considering a challenge to the Allow States and Victims to Fight Online Sex Trafficking Act of 2017, or FOSTA, dismissed the case on Monday. EFF and partner law firms filed a lawsuit in June against the Justice Department on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist to block enforcement of FOSTA. Unfortunately, a federal court sided with the government and dismissed Woodhull Freedom Foundation et al. v. United States. The court did not reach the merits of any of the constitutional issues, but instead found that none of the plaintiffs had standing to challenge the law’s legality. We’re disappointed and believe the decision is wrong. For example, the court failed to apply the standing principles that are usually applied in First Amendment cases in which the plaintiffs’ speech is chilled. The plaintiffs are considering their options for their next steps. FOSTA was passed by Congress for the worthy purpose of fighting sex trafficking, but the poorly-written bill contains language that criminalizes the protected speech of those who advocate for and provide resources to adult, consensual sex workers. Worse yet, the bill actually hinders efforts to prosecute sex traffickers and aid victims. The lawsuit argues that FOSTA forces community forums and speakers offline for fear of criminal charges and heavy civil liability, in violation of their constitutional rights. We asked the federal court to strike down the law, though the government argued that the plaintiffs were not likely to be subject to criminal or civil liability under the law. Check our case page for this lawsuit for updates in the coming weeks. For the full complaint in Woodhull v. United States: https://www.eff.org/document/woodhull-freedom-foundation-et-al-v-united-states-complaint Related Cases:  Woodhull Freedom Foundation et al. v. United States

Remove the Drone Shoot-Down and Biometric Surveillance Sections From the FAA Act (Di, 25 Sep 2018)
Congress should not broadly authorize federal agencies to destroy and wiretap private drones, or give its implied endorsement to biometric screening of domestic travelers and U.S. citizens. Congress definitely should not sneak these invasive provisions into a last-minute, must-pass, thousand-page bill. Yet Congress is poised to do so. Please join EFF in saying "no." TAKE ACTION To keep the Federal Aviation Administration functioning, Congress must pass a reauthorization bill by September 30th. But the current bill has been stuffed with last-minute provisions that would strip people of their constitutional rights. Congress attached the Preventing Emerging Threats Act, with slightly modified language, to the FAA Reauthorization Act. But the new provisions do nothing to protect private drones—flown by journalists, businesses, and hobbyists—and their operators from unprovoked, warrantless take-downs and snooping by DOJ and DHS. The FAA Reauthorization also for the first time gives a congressional imprimatur to DHS' biometric scanning of domestic travelers and U.S. citizens. The basic functioning of a government agency should not be taken hostage by controversial legislation that strips people of their rights to speech and privacy. Unless these provisions are removed, Congress should not pass FAA Reauthorization. Warrantless Drone Take-Downs As we've previously reported, DOJ and DHS have been pushing Congress for overbroad authority to destroy, commandeer, and wiretap all unmanned aircraft (regardless of size). These agencies would be empowered to do so when needed to "mitigate" (undefined) a "credible threat" (to be defined later by the agencies) posed by a drone to "the safety or security" (undefined) of "a covered facility or asset" (broadly defined to include nearly any federal property). The pending FAA Reauthorization Act has changed some of the language from the original bill (Sec. 1601). But the overbroad shoot-down and wiretapping authorization is largely the same, and we're still opposed to this expansive warrantless authority. The new language narrows the blanket exemptions from U.S. law given to DOJ and DHS to disable drones, but it still entirely exempts the two statutes that protect people's electronic communications from government spying: the Wiretap Act and the Pen Register and Trap and Trace Act. The new version also includes discretionary notice, where DOJ and DHS are authorized to "warn" drone operators that they are flying in restricted airspace. But this authorization is not a requirement, meaning that the agencies are not actually required to get in touch with drone operators who may be flying over a "covered asset or facility" by mistake. The new language has provisions for reporting to Congress and collaborating with the Administrator of the FAA. But importantly, the bill still has no process for clearly stating what areas are "covered facilities," so that the public can know where they are allowed to fly. As we've noted before, the definition of "covered facility" could allow CBP to take down reporters' drones capturing footage of controversial detention facilities, as well as allow DOJ and DHS to take down photography drones used by activists and artists at protests and other "mass-gatherings." EFF continues to oppose these provisions that endanger the First and Fourth Amendment rights of people on the ground and in the air.
Biometric Travel Screening EFF also opposes language in the FAA Reauthorization Act that would give a congressional stamp-of-approval to biometric screening of U.S. citizens and domestic travelers. Congress has authorized biometric screening of non-citizens who cross the international border, but Congress never authorized biometric screening of U.S. citizens or domestic travelers. But in the last few years, DHS has begun unilaterally subjecting U.S. citizens to biometric screening before certain international flights, and has plans to expand such screening to domestic airports. EFF strongly opposes biometric screening of travelers, whether it relies on facial recognition, fingerprints, or other highly sensitive personal traits. The proposed screening program invades privacy, uses inaccurate technology (especially for travelers who are part of minority groups in the U.S.), creates new opportunities for data theft and misuse, and can easily be diverted to more onerous uses (like screening travelers for arrest warrants on unpaid parking tickets). This invasive screening program is being led by two units of the U.S. Department of Homeland Security: Customs and Border Protection, and the Transportation Security Administration. Buried in the text of the FAA Reauthorization bill, Congress for the first time acknowledges and gives its imprimatur to biometric travel screening of domestic travelers and U.S. citizens. Specifically, the bill requires a study by CBP and TSA to address, among other things, "the process by which domestic travelers are able to opt-out of scanning using biometric technologies" (emphasis added). This assumes biometric screening of domestic travelers. Further, the study must address "the prompt deletion of the data of individual United States citizens after such data is used to verify traveler identities" (emphasis added). This assumes biometric screening of U.S. citizens. These two provisions might be viewed by DHS as a congressional blessing of invasive practices that Congress has never actually authorized. They should therefore be stripped from the bill. The good news is that the current FAA Reauthorization Act does not include troublesome language earlier proposed by Senator Thune (S. 1872) that would authorize TSA to deploy biometric screening throughout domestic airports (not just the gates of international flights) of all travelers (not just non-citizens). EFF opposed that bill. In fact, the Act contains language (at Section 1919) stating the bill shall not be construed to facilitate or expand biometric screening. Also, the Act's study provision requires CBP and TSA to report on their program's privacy impact, error rate, and disparate impact on minority travelers. EFF does not object to these parts of the bill. Join EFF in demanding that Congress remove the troublesome drone shoot-down and biometric surveillance provisions from the FAA Reauthorization Act. TAKE ACTION

UK Surveillance Regime Violated Human Rights (Di, 25 Sep 2018)
On September 13, after a five-year legal battle, the European Court of Human Rights said that the UK government's surveillance regime—which includes the country's mass surveillance programs, methods, laws, and judges—violated the human rights to privacy and to freedom of expression. The court's opinion is the culmination of lawsuits filed by multiple privacy rights organizations, journalists, and activists who argued that the UK's surveillance programs violated the privacy of millions. The court's decision is a step in the right direction, but it shouldn't be the last. While the court rejected the UK's spying programs, it left open the risk that a mass surveillance regime could comply with human rights law, and it did not say that mass surveillance itself was unlawful under the European Convention on Human Rights (a treaty that we discuss below). But the court found that the real-world implementation of the UK's surveillance—with secret hearings, vague legal safeguards, and broadening reach—did not meet international human rights standards. The court described a surveillance regime "incapable" of keeping its "interference" with individuals' private lives to what is "necessary in a democratic society." In particular, the court's decision attempts to rein in the expanding use of mass surveillance. Originally reserved for allegedly protecting national security or preventing serious threats, use of these programs has trickled into routine criminal investigations with no national security element—a lowered threshold that the court zeroed in on to justify its rejection of the UK's surveillance programs. The court also said the UK's mass surveillance pipeline—from the moment data is automatically swept up and filtered to the moment when that data is viewed by government agents—lacked meaningful safeguards. The UK Surveillance Regime In the UK, the intelligence agency primarily tasked with online spying is the Government Communications Headquarters (GCHQ). The agency, which is sort of the UK version of the NSA, deploys multiple surveillance programs to sweep up nearly any type of online interaction you can think of, including emails, instant messenger chats, social media connections, online searches, browser history, and IP addresses. The GCHQ also collects communications metadata, capturing, for instance, what time an email was sent, where it was sent from, who it was sent to, and how quickly a reply was made. The privacy safeguards for this surveillance are dismal. For more than a decade, the GCHQ was supposed to comply with the Regulation of Investigatory Powers Act 2000 (RIPA). Though no longer fully in effect, the law required Internet service providers to, upon government request, give access to users' online communications in secret and to install technical equipment to allow surveillance on company infrastructure. The UK directly collected massive amounts of data from the transatlantic, fiber-optic cables that carry Internet traffic around the world. The UK government targeted "bearers"—portions of a single cable—to collect the data traveling within them, applied filters and search criteria to weed out data it didn't want, and then stored the remaining data for later search, use, and sharing. According to GCHQ, this surveillance was designed to target "external" communications—online activity that is entirely outside the UK or that involves communications that leave or enter the UK—like email correspondence between a Londoner and someone overseas.
But the surveillance also collected entirely "internal" communications, like two British neighbors' emails to one another. This surveillance was repeatedly approved under months-long, non-targeted warrants. Parts of this process, the court said, were vulnerable to abuse. (In 2016, the UK passed another surveillance law—the Investigatory Powers Act, or IPA—but the court's decision applies only to government surveillance under the prior surveillance law, the RIPA.) A Failure to Comply with Human Rights Laws The outcome of the suit highlights a disconnect between the domestic laws that allow government surveillance in the UK and the UK's international human rights obligations. The court took issue with the UK's failure to comply with the European Convention on Human Rights—an international treaty to protect human rights in Europe, spelled out in the convention's "articles." The European Court of Human Rights (ECtHR), a regional human rights judicial body based in Strasbourg, France, issued the opinion. Though the lawsuit's plaintiffs asserted violations of Articles 6, 8, 10, and 14, the court found violations only of Articles 8 and 10, which guarantee the right to privacy and the right to freedom of expression. The court's reasoning relied on applicable law, government admissions, and recent court judgments. The court found two glaring problems in the UK's surveillance regime—the entire selection process for what data the government collects, keeps, and sees, and the government's unrestricted access to metadata. How the government chooses "bearers" for data collection should "be subject to greater oversight," the court said. By itself, this was not enough to violate Article 8's right to privacy, the court said, but it necessitated better safeguards in the next steps—how data is filtered after initial collection and how data is later accessed. Both those steps lacked sufficient oversight, too, the court said. It said the UK government received no independent oversight and needed "more rigorous safeguards" when choosing search criteria and selectors (things like email addresses and telephone numbers) to look through already-collected data. And because analysts can only look at collected and filtered data, "the only independent oversight of the process of filtering and selecting intercept data for examination" can happen afterwards through an external audit, the court said. "The Court is not persuaded that the safeguards governing the selection of bearers for interception and the selection of intercepted material for examination are sufficiently robust to provide adequate guarantees against abuse," the court said. "Of greatest concern, however, is the absence of robust independent oversight of the selectors and search criteria used to filter intercepted communications." Along with related problems, including the association of related metadata with collected communications, the court concluded the surveillance program violated Article 8. The court also looked at how the UK government accesses metadata in so-called targeted requests to communications providers.
It focused on one section of RIPA and one particularly important legal phrase: "Serious crime." The UK's domestic law, the court said, "requires that any regime permitting the authorities to access data retained by [communications services providers] limits access to the purpose of combating 'serious crime,' and that access be subject to prior review by a court or independent administrative body." This means that whenever government agents want to access data held by communications services providers, those government agents must be investigating a "serious crime," and government agents must also get court or administrative approval prior to accessing that data. Here's the problem: that language is absent from the UK's prior surveillance law for metadata requests. Instead, RIPA allowed government agencies to obtain metadata for investigations into non-serious crimes. Relatedly, metadata access for non-serious crimes did not require prior court or independent administrative approval, compounding the invasion of privacy. Due to this discrepancy, the court found a violation of Articles 8 and 10. For years, intelligence agencies convinced lawmakers that their mass surveillance programs were necessary to protect national security and to prevent terrorist threats—to, in other words, fight "serious crime." But recently, that's changed. These programs are increasingly being used for investigating seemingly everyday crimes. In the UK, this process began with RIPA. The 2000 law was introduced in part to bring Britain's intelligence operations into better compliance with human rights law because the country's government realized that the scope of GCHQ's powers—and any limits to it—was insufficiently defined in law. But as soon as lawmakers began cataloguing the intelligence services' extraordinary powers to peer into everybody's lives, other parts of the government took interest: If these powers are so useful for capturing terrorists and subverting foreign governments, why not use them for other pressing needs? With RIPA, the end result was an infamous explosion in the number of agencies able to conduct surveillance under the law. Under its terms, the government set out to grant surveillance powers to everyone from food standards officers to local authorities investigating the illicit movement of pigs, to a degree that upset even the then-head of MI5. The court's decision supports the idea that this surveillance expansion, if left unchecked, could be incompatible with human rights. Good Findings At more than 200 pages, the court's opinion includes a lot more than just findings of human rights violations. Metadata collection, the court said, is just as intrusive as content collection. EFF has championed this point for years. When collected in bulk, metadata can reveal information so intimate that even the content of a conversation becomes predictable. Take phone call metadata, for example. Metadata reveals a person's seven-days-a-week, middle-of-the-night, 10-minute phone calls to a local suicide prevention hotline. Metadata reveals a person's phone call to an HIV testing center, followed up with a call to their doctor, followed up with a call to their health insurance company. Metadata reveals a person's half-hour call to a gynecologist, followed by another call to a local Planned Parenthood. The court reached a similar conclusion.
It said: "For example, the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with." The court also said that an individual's right to privacy applies at the initial moment their communications are collected, not, as the government said, when their communications are accessed by a human analyst. That government assertion betrays our very understanding of privacy and relates to a similar, disingenuous claim that our messages aren't really "collected" until processed for government use. Turning Towards Privacy Modern telecommunications surveillance touches on so many parts of human rights that it will take many more international cases, or protective action by lawmakers and judges, before we can truly establish its limits, and there is plenty more that's wrong with how we deal with modern surveillance than is covered by this decision. This is partly why EFF and hundreds of other technical and human rights experts helped create the Necessary and Proportionate Principles, a framework for assessing whether a state's communication surveillance practices comply with a country's human rights obligations. And it's why EFF has brought its own lawsuits to challenge mass surveillance conducted by the NSA in the United States. (The European Court of Human Rights' opinion has no direct effect on this litigation.) This type of work takes years, if not decades. When it comes to any court remedy, it is often said that the wheels of justice turn slowly. We can at least breathe a little easier knowing that, last week, thanks to the hard work of privacy groups around the world, the wheels made one more turn in the right direction, towards privacy.

Australian Government Ignores Experts in Advancing Its Anti-Encryption Bill (Di, 25 Sep 2018)
The Australian government has ignored the expertise of researchers, developers, major tech companies, and civil liberties organizations by charging forward with a disastrous proposal to undermine trust and security for technology users around the world. On September 10, the Australian government closed the window for receiving feedback about its anti-encryption and pro-surveillance "Access and Assistance" bill. A little more than a week and more than 15,000 comments later, the Minister for Home Affairs introduced a largely unchanged version of the bill into the House of Representatives. The issue isn't whether the Australian government read the 15,000 comments and ignored them, or refused to read them altogether. The issue is that the Australian government couldn't have read the 15,000 comments in such a short time period. Indeed, the bill's few revisions reflect this—no security recommendations are included. The Access and Assistance bill threatens the trust we place—either by choice or necessity—in our technology. If passed, the bill will allow the Australian government to demand "assistance" from an enormous array of "designated service providers," from the multibillion-dollar global Internet company to the garage-startup app maker who just earned her first Australian user. The required "assistance" is equally vast. The Australian government could demand that web developers deliver spyware and that software developers push malicious updates, all under the cloak of "national security." The penalty for speaking about these government orders—which are called technical assistance requests (TAR), technical assistance notices (TAN), and technical capability notices (TCN)—is five years in prison. EFF's opposition to this bill is widely shared by companies, cybersecurity researchers, and civil liberties groups around the world. New America's Open Technology Institute and EFF, in comments signed by Apple, Cloudflare, Google, Microsoft, R-Street Institute, CryptoAUSTRALIA, and Privacy International, criticized the bill for failing to provide a "clear process or standard for challenging" TANs and TCNs. Further, the bill's strict nondisclosure provisions mean that any end-users harmed by a government order will likely never know about it, hindering their ability to defend their privacy rights in court. The Australian government made no meaningful changes to the bill to correct these issues. Multiple Australian rights groups, including Digital Rights Watch, Australian Privacy Foundation, Electronic Frontiers Australia, and New South Wales Council for Civil Liberties, recommended that both the potentially affected "designated service providers" and the types of required "assistance" be "significantly reduced." The revised bill includes no such changes, and it still leaves open the risk that open-source volunteers can be targeted with government orders. The revised bill is also unclear about whether an individual within a company can receive an order. This may mean that a programmer, systems administrator, or network operator could be forced to comply with an order in secret, potentially harming her company's services and their users. Riana Pfefferkorn, a cryptography fellow at Stanford Law's Center for Internet and Society, warned about an inherent disconnect within the bill itself. The draft bill stated that no service provider can be ordered to create a backdoor—or "systemic weakness or systemic vulnerability"—but, Pfefferkorn said, other government orders could have the same result.
“The bill risks forcing technology companies to create insecure versions of their products and services that, while ostensibly limited to a single incidence, in fact open the door to the very systemic vulnerabilities the bill professes to avoid,” Pfefferkorn wrote. The Australian government did not fix this. Instead, the revised bill gives the Attorney General and service providers the option to “jointly appoint” a third-party to assess whether a government order will create a systemic weakness. The bill is silent on what happens if the Attorney General and a provider disagree on such an appointment, and whether the Attorney General can override a provider’s recommendation.  These are just a handful of neglected concerns from three comments submitted to the Australian government—just the tip of the iceberg of thousands likely submitted by its own citizens. If the bill clears Australia’s House of Representatives, it could still be sent to a Senate committee for changes. We’ve said exactly what we want. We’ll just have to say it again.

EFF Opposes Industry Efforts to Have Congress Roll Back State Privacy Protections (Mo, 24 Sep 2018)
The Senate Commerce Committee is holding a hearing on consumer privacy this week, but consumer privacy groups like EFF were not invited. Instead, only voices from big tech and Internet access corporations will have a seat at the table. In the lead-up to this hearing, two industry groups (the Chamber of Commerce and the Internet Association) have suggested that Congress wipe the slate clean of state privacy laws in exchange for weaker federal protections. EFF opposes such preemption, and has submitted a letter to the Senate Commerce Committee to detail the dangers it poses to user privacy. Current state laws across the country have already created strong protections for user privacy. Our letter identifies three particularly strong examples from California's Consumer Privacy Act, Illinois' Biometric Privacy Act, and Vermont's Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy. As we explain in our letter: In essence, a federal law that sweeps broadly in its preemption could reduce or outright eliminate privacy protections that Congress has no intent to eliminate, such as laws that protect social security numbers, prohibit deceptive trade practices, and protect the confidentiality of library records. The companies represented at Wednesday's hearing rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. Instead, as we highlight in our letter: The Committee should understand that the only reason many of these companies seek congressional intervention now, after years of opposing privacy legislation both federally and at the states, is because state legislatures and attorney generals have acted more aggressively to protect the privacy interest of their states’ residents. We urge the Committee to recognize the scope of what companies might request before acting on federal legislation. We further urge Congress to consider particular privacy concepts—including opt-in consent, the "right to know," data portability, the right to equal service, and a private right of action—as necessary for any federal legislation that genuinely improves Americans' data privacy.

Facebook Warns Memphis Police: No More Fake “Bob Smith” Accounts (Mo, 24 Sep 2018)
Facebook has a problem: an infestation of undercover cops. Despite the social platform’s explicit rules that the use of fake profiles by anyone—police included—is a violation of terms of service, the issue proliferates. While the scope is difficult to measure, EFF has identified scores of agencies who maintain policies that explicitly flout these rules. Hopefully—and perhaps this is overly optimistic—this is about to change, with a new warning Facebook has sent to the Memphis Police Department. The company has also updated its law enforcement guidelines to highlight the prohibition on fake accounts. This summer, the criminal justice news outlet The Appeal reported on an alarming detail revealed in a civil rights lawsuit filed by the ACLU of Tennessee against the Memphis Police Department. The lawsuit uncovered evidence that the police used what they referred to as a “Bob Smith” account to befriend and gather intelligence on activists. Following the report, EFF contacted Facebook, which deactivated that account. Facebook has since identified and deactivated six other fake accounts managed by Memphis police that were previously unknown. In a letter to Memphis Police Director Michael Rallings dated Sept. 19, Facebook’s legal staff demands that the agency “cease all activities on Facebook that involve the use of fake accounts or impersonation of others.” Read Facebook’s letter to the Memphis Police Department. EFF has long been critical of Facebook’s policies that require users to use their real or “authentic” names, because we feel that the ability to speak anonymously online is key to free speech and that forcing people to disclose their legal identities may put vulnerable users at risk. Facebook, however, has argued that this policy is needed “to create a safe environment where people can trust and hold one another accountable." As long as they maintain this position, it is crucial that they apply it evenly—including penalizing law enforcement agencies who intentionally break the rules. We are pleased to see Facebook acknowledge that fake police profiles undermine this safe environment. In the letter to the Memphis Police Department, Facebook further writes: Facebook has made clear that law enforcement authorities are subject to these policies. We regard this activity as a breach of Facebook's terms and policies, and as such we have disabled the fake accounts that we identified in our investigation. We request that the Police Department, its members, and any others acting on its behalf cease all activities on Facebook that involve impersonation or that otherwise violate our policies. EFF raised this issue with Facebook four years ago, when the Drug Enforcement Administration was caught impersonating a real user in order to investigate suspects. At the time of the media storm surrounding the revelation, Facebook sent a warning to the DEA. But EFF felt that it did not go far enough, since many other agencies—such as police in Georgia, Nebraska, New York, and Ohio—were openly using this tactic, according to records available online. Recently, EFF pointed out to Facebook that this prohibition is not clearly articulated in its official law enforcement guidelines. Facebook has since updated its “Information for Law Enforcement Authorities” page to highlight how its misrepresentation policy also applies to police: People on Facebook are required to use the name they go by in everyday life and must not maintain multiple accounts. 
Operating fake accounts, pretending to be someone else, or otherwise misrepresenting your authentic identity is not allowed, and we will act on violating accounts. We applaud this progress, but we are also skeptical that a warning alone will deter the activity. While Facebook says it will delete accounts brought to its attention, too often these accounts only become publicly known (say in a lawsuit) long after the damage has been done and the fake account has outlived its purpose. After all, law enforcement often already knows the rules, but chooses to ignore them. A slide presentation for prosecutors at the 2016 Indiana Child Support Conference says it all. The presenter told the audience: "Police and Federal law enforcement may create a fake Facebook profile as part of an investigation and even though it violates the terms and policies of Facebook the evidence may still be used in court." The question remains: what action should Facebook take when law enforcement intentionally violates the rules? With regular users, that could result in a lifetime ban. But banning the Memphis Police Department from maintaining its official, verified page could deprive residents of important public safety information disseminated across the platform. It's not an easy call, but it's one Facebook must address, and soon. Or better yet, maybe it should abandon its untenable policy requiring authentic names from everyday people who don't wear a badge.

ESNI: A Privacy-Protecting Upgrade to HTTPS (Mo, 24 Sep 2018)
Today, the content-delivery network Cloudflare is announcing an experimental deployment of a new web privacy technology called ESNI. We're excited to see this development, and we look forward to a future where ESNI makes the web more private for all its users. Over the past several years, we at EFF have been working to encrypt the web. We and our partners have made huge strides to make web browsing safer and more private through tools like HTTPS Everywhere and the Let's Encrypt Certificate Authority. But users still face many kinds of online privacy problems even when using HTTPS. An important example: a 15-year-old technology called Server Name Indication (SNI), which allows a single server to host multiple HTTPS web sites. Unfortunately, SNI itself is unencrypted and transmits the name of the site you're visiting. That lets ISPs, people with access to tap Internet backbones, or even someone monitoring a wifi network collect a list of the sites you visit. (HTTPS will still prevent them from seeing exactly what you did on those sites.) We were disappointed last year that regulations limiting collection of data by ISPs in the U.S. were rolled back. This leaves a legal climate in which ISPs might feel empowered to create profiles of their users' online activity, even though they don't need those profiles in order to provide Internet access services. SNI is one significant source of information that ISPs could use to feed these profiles. What's more, the U.S. government continues to argue that the SNI information your browser sends over the Internet, as "metadata," enjoys minimal legal protections against government spying. Today, Cloudflare is announcing a major step toward closing this privacy hole and enhancing the privacy protections that HTTPS offers. Cloudflare has proposed a technical standard for encrypted SNI, or "ESNI," which can hide the identities of the sites you visit—particularly when a large number of sites are hosted on a single set of IP addresses, as is common with CDN hosting. Working at the Internet Engineering Task Force (IETF), Cloudflare and representatives of other Internet companies, including Fastly and Apple, broke a years-long deadlock in the deployment of privacy enhancements in this area. With plain HTTP, intermediaries see all data exchanged between you and a web site, so little of your browsing is protected. With HTTPS, intermediaries see the site name but not the path or content. With HTTPS and ESNI, intermediaries no longer see even the site name. Hosting providers and CDNs (like Cloudflare) still know which sites users access when ESNI is in use, because they have to serve the corresponding content to the users. But significantly, ESNI doesn't give these organizations any information about browsing activity that they would not otherwise possess—they still see parts of your Internet activity in the same way either with or without ESNI. So, the technology strictly decreases what other people know about what you do online. And ESNI can also potentially work over VPNs or Tor, adding another layer of privacy protections. ESNI is currently in an experimental phase. Only users of test versions of Firefox will be able to use it, and initially only when accessing services hosted by Cloudflare.
However, every aspect of the design and implementation of ESNI is being published openly, so when it’s been shown to work properly, we hope to see it supported by other browsers and CDNs, as well as web server software, and eventually used automatically for the majority of web traffic. We may be able to help by providing options in Certbot for web sites to enable ESNI. We’re thrilled about Cloudflare’s leadership in this area and all the work that they and the IETF community have done to make ESNI a reality. As it gets rolled out, we think ESNI will give a huge boost to the goal of reducing what other people know about what you do online.
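For readers who want to see exactly where SNI enters the picture, here is a minimal Python sketch of an ordinary HTTPS connection. It is not Cloudflare's or Firefox's ESNI code, and the hostname is only a placeholder; it simply shows that the name passed as server_hostname becomes the SNI value, which today travels unencrypted in the TLS ClientHello until ESNI is in use.

```python
# A minimal sketch of a standard TLS connection showing where SNI is sent.
# The hostname here is a placeholder; any HTTPS site would behave the same.
import socket
import ssl

hostname = "example.com"

context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    # server_hostname fills in the SNI field of the TLS ClientHello.
    # Without ESNI, that field is sent in cleartext, so an on-path observer
    # (an ISP, a backbone tap, or someone on your wifi network) can log the
    # site name even though everything after the handshake is encrypted.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("TLS version:", tls.version())
        print("Certificate subject:", tls.getpeercert().get("subject"))
```

The same connection made to a CDN that supports ESNI would encrypt that field, which is why the technique matters most when many sites share the provider's IP addresses.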

EFF’s Katitza Rodríguez Named One of the Most Influential Latinos in Tech (Do, 20 Sep 2018)
In well-deserved recognition of her digital rights and privacy work around the globe, EFF International Rights Director Katitza Rodríguez was named by CNET as one of the most influential Latinos in technology this year. We're delighted to see Katitza celebrated for her many years of advocacy on behalf of technology users in Latin America and internationally. While the technology industry has much work to do to increase diversity and inclusion among its ranks, CNET's annual list of top Latino tech leaders underlines the importance of having female leaders and leaders of different nationalities and backgrounds in the field. Katitza has advocated tirelessly in Latin America and elsewhere for users' rights, greater accountability at tech and telecommunications companies, and an end to unchecked government surveillance. She launched a regional project, "Who Defends Your Data," based on EFF's "Who Has Your Back" list, and has worked with local partners to bring the report to Paraguay, Colombia, Chile, Peru, Brazil, Argentina, Spain, and Mexico. A frequent speaker at international tech and human rights conferences, Katitza has spoken out for freedom of expression, online privacy, and protections for dissidents and journalists in the digital world in front of judges, policymakers, government officials, diplomats, law enforcement agents, and prosecutors from Europe to Latin America and Asia. "There is a breach between where tech is going and where law and policy are right now," said Katitza. "Tech is always ahead, raising new questions in the United States, Latin America, and throughout the world. Our vision is to work with local partners to develop innovative projects to encourage best practices that will protect people's privacy and allow free expression to flourish." Congratulations, Katitza, for being recognized for your amazing contributions to digital rights.

You Can Make the House of Representatives Restore Net Neutrality (Do, 20 Sep 2018)
For all intents and purposes, the fate of net neutrality this year sits completely within the hands of a majority of members of the House of Representatives. For one thing, the Senate has already voted to reverse the FCC. For another, 218 members of the House can agree to sign a discharge petition and force a vote on the floor, and nothing could stop it procedurally. This represents the last, best chance for a 2018 end to the FCC's misguided journey into abandoning consumer protection authority over ISPs such as Comcast and AT&T. But we need you to take the time to contact your elected officials and make your voice heard. Do not underestimate your power to protect the Internet. You've done it before, when together we stopped the Stop Online Piracy Act (SOPA) as it barreled toward passage in Congress. We've even done it on net neutrality just this year. Every time it seemed the ISP lobby had control over the state legislative process and was going to ruin progress on net neutrality laws, we collectively overcame their influence. In fact, every state that has so far passed net neutrality legislation as interim protection has done so on a bipartisan basis. That should come as no surprise, as 86 percent of Americans opposed the FCC decision to repeal net neutrality. At the end of the day, the House of Representatives is the political body that is explicitly designed to represent the majority opinion in this country. That means you, your friends, and your family have to speak out now to force the change. No amount of special interest influence is more important or more powerful than Team Internet. To help you make your voice heard, EFF has provided a guide on how to contact your Member of Congress and navigate the process of meeting your representative. You can also look up who represents you by going here and contact them. Take Action Tell Congress to Sign the Discharge Petition to Support Net Neutrality

The New Music Modernization Act Has a Major Fix: Older Recordings Will Belong to the Public, Orphan Recordings Will Be Heard Again (Mi, 19 Sep 2018)
The Senate passed a new version of the Music Modernization Act (MMA) as an amendment to another bill this week, a marked improvement over the version passed by the House of Representatives earlier in the year. This version contains a new compromise amendment that could preserve early sound recordings and increase public access to them. Until recently, the MMA (formerly known as the CLASSICS Act) was looking like the major record labels’ latest grab for perpetual control over twentieth-century culture. The House of Representatives passed a bill that would have given the major labels—the copyright holders for most recorded music before 1972—broad new rights in those recordings, ones lasting all the way until 2067. Copyright in these pre-1972 recordings, already set to last far longer than even the grossly extended copyright terms that apply to other creative works, would a) grow to include a new right to control public performances like digital streaming; b) be backed by copyright’s draconian penalty regime; and c) be without many of the user protections and limitations that apply to other works. The drafting process was also troubling. It seemed a return to the pattern of decades past, where copyright law was written behind closed doors by representatives from a few industries and then passed by Congress without considering the views of a broader public. Star power, in the form of famous musicians flown to Washington to shake hands with representatives, eased things along. Two things changed the narrative. First, a broad swath of affected groups spoke up and demanded to be heard. Tireless efforts by library groups, music libraries, archives, copyright scholars, entrepreneurs, and music fans made sure that the problems with MMA were made known, even after it sailed to near-unanimous passage in the House. You contacted your Senators to let them know the House bill was unacceptable to you, and that made a big difference. Second, the public found a champion in Senator Ron Wyden, who proposed a better alternative in the ACCESS to Recordings Act. Instead of layering bits of federal copyright law on top of the patchwork of state laws that govern pre-1972 recordings, ACCESS would have brought these recordings completely under federal law, with all of the rights and limitations that apply to other creative works. While that still would have brought them under the long-lasting and otherwise deeply flawed copyright system we have, at least there would be consistency. Weeks of negotiation led to this week’s compromise. The new “Classics Protection and Access Act” section of MMA clears away most of the varied and uncertain state laws governing pre-1972 recordings, and in their place applies nearly all of federal copyright law. Copyright holders—again, mainly record labels—gain a new digital performance right equivalent to the one that already applies to recent recordings streamed over the Internet or satellite radio. But older recordings will also get the full set of public rights and protections that apply to other creative works. Fair use, the first sale doctrine, and protections for libraries and educators will apply explicitly. That’s important, because many state copyright laws—California’s, for example—don’t contain explicit fair use or first sale defenses. The new bill also brings older recordings into the public domain sooner.
Recordings made before 1923 will exit all copyright protection after a three-year grace period. Recordings made from 1923 to 1956 will enter the public domain over the next several decades. And recordings from 1957 onward will continue under copyright until 2067, as before. These terms are still ridiculously long—up to 110 years from first publication, longer than any other U.S. copyright term. But our musical heritage will leave the exclusive control of the major record labels sooner than it would have otherwise. The bill also contains an “orphan works”-style provision that could allow for more use of old recordings even if the rightsholder can’t be found. By filing a notice with the Copyright Office, anyone can use a pre-1972 recording for non-commercial purposes, after checking first to make sure the recording isn’t in commercial use. The rightsholder then has 90 days to object. And if they do, the potential user can still argue that their use is fair. This provision will be an important test case for solving the broader orphan works problem. The MMA still has many problems. With the compromise, the bill becomes even more complex, extending to 186 pages. And fundamentally, Congress should not be adding new rights in works created decades ago. Copyright law is about building incentives for new creativity, enriching the public. Adding new rights to old recordings doesn’t create any incentives for new creativity. And copyrights as a whole, including sound recording copyrights, still last for far too long. Still, this compromise gives us reason for hope. Music fans, non-commercial users, and the broader public have a voice—a voice that was heard—in shaping copyright law, as long as legislators will listen and act.

Hill-Climbing Our Way to Defeating DRM (Di, 18 Sep 2018)
Computer science has long grappled with the problem of unknowable terrain: how do you route a packet from A to E when B, C, and D are nodes that keep coming up and going down as they get flooded by traffic from other sources? How do you shard a database when uncontrollable third parties are shoving records into it all the time? What's the best way to sort some data when spammers are always coming up with new tactics for re-sorting it in ways that suit them, but not you or your users? One way to address the problem is the very useful notion of "hill-climbing." Hill-climbing is modeled on a metaphor of a many-legged insect, like an ant. The ant has forward-facing eyes and can't look up to scout the terrain and spot the high ground, but it can still ascend towards a peak by checking to see which foot is highest and taking a step in that direction. Once it's situated in that new place, it can repeat the process, climbing stepwise toward the highest peak that is available to it (of course, that might not be the highest peak on the terrain, so sometimes we ask our metaphorical ant to descend and try a different direction, to see if it gets somewhere higher). For the programmers in the audience, a minimal code sketch of this kind of search appears at the end of this post. This metaphor is not just applicable to computer science: it's also an important way to think about big, ambitious, fraught policy fights, like the ones we fight at EFF. Our Apollo 1201 Project aims to kill all the DRM in the world inside of a decade, but we don't have an elaborate roadmap showing all the directions we'll take on the way. There's a good reason for that. Not only is the terrain complex to the point of unknowability; it's also adversarial: other, powerful entities are rearranging the landscape as we go, trying to head us off. As the old saying goes, "The first casualty of any battle is the plan of attack." Instead of figuring out the whole route from A to Z, we deploy heuristics: rules of thumb that help us chart a course along this complex, adversarial terrain as we traverse it. Like the ant climbing its hill, we're feeling around for degrees of freedom where we can move, ascending towards our goal. There are four axes we check as we ascend: 1. Law: What is legal? What is illegal? What chances are there to change the law? For example, we're suing the US government to invalidate Section 1201 of the Digital Millennium Copyright Act (DMCA), the law that imposes penalties for breaking DRM, even for legal reasons. If it were legal to break DRM for a lawful purpose, the market would be full of products that let you unlock more value in the products you own, and companies would eventually give up on trying to restrict legal conduct. We're also petitioning the US Copyright Office to grant more exemptions to DMCA 1201, even though those exemptions are limited in practice (e.g., "use" exemptions that let you jailbreak a device, but not "tools" exemptions that let you explain to someone how to jailbreak their device or give them a tool to do so). Why bother petitioning the Copyright Office if it can only make changes that barely rise above the cosmetic? Glad you asked. 2. Norms: What is socially acceptable? A law that is widely viewed as unreasonable is easier to change than a law that is viewed as perfectly reasonable.
Copyright law is complicated and boring, and overshadowed by emotive appeals to save wretched "creators" (like me—my full-time job is as a novelist, and I work part-time for EFF as an activist because sitting on the sidelines while technology was perverted to control and oppress people was unbearable). But in the twenty-first century, a tragic category error (using copyright, a body of law intended to regulate the entertainment industry's supply chain, to regulate the Internet, which is the nervous system of the entire digital world) has led to disastrous and nonsensical results. Thanks to copyright law, computer companies and car companies and tractor companies and voting machine companies and medical implant companies and any other company whose product has a computer in it can use copyright to make it a crime to thwart their commercial plans—to sell you expensive ink, or to earn a commission on every app, or to monopolize the repair market. From long experience, I can tell you that the vast majority of people do not and will never care about copyright or DRM. But they do care about the idea that vast corporations have bootstrapped copyright and DRM into a doctrine that amounts to "felony contempt of business model." They care when their mechanic can't fix their car any longer, when the insulin for their artificial pancreas goes up 1,000 percent, or when security experts announce that they can't audit their state's voting machines. The Copyright Office proceedings can carve out some important freedoms, but more importantly, they are a powerful normative force: an official recognition, from the arm of the US government charged with crafting and regulating copyright, that DRM is messed up and getting in the way of legitimate activity. 3. Code: What is technically possible? DRM is rarely technologically effective. For the most part, DRM does not survive contact with the real world, where technologists take it apart, see how it works, find its weak spots, and figure out how to switch it off. Unfortunately, laws like DMCA 1201 make developing anti-DRM code legally perilous, and people who try face both civil and criminal jeopardy. But despite the risks, we still see technical interventions like papers at security conferences on the weaknesses in DRM, or tools for bypassing and jailbreaking DRM. EFF's Coders' Rights project stands up for the right of developers to create these legitimate technologies, and our intake desk can help coders find legal representation when they're threatened. 4. Markets: What's profitable? When a policy goal intersects with someone else's business model, you get an automatic power-up. People who want to sell jailbreaking tools, third-party inkjet cartridges and other consumables, independent repair services, or apps and games for locked platforms are all natural opponents of DRM, even if they're not particularly worried about DRM itself and only care about the parts of it that get in the way of earning their own living. There are many very successful products that were born with DRM—like iPhones—around which no competing commercial interests were ever able to develop. It's a long battle to convince app makers that competition in app stores would let them keep more of the 30 percent commission they currently pay to Apple. But in other domains, like the independent repair sector, there are huge independent commercial markets that are thwarted by DRM.
Independent repair shops create local, middle-class jobs (no one sends a phone or a car overseas for service!), and they rely on access to replacement parts and diagnostic tools. Farmers are particularly staunch allies in the repair fight, grossly affronted at the idea of having to pay John Deere a service charge to unlock the parts they swap into their own tractors (and even more furious at having to wait days for a John Deere service technician to put in an appearance in order to enter the unlock code). Law, Norms, Code, and Markets: these are the four forces that former EFF Board member Lawrence Lessig first identified in his 1999 masterpiece Code and Other Laws of Cyberspace, the forces that regulate all our policy outcomes. The fight to rescue the world from DRM needs all four. When we're hill-climbing, we're always looking for chances to invoke one of these four forces, or better yet, to combine them. Is there a business that's getting shafted by DRM that will get its customers to write to the Copyright Office? Is there a country that hasn't yet signed a trade agreement banning DRM-breaking, and if so, is it making code that might help the rest of us get around our DRM? Is there a story to tell about a ripoff enabled by DRM (like the time HP pushed a fake security update to millions of printers in order to insert DRM that blocked third-party ink), and if so, can we complain to the FTC or a state Attorney General to punish them? Can that story be brought to a legislature considering a Right to Repair bill? On the way, we expect more setbacks than victories, because we're going up against commercial entities that are waxing rich and powerful by using DRM as an illegitimate means to cement monopolies, silence critics, and rake in high rents. But even defeats are useful: as painful as it is to lose a crucial battle, such a loss can galvanize popular opposition, convincing apathetic or distracted bystanders that there's a real danger the things they value will be forever lost if they don't join in (that would be a "normative" step towards victory). As we've said before, the fight to keep technology free, fair, and open isn't a destination; it's a journey. Every day, there are new reasons that otherwise reasonable people will find to break the tech we use in increasingly vital and intimate ways—and every day, there will be new people who awaken to the need to fight against this temptation. These new allies may get involved because they care about net neutrality, or surveillance, or monopolies. But these are all part of the same information ecology: what would it gain us to have a neutral Internet if all the devices we connect to it use DRM to control us for the benefit of distant corporations? How can we end surveillance if our devices are designed to treat us as their enemies, and thus able to run surveillance code that, by design, we're not supposed to be able to see or stop? How can we fight monopolies if corporations get to use DRM to decide who can compete with them—or even who can criticize the security defects in their products? On this Day Against DRM, in a year of terrible tech setbacks and disasters, it could be easy to despair. But despair never got the job done: when life gives you SARS, you make sarsaparilla. Every crisis and catastrophe brings new converts to the cause. And if the terrain seems impassable, just look for a single step that will take you to higher ground.
Hill-climbing algorithms may not be the most direct route to higher ground, but as every programmer knows, they're still the best way to traverse unknowable terrain. What step will you take today? (Image: Jacob_Eckert, Creative Commons Attribution 3.0 Unported) Related Cases: Green v. U.S. Department of Justice
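For readers who want the metaphor as running code, here is a minimal, illustrative Python sketch of hill climbing with random restarts. It is only a sketch of the generic search technique described above; the function names (hill_climb, hill_climb_with_restarts) and the toy scoring problem are invented for this example, not anything EFF ships.

import random

def hill_climb(score, neighbors, start, max_steps=1000):
    """Greedy local search: from `start`, repeatedly step to the
    best-scoring neighbor until no single step goes higher (a local
    peak) or we run out of steps."""
    state = start
    for _ in range(max_steps):
        candidates = neighbors(state)
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best) <= score(state):
            break  # local peak: every neighboring state scores lower or equal
        state = best
    return state

def hill_climb_with_restarts(score, neighbors, random_start, restarts=10):
    """Climb from several random starting points and keep the highest
    peak found: the "descend and try a different direction" trick."""
    peaks = [hill_climb(score, neighbors, random_start()) for _ in range(restarts)]
    return max(peaks, key=score)

if __name__ == "__main__":
    # Toy problem: maximize f(x) = -(x - 7)^2 over the integers 0..20,
    # where a "step" moves one unit left or right.
    score = lambda x: -(x - 7) ** 2
    neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
    random_start = lambda: random.randint(0, 20)
    print(hill_climb_with_restarts(score, neighbors, random_start))  # prints 7

The random restarts are the programmatic version of asking the ant to descend and try a different slope: a single greedy climb can strand you on a local maximum, so you climb several times from different starting points and keep the best peak you find.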

EFF to Court: The First Amendment Protects Criticism of Patent Trolls (Di, 18 Sep 2018)
EFF has submitted an amicus brief [PDF] to the New Hampshire Supreme Court asking it to affirm a lower court ruling that found criticism of a patent owner was not defamatory. The trial judge hearing the case ruled that “patent troll” and other rhetorical characterizations are not the type of factual statements that can be the basis of a defamation claim. Our brief explains that both the First Amendment and the common law of defamation support this ruling. This case began when patent assertion entity Automated Transactions, LLC (“ATL”) and inventor David Barcelou filed a defamation complaint [PDF] in New Hampshire Superior Court. Barcelou claims to have come up with the idea of connecting automated teller machines to the Internet. As the complaint explains, he tried to commercialize this idea but failed. Later, ATL acquired an interest in Barcelou’s patents and began suing banks and credit unions. ATL’s patent litigation did not go well. In one case, the Federal Circuit ruled that some of ATL’s patent claims were invalid and that the defendants did not infringe. ATL’s patents were directed to ATMs connected to the Internet and it was “undisputed” that the defendants’ products “are not connected to the Internet and cannot be accessed over the Internet.” ATL filed a petition asking the U.S. Supreme Court to overturn the Federal Circuit. The Supreme Court denied that petition. Unsurprisingly, ATL’s licensing revenues went down after its defeat in the federal courts. Rather than accept this, ATL and Barcelou filed a defamation suit in New Hampshire state court blaming their critics for ATL’s financial decline. In the New Hampshire litigation, ATL and Barcelou allege that statements referring to them as a “patent troll” are defamatory. They also claim that characterizations of ATL’s litigation campaign as a “shakedown,” “extortion,” or “blackmail” are defamatory. The Superior Court found these statements were the kind of rhetorical hyperbole that is not capable of defamatory meaning and dismissed the complaint. ATL and Barcelou appealed. EFF’s amicus brief [PDF], filed together with ACLU of New Hampshire, explains that Superior Court Judge Brian Tucker got it right. The First Amendment provides wide breathing room for public debate and does not allow defamation actions based solely on the use of harsh language. The common law of defamation draws a distinction between statements of fact and pure opinion or rhetorical hyperbole. A term like “patent troll,” which lacks any settled definition, is classic rhetorical hyperbole. Similarly, using terms like “blackmail” to characterize patent litigation is non-actionable opinion. ATL and Barcelou, like some other critics of the Superior Court’s ruling, spend much of their time arguing that “patent troll” is a pejorative term. This misunderstands the Superior Court’s decision. At one point in his opinion, Judge Tucker noted that some commentators have presented the patent assertion, or troll, business model in a positive light. But the court wasn’t saying that “patent troll” is never used pejoratively or even that the defendants didn’t use it pejoratively. The law reports are filled with cases where harsh, pejorative language is found not capable of defamatory meaning, including “creepazoid attorney,” “pitiable lunatics,” “stupid,” “asshole,” “Director of Butt Licking,” etc. ATL and Barcelou may believe that their conduct as inventors and patent litigants should be praised rather than criticized. They are entitled to hold that view. 
But their critics are also allowed to express their opinions, even with harsh and fanciful language. Critics of patent owners, like all participants in public debate, may use the “imaginative expression” and “rhetorical hyperbole” which “has traditionally added much to the discourse of our Nation.” Related Cases:  Automated Transactions LLC v. American Bankers Association