Deeplinks

EFF Client Responds to Ludicrous “Collusion” Trademark Threat (Tue, 22 Jan 2019)
Sometimes trademark owners seem to think that they own ordinary words. In this case, U.K. clothing giant Asos sent a cease and desist letter [PDF] to an EFF client for registering a domain with the word “collusion” in it. Our client’s domain doesn’t have anything to do with clothing—it’s about contemporary U.S. political debates. It is about as far from trademark infringement as possible. Today, we sent a response letter [PDF] demanding that Asos withdraw its baseless threat.

The full backstory is something of a Russian nesting doll of stupidity. It begins with Rudy Giuliani, former New York mayor and current attorney to President Donald Trump. Last year, some Twitter users noticed that Mr. Giuliani was making typographical errors in his tweets in a way that inadvertently created well-formed URLs (a sketch of how such a typo parses as a link appears at the end of this post). A September 15, 2018 tweet read, in part, as follows: “#REALNEWS: Woodward says no evidence of collusion.So does Manafort’s team.” After seeing this tweet, our client registered “collusion.so” and directed the URL to the Lawfare blog’s coverage of connections between President Trump and Russia. Other people have also registered the domain names of Giuliani typos. Giuliani, who was once named a cybersecurity advisor to Trump, falsely claimed that Twitter was “invading” his tweets.

What does this have to do with clothes, you ask? Well, in October 2018, Asos launched a new clothing line called “Collusion.” It describes Collusion as “a new fashion brand offering bold, experimental, inclusive styles for the coming age.” Not content with a vaguely dystopian branding choice, Asos followed up by sending a threatening letter to our client claiming that the registration of collusion.so infringes its trademark. Asos’s lawyers accuse our client of “taking unfair advantage of ASOS’ reputation in the COLLUSION brand and the COLLUSION trade mark by luring customers to your website for your own gain.”

This is absurd. Our client wasn’t even aware of the Collusion brand until Asos sent its letter. Our client registered collusion.so as a satirical comment on Giuliani’s tweet and the state of U.S. politics. No one is going to confuse Lawfare’s Trump-Russia coverage with Asos’s self-described “ultimate youth label.”

If Asos and its lawyers had spent even a few minutes looking into things before sending their letter, they would have seen that they have no straight-faced trademark claim. While they might not have known the full backstory, two red flags should have stopped them before they started. First, our client registered the domain before Asos launched its clothing line. Second, the URL points to a page about the political meaning of collusion—not to anything about clothing. It’s very disappointing to see a trademark threat over such an obviously unrelated use of a common word.

Asos’s letter to our client opens with large-type red text: “Failure to respond to this letter may result in further legal action being taken without further notification to you.” The law firm that sent the threat, Stobbs IP, has a history of abusing the Uniform Domain-Name Dispute-Resolution Policy, or UDRP, to try to take control of others’ domains. If Stobbs IP tried to do that to our client, it would be yet another attempt at reverse domain hijacking.

We hope that Asos and Stobbs IP have enough sense to withdraw their threat. We also hope that this story serves as a lesson. Far too many trademark holders engage in mindless over-enforcement.
Trademark law doesn’t give mark holders the right to censor criticism or the power to control the use of ordinary English words. Whether Asos likes it or not, people will continue to discuss collusion on the Internet.
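
A technical footnote on the backstory: the reason a typo like “collusion.So” turns into a working link is that “.so” is the country-code top-level domain for Somalia, so the text on either side of the missing space parses as a valid hostname. The sketch below shows the kind of pattern-matching an autolinker performs. It is a minimal illustration of the idea, not Twitter’s actual implementation, and the TLD list is truncated for brevity.

    import re

    # A minimal, illustrative autolinker -- NOT Twitter's real one. A run of
    # word characters, a dot, and a known TLD parse as a hostname, which is
    # how "collusion.So" (a missing space after a period) becomes a link.
    KNOWN_TLDS = {"com", "org", "net", "so", "in", "me"}  # truncated for brevity
    TOKEN = re.compile(r"\b([A-Za-z0-9-]+)\.([A-Za-z]{2,})\b")

    def find_accidental_domains(text):
        """Return domain-like tokens created by missing spaces after periods."""
        return [
            "{}.{}".format(name, tld).lower()
            for name, tld in TOKEN.findall(text)
            if tld.lower() in KNOWN_TLDS
        ]

    tweet = "#REALNEWS: Woodward says no evidence of collusion.So does Manafort's team."
    print(find_accidental_domains(tweet))  # -> ['collusion.so']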

Washington Post Tries to Take Down Parody Site Announcing Trump's Resignation (Tue, 22 Jan 2019)
If you were in Washington, D.C. last week, you had a chance to be one of the lucky recipients of a parody newspaper spoofing the Washington Post and crowing about the “Unpresidented” flight of Donald Trump from the Oval Office as he abandoned the presidency. The spoof, created by activist group the Yes Men, is also visible on the website democracyawakensinaction.org. The Washington Post’s lawyers were not amused, calling the parody an act of trademark infringement and raising copyright threats. We have responded to explain why the parody is protected by the First Amendment and fair use law.

Dated May 1, 2019, the parody features a series of increasingly unlikely articles, including a mea culpa by the media for Trump’s rise to power and a story pointing out that the paper’s date is several months in the future, in case the reader missed it. The fictional timeline of the paper credits protests like the Women’s March with Trump’s abdication, and includes a link to an action guide for people who want to pursue progressive causes.

As we explain in our response letter, numerous appeals courts have held that political speech is strongly protected from trademark claims. Trademark law is fundamentally about protecting members of the public from making mistaken purchasing decisions—believing they are buying one company’s product when, in fact, they’re buying another’s. It is not a general-purpose legal tool for policing language, or even for preventing people from being confused about what a company’s political stances are, and neither is copyright law.

The Washington Post is free to set the record straight by distancing itself from the spoof; it’s not free to silence others’ political speech. The parody paper and its call to action are staying online.

Federal Court Orders That Patent Troll Can’t Hide Its Machinations (Fri, 18 Jan 2019)
A federal judge has ordered that prolific patent troll Uniloc cannot hide its shell games from the public. After EFF filed a motion to intervene seeking access to sealed court records, Judge William H. Alsup of the Northern District of California ordered [PDF] that the relevant documents should be made public. Judge Alsup stayed his order for two weeks, however, to give Uniloc an opportunity to appeal to the Federal Circuit. We are pleased by the court’s ruling and will defend it if appealed.

The sealed documents have an importance far beyond this case. As Judge Alsup suggested in court, Uniloc appeared to be using complex machinations to hide its patents or its assets, possibly to avoid being hit with sanctions. The public has a right to know who owns patents, especially patents like the ones Uniloc claims to own, since the company has claimed its patents entitle it to payments from a vast array of technology companies.

In the underlying cases, Uniloc has sued Apple alleging that its iPhones and iPads infringe a number of its patents. For example, Uniloc claims that Apple infringes U.S. Patent No. 7,092,671, because “iPads incorporate software that causes an iPad, in response to a user’s selection, to transfer a telephone number wirelessly to a nearby iPhone which dials the selected number.” In a heavily redacted motion to dismiss, Apple appears to argue that Uniloc entities and Fortress Investment Group LLC divided rights in the asserted patents in a way that means the Uniloc entities no longer had a legal right to sue for infringement. We say “appears” because the public cannot see most of the briefing and evidence. Because the redactions (requested by Uniloc) make it impossible to understand the dispute, we moved to intervene to seek public access.

Judge Alsup agreed that Uniloc had improperly sought to keep material secret. He described the scope of Uniloc’s sealing requests as “astonishing” and noted that it even extended to redacting quotes from published court opinions. He rejected Uniloc’s request to seal “licensing terms and business plans with respect to various Uniloc entities.” Judge Alsup concluded:

    Plaintiffs’ generalized assertion of potential competitive harm fails to outweigh the public’s right to learn of the ownership of the patents-in-suit — which grant said owner the right to publicly exclude others. This is especially true given that the law has developed regarding standing issues, which turns on machinations such as those at issue in the instant actions.

Under the court’s order, unredacted versions of the relevant documents will be placed on the public docket on February 1st. This gives Uniloc two weeks to seek appellate review at the Federal Circuit. Judge Alsup granted EFF’s motion to intervene “for the purpose of opposing plaintiffs at the United States Court of Appeals for the Federal Circuit in the event plaintiffs seek appellate review of this order.”

Judge Alsup has also issued a ruling on Apple’s motion to dismiss. We presume from context that this ruling denies Apple’s motion (but we can’t be certain since we haven’t seen it). If Uniloc does not appeal the related unsealing order, or if it appeals and loses, the public will get access to the full opinion.

EFF will, of course, oppose any appeal of Judge Alsup’s unsealing order. The sealed information is central to an important dispute about who can bring a patent suit. Without access to the sealed evidence, the public will not be able to understand the parties’ arguments or the district court’s ruling.
Ultimately, Uniloc should not be able to hide which entity owns (or claims to own) a patent. A patent owner’s desire to confound the public cannot outweigh the public’s First Amendment right of access to courts.

Related Cases: Patent Litigation Transparency

Work with EFF this Summer! Apply to be a 2019 Google Public Policy Fellow (Fri, 18 Jan 2019)
Are you passionate about emerging Internet and technology policy issues? Come work with EFF this summer as a Google Public Policy Fellow! This 10-week fellowship gives undergraduate and graduate students a paid opportunity to work alongside EFF’s International team on projects advancing debate on key public policy issues.

EFF is looking for someone who shares our passion for the free and open Internet. You’ll have the opportunity to work on a variety of issues, including censorship and global surveillance. Applicants must have strong research and writing skills, the ability to produce thoughtful and original policy analysis, and a talent for communicating with many different types of audiences, and must be independently driven. More specific information can be found here.

This year’s program will run from early June through early August, with regular programming throughout the summer. If selected, you can work with EFF to adjust start and completion dates. The application period opens Friday, January 18, 2019, and all applications must be received by Friday, February 15 at 12:00 p.m. ET / 9:00 a.m. PT. The accepted applicant will receive a stipend of US $7,500 for the 10-week 2019 fellowship. To apply with the Electronic Frontier Foundation, follow this link.

Note: This internship is associated with EFF’s international team and is separate from EFF’s summer legal internship program.

Article 13 and 11 Update: Even the Compromises Are Compromised in This Copyright Trainwreck (Fri, 18 Jan 2019)
Update, January 18: EU ministers have failed to approve the compromise text—with Germany, Belgium, Poland, Sweden, Luxembourg, the Netherlands, Finland, Slovenia, Italy, Croatia, and Portugal all voting against the current Article 13/11 proposal. Keep up the pressure! If you’re in the Czech Republic, Luxembourg, Germany, Poland, Sweden, or Belgium—tell your government to oppose Articles 13 and 11.

Politicians are meant to broker compromises in the pursuit of the public good—though in a year that is already overloaded with government shutdowns and Brexit logjams, that skill seems in short supply. But sometimes there are no compromises to be found. Sometimes, even the most talented diplomats are handed an impossible task.

The Romanian Presidency is struggling to bring negotiations on the Copyright in the Digital Single Market Directive to a close. But two parts of that law—Article 13, intended to introduce compulsory copyright filters, and Article 11, a new licensing requirement on reproducing snippets of news articles—are so controversial that they risk sinking the entire process. Just hours before a key vote this Friday, the Presidency presented its proposed compromise to the negotiators. The text, leaked to Politico Europe, shows just how far they will have to go to bring all the parties together.

On Article 13, the Council and the Parliament are struggling over whether small and medium-sized businesses should be excluded from the crushing demands and liability Article 13 would impose on Internet sites. This was one of the concessions that MEP Axel Voss offered in a last-minute attempt to get the Article’s provisions past Parliament. But that’s not good enough for the article’s lobbyists, who believe that any site that allows users to put their content online should be treated as a pirate’s den—even if it’s a small European Internet site hoping to compete with deep-pocketed, US-based Big Tech companies.

The trouble is that the whole pitch to Europeans for accepting Article 13’s excesses was that it was aimed at clawing back money from YouTube and other big, foreign hosting sites. MEPs and other elected officials aren’t likely to be so keen on a provision that will also bleed money from the fledgling EU digital sector into the coffers of the established rightsholders. Or, indeed, bleed money from individual European Internet users.

Since the draft Directive passed Parliament, another huge battle has emerged over whether Internet users should be covered by the licenses that the Big Tech companies will need to negotiate with Big Content. The rightsholders want to be able to “double dip”—suing to extract cash from the YouTubes and Twitch TVs of the world, and then suing individual Net users to get some more money from prominent YouTubers and Twitchers.

(Well, at least, that’s what the recording industry wants. At this point, it’s only the music industry that has any hunger left for Article 13—all the other major European rightsholders have backed away from its dangerously vague language, and have now turned against it. Given the choice between the current status quo and a Directive oscillating so wildly between wild demands and unclear carve-outs, they’d rather stick with the current, functioning, Internet. Even the recording industry has denounced the latest proposal, wanting to rewind to earlier versions that lack even the veneer of compromise.)

A similar pattern has emerged in Article 11.
Lobbyists for Article 11 said it was intended to stop Big Tech from wholesale theft of news articles—and of money from journalists—and had nothing to do with mere linking to news stories. The negotiators have taken them at their word, and have now suggested that quoting “insubstantial parts” of news articles should be acceptable, and that some of the money made from the new licenses should go directly to the authors of the articles. Not so fast, say the lobbyists: we still want to limit linking to only allow “individual words,” plus who said anything about giving money directly to journalists?

The tragedy in all of this horse-trading is that nobody in the room (or shouting into the keyhole) is actually fighting for Internet users, or the public good. They’re just trying to act as referees between various industrial sectors. Meanwhile, digital rights groups, and everyone from actual journalists to actual Internet experts to actual copyright experts, have said that there’s no compromise to be made here, because the whole system of copyright filters and special news licenses simply won’t work to remunerate creators, and will instead just break everything online.

None of the proposed compromises fix the underlying problem. Even if you exclude small companies, you’ll just be creating a two-tier Internet, with the tech giants controlling the licensing and European Internet startups struggling to stay small enough that they won’t be sued into oblivion. Even if you grant individual Twitchers and YouTubers some protection from being sued, they will still suffer, because unaccountable black-box algorithmic filters will constantly block them from broadcasting or recording their legitimate works. And no matter how you quibble about who gets the right to link or quote news stories, the absolute lifeblood of gathering an audience for an independent news site—people linking to it, and quoting it—will be chilled by companies refusing to take the risk of a lawsuit, and by individuals and non-profits being unsure how they can keep safe from the 28 national versions of an ambiguously-written law.

Meanwhile, in the fantasy world where all of these consequences might be avoided with a few carefully-worded recitals, member state negotiators now have less than 24 hours to work out that perfect wording. The Romanian Presidency is working under a major, self-imposed deadline—if agreement cannot be struck before mid-February, it will be too late to present the text for a final European Parliamentary vote. But a directive that risks sabotaging the entire Internet should not be rushed like this.

Compare this pell-mell hurtling into chaos to the leisurely pace of another EU Internet law: the E-Privacy Regulation. That proposal, which takes aim at the privacy issues surrounding Big Tech and Big Media, has been stuck in pre-Trilogue negotiations for two years. It’s a sad indication of the priorities of the European establishment that improvising new, untested copyright law is more urgent than tackling the clear challenges of digital privacy.

But if the clock really is ticking for the Copyright Directive, the correct compromise on Article 13 and Article 11 is to remove them entirely from the Copyright Directive, and send them back to the drawing board. The European Union’s negotiators are struggling to strike a deal with internally incoherent and unenforceable language. This trilogue has deleted previous articles in its pursuit of a reasonable deal; it can delete these two as well.
Anything else is sacrificing the stability of the Internet for laws that nearly everybody opposes, and no one needs.

Countdown to Catastrophe: Article 13 and 11’s Key Dates

January 18 (today): Negotiators from member state governments try to agree on a new mandate based on the Romanian Presidency compromise. NO AGREEMENT.

January 21 (Monday): “Last” Trilogue meeting: member state negotiators and European Parliament negotiators try to agree on final text. CANCELLED.

February: If the Trilogue reaches agreement, internal votes by member state ministers on accepting the final text. STILL POSSIBLE.

March 11-14 or 25-28: If the Trilogue and ministers agree, final vote in the European Parliament.

Don’t Put Robots in Charge of the Internet (Fri, 18 Jan 2019)
We’re taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Last year, YouTube’s Content ID system flagged Sebastian Tomczak’s video five times for copyright infringement. The video wasn’t a supercut of Marvel movies or the latest Girl Talk mashup; it was simply ten hours of machine-generated static. Stories like Tomczak’s are all too common: Content ID even flagged a one-hour video of a cat purring as a likely infringement. But those are only a small glimpse of a potential Internet future.

Today, with the European Parliament days away from deciding whether to pass a law that would effectively make it mandatory for online platforms to use automated filters, the world is confronting the role that copyright bots like Content ID should play on the Internet. Here in the US, Hollywood lobbyists have pushed similar proposals that would make platforms’ safe harbor status contingent on using bots to remove allegedly infringing material before any human sees it.

Stories like the purring and static videos are extreme examples of the flaws in copyright filtering systems—instances where nothing was copied at all, but a bot still flagged it as infringement. More often, filters ding uploads that do feature some portion of a copyrighted work, but where even the most basic human review would recognize the use as noninfringing. Those instances demonstrate how dangerous it is to let bots make the final decision about whether a work should stay online (a toy sketch at the end of this post illustrates why). We can’t put the machines in charge of our speech.

Mandatory Filters Are a Step Too Far

A decade ago, online platforms looked to copyright filtering regimes as a means to generate revenue for creators and curry favor with content industries. Under U.S. law, there’s nothing requiring platforms to filter uploads for copyright infringement: so long as they comply with the Digital Millennium Copyright Act’s notice-and-takedown procedure, the law protects them from monetary liability based on the allegedly infringing activities of their users or other third parties. But big rightsholders pressured platforms to do more: YouTube built Content ID in 2007, partially in response to a flurry of lawsuits from big media companies. Since then, Content ID has consistently grown and expanded in scope—with a version of the service now available to any YouTube channel with over 100,000 subscribers. Other companies have followed suit—Facebook now uses a similar filter that even inspects users’ private videos.

Now, both in Europe and the United States, lobbyists have pointed to those filters to argue that lawmakers should require every web platform to take even more drastic measures. That would be a huge step in the wrong direction. Filters are most useful when they serve as an aid to human review. But today’s mandatory filtering proposals turn that equation on its head, forcing platforms to remove uploads—including completely legitimate ones—before a human has a chance to review them.

Hollywood Goes All In on Filtering

The debate over mandatory filters isn’t just about copyright infringement.
Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) last year, a misguided and unconstitutional law that forces Internet companies to become more restrictive in the types of speech they allow on their platforms, while doing zilch to fight sex traffickers.

During the debates over FOSTA, the bill’s supporters claimed that companies would have no problem deploying filters that could take down unlawful speech while leaving everything else intact. A highly touted letter from Oracle suggested that the technology to make those decisions with perfect accuracy was accessible to any startup. That’s absurd: by exposing platforms to overwhelming criminal and civil liability for their users’ actions, the law forces platforms to calibrate their filters to err on the side of censorship, silencing innocent people in the process.

It might come as no surprise, then, that two of FOSTA’s biggest supporters were Disney and 20th Century Fox. For Hollywood lobbyists, FOSTA is just one step toward the goal of a more highly filtered Internet.

Don’t Write Bots into the Law

Like FOSTA, mandatory filtering proposals represent a kind of magical thinking about what technology can do. It’s the “nerd harder” problem, a belief that tech will automatically advance to fit policymakers’ specifications if they only pass a law requiring it to do so. The reality: bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow many instances of fair use. Unfortunately, as the use of copyright bots has become more widespread, artists have increasingly had to tailor their work to the bots’ definitions of infringement rather than fully enjoy the freedoms fair use was designed to protect. To write bots into the law would make the problem much worse.

Whether it’s fighting copyright infringement or fighting criminal behavior online, it may be tempting to believe that more reliance on automation will solve the problem. In reality, when we let computers make the final decision about what types of speech are allowed online, we build a wall around our freedom of expression.
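
To make the failure mode concrete, here is a toy sketch of the match-and-block logic at the heart of any upload filter. Real systems such as Content ID use far more sophisticated perceptual fingerprints rather than the exact chunk hashes assumed here, but the structure is the same, and the key point survives the simplification: every code path ends in an automated allow-or-block decision, and there is no branch that can ask whether a use is parody, commentary, or otherwise fair.

    import hashlib

    def fingerprints(data, chunk_size=4096):
        """Toy stand-in for perceptual fingerprinting: hash fixed-size chunks."""
        return {
            hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)
        }

    def decide(upload, reference, threshold=0.3):
        """An automated yes/no with no notion of fair use, parody, or license."""
        up, ref = fingerprints(upload), fingerprints(reference)
        overlap = len(up & ref) / max(len(up), 1)
        # Liability rules reward erring on the side of blocking.
        return "BLOCK" if overlap >= threshold else "ALLOW"

Even this toy version shows why low-information content misbehaves: hours of digital silence hash to a single repeated chunk, so two completely unrelated "silent" files match perfectly. Perceptual fingerprints have the analogous problem with static and purring.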

Now EVERYBODY Hates the New EU Copyright Directive (Fri, 18 Jan 2019)
Until last spring, everyone wanted to see the new European Copyright Directive pass; then German MEP Axel Voss took over as rapporteur and revived the most extreme, controversial versions of two proposals that had been sidelined long before as the Directive had progressed towards completion. After all, this is the first refresh on EU copyright since 2001, and so the Directive is mostly a laundry list of overdue, uncontroversial technical tweaks with many stakeholders; the last thing anyone wanted was a spoiler in their midst.

Anyone, that is, except for German newspaper families (who loved Article 11, which would let them charge Big Tech for the privilege of sending readers to their sites) and the largest record labels (who had long dreamed of Article 13, which would force the platforms to implement filters to check everything users posted, and block anything that resembled a known copyrighted work, or anything someone claimed was a known copyrighted work).

These were the clauses that Voss reinserted, and in so doing, triggered a firestorm of opposition to the Directive from all sides: more than four million Europeans publicly opposed it, along with leading copyright and technical experts—and also the notional beneficiaries of the rules, from journalists to the largest movie studios, TV channels and sports leagues in Europe. Voss has found himself increasingly isolated in his defense of the Directive, just him and the record labels against the rest of the world.

And now it's just Voss. The record labels have joined the movie studios in denouncing the working version of Article 13, and calling for the impossible: a rollback of the tiny, largely ornamental changes made in order to give the Directive a hope of passing (they were complaining about Monday's version of the Directive, but the version that leaked yesterday doesn't fix any of their problems). The record labels are willing to risk the whole thing going down in flames rather than tolerate the symbolic gestures to compromise that have been gently draped over the spiderwebbing of cracks in the Directive.

Now that Article 13 has not a single friend in the world, save for a single, lonely German MEP, maybe it's time we stopped holding the future of European copyright to ransom for the sake of a few recording companies who are willing to sacrifice the free expression of 500,000,000 Europeans to eke out a few more points of profit. With the national governments and EU going into what is meant to be their final meeting on Monday, now is the time for Europeans to contact their national governments and tell them to stand firm and reject Article 13, lest it bring down the whole Copyright Directive.

TAKE ACTION: Tell your ministers to stop Articles 11 and 13

Copyright’s Safe Harbors Preserve What We Love About the Internet (Fri, 18 Jan 2019)
How is the Internet different from what came before? We’ve had great art, music, film, and writing for far longer than we’ve had the World Wide Web. What we didn’t have were global conversations and collaborations that millions can participate in. The Internet has lowered barriers to participation in culture, politics, and communities of interest. Copyright’s safe harbors for intermediaries are essential to making this possible. But today, those safe harbors are under threat from laws like Article 13 of the EU’s proposed Copyright in the Digital Single Market Directive. And some voices in the U.S. want to gut the safe harbors here.

In the U.S., the safe harbors of the Digital Millennium Copyright Act protect Internet companies of various kinds against the possibility of massive copyright infringement damages when one of their users copies creative work illegally. In return for that protection, Internet companies have to take some concrete steps, like adopting and enforcing a repeat infringer policy. Some companies—the ones that store user-uploaded content—have to register an agent to accept and act on takedown notices from rightsholders (the familiar “DMCA notices”). The law is explicit that Internet companies aren’t required to surveil everything uploaded by users to find possible copyright infringement. It also provides a counter-notice process for users to get non-infringing uploads put back online (a minimal sketch of this notice-and-counter-notice flow appears at the end of this post).

This system is far from perfect. At EFF, we spend a lot of time calling out abuses of the DMCA notice and takedown regime—abuses that the law makes far too easy. We’ve also fought to make the penalties for improper takedowns a meaningful deterrent. But for all our criticism of the existing safe harbor, it is vital to preserving many of the things we all love about the Internet—especially the ease of participation that it enables. To understand why, we need to look at some basic principles.

First, the Internet is fundamentally a big copying machine. Every transmission of data, whether it’s downloading a Web page, submitting data in a form, or interacting with an app, involves copying data.

Second, copyright is automatic and ubiquitous. In the majority of the world, most forms of creative work are covered by copyright the moment they are recorded. Not only are professionally made books, films, software, sheet music, and musical recordings automatically copyrighted, so too are personal emails, posts, snapshots, and home videos—the very things that we’re used to sharing.

Third, copyright nominally covers every act of copying or distribution, even acts that don’t harm the rightsholders in any way. While fair use protects many of these uses, it does so without a lot of certainty in many cases.

And fourth, the penalties for copyright infringement can be mind-bogglingly high. U.S. law allows rightsholders to ask for “statutory damages” of up to $150,000 for each work, without having to produce any evidence of harm. The damages awarded by courts are frequently massive and unpredictable, making many uses of creative works a game of financial Russian roulette. For a platform that hosts thousands of works, even an honest mistake or oversight can mean bankruptcy.

What does this mean for Internet users? Websites and apps that operate more like traditional media have ways of addressing copyright risk. A site that creates and curates a limited amount of creative material can make reasonably sure that none of it infringes copyright, and buy insurance to address the risk of mistakes.
On the other hand, any Internet site that accepts contributions from a broad public—even a simple comments section—has no practical way to verify that every upload isn’t infringing, and that a rightsholder won’t appear from anywhere on the globe brandishing a company-destroying lawsuit. Website owners need a set of practical, feasible steps they can take that will protect them from this risk if they’re going to accept user-generated content at all. That’s where safe harbors like the DMCA come in.

Without the safe harbors, the only way for Internet companies to reduce their liability risk to a level that insurers and investors will accept is to stop accepting user-uploaded content at all. That would make the Internet look a lot like cable television, broadcasting, or traditional publishing: a bounded collection of creative material selected as much by financial clout and elite power relationships as by interest or artistic merit. Much of the Internet’s uniqueness and vitality would be lost.

In order for the safe harbors to safeguard public participation on the Internet, they need to be usable by a wide variety of Internet users, from the largest social media platforms to the humblest blogs and apps. A safe harbor that puts onerous requirements on intermediaries to filter, police, and remove alleged copyright infringements risks the same results as losing the safe harbor entirely. That’s what makes the current efforts by the recorded music industry and its allies so troubling. They are asking lawmakers to condition the safe harbor on Internet companies’ agreement to filter all uploads against some database of copyrighted works, and remove or block the uploads that match a work in the database.

The problems with this approach are legion. Because copyright is automatic and ubiquitous, nearly every Internet user is also a copyright holder. Building a database of every bit of creative work that traverses the Internet is impossible, but without that, Internet companies would be vulnerable to crippling lawsuits from potentially millions of rightsholders.

Such a system would also be wide open to censorious abuse. It’s often very hard to find out who holds the copyright in a particular bit of creative work. Even the major media and entertainment companies, who are in the best position to know what works they own, often struggle with this. Today, the ease of sending a DMCA takedown notice to make user-uploaded material disappear from the Internet, and the lack of effective deterrents for sending a false or careless takedown, make the DMCA ripe for abuse. A new requirement for upload filters, shared databases of content to be filtered, or other proposals, would make this form of censorship even more powerful and ubiquitous.

Even without bad actors, automatic filters simply cannot account for fair use and the other limitations on copyright intended to protect freedom of speech. The same audio, video, or text may be infringing in one context and perfectly legal in another. Takedown bots cannot tell a parodic, journalistic, or educational use of creative work from an unjustified commercial use. And they can’t determine how much of a work is OK to use, because that depends greatly on context. Bots can’t even tell if a particular user is licensed to post content, which is why takedown services hired by the major entertainment companies sometimes take down clips uploaded by the companies themselves.
Users whose posts and uploads are caught in these filters would find themselves having to prove their innocence on a regular basis—or give up, leaving only the professionally produced content of the well-connected online. This is what the EU’s Article 13 will do. Though the current drafts don’t explicitly mention filters, they leave no other way to avoid copyright liability, short of ditching all user-generated content. If Article 13 becomes law in Europe (or even if it doesn’t), we can expect to see the Recording Industry Association of America and its allies push for something similar in the U.S.

The loudest voices calling to gut the safe harbors point to Google, Facebook, and a few other large Internet companies as the targets of a new law. Those massive platforms certainly dominate many people’s experience of the Internet today, and they collectively host and transmit billions of copyrighted works. And given the growing, well-deserved criticism of these companies’ privacy and content moderation practices, it’s tempting to support any legal changes that would further regulate their behavior. But copyright’s safe harbors don’t just protect Internet companies from liability—they protect all of us from being arbitrarily and unaccountably silenced online whenever our digital lives intersect with some bit of creative work to which a stranger has laid claim.

Besides, Google and Facebook are not the entirety of the Internet. We’ve yet to see a proposal that would successfully remove the safe harbors for those companies while reliably preserving them for others. Second- and third-tier Internet companies and even nonprofit projects like Wikipedia would be at risk. And while the search and social media titans may have the cash to implement massive copyright filters and the political clout to shape the law’s rough edges, small and mid-sized Internet companies and projects will be caught in an expensive bind. As we wrote recently, while America’s tech giants would prefer no regulation, they'll happily settle for regulation that's so expensive it clears the field of all possible competitors.

Copyright’s safe harbors are vital to preserving many of the Internet’s unique strengths, especially for smaller companies and individual users. This inescapable fact needs to be part of every discussion about how to regulate the Internet going forward.
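
For readers unfamiliar with the mechanics, here is a minimal sketch of the notice-and-counter-notice lifecycle described above. The names and data structures are our own illustration rather than any platform's real API; the timing rule in the comment comes from 17 U.S.C. § 512(g).

    from dataclasses import dataclass
    from enum import Enum, auto

    class Status(Enum):
        LIVE = auto()
        TAKEN_DOWN = auto()
        RESTORED = auto()

    @dataclass
    class Upload:
        url: str
        status: Status = Status.LIVE

    def on_takedown_notice(upload):
        # The host removes the material "expeditiously" upon receiving a
        # compliant notice -- before any human or court judges whether it
        # actually infringes.
        upload.status = Status.TAKEN_DOWN

    def on_counter_notice(upload, claimant_filed_suit):
        # 17 U.S.C. 512(g): restore the material within 10 to 14 business days
        # of a valid counter-notice, unless the original claimant notifies the
        # host that it has filed a court action against the uploader.
        if not claimant_filed_suit:
            upload.status = Status.RESTORED

Note how the default is takedown first, questions later: in the interim, the safe harbor's incentives, not a judge, decide what stays online.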

Anyone—Even the Government—Can Ask the Patent Office to Review Invalid Patents (Thu, 17 Jan 2019)
The exclusive rights granted by a U.S. patent create monopolies that can threaten innovation. We all benefit from the pro-innovation effects that come from cancelling monopolies that should not exist. That’s why the America Invents Act of 2011 broadly allows “[a]ny person other than the patent owner” to challenge a patent at the U.S. Patent and Trademark Office.

But what if the government itself were banned from asking for this type of patent challenge? That would mean patent holders could demand big payments from government agencies—which spend taxpayer funds—yet those same agencies wouldn’t be able to efficiently test whether the patents are valid.

Now, the Supreme Court is poised to consider the question. EFF has filed an amicus brief, explaining that the government should be able to bring challenges in the Patent Office, based on century-old legal principles as well as public policy concerns today. Limiting the government’s ability to challenge invalid patents efficiently deprives the public of these benefits for no good reason.

Patent owners, unlike most parties in court, can sue the United States government for using their patent rights without a license. In Return Mail Inc. v. United States Postal Service, the patent owner, Return Mail, demanded that the Postal Service pay it for a patent license. But the Postal Service decided to challenge the patent, petitioning for a kind of review at the Patent Office. That challenge succeeded, convincing both the Patent Office and the US Court of Appeals for the Federal Circuit that Return Mail’s patent is invalid.

Return Mail is still fighting the case, but it isn’t arguing its patent is valid on the merits. It’s arguing that the government shouldn’t have been allowed to file a petition for a post-grant review at all. If the Supreme Court agrees, its decision would revive Return Mail’s invalid patent, allowing it to be asserted against the Postal Service and others.

The provision authorizing such petitions came into effect as part of the America Invents Act of 2011 (“AIA”). The AIA broadly allows “[a]ny person other than the patent owner” to file a petition for review. Return Mail, however, is arguing that the word “person” necessarily excludes government entities such as the Postal Service. As our brief explains, that argument ignores both the Patent Office’s longstanding practices and the AIA’s goal of weeding out low-quality patents (like Return Mail’s).

If the Supreme Court decides to bar the government from initiating post-grant review proceedings, that will make a huge difference—and not in a good way. At the Patent Office, granted patents get no presumption of validity, making it easier for challenges that should succeed to do so. At the same time, the only issues that Patent Office proceedings decide are questions of validity. That makes the proceedings more streamlined—faster and cheaper—than in federal court. But it also means the proceedings are more likely to produce outcomes affecting the broader public. Decisions on questions of patent validity produce external benefits: greater clarity about the scope of granted patents and the extent of the public domain.

EFF and its community know firsthand how powerfully developments in patent law can affect incentives and opportunities for innovation, creativity, and access to knowledge. We hope the Court considers the broad implications of its decision for the public and the patent system. The public should not bear the costs of shielding invalid patents from government-initiated review.

Belgium: Say No To Article 13 and 11 (Wed, 16 Jan 2019)
The European Union is on the brink of handing even more power to a handful of giant American tech companies, in exchange for a temporary profit-sharing arrangement with a handful of giant European entertainment companies—at the cost of mass censorship and an even weaker bargaining position for working European artists.

It’s been more than four months since EU parliamentary negotiators and representatives of Europe’s national governments disappeared behind closed doors to make the new Copyright in the Single Digital Market Directive ready for a vote. Despite all that time and all that blissful solitude, they have not managed it.

TAKE ACTION NOW: Write to Belgium's EU negotiators and say no to Articles 13 and 11

The Directive has the same problems it’s had from the start:

Article 11: A proposal to make platforms pay for linking to news sites by creating a non-waivable right to license any links from for-profit services (where those links include more than a word or two from the story or its headline). Article 11 fails to define “news sites,” “commercial platforms” and “links,” which invites 28 European nations to create 28 mutually exclusive, contradictory licensing regimes. Additionally, the fact that the “linking right” can’t be waived means that open-access, public-interest, nonprofit and Creative Commons news sites can’t opt out of the system.

Article 13: A proposal to end the appearance of unlicensed copyrighted works on big user-generated content platforms, even for an instant. Initially, this included an explicit mandate to develop “filters” that would examine every social media posting by everyone in the world and check whether it matched entries in an open, crowdsourced database of supposedly copyrighted materials. In its current form, the rule says that filters “should be avoided” but does not explain how billions of social media posts, videos, audio files, and blog posts should be monitored for infringement without automated filtering systems.

In both cases, the EU proposals may result in some small transfers from America’s Big Tech companies to Europe’s copyright industries—German newspaper families, the EU subsidiaries of global record labels—but at a terrible cost.

Take Article 11: the rule allows newspapers to decide who can link to them, and to charge whatever they think the market can bear for those links. While it’s unlikely that Europe’s news giants will forbid each other from linking to their articles, the same can’t be said for the established news giants and the upstart, critical press. Small, independent press outlets can be blocked altogether from linking to established news sources—even for the purposes of criticism and commentary—or they could be charged much more than their counterparts in the mainstream.

And while Google and Facebook will regret the loss of a few million euros they will have to pay the major news services, it’s nothing compared to the long-run benefit of tech giants never having to worry about a Made-in-the-EU upstart growing to challenge them. These little guys won’t have the millions to spend that US Big Tech does. Article 11 gets the independent sector coming and going: not only will they have to pay to link to the mainstream press, they won’t be able to allow others to link freely to their own news.
The rules set out by Article 11 state that public-interest, crowdfunding, open access and Creative Commons news sites can no longer allow anyone to link to them for free: instead, they must negotiate a linking license with each commercial site and collect fees in every case.

Article 13 is even worse. Though the current draft says that “filters are to be avoided,” it is also designed to guarantee that filters will be required. The past three months have been spent adding clauses insisting that some theoretically perfect technology to filter hundreds of billions of communications and sort them into “infringing” and “not infringing” can be legislated into existence (it can’t).

Building Article 13’s filters will likely cost hundreds of millions of euros, a price that only the biggest US firms can afford, and none of Europe’s companies can bear. The exemption that allows firms with less than 20 million euros in annual turnover to avoid the filters is irrelevant: if these companies are to challenge the US giants, they will have to grow, and they can’t grow past 20-million-euro businesses if that means finding hundreds of millions of euros to comply with Article 13. The EU is selling Big Tech a very cheap ticket to a guarantee of continued Internet dominance. Without competition, they only need fear each other.

Meanwhile, as the number of tech companies controlling access to the Internet dwindles, their power will grow. The ability of independent artists and production companies to negotiate fair deals will steadily weaken, allowing Big Tech and the big entertainment companies to command an ever-larger slice of the product of creators’ labours.

Of course, the vast majority of Europeans are not in the entertainment industry and only a tiny minority of the Internet’s uses are entertainment-oriented. Article 13 will hold the Internet usage of 500 million Europeans to ransom in a harebrained scheme to eke out tiny gains in artists’ livelihoods, and in the meantime, the censoring filters of Article 13 will churn out the same useless, error-prone judgments that have come to epitomise algorithmic discrimination in the 21st Century.

It’s not too late: the European Council—made up of representatives from EU member states like Belgium—will soon vote on the Directive. Their decision will shape the future of the Internet, possibly for generations to come. We need Belgians to act, to tell their representatives to strike a blow for fairness and against market concentration and censorship.

TAKE ACTION NOW: Write to Belgium's EU negotiators and say no to Articles 13 and 11

Luxembourg: Save the Internet from the Copyright Directive (Wed, 16 Jan 2019)
TAKE ACTION: Contact Luxembourg's negotiators today!

This month, the EU hopes to conclude the Copyright in the Single Digital Market Directive, with no sign that it will improve or delete Articles 11 and 13. This is a dangerous mistake, because these articles have the power to crush small European tech startups, concentrating power in the hands of American Big Tech, while exposing half a billion Europeans to mass, unaccountable algorithmic censorship.

We had hoped that the EU and national government negotiators would delete Article 13, the “censorship machines” rule that requires online platforms to hand their users’ videos, texts, audio and images to black-box machine learning filters that would unilaterally decide whether that content might infringe copyright, and thus whether it should be censored or allowed to be published.

Instead, the current text goes to enormous lengths to obscure its mandate for AI filters. The new language says that filters “should be” avoided, and that companies can escape liability if they use “best practices” to fight infringement. But the rule also says that the limitation of liability doesn’t apply where there is “economic harm”—meaning that a user has posted any commercial content—and it also requires “notice and staydown,” which means that once a platform has been notified that a given file infringes copyright, it must prevent all of its users from ever posting that content again.

Thus, Article 13 can only be satisfied with filters—filters like the ones that Tumblr has been using in a disastrous attempt to block adult material. Article 13’s filters will have to process vastly more material, in every format, and they will not fare better. And since Article 13 penalises companies that allow a user to infringe copyright, but does not penalise companies that overblock and censor their users, it’s obvious what the outcome will be.

Building Article 13’s filters will likely cost hundreds of millions of euros, a price that only the biggest US firms can afford and none of Europe’s companies can bear. The exemption that allows firms with less than 20 million euros in annual turnover to avoid the filters is irrelevant: if these companies are to challenge the US giants, they will have to grow, and they can’t grow past 20-million-euro businesses if that means finding hundreds of millions of euros to comply with Article 13.

Meanwhile, as the number of tech companies controlling access to the Internet dwindles, their power will grow. The ability of independent artists and production companies to negotiate fair deals will steadily weaken, allowing Big Tech and the big entertainment companies to command an ever-larger slice of the product of creators’ labours.

Article 11, the rule banning links without a license, is also bad news for small businesses already struggling with abuse by the US ad platforms. While the giant newspapers will be able to afford to link to one another after Article 11 is law, these smaller news entities will have to find cash they don’t have to pay for these licenses, and nothing in Article 11 requires newspapers to sell licenses to them at any price, let alone at a fair price in line with the sums paid by the other establishment news entities. To make things worse, Article 11 has no opt-out: every news company must charge for links, and so the burgeoning world of Creative Commons, nonprofit, public interest news sites is snuffed out at the stroke of a pen.
Of course, the vast majority of Europeans are not in the entertainment industry, and only a tiny minority of the Internet’s uses are entertainment-oriented. Article 13 will hold the Internet usage of 500 million Europeans to ransom in a harebrained scheme to eke out tiny gains in artists’ livelihoods, and in the meantime, the censoring filters of Article 13 will churn out the same useless, error-prone judgments that have come to epitomise algorithmic discrimination in the 21st Century.

It’s not too late: the European Council—made up of representatives from EU member states like Luxembourg—now gets to negotiate the Directive. Their decision will shape the future of the Internet, possibly for generations to come. We need you to act, to tell your representatives to strike a blow for fairness and against market concentration and censorship.

TAKE ACTION: Contact Luxembourg's negotiators today!

The EU's Copyright Directive Charm Offensive Pats Europeans On the Head and Tells Them to Leave It Up to the Corporations (Wed, 16 Jan 2019)
When it comes to the new Copyright Directive, some in the EU would prefer that Europeans just stop paying attention and let the giant corporations decide the future of the Internet. In a new Q&A about the Directive, the European Parliament—or rather, the JURI committee that, headed by Axel Voss, shepherded Articles 13 and 11 through a skeptical Parliament—sets out a one-sided account of the most far-reaching regulation of online speech in living memory, insisting that "online platforms and news aggregators are reaping all the rewards while artists, news publishers and journalists see their work circulate freely, at best receiving very little remuneration for it."

The author of JURI’s press release is right about one thing: artists are increasingly struggling to make a living, but not because the wrong corporations are creaming off the majority of revenue that their work generates. For example, streaming music companies hand billions to music labels, but only pennies reach the artists. Meanwhile, a handful of giant companies make war with one another over which ones will get to keep the spoils of creators' works. In a buyers' market, sellers get a worse deal, and when there are only five major publishers and four record labels and five Internet giants, almost everyone is a seller in a buyers' market.

The EU's diagnosis is incomplete, and so its remedy is wrong. Article 13 of the new Copyright Directive requires filters for big online platforms that watch everything that Europeans post to the Internet and block anything that anyone, anywhere has claimed as a copyrighted work. Recent versions of Article 13 have gone to great lengths to obscure the fact that it requires filters, but any rule that requires platforms to know what hundreds of millions of people are posting, all the time, and not allow anything that seems like a copyright infringement is obviously about filters.

These filters don't come cheap. Google's comparatively modest version of an Article 13 filter, the YouTube Content ID system, cost $60 million to develop and tens of millions more to maintain, and it can only compare videos to a small database of copyright claims from a trusted group of rightsholders. It would cost hundreds of millions of euros to develop a filter that could manage every tweet, Facebook update, YouTube comment, blog post, and other form of expressive speech online to ensure that it's not a match for a crowdsourced database that any of the Internet's two billion users can add anything to.

Google has hundreds of millions of euros, and so do Facebook and the other US Big Tech companies. Notably, Europe's struggling online sector—which represents a competitive alternative to US Big Tech—does not. While America’s tech giants would prefer no regulation, they'll happily settle for regulation that's so expensive it clears the field of all possible competitors.

For creators, this is a terrible state of affairs. The consolidation of the online dominance of the Big Tech players won't increase the competitive market for their works—nor will any of the billions squandered on black-box censorship algorithms enrich creators. Instead, all that money will go to the tech companies that build and operate the filters. And since Big Content will have the only direct lines into Big Tech to get wrongly censored material restored, creators will find themselves more hemmed in.
When Big Content and Big Tech sit down to make a meal out of creators, it doesn't matter who gets the bigger piece.

Meanwhile, for everyone else, the Directive will mean that all of our non-entertainment expression will be liable to being blocked by the copyright algorithms, and since the Internet is how we take care of our health, do our jobs, get our educations, fall in love, stay in touch with our families, and participate in civic and political life, all of that will be risked on an absurd bet that slightly tilting the balance between two giant industrial sectors will make a little more money trickle down to artists.

This "clarification" of the Copyright Directive is a whitewash, and an insult to the more than four million Europeans who have opposed the Directive, and the hundreds of MEPs who have so far questioned the direction of the Directive and its terrible proposals. Europeans have a right and a duty to engage with the EU's processes: here's a way to get in touch with European national lawmakers and tell them to reject Articles 13 and 11.

Sweden — and You! — Can Save the Internet from the Copyright Directive (Wed, 16 Jan 2019)
The European Union is on the brink of handing even more power to a handful of giant American tech companies, in exchange for a temporary profit-sharing arrangement with a handful of giant European entertainment companies—at the cost of mass censorship and an even weaker bargaining position for working European artists.

It’s been more than four months since EU parliamentary negotiators and representatives of Europe’s national governments disappeared behind closed doors to make the new Copyright in the Single Digital Market Directive ready for a vote. Despite all that time and all that blissful solitude, they have not managed it.

TAKE ACTION: Write to Sweden's EU negotiators and say no to Articles 13 and 11

The Directive has the same problems it’s had from the start:

Article 11: A proposal to make platforms pay for linking to news sites by creating a non-waivable right to license any links from for-profit services (where those links include more than a word or two from the story or its headline). Article 11 fails to define “news sites,” “commercial platforms” and “links,” which invites 28 European nations to create 28 mutually exclusive, contradictory licensing regimes. Additionally, the fact that the “linking right” can’t be waived means that open-access, public-interest, nonprofit and Creative Commons news sites can’t opt out of the system.

Article 13: A proposal to end the appearance of unlicensed copyrighted works on big user-generated content platforms, even for an instant. Initially, this included an explicit mandate to develop “filters” that would examine every social media posting by everyone in the world and check whether it matched entries in an open, crowdsourced database of supposedly copyrighted materials. In its current form, the rule says that filters “should be avoided” but does not explain how billions of social media posts, videos, audio files, and blog posts should be monitored for infringement without automated filtering systems.

In both cases, the EU proposals may result in some small transfers from America’s Big Tech companies to Europe’s copyright industries—German newspaper families, the EU divisions of global record labels—but at a terrible cost.

Take Article 11: the rule allows newspapers to decide who can link to them, and to charge whatever they think the market can bear for those links. While it’s unlikely that Europe’s news giants will forbid each other from linking to their articles, the same can’t be said for the established news giants and the upstart, critical press. Small, independent press outlets can be blocked altogether from linking to established news sources—even for the purposes of criticism and commentary—or they could be charged much more than their counterparts in the mainstream.

And while Google and Facebook will regret the loss of a few million euros they will have to pay the major news services, it’s nothing compared to the long-run benefit of tech giants never having to worry about a Made-in-the-EU upstart growing to challenge them. These little guys don’t have the millions to spend that US Big Tech does. Article 11 gets the independent sector coming and going: not only will they have to pay to link to the mainstream press, they won’t be able to allow others to link freely to their own news.
The rules set out by Article 11 mean that public-interest, crowdfunded, open-access and Creative Commons news sites can no longer allow anyone to link to them freely: instead, they must negotiate a linking license with each commercial site and collect fees in every case.

Article 13 is even worse. Though the current draft says that “filters are to be avoided,” it is also designed to guarantee that filters will be required. The past three months have been spent adding a laundry-list of impenetrable, contradictory, incoherent clauses insisting that some theoretically perfect technology to filter hundreds of billions of communications and sort them into “infringing” and “not infringing” can be legislated into existence (it can’t).

Building Article 13’s filters is likely to cost hundreds of millions of euros, a price that only the biggest US firms can afford and none of Europe’s companies can bear. The exemption that allows firms with less than 20 million euros in annual turnover to avoid the filters is irrelevant: if these companies are to challenge the US giants, they will have to grow, and they can’t grow past 20 million euro businesses if that means finding hundreds of millions of euros to comply with Article 13.

The EU is selling Big Tech a very cheap ticket to a guarantee of continued Internet dominance. Without competition, they only need fear each other. Meanwhile, as the number of tech companies controlling access to the Internet dwindles, their power will grow. The ability of independent artists and production companies to negotiate fair deals will steadily weaken, allowing Big Tech and the big entertainment companies to command an ever-larger slice of the product of creators’ labours.

Of course, the vast majority of Europeans are not in the entertainment industry, and only a tiny minority of the Internet’s uses are entertainment-oriented. Article 13 will hold the Internet usage of 500 million Europeans to ransom in a harebrained scheme to eke out tiny gains in artists’ livelihoods, and in the meantime, the censoring filters of Article 13 will churn out the same useless, error-prone judgments that have come to epitomise algorithmic discrimination in the 21st century.

It’s not too late: the European Council — made up of representatives from EU member states like Sweden — will soon vote on the Directive. Their decision will shape the future of the Internet, possibly for generations to come. We need Swedes to act, to tell their representatives to strike a blow for fairness and against market concentration and censorship.

TAKE ACTION: WRITE TO SWEDEN'S EU NEGOTIATORS AND SAY NO TO ARTICLES 13 AND 11

Poland, Take Action Now: Tell Negotiators to Oppose Article 13 and 11 (Wed, 16 Jan 2019)
Six years ago, Polish netizens thronged the streets to save Europe from ACTA, a US-originated treaty that would have imposed broad censorship and surveillance on the Internet in copyright’s name. Today, Poles are centre-stage again, fighting against “ACTA2”: the Copyright in the Single Digital Market Directive, and your help has never been more desperately needed.

TAKE ACTION NOW: TELL POLAND'S NEGOTIATORS TO OPPOSE ARTICLES 13 AND 11

This month, the EU will negotiate the latest (and possibly final) draft of the Directive, and we are dismayed (but not surprised) to learn that none of its deficiencies — rules that would lead to worse censorship and market concentration than even ACTA — have been improved upon. In fact, in some ways, they are now worse. The EU and its member-states have been negotiating since September, but they have not found any way to improve on the Directive’s two controversial clauses:

Article 11: A proposal to make platforms pay for linking to news sites by creating a non-waivable right to license any links from for-profit services (where those links include more than a word or two from the story or its headline). Article 11 fails to define “news sites,” “commercial platforms” and “links,” which invites 28 European nations to create 28 mutually exclusive, contradictory licensing regimes. Additionally, the fact that the “linking right” can’t be waived means that open-access, public-interest, nonprofit and Creative Commons news sites can’t opt out of the system.

Article 13: A proposal to end the appearance of unlicensed copyrighted works on big user-generated content platforms, even for an instant. Initially, this included an explicit mandate to develop “filters” that would examine every social media posting by everyone in the world and check whether it matched entries in an open, crowdsourced database of supposedly copyrighted materials. In its current form, the rule says that filters “should be avoided” but does not explain how billions of social media posts, videos, audio files, and blog posts should be monitored for infringement without automated filtering systems.

(Almost) everybody hates these ideas. Not only have four million Europeans signed a petition opposing the Directive’s passage in its current form; it has also been roundly condemned by Europe’s largest movie companies and sports leagues and by the Internet’s most esteemed technical experts, including the Father of the Internet, Vint Cerf, and the inventor of the World Wide Web, Sir Tim Berners-Lee. And yet, the best the EU and the national negotiators who produced this draft can offer us is… nothing. This revision still fails to correct the glaring defects in Article 11, the Link Tax, including failing to adequately define any of the key features of the regulation. The new Article 13 is identical in effect to the last one, and has not been improved by the addition of a laundry-list of impenetrable, contradictory, incoherent clauses insisting that some theoretically perfect technology to filter hundreds of billions of communications and sort them into “infringing” and “not infringing” can be legislated into existence (it can’t).

This is the time for Poland to act: a rare moment when the country’s left and right agree on something, and that something is that ACTA2 must not be crammed down the throats of Europeans who do not want it.

TAKE ACTION NOW: TELL POLAND'S NEGOTIATORS TO OPPOSE ARTICLES 13 AND 11

The Public Domain Is Back, But It Still Needs Defenders (Wed, 16 Jan 2019)
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

After twenty years stuck in Mickey Mouse’s shadow, the public domain is finally growing again. On January 1st, thousands of works became free for the public to distribute, perform, or remix. Every book, film, or musical score published in 1923 is now in the public domain. This policy win, like the public domain itself, belongs to everyone.

How can you use the public domain? You could preserve and distribute books. Or you could, say, add zombies to a literary classic. You can choose between a faithful or radical production of a play without fear of a legal fight with the heirs of the playwright. Technology blog Techdirt has a contest to create games out of new public domain works. The only limit on the use of the public domain is the limit of human creativity.

The public domain has benefits beyond remixes of high-profile works. Copyright terms are extremely complex. Figuring out whether something is in the public domain or not can require knowing if it was a corporate work or not, knowing whether it was registered and renewed or not, or knowing when the author died. For many works, it is impossible for archivists to answer any of these questions. That’s why clear cut-off dates are important. They give preservationists certainty.

When Congress passed the first copyright law in 1790, it provided for a 14-year term with an optional 14-year renewal period. Since then, Congress has ratcheted the term upwards many times. In 1998, Disney and others got a 20-year extension without much public opposition. But advocates for the public interest in copyright policy have since become more informed and better organized. In the lead-up to January 1, 2019, there was no serious effort to extend copyright terms further. That is likely because groups that might want longer terms know they would face an angry public. The quiet victory for the public domain might be the result of earlier, louder battles like the fight against SOPA/PIPA.

While we can celebrate new public domain works, we still have copyright terms that are far too long. We face the twin problems of orphan works and disappearing culture. Ultimately, copyright terms that enrich a tiny number of great-grandchildren at the expense of cultural preservation are a bad deal. In addition to opposing term extensions, we should begin a serious conversation about reducing copyright terms.

We will also need to be vigilant about efforts to extend copyright by other means. Corporate interests might try to assert trademarks or other creative legal theories. Fortunately, the Supreme Court has already ruled that trademark law cannot be repurposed as a “mutant copyright law” to prevent access to the public domain. But that won’t stop companies from trying. Whatever happens, EFF will be there to defend the public domain.

Germans Can Help Save the Internet from the Copyright Directive! (Wed, 16 Jan 2019)
This month, the EU is seeking to finalise the Copyright in the Single Digital Market Directive, and there’s little hope that they will improve or delete Articles 11 and 13, which have the power to crush small European tech startups, concentrating power in the hands of American Big Tech, while exposing half a billion Europeans to mass, unaccountable algorithmic censorship.

TAKE ACTION NOW: WRITE TO GERMANY'S EU NEGOTIATORS AND SAY NO TO ARTICLES 13 AND 11

We had hoped that the EU and national government negotiators would delete Article 13, the “censorship machines” rule that requires online platforms to hand their users’ videos, texts, audio and images to black-box machine learning filters that would unilaterally decide whether they infringed copyright and thus whether they would be censored or allowed to be published. Instead, the current text goes to enormous lengths to obscure its mandate for AI filters. The new language says that filters “should be” avoided, and that companies can escape liability if they use “best practices” to fight infringement. But the rule also says that the limitation of liability doesn’t apply where there is “economic harm”—meaning whenever a user posts any commercial content—and it also requires “notice and staydown,” which means that once a platform has been notified that a given file infringes copyright, it must prevent all of its users from ever posting that content again.

Thus, Article 13 can only be satisfied with filters—filters like the ones that Tumblr has been using in a disastrous attempt to block adult material. Article 13’s filters will have to process vastly more material, in every format, and they will not fare better. And since Article 13 penalises companies that allow a user to infringe copyright, but does not penalise companies that overblock and censor their users, it’s obvious what the outcome will be.

Building Article 13’s filters is likely to cost hundreds of millions of euros, a price that only the biggest US firms can afford and none of Europe’s companies can bear. The exemption that allows firms with less than 20 million euros in annual turnover to avoid the filters is irrelevant: if these companies are to challenge the US giants, they will have to grow, and they can’t grow past 20 million euro businesses if that means finding hundreds of millions of euros to comply with Article 13.

Article 11, the rule banning links without a license, is also bad news for small businesses already struggling with abuse by the US ad platforms. While the giant newspapers will be able to afford to link to one another after Article 11 is law, smaller news entities will have to find cash they don’t have to pay for these licenses, and nothing in Article 11 requires newspapers to sell licenses to them at any price, let alone at a fair price in line with the sums paid by the other establishment news entities. To make things worse, Article 11 has no opt-out: every news company must charge for links, and so the burgeoning world of Creative Commons, nonprofit, public-interest news sites is snuffed out at the stroke of a pen.

Germany has a contradictory relationship to the new Directive. Article 11 is the brainchild of Germany’s old newspaper families, and the Directive’s staunchest supporter is German MEP Axel Voss. But at the same time, the official national German position has been to oppose Articles 11 and 13, and another German MEP, Julia Reda, has led the charge against the worst aspects of the Directive.
Germans have a special role to play here: with your MEPs and your newspaper giants driving so much of the agenda, and with a vote coming soon, it is vital that you act now!

TAKE ACTION NOW: WRITE TO GERMANY'S EU NEGOTIATORS AND SAY NO TO ARTICLES 13 AND 11

Even the Rightsholders Think Europe’s Article 13 is a Mess, Call for an Immediate Halt in Negotiations (Tue, 15 Jan 2019)
With only days to go before the planned conclusion of the new EU Directive on Copyright in the Single Digital Market, Europe's largest and most powerful rightsholder groups — from the Premier League to the Motion Picture Association (MPA) and the Association of Commercial Television in Europe — have published an open letter calling for a halt to negotiations, repeating their message from late last year: namely, that the Directive will give the whip hand to Big Tech.

Article 13 — which still mandates copyright filters for big platforms, despite months of obfuscation — is the brainchild of the music recording industry, which invented the idea of the "value gap" as a synonym for "when we negotiate with YouTube for music licenses, we don't get as much as we'd like." Seen in this light, the unworkability of Article 13 is a feature, not a bug. Putting Google on the hook to give in on license negotiations or be forced to do the impossible is a powerful negotiating stick for the recording industry to hit Google with. The problem is that this tool will not only be wielded by record executives against Google: it will allow any of the Internet's two billion users to claim copyright over anything (including the record industry's most popular works) and improperly collect license fees, or simply block the material from public view.

That's not the only problem, though. In the course of negotiating Article 13, European lawmakers made concessions that make the proposal (barely) coherent and affordable by Google (though not, importantly, by Google's small European competitors, who stand to be squashed flat by the dancing elephants of Big Tech and Big Content). Those concessions have enraged the rest of the entertainment industry, which had bigger plans. After a few European court judgments, the "audiovisual sector" appears to have revived its old plan to assume control over the online platforms outright through litigation, and they're worried that Article 13 will make that far more difficult — essentially locking in the music industry's idea of a reasonable Internet, instead of the one the rest of the media industry would like.

This alternative future is not an outcome we'd be pleased with. The Internet isn't a video-on-demand service: it's the place where we do education, family, employment, politics, civics, charity, romance, and so much more (including entertainment). Neither the Article 13 proposal that the record industry hopes for nor the dystopian vision of the Internet as a subsidiary arm of the sports leagues and movie companies is a future we're willing to sign up for.

But as the letter from the sports leagues and movie companies shows, Article 13 is not ready to become law. It represents the narrow interests of a handful of music companies, hastily bodged to get through a sceptical Parliament and the demands of Big Tech, not the broad interests of Europeans (more than 4,000,000 of whom have objected to it) nor the interests of other giant players in the entertainment sector.

The Copyright Directive is the first update to EU copyright law in 17 years, and it mostly consists of badly needed technical tweaks. By reintroducing controversial, half-baked versions of Articles 11 and 13 last spring, the German MEP Axel Voss has put the whole project in danger, and is holding all of Europe's copyright users to ransom in order to advance the interests of newspaper proprietors and record executives.
As the EU national negotiators get ready to sit down with their counterparts from the European Parliament to finalise the Directive, it's vital that Europeans contact their national governments and tell them not to allow these proposals to pass.

Device ‘Ownership’ Is a Civil Liberties Issue (Tue, 15 Jan 2019)
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

The technology you rely on to interact with the world and express yourself should ultimately obey you, not the company that made it. If the devices in our pockets, on our bodies, and all around us are going to help us advance our own values, it has to be possible to control and customize them so they don’t just do whatever their manufacturer envisioned.

A sad fact of modern technology is that many “smart” devices use their smarts to act as their manufacturer’s spy and digital enforcer. They monetize your private data and are designed not to empower you, but to maximize the profits you bring to their manufacturer. The companies that make mass-market devices often have values that either are at odds with the interests of the human beings who rely on them (e.g., devices laden with secretive spyware or printers that refuse to use competitors’ ink) or simply aim to satisfy what’s perceived as the most common use case without regard to the harms this causes people who don’t fall within the norm, typically members of marginalized demographics (e.g., soap dispensers that can’t see Black people).

One of the neat things about technology is that you can build on what’s come before. If you want to buy something close to what you need and tinker with it to make it suit your purpose, you should be able to. Yet copyright law has become one of the largest obstacles to this kind of innovation. To be clear, if it were only traditional copyright law being considered, then the fair use doctrine and other limitations on copyright would protect your right to tinker. The culprit is Section 1201 of the Digital Millennium Copyright Act.

Section 1201 makes it unlawful to bypass access controls on copyrighted works–even when those access controls are inside a device you own, controlling access to your copy of a work. Congress intended to prevent infringement by stopping people from, for instance, descrambling cable channels they hadn’t paid for. But secure digital systems often use access controls, such as encryption, and if you don’t have the digital keys to look at and modify the code in your devices, then breaking that encryption can get you into legal trouble, even for devices you’ve bought and own.

This Copyright Week, take a moment to appreciate the tinkering and personalization that improve your life and help you express yourself. Enjoy your rights in the analog world and try out one of the exemptions to Section 1201 that we and our allies have won, such as jailbreaking your phone or reprogramming your electric scooter. Let’s all work on getting rid of this awful law so that our digital future remains in our hands.

A Nazi Romance Movie Versus Memes: When Copyright Shuts Down Criticism (Mon, 14 Jan 2019)
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

In theory, here is how copyright and speech are supposed to interact: copyright grants certain exclusive rights—including the right to make and sell copies of a work—for a limited period of time. The idea is that this will incentivize creativity and innovation by providing people a way to make money selling their creations. However, an exclusive right to make use of words, images, and so on naturally runs up against free speech rights. So, in order to mediate this conflict, we have the right to make use of copyrighted material without permission and payment under certain circumstances. That’s fair use, and it’s really important.

One important form of fair use is criticism. The most effective, clear way to criticize something is to share part of it and then break down what is wrong with it. If the goal is to save people from spending money on something that is bad, then people have the right to not only say something is not good, but show why. And using copyright to try to stop people from critiquing your work is obviously in bad faith. It is also a very obvious way to censor speech. It’s a tactic so obvious that doing it can get you as much bad attention as the negative critique. It’s so obvious you’d think we wouldn’t be seeing it in 2019. And yet.

Where Hands Touch is, to be very charitable, a historical romance film. It is about the romance of a biracial woman and a member of the Hitler Youth. It is, by all accounts, a very clumsy telling of that story. When the movie became available via streaming in the U.S., people began sharing short clips of it to show the problems they had with it. And that’s when copyright law stepped in.

The Digital Millennium Copyright Act (DMCA) has a safe harbor provision which protects websites and other service providers from liability for the alleged copyright infringement of their users. In order to get these protections, though, companies have to meet certain conditions, including having a “notice and takedown” procedure. The DMCA safe harbor is one of the key legal safeguards that has made the Internet a medium where everyone can create and contribute. But the notice-and-takedown regime is rife with abuse. Under the DMCA, a copyright holder can file a takedown notice with a website saying that someone is infringing on their rights. After receiving the notice, the company has to remove the allegedly infringing material and inform the user. If the user doesn’t respond with a counter-notice defending their posting—for example, asserting fair use—then the content stays down.

So what do you think the makers of Where Hands Touch did when people started commenting on a scene from the movie where the two leads have a moving romantic moment in a concentration camp, using a clip of that scene to illustrate their critiques? The answer is not “responded thoughtfully to the criticism being made by the people sharing the clip.” It is “filed DMCA takedowns.” Haaniyah Angus tweeted about her reactions and feelings about the movie, including a 14-second clip of the above-mentioned scene.
Her in-depth critique of the movie—humorous, certainly, but valid criticism—led to a conversation about the movie on Vulture and to the most important kind of Internet culture, memes. Angus’ clip got hit by a DMCA takedown. Filmmaker Charlie Lyne’s thread criticizing the use of the DMCA to remove Angus’ negative review, a thread which also included the clip, likewise got hit with a takedown. So did the memes.

The use of clips made abundantly clear why the film was a mess. That’s why Angus’ original tweet took off. Scrubbing the clip leaves a hole where understanding should be. Commenting and criticizing a film is a clear instance of fair use, especially when the clip used is brief. The Internet has made it easier for people to share commentary and criticism. Critiquing a movie effectively often means sharing parts of that movie, something that used to be very hard to do without a TV show of your own. Now, anyone can do it. And given the endemic lack of diversity in film criticism, opening up this world can only be a good thing.

And yet, the DMCA takedown process is ripe for abuse, stripping away the benefits the Internet offers to people making use of it to criticize and comment on things. Neither the DMCA nor copyright law in general requires copyright holders to go after anyone using their work, so they are free to ignore memes and clips shared in positive ways and only go after the speech they don’t like. While the takedowns are appealable, appeals mean handing over a lot of personal information, consenting to jurisdiction if the other side decides to sue you, and swearing a statement that you believe the takedown was wrong. And then it can take weeks for the content to be restored, which, if your goal is to warn people away from something, is harmful.

In the case of Where Hands Touch, the takedowns have been so thorough that, if you go looking now, it’s very difficult to truly grasp what was wrong with the movie. Where Hands Touch hasn’t been released all over the world yet, and now people who have seen it can’t use the global reach of the Internet to tell others how bad it is. The only comfort is, as ever, that the stories about the bad behavior of the filmmakers will speak even louder to how little confidence they have in the movie.

It's Copyright Week 2019: Join Us in the Fight for Better Copyright Law and Policy (Mon, 14 Jan 2019)
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright affects so much about our daily lives, often in ways people don’t even realize. It obviously impacts the movies we watch, the books we read, and the music we listen to. But it also impacts everything from who can fix a tractor to what information is available to us to how we communicate online. That means that copyright law and policy should be made to serve everyone.

Unfortunately, that’s not the way it tends to work. Instead, copyright law is often treated as the exclusive domain of major media and entertainment industries. They’ve been able to shape a law that affects us all to suit their desires, making it harder and harder to access, use, and work with content, information, and devices that we have rights to.

That doesn’t mean we can’t change the status quo. Seven years ago this week, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection.

This year sees another positive development in the world of copyright: January 1 was the first day in decades that saw new works enter the public domain in the United States. In theory, copyright is supposed to grant exclusive rights for a limited period—enough time for creators to make money off of their works, giving them an incentive to create. Once copyright expires, works enter the public domain, where anyone can make any use of them, perpetuating the cycle of culture building on itself that drives innovation and creation.

In a prime example of how large media and entertainment companies successfully make copyright law for themselves, they prefer to control works long after the people who actually created them have passed away. They successfully lobbied to have the term of copyright extended, essentially keeping the public domain from growing for decades. The public domain is an important resource for anyone looking to study, build on, or preserve culture. This year, we finally see it grow again.

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are:

Monday: Copyright as a Tool of Censorship. Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.

Tuesday: Device and Digital Ownership. As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it–meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.

Wednesday: Public Domain and Creativity. Copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture.

Thursday: Safe Harbors. Safe harbor protections allow online intermediaries to foster public discourse and creativity. Safe harbor status should be easy for intermediaries of all sizes to attain and maintain.

Friday: Filters. Whether as a result of corporate pressure or regulation, overreliance on automated filters to patrol copyright infringement presents a danger to free expression on the Internet.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on Twitter. As we said last year, and the year before that, if you too stand behind these principles, please join us by supporting them, sharing them, and telling your lawmakers you want to see copyright law reflect them.

The Internet is Facing a Catastrophe For Free Expression and Competition: You Could Tip The Balance (Mon, 14 Jan 2019)
Update (Monday, January 14, 2019): We've added contact details and an action for Belgium.
Update (Tuesday, January 15, 2019): We've added contact details and an action for Czechia.

Sweden: Take Action Here. Germany: Take Action Here. Luxembourg: Take Action Here. Poland: Take Action Here. Belgium: Take Action Here. Czechia: Take Action Here.

The new EU Copyright Directive is progressing at an alarming rate. This week, the EU is asking its member-states to approve new negotiating positions for the final language. Once they get it, they're planning to hold a final vote before pushing this drastic, radical new law onto 28 countries and 500,000,000 people. While the majority of the rules in the new Directive are inoffensive updates to European copyright law, two parts of the Directive pose a dire threat to the global Internet:

Article 11: A proposal to make platforms pay for linking to news sites by creating a non-waivable right to license any links from for-profit services (where those links include more than a word or two from the story or its headline). Article 11 fails to define “news sites,” “commercial platforms” and “links,” which invites 28 European nations to create 28 mutually exclusive, contradictory licensing regimes. Additionally, the fact that the “linking right” can’t be waived means that open-access, public-interest, nonprofit and Creative Commons news sites can’t opt out of the system.

Article 13: A proposal to end the appearance of unlicensed copyrighted works on big user-generated content platforms, even for an instant. Initially, this included an explicit mandate to develop “filters” that would examine every social media posting by everyone in the world and check whether it matched entries in an open, crowdsourced database of supposedly copyrighted materials. In its current form, the rule says that filters “should be avoided” but does not explain how billions of social media posts, videos, audio files, and blog posts should be monitored for infringement without automated filtering systems.

Taken together, these two rules will subject huge swaths of online expression to interception and arbitrary censorship, and give the largest news companies in Europe the power to decide who can discuss and criticise their reporting, undermining public-interest, open-access journalism.

The Directive is now in the hands of the European member-states. National ministers are going to decide whether or not Europe becomes a global exporter of censorship and surveillance. Your voice counts: when you contact your ministers, you are speaking as one citizen to another, in a national context, about issues of import to you and your neighbours. Your national government depends on your goodwill to win the votes to continue its mandate. This is a rare moment in European lawmaking when local connections from citizens matter more than well-funded, international corporations.

If you live in Sweden, Germany, Luxembourg, Poland, Belgium, or Czechia: please contact your ministers to convey your concern about Articles 13 and 11. We’ve set up action pages to reach the right people, but you should tailor your message to describe who you are and what worries you. Your country has previously expressed concerns about Articles 13 and 11, and may still oppose them.

Sweden: Take Action Here. Germany: Take Action Here. Luxembourg: Take Action Here. Poland: Take Action Here. Belgium: Take Action Here. Czechia: Take Action Here.
If you live in the rest of Europe: please contact the ministers working on the Copyright in the Digital Single Market directive. We'll update this page with more information as we get it.

Bird Rides Inc. Demands Takedown of News Report on Lawful Re-use of Scooters (Fri, 11 Jan 2019)
Every now and then we have to remind someone that it's not illegal for people to report facts that they dislike. This time, the offender is electric scooter rental company Bird Rides, Inc. Electric scooters have swamped a number of cities across the US, with many of the scooters carelessly discarded in public spaces. Bird, though, has pioneered a new way to pollute the commons: sending a meritless takedown letter to a journalist covering the issue. The company cites the Digital Millennium Copyright Act and implies that even writing about the issue could be illegal. It's not.

Bird sent a "Notice of Claimed Infringement" over this article on Boing Boing, one of the Internet’s leading sources of news and commentary on social, educational, political, artistic and scientific developments in popular culture. The article reports on the fact that large numbers of Bird scooters are winding up in impound lots, and that it's possible to lawfully purchase these scooters when cities auction them off, and then to lawfully modify those scooters so they work without the Bird app.

The letter is necessarily vague about exactly how the post infringed any of Bird's rights, and with good reason: the post does no such thing, as we explain in a letter on behalf of Happy Mutants LLC, which owns and operates Boing Boing. The post reports on lawful activity, nothing more. In fact, the First Amendment would have protected it even if it had reported on illegal conduct or advocated for people to break the law. (For instance, a person might lawfully advocate that an electric scooter startup should violate local parking ordinances. Hypothetically.) So, in a sense, it doesn't matter whether Bird is right or wrong when it claims that it's illegal to convert a Bird scooter to a personal scooter. Either way, Boing Boing was free to report on it.

But here's the fun part (we may have a strange idea of fun): Bird cites Section 1201 of the Digital Millennium Copyright Act. That's the section that prohibits you from getting around technologies that lock you out of accessing copyrighted works, like software, even when you own the device that software is on. It also prohibits various forms of 'trafficking' in products for circumventing those kinds of technological restrictions.

Bird probably did not know that the journalist who wrote the post, Cory Doctorow, has been reporting on and challenging this overly broad law and its harmful consequences, both at Boing Boing and as a Special Adviser on EFF’s Apollo 1201 project, for years. They likely also didn’t know EFF has launched litigation to invalidate the law in its entirety and, in the meantime, has successfully pushed for numerous exemptions to the law — including one that specifically permits repair and modification of motorized land vehicles (for instance, say, an electric scooter).

As fun as it might have been (again... fun for us) to have a legal fight about the nuances of Section 1201, it's pretty clear here that there's no claim to be made. The fundamental reason Bird doesn't have a claim is that Section 1201's ban on trafficking concerns products that circumvent either access controls or use controls on a copyrighted work. To simplify a bit, it concerns a device that cracks a technological measure in order to access or make an infringing use of a copyrighted work.
To turn a Bird scooter into a regular personal scooter, you just open it up and replace the motherboard that contains Bird code with a different motherboard (you could even use the official stock motherboard for this model of scooter, the Xiaomi Mijia m365). You literally throw away the copy of the Bird code residing on the unwanted motherboard, rather than accessing or copying or modifying it. We have long had serious concerns that Section 1201 can be abused to block repair and tinkering. But while the law is overbroad, it is not so broad that it prohibits you from simply replacing a motherboard. In sum, Bird sent a "Notice of Claimed Infringement" to a news site for reporting about people doing legal things that Bird does not like. If Bird plans to send letters like this to every outlet that does the same, its legal team faces a monumental task—almost as vast as collecting the scooters littering parks and sidewalks.

(Don't) Return to Sender: How to Protect Yourself From Email Tracking (Thu, 10 Jan 2019)
Tracking is everywhere on the Internet. Over the past year, a drumbeat of tech-industry scandals has acclimated users to the sheer number of ways that personal information can be collected and leaked. As a result, it might not come as a surprise to learn that emails, too, can be vectors for tracking. Email senders can monitor who opens which emails, when, and what device they use to do it. If you work for a business or a non-profit that sends mass emails, maybe you’ve used tools to perform this kind of tracking before. Even if you have used them, this might be the first you’ve heard of it — because unfortunately, in email marketing software, tracking is often enabled by default.

There are a lot of different ways to track email, and different techniques can lie anywhere on the spectrum from marginally acceptable to atrocious. Responsible tracking should aggregate a minimal amount of anonymous data, similar to page hits: enough to let the sender get a sense of how well their campaign is doing without invading users’ privacy. Email tracking should always be disclosed up-front, and users should have a clear and easy way to opt out if they choose to. Lastly, organizations that track should minimize and delete user data as soon as possible, according to an easy-to-understand data retention and privacy policy.

Unfortunately, that’s often not how it happens. Many senders, including the U.S. government, do email tracking clumsily. Bad email tracking is secretive, pervasive, and leaky. It can expose sensitive information to third parties and sometimes even to others on your network. According to a comprehensive study from 2017, 70% of mailing list emails contain tracking resources. To make matters worse, around 30% of mailing list emails also leak your email address to third-party trackers when you open them. And although it wasn’t mentioned in the paper, a quick survey we did of the same email dataset they used reveals that around 80% of these links were over insecure, unencrypted HTTP.

In addition, several of these third-party email tracking technologies will try to share and correlate your email address across different emails that you open, and even across different websites that you visit, further shaping your invisible online profile. And since people often access their email from different devices, email address leaks allow trackers (and often network observers) to correlate your identity across devices.

It doesn’t have to be that way. For users, there are usually ways to “opt out” of tracking within your email client of choice. For mail client developers, including a few simple features can help protect your users’ privacy by default. And if you’re at an organization that does perform tracking, you can take a proactive approach to respecting user privacy and consent. Here are some friendly suggestions to help make tracking less pervasive, less creepy, and less leaky.

How can users protect themselves?

There are many popular email clients, which behave differently and have different settings, so protections may vary. Here are some general guidelines for improving your email privacy and security hygiene.

Limit your email client’s image/resource loading. A common tracking practice includes embedded links to “pixels” or other pieces of content that are hosted on a remote server. When your client tries to load the content, it sends out a request that allows you to be tracked. Blocking third-party resources limits the ability of email senders to track when you read or open emails. The sketch below illustrates the basic mechanism.
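To make that mechanism concrete, here is a minimal Python sketch of how a sender might construct such a pixel. Every name, domain, and parameter below is invented for illustration; real tracking services encode their data differently, but the principle is the same: loading the image hands the sender an identifier.

```python
import base64

# A hypothetical example of what a sender might embed in an HTML email.
# The "image" URL carries an identifier for the recipient and the message;
# the domain and field names here are made up for illustration.
TRACKER = "https://tracker.example.com/open.gif"

def tracking_pixel(recipient: str, message_id: str) -> str:
    """Build an invisible 1x1 image tag whose URL identifies the reader."""
    token = base64.urlsafe_b64encode(
        f"to={recipient}&msg={message_id}".encode()
    ).decode()
    return f'<img src="{TRACKER}?id={token}" width="1" height="1" alt="">'

print(tracking_pixel("user@example.com", "newsletter-2019-01"))
# When an email client performs remote image loading on this "image,"
# the request to tracker.example.com reveals who opened which email,
# and when. Blocking remote image loading prevents that request entirely.
```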
Some clients, including Thunderbird and Outlook, have remote content loading disabled by default, and both Gmail and Apple Mail allow you to disable it by choice. If you need to view images in a particular email, you can selectively turn on this feature for that particular email, but be aware that this allows email-open trackers to work. For even more security, you can turn off HTML email completely. This will remove formatting from your emails, but it will completely stop any form of remote content tracking.

If you’re not sure how well your email client protects you, the Email Privacy Tester is a useful tool to check whether you’re vulnerable to a variety of different tracking techniques. For example, even though Gmail uses a proxy to serve images in emails, the privacy tester reveals that using Gmail won’t actually protect you from pixel tracking (though it will mask your IP address). Try using it to test each of your email clients, especially the one you use on your mobile phone.

Be careful when clicking links. Don’t click links in email unless you absolutely have to, and try to view the link URL beforehand. This is good practice in general to avoid security risks like phishing as well as privacy-invasive tracking.

If you use a webmail client, standard web hygiene techniques work well for email, too. To prevent email trackers from getting even more information about you, turn off third-party cookies in your browser and install a tracker-blocker like Privacy Badger. In addition, to prevent your email browsing behavior from being visible to ISPs and snoops on your network, limit your exposure to HTTP. You can use an extension like HTTPS Everywhere to block HTTP resources from loading by default.

How can email clients do more to protect their users?

Email clients should represent the interests of their users as they interact with the Internet. That includes using sensible protections by default and including strong privacy-preserving options for especially concerned users. If they have the resources, clients can proxy content that’s embedded in emails, like Gmail does. It’s not perfect, but it has some security and privacy benefits, like preventing HTTP requests from leaking onto the network, blocking cookies, and hiding IP address and User Agent information from the tracker.

If you’re a client developer, there’s even more that you can do. Tracking should be opt-in, not opt-out, so if you don’t already, turn off remote content loading for your users by default. At the very least, you can give your users the option to do this. Also, give users the ability to turn off HTML email. You can check for any further leaks on your client using the Email Privacy Tester. And even if your users regularly employ end-to-end encryption, remember that after decrypting an email, clients often render it as they would a regular one, so you’ll still need to think about these tracking protections.

How can email senders respect their readers?

The need for feedback on email campaigns drives the ubiquity of pixel and link tracking, and many of these techniques have been used for decades. But it’s unfortunately rare to see these tracking technologies implemented securely and responsibly. Here’s how to make sure the analytics tools on your email campaign respect and protect users’ privacy.

Rule #1: use TLS! An astounding number of link-tracking domains are served over HTTP, and many large email senders don’t use STARTTLS. Make sure your links are over HTTPS, and that your mail server supports outgoing STARTTLS.
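If you are unsure whether a given mail server advertises STARTTLS, a quick check is possible with Python's standard smtplib. This is only a sketch, under the assumption that you can reach the server on port 25; the hostname is a placeholder for your own mail server.

```python
import smtplib

# Connect to a mail server's SMTP port and ask whether it advertises
# the STARTTLS extension. "mail.example.com" is a placeholder.
with smtplib.SMTP("mail.example.com", 25, timeout=10) as server:
    server.ehlo()  # ask the server to list its supported extensions
    if server.has_extn("starttls"):
        print("Server advertises STARTTLS: mail can be encrypted in transit.")
    else:
        print("No STARTTLS: mail to this server travels unencrypted.")
```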
There’s no reason network eavesdroppers should know what mailing lists folks are subscribed to, when users open their emails, or their email-link browsing history.

Don’t obfuscate your links. The practice of obfuscating tracked links is especially dangerous, as it trains your readers to click unrecognizable links. This can lead users to click suspicious links from phishers. 91% of cyberattacks start with a phishing email, and normalizing suspicious-looking links in email makes life easier for phishers.

Lastly, and most importantly, think before you track. Who are you exposing your readers’ private information to? Do you really need to embed their email addresses in your URLs? What privacy cost do those “insightful analytics” come at? Nothing about counting the number of visitors coming to your site via email is inherently bad. But do you really need to store exactly who clicked which link from which email? Campaigns can get quite a bit of signal without invading their users’ privacy and trust just from aggregated counting, rather than individualized tracking of every user’s interaction; a minimal sketch of that approach follows at the end of this post. And think twice before hiring a third-party service to do your tracking for you. Read their privacy policy, and make sure you’re not selling out your users’ data for a few useful numbers.

Email sanitation, security, and privacy is a team effort. Stay vigilant, and keep good email hygiene!
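And here is that promised sketch of the aggregate-only approach. The log format and field names are invented for illustration; the point is simply that the counter never records who clicked.

```python
from collections import Counter

# Hypothetical server-side log of link clicks from an email campaign.
# Privacy-respecting analytics: count link popularity in aggregate,
# without storing who clicked or any per-user identifier.
click_log = [
    {"campaign": "jan-newsletter", "link": "/blog/post-1"},
    {"campaign": "jan-newsletter", "link": "/blog/post-2"},
    {"campaign": "jan-newsletter", "link": "/blog/post-1"},
]

clicks_per_link = Counter(event["link"] for event in click_log)
for link, count in clicks_per_link.most_common():
    print(f"{link}: {count} clicks")
# The sender learns which content resonates, and no individual
# reader's behavior is recorded or correlated across emails.
```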

The Federal Government Offers a Case Study in Bad Email Tracking (Thu, 10 Jan 2019)
The U.S. government sends a lot of emails. Like any large, modern organization, it wants to “optimize” for “user engagement” using “analytics” and “big data.” In practice, that means tracking the people it communicates with—secretly, thoroughly, and often, insecurely.

Granicus is a third-party contractor that builds communication tools to help governments engage constituents online. The company offers services for social media, websites, and email, and it boasts of serving over 4,000 federal, state, and local agencies, from the city of Oakland to the U.S. Veterans Administration to HealthCare.gov. In 2016, the company merged with GovDelivery, another government-services provider. It appears that parts of the federal government have been working with GovDelivery, now Granicus, since at least 2012.

Last October, we took a closer look at some of the emails sent with Granicus’s platform, specifically those from the whitehouse.gov mailing list, which used the GovDelivery email service until very recently. (The White House changed its email management platform shortly after we began our investigation for this article. However, several other agencies and many state and city governments still use Granicus as their mailing list distributor.)

The emails we looked at, sent to subscribers of the whitehouse.gov email list in October 2018, happen to be an exemplary case study of everything wrong with the email tracking landscape, from unintentional and intentional privacy leaks to a failure to adhere to basic security standards. Not only does Granicus know exactly who is opening which email and when, but in the emails we studied, all of that information is sent without encryption by default, so network observers can see it too. Ironically, even the White House’s Privacy Policy is hidden behind one of the tracking links.

How does it work?

We inspected an email from the White House’s “1600 Daily” newsletter sent October 22, 2018. The email uses two common methods to monitor user behavior: pixel tracking and link tracking. We’ll break them down one at a time, using examples from the email itself to illustrate how those methods work in the common case. In addition, we’ve written guidelines for users, email clients, and email providers to protect against these techniques.

Pixel Tracking

Today, almost all emails are sent and read in HTML. An HTML email is treated much like a static web page, with text formatting, custom fonts, and, most importantly, embedded images. When you open an email, your computer or phone needs to load each image from the Internet, which means, depending on the email client you use, your device might send a request to the server that hosts the image. In emails, a tracking pixel is an “image” included for the purpose of tracking you. It’s usually small (1 by 1 pixel) and invisible. Trackers will often tag a bunch of extra identifying information onto the end of the “image” URL. For instance, they often include information about which email was opened and which email address it was originally sent to.

In the White House newsletter I received, the tracking pixel was an invisible image whose URL points to links.govdelivery.com, a domain owned by Granicus. When you open the email, your email client (like Thunderbird or Apple Mail) might send a request to that URL. The biggest part of the URL is the enid parameter, a base64-encoded string.
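Base64 is an encoding, not encryption: anyone who can see such a URL can reverse it with a single function call. Here is a minimal sketch of the round trip, using a payload and domain we invented for illustration (real enid values are specific to each recipient and message, and their internal format may differ):

```python
import base64
from urllib.parse import parse_qs, urlparse

# Construct a tracking URL in the same general style. The payload and
# domain below are made up; real enid contents are not public.
payload = "recipient=user@example.com&mailing=1600-daily-2018-10-22"
enid = base64.urlsafe_b64encode(payload.encode()).decode()
url = f"http://links.tracker.example.com/track?enid={enid}"

# Anyone who observes the URL (the tracker, or a network eavesdropper
# watching plain HTTP) can recover the original data trivially:
observed = parse_qs(urlparse(url).query)["enid"][0]
print(base64.urlsafe_b64decode(observed).decode())
# -> recipient=user@example.com&mailing=1600-daily-2018-10-22
```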
Decoding my email’s enid reveals the information that’s sent to the third party: every time I open this email, my device sends Granicus my email address and a unique identifier for the email that I opened. Granicus knows exactly who I am, which email I’m reading, and when I opened it—and potentially, so might a network observer.

Link Shims

The email also uses link shimming, the practice of obfuscating URLs in emails for tracking purposes, to track which links you click on. (Link shimming, and link tracking more generally, is commonly used on the web by search engines and social media companies.) A sample link from the newsletter renders in your email client as ordinary link text, but inspecting the source code shows that it actually points to a long govdelivery.com URL. The first part of that URL is nearly identical to the tracking pixel URL we saw before; a redirect parameter points to the article you intended to click; and UTM parameters allow whitehouse.gov to collect more contextual information about your click. That mess will take you on a brief visit to govdelivery.com before being redirected to whitehouse.gov, the location of the real press release. Once again, the redirect sends Granicus the enid data, including information about who you are and where you’re coming from.

These data, combined with the pixel data from above, allow Granicus to offer “subscriber segmentation” services to its customers (i.e., the government). According to its website, customers can filter individual subscribers by their “targeted message” activity, including whether they received, opened, or clicked a specific email message within a given time frame.

Privacy or Security: Choose None

It’s frustrating enough that the government has been using a third-party service to surreptitiously monitor who opens the emails it sends, what they click on, when, and from where. What’s worse, in several of the emails we looked at, the tracking is performed over an unencrypted connection using HTTP. This means that all the requests made to Granicus are legible to anyone who could eavesdrop on your connection. If you open one of the emails on unsecured WiFi at an airport or a coffee shop, anyone could be able to monitor your activity and collect your email address. Perhaps more concerning, using an unencrypted connection allows Internet service providers (ISPs) to collect that sensitive information no matter where you are. Thanks to recent deregulation, ISPs are now legally permitted to sell data about their customers—which could include your email address, political preferences, and information about which government agencies you interact with. Normally, HTTPS protects sensitive information from ISPs’ prying eyes. But in this case, not only can Granicus see which user clicks on which links; anyone on the network, including the ISP, can too.

The practice of link shimming poses a subtle security risk as well: it makes users more susceptible to phishing. If users are led to click links that look like garbage, they are much more likely to be duped into clicking links from less-than-reputable sources. 91% of cyberattacks start with a phishing email, including many attacks on the government itself. That means that training users to trust insecure, illegible links to unrecognizable domains is a serious problem.

To top it all off, Granicus’s emails are often sent without STARTTLS, a basic protection against passive dragnet surveillance.
That means the emails travel around the Internet backbone without encryption, which is just another channel where data about you and your interests may be exposed to snoops on the network. (We recently launched STARTTLS Everywhere to make email delivery more secure.)

Conflicting Reports

After beginning our investigation on October 22, we reached out to both the White House and Granicus for comment regarding their privacy and security practices. The White House didn’t reply, but we did receive a response from Granicus Chief Product Officer Bob Ainsbury:

“The private information of both Granicus govDelivery users and govDelivery subscribers is secure. Any claim to the contrary is a very serious allegation and completely inaccurate. ... Further, email addresses cannot be identified through HTTP connections. All HTTP requests made for the purposes of tracking are transmitted in unrecognizable data and do not allow users’ private information to be compromised at any time.”

The claim that the HTTP requests are secure and “do not allow users’ private information to be compromised” is, as we’ve shown above, demonstrably false. The data Granicus transmits are not encrypted, but encoded in base64, which can be decoded by literally anyone. Furthermore, the company claimed that:

“Granicus govDelivery is one of the few email platform providers that has adopted the highest level of data security standards necessary to deliver digital communications for government agencies. That security standard is FedRAMP, which requires platform providers to: encrypt all traffic with FIPS 140-2 validated encryption modules, utilizing TLS 1.1 or higher ...”

Its continued use of HTTP for email tracking and its failure to support STARTTLS for in-transit email encryption indicate that Granicus has not adopted encryption anywhere near “across the board” when it comes to users’ private information. In that context, the reference to “utilizing TLS 1.1” for “all traffic” is baffling, as we have seen evidence the company continues to use unencrypted HTTP for many of its emails.

Schrödinger’s Trackers

In a strange coincidence, it appears that the White House’s newsletter, “1600 Daily,” ceased using Granicus as its service provider on October 30, 2018, two days before we reached out for comment. It now uses MailChimp for email analytics. MailChimp performs similar types of tracking, using invisible pixels to track email opens and link shims to track clicks, but the company does employ industry-standard security practices like HTTPS. The new tracking pixels are a little more compact, but just as potent: a tracking pixel from a more recent “1600 Daily” newsletter sends information to MailChimp’s list-manage.com domain over HTTPS, with a custom tracking string appended.

According to its privacy policy (https://www.whitehouse.gov/privacy-policy/), the White House still uses pixels and link shims to collect “automatically generated email data” from subscribers. Other government agencies still use Granicus, such as the Department of Veterans Affairs’ “My HealtheVet” newsletter, the Social Security Administration, and HealthCare.gov Alerts. These mailing lists all perform the same kinds of link shimming and pixel tracking we observed in the original White House emails. Some of the emails we've received from Granicus use HTTPS connections to perform tracking, but others still use insecure HTTP.
And the company still does not support outbound server-to-server email encryption with STARTTLS. Moreover, Granicus’s response, included in full below, shows that it doesn’t understand what “secure” means in the context of sensitive user data. Government agencies should be asking some hard questions about how they continue to handle our information.

Protect Your Users; Protect Yourself

Techniques like pixel and link tracking are extremely common and have been around for decades, and it’s unfortunately rare to see them used responsibly. If you’re a sender, we implore you to think before you track. Unfortunately, many federal agencies still use Granicus’s services, dubious security and all. These agencies should drop govDelivery in favor of more ethical, more secure analytics, and evaluate how much information they really need to collect to fulfill their missions. Although the White House is no longer using Granicus, it, too, performs extensive tracking on subscribers to its lists. And the only way it offers to opt out is to unsubscribe. As a user, there’s no foolproof way to opt out of leaky email tracking, but there are ways to practice good email hygiene and prevent most forms of it. At the end of the day, the most effective way to avoid the tracking is to follow the White House’s advice and unsubscribe. Just be aware that the “unsubscribe” link is tracked, too.

On November 1, 2018, we reached out to Granicus to request a comment on the company's use of email tracking in services to the U.S. government. The company's response, attributed to Bob Ainsbury, Chief Product Officer at Granicus, is included in its entirety here:

The private information of both Granicus govDelivery users and govDelivery subscribers is secure. Any claim to the contrary is a very serious allegation and completely inaccurate. Granicus govDelivery is one of the few email platform providers that has adopted the highest level of data security standards necessary to deliver digital communications for government agencies. That security standard is FedRAMP, which requires platform providers to:

- encrypt all traffic with FIPS 140-2 validated encryption modules, utilizing TLS 1.1 or higher
- provide two-factor authentication to all customers
- conduct monthly security scans, providing the results to the FedRAMP JAB for review on a monthly basis
- conduct an annual penetration test and audit of controls to ensure compliance.

Like the world’s other leading email platforms – including several other email systems used at the White House - we do use pixels to track open rates and link shims to track click rates. This is an industry standard that has been in use for over 20 years. It’s used by virtually every major commercial and public sector communicator to track simple email opens and link clicks. It is worth noting, that Granicus govDelivery is configurable, allowing customers to turn off activity capture. Further, email addresses cannot be identified through HTTP connections. All HTTP requests made for the purposes of tracking are transmitted in unrecognizable data and do not allow users’ private information to be compromised at any time. Granicus is committed to the privacy and security for over 4,000 government clients and the citizens who subscribe to receive digital messages using our software, which is why we’ve made the investment to remain FedRAMP, ISO 27001 and GDPR compliant. Privacy and security are our highest and most important priorities at Granicus.

Apple Says Patent Troll Case Should Be Dismissed Because [REDACTED] but the Public Should Know Why (Wed, 09 Jan 2019)
At EFF, we review court dockets to monitor the conduct of the most active patent trolls. But when court records are redacted or sealed, it can be impossible for EFF and other members of the public to know what is going on. Today we filed a motion to intervene in Uniloc v. Apple, seeking public access to key briefing about whether Uniloc should be able to bring the case at all. Uniloc is one of the most active patent trolls in the world, and filed more than 170 lawsuits in 2018. It is the patent owner that sued Austin Meyer for offering his X-Plane flight simulator on app stores. That suit led to a documentary called The Patent Scam (available on Amazon Prime). Since then, Uniloc has been a big purchaser of patents, and various Uniloc entities have filed hundreds of patent suits. In 2017, Uniloc filed a wave of patent litigation against Apple and other defendants. In some of those cases, Apple has moved to dismiss on the basis that Uniloc lacks standing. Apple’s motion to dismiss was heavily redacted, but it appears to relate to deals Uniloc has made with Fortress Investment Group LLC. Apple seems to be arguing that Uniloc and Fortress divided rights in the underlying patents in a way that means Uniloc entities no longer had a legal right to sue for infringement. We say it “appears” that Apple is making these arguments because the briefing is redacted to an almost comical level. Representative pages from the “background” section of Apple’s motion to dismiss, and the opening of the section where Apple argues that Uniloc lacks standing, are blacked out almost entirely, and the legal argument continues with several more pages of black highlighter. Uniloc has insisted on these redactions, claiming that they are needed to protect its confidential business information. The extensive redactions in this case leave the public with no way to understand the dispute. Our motion to intervene explains that this is a violation of the public’s common law and First Amendment right of access to courts. Uniloc is opposing our motion, while Apple takes no position on it. Apple’s arguments on this matter will have an impact beyond one case. If Uniloc’s deal with Fortress means that it isn’t allowed to sue, dozens of other companies could be saved from litigation. The ruling could also impact other patent trolls that play shell games with patent ownership rights, such as Intellectual Ventures. Excessive secrecy is an ongoing problem in patent litigation. Unfortunately, when the parties agree to keep things secret, courts often allow them to seal documents that should be public. That means someone else has to step in to advocate for access to courts. EFF has intervened in other patent cases to get access to court records. The public has a right to know what’s going on between Uniloc and Apple. It’s a patent dispute that could affect the cost and operations of the millions of devices sold by Apple. That’s why we’ll continue to work for transparency, in this patent case and others. Related Cases: Patent Litigation Transparency

An Update on Facebook’s Smear Campaign Against Critics (Tue, 08 Jan 2019)
Back in late November, the New York Times revealed that Facebook had paid a corporate PR firm called Definers Public Affairs to develop and peddle a smear campaign aimed at some of its Open Society Foundations-funded critics, including members of the Freedom From Facebook coalition. In response, we asked three basic questions of Facebook, all aimed at the same issue: what did Facebook do with the smear campaign information on the Facebook platform itself? Did Facebook promote the smears on its platform? Did Facebook develop different versions to target different audiences, including Congressional staffers and other influencers, as it does for key advertising customers? And, most important, what is the boundary between Facebook’s own policy interests and the operation of the platform? Just before the holiday break, Facebook answered our questions in a telephone call with two of its legal and communications staff. The short answer: Facebook asserts that it did not help promote Definers’s messages on its own platforms. Facebook said it does not allow its own policy work to be promoted on its platforms (for example, through the ads you see or the posts that show up in your Newsfeed) without clear and unequivocal notice to its users. This is good as far as it goes. But Facebook must do much more if it wants to regain any of the trust it lost from this episode, especially given the dangerous waters that it chose to swim in. First, while Facebook said this time that it did not use its platform to promote its own policy positions, Facebook needs to state publicly that it will not use its own platform to, for example, secretly further attacks against critics. This should take the form of a clear, written, publicly available policy that Facebook will not use the Facebook platform for its own policy purposes without clear notice. Facebook is of course entitled to take policy positions and even to use its platform to promote them, but it must be crystal clear when it’s doing so. Facebook has publicly said it was reviewing its policies and procedures concerning its communications work, including with external firms, and that this process is being led by Nick Clegg. A rule ensuring strong separation and transparency requirements—with serious consequences for violations—should be part of that process. Second, Facebook still needs to find out what actually happened with this information. We know that Facebook’s algorithms often promote controversial and divisive content, and that both Definers and its affiliate the NTK Network have Facebook presences. Facebook might not have needed to intentionally promote this material for it to have circulated widely on Facebook. Facebook needs to investigate how and where this content spread on the platform, and tell its user base. Finally, Facebook must take steps to ensure that it does not participate in efforts to undermine civil society groups around the world. It is certainly reasonable for Facebook to do research into its political opponents, including the Freedom from Facebook coalition. But Facebook went over the line when it tried to push the story that these groups’ funding from George Soros meant they were not really grassroots, and that, through the philanthropies he funds, Soros might have been engaged in financial manipulation aimed at Facebook’s stock price.
At best, this betrays a fundamental misunderstanding at Facebook about how nonprofit funding and philanthropy work. More likely, since we know that folks at Facebook know better than this, this was a cynical play into larger, ugly efforts to undermine nonprofit advocacy and the role of civil society in public debate. And that’s where Facebook’s strategy here is particularly dangerous. Delegitimizing civil society based upon attacks on philanthropic funding sources has long been a key part of the authoritarian playbook. The United Nations directly recognized this problem in 2013. Attempts to cut off foreign funding for civil society groups, or what Kenneth Roth of Human Rights Watch has called “The Great Civil Society Choke-Out,” have even been held to violate international law. And it's not just a historical problem.  This is the strategy used today by those seeking to undermine civil society, including the increasingly authoritarian governments in Egypt, Hungary, Macedonia, and Russia. Facebook should ask whether that's the company it wants to keep.

You Should Have the Right to Sue Companies That Violate Your Privacy (Tue, 08 Jan 2019)
It is not enough for government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary for these laws to have bite, to ensure companies do not ignore them. The best way to do so is to empower ordinary consumers to bring their own lawsuits against the companies that violate their privacy rights. Such “private rights of action” are among EFF’s highest priorities in any data privacy legislation. For example, while there is a lot to like about the new California Consumer Privacy Act (A.B. 375 and S.B. 1121), a significant flaw is its lack of a private right of action (except as to some kinds of data breaches). We will work this year to amend the CCPA to add consumer enforcement. The California Attorney General, who has the primary duty to enforce the CCPA, supports consumer enforcement, explaining: The lack of a private right of action, which would provide a critical adjunct to governmental enforcement, will substantially increase the OAG’s need for new enforcement resources. I urge you to provide consumers with a private right of action under the CCPA. Likewise, when EFF reviews the many federal data privacy bills that have circulated since the Cambridge Analytica scandal first broke earlier this year, one of our primary goals is to ensure that these bills include a private right of action. (We also work to ensure that any federal data privacy bill does not preempt stronger state laws.) Consumer enforcement is part of EFF’s “bottom-up” approach to public policy. Ordinary technology users should have the power to decide for themselves whether to bring a lawsuit to enforce their statutory privacy rights. EFF itself has gone to court to enforce digital privacy statutes. We also have long advocated for private rights of action to be included in data privacy laws, among other kinds of laws. This is how legislators normally approach privacy laws. Many privacy statutes contain a private right of action, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, credit reporting, and cable subscriptions. So do many other kinds of laws that protect the public, including federal laws on clean water, employment discrimination, and access to public records. Enforcement by government officials is a start, but not enough by itself. Agencies may fail to enforce privacy laws due to lack of resources. For example, the ongoing federal budget impasse shut down the FTC’s investigation of Facebook’s data privacy practices, including whether Facebook violated its 2011 consent order with the FTC. Agencies may likewise be hamstrung by competing priorities. Further, there is the inherent risk of regulatory capture, meaning undue influence over an enforcement agency by the companies supposedly subject to its oversight. The recent leashing of the federal Consumer Financial Protection Bureau is just one example of why we should broadly diffuse the power to enforce statutes that protect the public. In essence, if everyone has the power to protect their own privacy, then special interests will have a harder time using their influence to shield themselves from accountability. Looking ahead, a federal data privacy law might be passed with great fanfare. It might have all the substantive rules that EFF has long sought, including opt-in consent to collect or share a consumer’s data, a right to know what data was collected, data portability, and information fiduciary duties for the companies that we entrust with our data.
But without a strong enforcement regime, such a law will protect privacy in name only. The best privacy enforcers are ordinary people. Legislators should give them the power to defend their own privacy.

Give Up the Ghost: A Backdoor by Another Name (Mon, 07 Jan 2019)
This article was first published on Just Security. Government Communications Headquarters (GCHQ), the UK’s counterpart to the National Security Agency (NSA), has fired the latest shot in the crypto wars. In a post to Lawfare titled Principles for a More Informed Exceptional Access Debate, two of Britain’s top spooks introduced what they’re framing as a kinder, gentler approach to compromising the encryption that keeps us safe online. This new proposal from GCHQ—which we’ve heard rumors of for nearly a year—eschews one discredited method for breaking encryption (key escrow) and instead adopts a novel approach referred to as the “ghost.” But let’s be clear: regardless of what they’re calling it, GCHQ’s “ghost” is still a mandated encryption backdoor with all the security and privacy risks that come with it. Backdoors have a (well-deserved) horrible reputation in the security community. But that hasn’t dissuaded law enforcement officials around the world from demanding them for more than two decades. And while the Internet has become a more dangerous place for average users, making encryption more important than ever, this rhetoric has hardly changed. What has changed is the legal landscape governing encryption and law enforcement, at least in the UK. 2016 saw the passage of the Investigatory Powers Act, which gives the UK the legal ability to order a company like Apple or Facebook to tamper with security features in its products—while prohibiting the company from telling the public about it. As far as is publicly known, the UK has not attempted to employ the provisions of the Investigatory Powers Act to compromise the security of the products we use. Yet. But GCHQ’s Lawfare piece previews the course that the agency is likely to take. The authors lay out six “principles” for an informed debate, and they sound pretty noncontroversial:

1. Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there’s a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorisation.
2. Investigative tradecraft has to evolve with technology.
3. Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.
4. Targeted exceptional access capabilities should not give governments unfettered access to user data.
5. Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
6. Transparency is essential.

So far so good. I absolutely agree that law enforcement should only act where there’s a legitimate need and only when authorized by a court, in a way that evolves with the tech, that doesn’t have unrealistic expectations, that doesn’t enable mass surveillance, that doesn’t undermine the public trust, and that is transparent. But unfortunately, the authors fail to apply the principles so carefully laid out to the problem at hand. Instead, they’re proposing a way of undermining end-to-end encryption using a technique that the community has started calling the “ghost.” Here’s how the post describes it: It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call.
You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have. Applying this idea to WhatsApp, it would mean that—upon receiving a court order—the company would be required to convert a 1-on-1 conversation into a group chat, with the government as the third member of the chat. But that’s not all. In WhatsApp’s UX, users can verify the security of a conversation by comparing “security codes” within the app. So for the ghost to work, there would have to be a way of forcing both users’ clients to lie to them by showing a falsified security code, as well as suppressing any notification that the conversation’s keys had changed. Put differently, if GCHQ’s proposal went into effect, consumers could never again trust the claims that our software makes about what it’s doing to protect us. The authors of the Lawfare piece go out of their way to claim that they are “not talking about weakening encryption or defeating the end-to-end nature of the service.” Hogwash. They’re talking about adding a “feature” that would require the user’s device to selectively lie about whether it’s even employing end-to-end encryption, or whether it’s leaking the conversation content to a third (secret) party. Is the security code displayed by your device a mathematical representation of the two keys involved, or is it a straight-up lie? Furthermore, what’s to guarantee that the method used by governments to insert the “ghost” key into a conversation without alerting the users won’t be exploited by bad actors? Despite the GCHQ authors’ claim, the ghost will require vendors to disable the very features that give our communications systems their security guarantees in a way that fundamentally changes the trust relationship between a service provider and its users. Software and hardware companies will never be able to convincingly claim that they are being honest about what their applications and tools are doing, and users will have no good reason to believe them if they try. And, as we’ve already seen, GCHQ will not be the only agency in the world demanding such extraordinary access to billions of users’ software. Australia was quick to follow the UK’s lead, and we can expect to see similar demands, from Brazil and the European Union to Russia and China. (Note that this proposal would be unconstitutional were it proposed in the United States, which has strong protections against the government forcing actors to speak or lie on its behalf.) The “ghost” proposal violates the six “principles” in other ways, too. Instead of asking investigative tradecraft to evolve with technology, it’s asking technology to build investigative tradecraft in from the ground floor. Instead of targeted exceptional access, it’s asking companies to put a dormant wiretap in every single user’s pocket, just waiting to be activated.
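To make the security-code point concrete, here is a rough sketch of how a safety-number-style code can commit to every key in a conversation. This is an illustration with made-up key values, not any messenger’s actual construction; real apps derive their codes from identity keys in a more elaborate way.

# Sketch: why a hidden "ghost" participant is detectable in principle.
# A safety-number-style code is derived from all identity keys in the
# session, so silently adding a key must change the displayed code.
import hashlib

def security_code(*identity_keys: bytes) -> str:
    digest = hashlib.sha256(b"".join(sorted(identity_keys))).hexdigest()
    # Display as short groups, the way messaging apps render such codes.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice, bob = b"alice-identity-key", b"bob-identity-key"
ghost = b"law-enforcement-key"

print(security_code(alice, bob))         # the code both users verify today
print(security_code(alice, bob, ghost))  # differs once a ghost key is added

The only way to keep the displayed code stable while a ghost key is present is for the client to show a value it knows to be false—precisely the lie described above.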
We must reject GCHQ’s newest “ghost” proposal for what it is: a mandated encryption backdoor that weakens the security properties of encrypted messaging systems and fundamentally compromises user trust. GCHQ needs to give up the ghost. It’s just another word for an encryption backdoor.

From Encrypting the Web to Encrypting the Net: A Technical Deep Dive on Using Certbot to Secure your Mailserver (Mon, 07 Jan 2019)
We’ve come a long way since we launched Encrypt the Web, our initiative to onboard the World Wide Web to HTTPS. Not only has Let’s Encrypt issued over 380 million certificates, but nearly 85% of page loads in the United States are now over HTTPS, and both figures are still on an upward trajectory. However, TLS, the technology that helps to secure HTTP connections, can and should be used to protect all Internet communications—not just the HTTP protocol used to fetch webpages. Though HTTP/S makes up the majority of Internet traffic, there are other network protocols out there that are extremely important to secure. The Internet’s address book, file-sharing between computers, and email don’t use HTTP; they use other communication protocols which are also insecure by default and can benefit from TLS as well, to varying degrees of success. Since most of Certbot’s users are website owners, most of Certbot’s documentation (and our own TLS messaging) is geared towards this demographic. The “Software” section of Certbot’s instruction generator only lists popular webserver software and reverse proxies. But we’ve decided to expand our mission—from Encrypting the Web to Encrypting the Internet. And we’re tackling SMTP, the protocol that servers use to send email, next! With the most recent release of Certbot v0.29.1, we’ve added some features which make it much easier to use with both Sendmail and Exim. In this guide, we’ll explain how to use Certbot and Let’s Encrypt if you’re trying to secure a mailserver (or actually, anything that isn’t a webserver).

Brief background: How does Certbot work?

Let’s Encrypt is a Certificate Authority, which issues certificates, and Certbot is a piece of software you run on your server that requests them for you. Certbot makes these requests to Let’s Encrypt’s servers using a standardized protocol called ACME. As part of the ACME protocol, Let’s Encrypt will issue a “challenge” to your server, asking it to prove control over the domain you’re trying to get a certificate for. The most common way to do this requires your server to use port 80 to serve a file with a particular set of contents.

Obtaining and renewing your TLS Certificate

Since the most common ACME challenge that Certbot performs is over port 80, much of the complexity in Certbot’s most popular webserver plugins (namely, Apache and Nginx) exists so that website owners can obtain and renew certificates while still serving content from the same port 80 without experiencing downtime. If you’re running a mailserver, you might not have a complex service competing for port 80 on your machine, so you don’t need all these bells and whistles. If you do have a webserver running on port 80, you can supply a webroot directory for Certbot to use instead. Either way, Certbot is still easy to use! First, use our instruction generator for the recommended way to install Certbot on your platform: choose “None of the above” in the software selector, and in the system selector, choose the closest match to the operating system where you’re running the mailserver. Then, follow the instructions for running Certbot with the --standalone flag, passing your mailserver’s hostname as the domain flag:

sudo certbot certonly --standalone -d <mail.example.com>

[If you are running a webserver on the same machine, you’ll need to use our webroot plugin instead of the `standalone` flag!]
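For reference, a webroot invocation would look roughly like the following; the webroot path here is an assumption for illustration, so substitute the directory your webserver actually serves from. Certbot will write the challenge file under the webroot’s .well-known/acme-challenge/ directory for Let’s Encrypt to fetch.

sudo certbot certonly --webroot -w </var/www/html, for instance> -d <mail.example.com>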
Make sure to also work through the Automating renewal section, and set up a regular cronjob, systemd timer, or equivalent on your system to run certbot renew regularly.

A note about port 80

If you've got a firewall blocking port 80 on your machine, you'll have to punch a port-80-shaped hole for the duration of Certbot's challenge. You can do this by adding the following to /etc/letsencrypt/cli.ini:

pre-hook = <allow port 80 traffic through firewall>
post-hook = <disallow port 80 traffic through firewall>

There's more information about these renewal hooks in Certbot's documentation.

Installing the certificate

Once you’re done, your certificate and associated private key will be stored at:

/etc/letsencrypt/live/<HOSTNAME>/fullchain.pem
/etc/letsencrypt/live/<HOSTNAME>/privkey.pem

where <HOSTNAME> is the hostname for your mailserver; for instance, mail.example.com. Point your mailserver configuration files at these filepaths. You should be able to read up on your particular mailserver’s guide for setting up TLS; we’ve included some examples for popular email software below. If you have trouble at this step, or your documentation isn’t clear, ask for help! Some folks at the Let’s Encrypt Community Forums may be able to help you install your shiny new certificate.

Congratulations!

That’s it. You now have your very own certificate.

Guides for particular mailservers

The most recent release of Certbot (v0.29.1) provides some features that make it easier to use with some mailserver software, including Exim and Sendmail. In particular, you can set the group owner and group mode on the private key, which should be preserved on each renewal.

Postfix

Run the following commands:

sudo postconf -e smtpd_tls_cert_file=/etc/letsencrypt/live/<HOSTNAME>/fullchain.pem
sudo postconf -e smtpd_tls_key_file=/etc/letsencrypt/live/<HOSTNAME>/privkey.pem

Your new certificates should roll over in about a day; if you’d like this change to take place immediately, run sudo postfix reload to reload Postfix.

Dovecot

Most Linux distributions throw Dovecot SSL configs into /etc/dovecot/conf.d/10-ssl.conf. Edit the file to point to your new certificates:

ssl_cert = </etc/letsencrypt/live/<HOSTNAME>/fullchain.pem
ssl_key = </etc/letsencrypt/live/<HOSTNAME>/privkey.pem

Then, ask Dovecot to reload its configuration: sudo doveadm reload

Sendmail (Certbot 0.29.1+)

Check where the TLS options are set on your system. For instance, on Debian distributions, this is /etc/mail/tls/starttls.m4 by default. Set the following variables to point to your new certs:

define(`confSERVER_CERT', `/etc/letsencrypt/live/<HOSTNAME>/fullchain.pem')dnl
define(`confSERVER_KEY', `/etc/letsencrypt/live/<HOSTNAME>/privkey.pem')dnl
define(`confCACERT', `/etc/letsencrypt/live/<HOSTNAME>/chain.pem')dnl
define(`confCACERT_PATH', `/etc/ssl')dnl

As of Certbot 0.29.1, the permissions should be set properly on your private key. If your Certbot version is earlier than this, you’ll have to put chmod 600 /etc/letsencrypt/live/<HOSTNAME>/privkey.pem in a hook. Then re-compile your configs and restart sendmail:

make -C /etc/mail install && make -C /etc/mail restart

Exim (Certbot 0.29.1+)

Exim usually doesn’t run under root, but under a different user group. Set the permissions of the cert directory and key material, as well as the appropriate places in the `archive` directory.
HOSTNAME=<mail.example.com, for instance>
GROUPNAME=<Debian-exim, for instance>
DIRECTORIES="/etc/letsencrypt/live /etc/letsencrypt/live/$HOSTNAME /etc/letsencrypt/archive /etc/letsencrypt/archive/$HOSTNAME"
chmod 640 /etc/letsencrypt/live/$HOSTNAME/privkey.pem
chmod 750 $DIRECTORIES
chgrp $GROUPNAME $DIRECTORIES /etc/letsencrypt/live/$HOSTNAME/privkey.pem

As of Certbot 0.29.1, the permissions you set on your private key material should be preserved between renewals. If your Certbot version is earlier than this, you’ll have to put the above in a hook or your renewal cronjob. Then, set the following variables in your Exim configuration:

tls_certificate = /etc/letsencrypt/live/<mail.example.com>/fullchain.pem
tls_privatekey = /etc/letsencrypt/live/<mail.example.com>/privkey.pem

And restart Exim.

A note about older versions of Certbot

Both Sendmail and Exim have permissions requirements for the private key file that you give them. Versions of Certbot older than 0.29 may not preserve your keys’ permissions settings, so you’ll have to perform the permissioning adjustments mentioned above in a post hook or in your renewal cronjob.
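As a concrete sketch of that last point, a root crontab entry along these lines would renew certificates and re-apply Exim’s key permissions whenever a renewal actually happens. The group name, hostname, and reload command are examples; adjust them for your distribution and mailserver.

# Run as root; --post-hook fires only when a certificate was actually renewed.
0 3,15 * * * certbot renew --post-hook "chgrp Debian-exim /etc/letsencrypt/live/mail.example.com/privkey.pem && chmod 640 /etc/letsencrypt/live/mail.example.com/privkey.pem && systemctl reload exim4"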

Fair Use Continued to Bear the Weight of Protecting Speech and Innovation: 2018 in Review (Tue, 01 Jan 2019)
Fair use provides breathing space in copyright law, making sure that control of the right to copy and distribute doesn’t become control of the right to create and innovate. New technologies and services depend on the creation of multiple copies as a matter of course. At the same time, copyright terms cover works many decades old and copyrighted software appears in more and more devices. Taken together, these developments mean the potential reach of copyright may extend ever further. Fair use makes sure that the rights of the public expand at the same time. Unfortunately, the courts did not always let fair use play that role in 2018. On the plus side, the long-running litigation over the online publication of building codes and other standards that governments have adopted as binding law offered a prime example of fair use’s importance. As part of its mission of creating a comprehensive, fully accessible database of the law, Public Resource.org posts those binding standards on its website. Six industry groups, known as standards development organizations, accused PRO of copyright and trademark infringement for posting those standards online. In effect, they claimed the right to decide who can copy, share, and speak the law. In 2017, a federal district court ruled in favor of the standards organizations, and ordered PRO not to post the standards. In July 2018, the Court of Appeals for the D.C. Circuit reversed that decision, ruling that the district court did not properly consider copyright’s fair use doctrine. It rejected the injunction and sent the case back to district court for further consideration of the fair use factors at play. “[I]n many cases,” wrote the court, “it may be fair use for PRO to reproduce part or all of a technical standard in order to inform the public about the law.” This is an important ruling for the common-sense rights of all people. As Judge Katsas put it, the demands of the industry groups for exclusive control of the law “cannot be right: access to the law cannot be conditioned on the consent of a private party.” Based on that unanimous ruling, EFF is confident we can demonstrate that Public Resource’s posting of these standards is protected fair use. The law belongs to all of us. We all have a right to read, understand and share it. Unfortunately, fair use defenses were rejected in two other long-running cases. In February, the Second Circuit ruled against part of the service offered by TVEyes, which creates a text-searchable database of broadcast content from thousands of television and radio stations in the United States and worldwide. The service is used by exactly the people you’d think would need it: journalists, scholars, politicians, and so on, in order to monitor what’s being said in the media. If you’ve ever read a story where a public figure’s words are contrasted with contradictory things they said in the past, that story likely relied on TVEyes. In 2014, the district court held that a lot of what TVEyes does is fair use, but asked to hear more about customers’ ability to archive video clips, share links to video clips via email, download clips, and search for clips by date and time (as opposed to keywords). In 2015, the district court found the archiving feature to be a fair use, but found the other features to be “infringing.” On appeal, the Second Circuit reversed [PDF] the 2015 finding that the archiving was fair use and upheld the finding that the rest of TVEyes’ video features are not fair use.
That’s a hugely disappointing result, one that could lead to a decrease in news analysis and commentary. The following month, the Federal Circuit did its part to thwart innovation by holding that Google’s use, in its Android mobile operating system, of Java API labels infringed Oracle’s copyright. Rejecting the jury verdict, the district court’s holding, and established law, the appellate court held that Google’s use was not a fair use. This case should never have reached that stage. The works at the heart of the case are Java API labels that, as Google (and EFF) argued, should not even be eligible for copyright protection. Judge Alsup, who demonstrated some proficiency with programming Java in the first leg of the case, came to the same conclusion. But then it went to the Federal Circuit on appeal. The Federal Circuit, which usually focuses on patent issues, had jurisdiction because Oracle’s lawsuit originally contained a patent claim. Because the case was litigated in the Northern District of California, however, the Federal Circuit was supposed to apply Ninth Circuit law. Instead, it misread that law, reversed Judge Alsup’s ruling, and sent everyone back to San Francisco to litigate the question of whether Google’s use was a fair use. Here, again, the Ninth Circuit generally protects innovation by recognizing that copying of a functional work, like an API, is often necessary and appropriate in order to make something new. Consistent with that longstanding principle, when the jury was asked to evaluate the facts and apply the applicable legal standard, it found that Google’s use was fair. You might think that, having gone to the trouble of sending this case to a jury, the Federal Circuit would respect the jury’s decision. It did not. In the court’s view, the jury’s finding was simply advisory and incorrect as a matter of law. The court overruled it and found that Google’s use of a highly functional work was not fair. The Federal Circuit has upended decades of software industry practice and created legal uncertainty that will chill innovation. Google is now looking to the Supreme Court to help clean up this mess. Let’s hope the Court takes the case. As we move past 2018, we’ll work to make sure the common-sense idea that access to laws can’t require permission is protected. We’ll also continue to fight for fair use for tools that help with analysis and commentary. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

The Long Fight to Stop Mass Surveillance: 2018 in Review (Tue, 01 Jan 2019)
EFF is in it for the long run, especially in the important, hard fights for your rights. One of the longest-running fights in online civil liberties is over your right to have a private conversation over a digital network. Whether it’s for our intimate relationships, our healthcare, our associations and political organizing, or for our business relationships, we all need assurance that Big Brother isn’t accessing our data and communications. 2018 saw some creeping steps forward towards freeing our digital networks from mass surveillance, but the road remains long. EFF has fought for years to stop the NSA from tapping the Internet. In our flagship case Jewel v. NSA, the court went further this year than any has so far, requiring the government to answer basic questions about the scope of its tapping into the Internet backbone. The government was also required to provide information about several discontinued programs, including the mass telephone records collection and its collection of Internet metadata. Getting the Court to require this information from the government was a hard-fought victory, but unfortunately, the public won’t see those answers anytime soon. The court flatly rejected our request to get access to any of the new information it provided to the court in secret, even though that information was in response to our litigation discovery requests. Instead, the court again required us to demonstrate our legal standing, but without letting us use the new information it had received: we had to do so with only the publicly available evidence. At the same time, it allowed the government to move for summary judgment in its favor while relying on the secret evidence, putting us at a tremendous disadvantage. Nevertheless, we have persevered and presented four new witnesses to buttress our previous whistleblower and other public evidence. The government played even more tricks along the way. It refused to confirm that two important NSA Office of Inspector General reports were authentic. These reports, which are plainly what they purport to be, confirm that AT&T participated in mass surveillance. In order to ensure that the reports could be entered into evidence, EFF presented a declaration from Edward Snowden about one and a declaration from the New York Times’ general counsel about the other. As of December 7, 2018, the briefing in this unprecedented process is now complete and the Court has set a February 1, 2019, hearing date. One of the new bits of evidence we asked the court to take note of in Jewel came from the UK, where NGOs including Big Brother Watch have also been challenging surveillance by the NSA and its closest international partner, GCHQ. In September, after years of litigation, the European Court of Human Rights found that GCHQ’s mass surveillance programs violated important human rights principles of privacy and free speech. While the court in the Big Brother Watch case disappointingly left open the possibility that mass surveillance can be consistent with these human rights, it found that the safeguards intended to limit GCHQ’s activities were woefully inadequate. In addition to progress in Jewel, we can look forward to a number of other developments in 2019. On the litigation front, the appeals court in United States v.
Hasbajrami will likely issue its ruling on whether warrantless NSA surveillance violated a defendant’s Fourth Amendment rights. And Congress will once again take up Section 215, the law that NSA relied on to collect Americans’ phone call records for decades until it was reformed in 2015 as part of the USA Freedom Act. Section 215 is set to expire in December 2019, and we’ll be drawing on our experience with USA Freedom and the largely disappointing reforms to Section 702 at the beginning of this year. Ensuring that national security doesn’t become an excuse for ubiquitous untargeted surveillance isn’t easy. And it doesn’t move quickly. But in the end, it is one of the most enduring and important battles to ensure that freedom continues to exist in the digital world. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018. Related Cases: United States v. Hasbajrami; Jewel v. NSA

Data Privacy Scandals and Public Policy Picking Up Speed: 2018 in Review (Mon, 31 Dec 2018)
2018 may be remembered as the Year of the Facebook Scandal, and rightly so. The Cambridge Analytica fiasco, Mark Zuckerberg’s congressional testimony, a massive hack, and revelations of corporate smear campaigns were only the tip of the iceberg. But many more companies mishandled consumer privacy in 2018, too. From the Strava heatmap exposing military locations in January to the gigantic Marriott hack discovered in November, companies across Silicon Valley and beyond made big mistakes with consumer data this year—and lawmakers and the public have taken notice.

Tech Companies Putting Their Profits Before Your Privacy

The problem that came into focus in 2018 was not just hacks, breaches, or unauthorized bad guys breaking into systems. Instead, 2018’s worst privacy actors were the tech companies themselves, harvesting mountains of users’ data and employing flawed systems to use and share it. Facebook’s Cambridge Analytica scandal, for example, was the result of a feature of Facebook’s Graph API in 2014. In this case, Facebook was designed to collect as much user information as possible, and then share it indiscriminately with third-party developers. In a set of newly revealed emails from 2012, Mark Zuckerberg acknowledged that he knew “we leak info to developers,” but didn’t think there was enough “strategic risk” to do anything about it. Google’s social network didn’t perform much better. The final nails in the coffin of Google+ came with two API bugs: one quietly announced in October that exposed the personal information of half a million users, and an even bigger one revealed in December. Unlike Facebook’s Cambridge Analytica problems, these bugs were unintended engineering mistakes. But they exposed users to the same risk: the exposure of users’ personal information to third-party developers without anything resembling informed consent. 2018 also saw tech companies creep further into our wallets and our homes. Facebook and Google reportedly partnered with banks and bought financial data in secret, raising serious privacy concerns about giving companies access to yet another sensitive category of information. Big companies made big new investments in the Internet of Things, with Facebook introducing Portal and Google introducing the Home Hub, both designed to put their manufacturers at the center of home life. Companies also gave users new reasons to question the privacy limits on their home assistant devices. One couple’s Amazon Alexa silently recorded one of their conversations and sent it to a colleague. And Facebook was unable to clearly say whether data collected through Portal could or would be used for targeting ads. The torrent of data-related scandals this year drove new popular awareness of privacy issues. The Pew Research Center found that a whopping 74 percent of American adults had adjusted their Facebook privacy settings, taken a break from the platform, or deleted its app from their phones. More broadly, it also found that people are worried about their personal information online, and that the vast majority of American adults say it is important to them to be in control of who can get information about them.

User Privacy and the Law

Many legislators agree. 2018 was a blockbuster year for legislative action on privacy. On May 25, Europe’s General Data Protection Regulation (GDPR) took effect. The law includes some of the most ambitious privacy protections ever put into force.
However, the immediate impact of the regulation has been a mixed bag. On paper, GDPR prohibits tracking unless the user has opted in. In reality, users are being confronted with “consent management” pop-ups which enable “consent” with one click but erect an obstacle course for anyone who wants to refuse. A challenge moving forward is to successfully engineer meaningful systems of consent that are not stymied by evasive company systems that generate consent fatigue. Some sites, such as Facebook and Yahoo, simply deny access to users who don't agree to allow tracking, making a mockery of the idea of choice. Other organizations, like ICANN, made some privacy-positive improvements under GDPR, but did not take the opportunity to go far enough. And it remains to be seen whether the GDPR can curb the most entrenched and sophisticated trackers, including companies that currently use browser fingerprinting to sidestep users’ attempts to opt out. Worst of all, the government of Romania tried to use GDPR to force journalists to reveal their sources, underlining the importance of strong exceptions for newsgathering in any privacy legislation. In the United States, 2018 may go down as the year that government began to get serious about privacy. The deluge of privacy scandals, from Equifax to Cambridge Analytica, made room for serious privacy proposals on the legislative floor. Responding to the Equifax debacle, Vermont passed a trailblazing new law that begins to regulate data brokers. The California Consumer Privacy Act (CCPA), though far from perfect, is a good start—and there is a lot of work to be done before it goes into effect in 2020. EFF will fight to improve the law and oppose industry efforts to weaken it. The Federal Trade Commission scheduled a series of hearings about “Competition and Consumer Protection in the 21st Century,” with digital privacy as a central theme. As part of our ongoing investigation into the overlap between corporate concentration and civil liberties, EFF submitted comments calling for increased scrutiny of mergers and acquisitions that would combine large, sensitive sets of user data in the hands of the tech giants. We’ve drawn attention to the way Google, which owns the largest browser and largest tracking network in the world, uses its power to protect its own interests rather than protecting its users. We’ve also lobbied the U.S. Department of Commerce to apply a users’ rights framework to any future policy proposals. Even as some lawmakers moved to protect users’ privacy, corporations increased their lobbying at both the state and federal levels to try to protect their own interests. In Illinois, hostile bills and legal attacks threatened to defang the state’s Biometric Information Privacy Act, the strongest protection for biometrics like fingerprints, voiceprints, and facial recognition in the country. In California, as noted above, EFF is fighting industry efforts to weaken the newly-passed CCPA. And in Washington, DC, Big Tech has attempted to “preempt” (a legal term for “dismantle”) strong state-level privacy laws with weaker federal legislation. We’ve resisted those efforts. While the tech industry has been pitching its version of “privacy law,” EFF has outlined its own recommendations for a legal framework that protects users’ civil liberties online without undermining innovation.
We’ve explained how legislatures at every level can establish smart, effective, and carefully tailored rules to protect user privacy, defend the freedom to tinker, and avoid impeding speech or innovation. We’ve also endorsed the idea of treating tech companies as information fiduciaries, which would legally require them to use your information in your best interests. The tech company scandals and legislative complexity around consumer privacy show no signs of slowing down in 2019—and neither will we. EFF will be here to keep fighting for users’ privacy rights in 2019 and beyond. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Wrangling With Monopolies: 2018 in Review (Mon, 31 Dec 2018)
This year has brought numerous stories of large Internet companies using their dominance of key Internet functions in ways that harm users and shut out competitors. From Google’s treatment of competing search companies in its results, to Facebook’s playing favorites with its developer APIs, to AT&T and Comcast’s ongoing quest to charge websites for the privilege of reaching you, monopoly power and its abuses are on vivid display. Worse, unlike in previous technology cycles, the dominance of these companies has proven to be sticky. The world has taken notice, with voices from across the political spectrum calling for new approaches. Market concentration and monopoly power in the online world have always shaped EFF’s work. This year, we’ve begun to tackle competition issues head-on. One focus is the legal doctrine that deals directly with problems of monopoly power—antitrust law. This year, we’ve given comment and testimony to the Federal Trade Commission on ways that U.S. antitrust could evolve to deal with today’s Internet. We argued in favor of a broader version of antitrust law’s consumer welfare standard that looks to speech, privacy, and innovation harms, not to consumer prices alone. There are stiff headwinds in the antitrust world. The Supreme Court issued a major decision on antitrust in “two-sided markets” this year that could make it harder to bring claims against the Internet giants. In Ohio v. American Express, the Court ruled that companies that facilitate transactions between two groups of customers (in that case, merchants and credit card users) aren’t liable for practices that raise one group’s prices as long as the other group's benefits are greater. Another pending case will test whether Apple can structure its relationship with app developers in a way that blocks ordinary consumers from suing Apple for inflating app prices. Antitrust isn’t the only tool for promoting competition. That’s why we’ve continued working to reform legal doctrines that have been misused to thwart competition, including the Computer Fraud and Abuse Act (CFAA), section 1201 of the Digital Millennium Copyright Act (DMCA), and the unthinking enforcement of website terms of service. We’ve also taken a close look at data portability and interoperability proposals and how they could help break the dominance of the big Internet platforms by allowing users to move without leaving their friends and data behind. And we’ve continued our work on competition at the Internet service provider level, helping to win strong net neutrality protections in California and fighting to preserve small and midsized ISPs as alternatives to Comcast and AT&T. In the coming year, we plan to work with other groups that are focusing on antitrust and competition in the Internet economy to make antitrust a more useful tool, and to make sure that other laws (like copyright, patent, computer intrusion, and contract law) help to promote competition, not stifle it. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Where Governments Hack Their Own People and People Fight Back: 2018 in Review (Sun, 30 Dec 2018)
Throughout 2018, new surveillance practices continued to erode the privacy of people in Latin America. Yet local and regional digital rights organizations continue to push back with strategic litigation, journalists and security researchers investigate to shed light on government use of malware, and local activists work tirelessly to fight overarching surveillance laws and practices across the region.

Brazil: Secretly Tracking 600,000 Subway Riders

In a win for privacy, the São Paulo Court of Justice ordered a halt to the collection of subway passengers’ data through advertisements on subway trains that tracked users’ facial expressions and traits. The Brazilian Institute of Consumer Protection (IDEC) and the Latin American Network of Surveillance, Technology and Society Studies (LATVIS) sued Via Quatro, a concessionaire in São Paulo’s subways, defending the privacy rights of around 600,000 Brazilians who use the public transport system every day.

Mexico: Murders and Hacking #GobiernoEspia

Mexico has remained in the headlines this year for privacy violations. In 2018, Citizen Lab, with the ARTICLE 19 Office for Mexico and Central America, Mexican NGO R3D, and SocialTIC, revealed that two journalists from Rio Doce—an independent news outlet covering drug cartels—were targeted with malware. The journalists received text messages laced with Pegasus malware made by the Israeli spyware firm NSO Group. The links were sent to them after their colleague, award-winning Mexican journalist and Rio Doce co-founder Javier Valdez, died of 12 bullet wounds. The Mexican government has been denounced before for illegally spying on twenty of its most outspoken critics. Despite abundant evidence pointing to the illegal use of Pegasus in Mexico, NSO Group has apparently maintained its relationship with the Mexican government. In response, R3D last August filed civil lawsuits in Israel and Cyprus against NSO Group alleging negligence and complicity in human rights violations. R3D is demanding that NSO Group cease its services and be held accountable for its role in the Mexican government's human rights violations. R3D also seeks to hold Mexican officials responsible for these abuses.

Guatemala: Planting Malicious Software on Citizens' Computers

Governments have used the same malicious software that petty internet criminals use to take over innocent users' computers, for the purpose of social control. This year, El Nuevo Diario published a groundbreaking report revealing a years-old, vast, and illegal spying operation against Guatemalan activists, entrepreneurs, politicians, journalists, diplomats, and social leaders. The report found the government of the Patriot Party (Partido Patriota) spent more than 90 million quetzales (US $12 million) on IMSI-catchers and software to monitor and collect social media information for investigations and surveillance. It also purchased malicious software from the world's most notorious malware providers: Hacking Team’s Galileo and NSO Group’s Pegasus. The news revealed that the government used those tools to target protesters fighting government corruption in 2015. Digital rights organizations such as Fundacion Acceso and IPANDETEC used this opportunity to raise awareness about privacy rights, despite the country's deeply rooted culture of secrecy surrounding surveillance.
Argentina: Dangerous Attempts to Legalize Indiscriminate Government Hacking

2018 saw dangerous legislative efforts to authorize the unregulated use of government hacking, both in the city of Buenos Aires and at the federal level. The Centro de Estudios Legales y Sociales, Asociación por los Derechos Civiles, la Asociación Civil por la Igualdad y la Justicia, Fundación Vía Libre, and others fought back against reforms to Buenos Aires’ Criminal Procedure Code and the Federal Criminal Procedure Code that would enable "special investigative measures," such as the government use of malware in criminal investigations. In a win for privacy, those provisions were dropped. These technologies are invasive and surreptitious, and raise far different privacy and security concerns than traditional wiretapping. Each of these new powers is a ticking time bomb for potential abuse. The dangerous bill failed to provide even the basic controls necessary to constrain its use, an independent judiciary who will enforce those limits, or any public oversight mechanism that would allow the general public to know what its country's most secretive government agents are doing in their name.

Chile: Creating False Evidence

"Operation Hurricane," run by police in Chile's La Araucanía region, prompted the 2017 arrest of eight members of the Mapuche, an indigenous group in South Central Chile, who were accused of forming an illicit terrorist association on the basis of electronic chats offered as evidence. This year, Operation Hurricane thrust state surveillance in the digital age to the forefront of Chilean public opinion. In a shocking turn, the Chief Prosecutor (Fiscal) of the High Complexity Unit of La Araucanía confirmed the prosecution of officials from the Police Intelligence Directorate of Carabineros for obstruction of justice: they had produced false evidence to incriminate the Mapuche community members. The Latin American digital rights group Derechos Digitales has been demanding the truth about Operation Hurricane, calling for reforms of Chilean intelligence services, and stressing the need to adopt laws that comply with Chile’s human rights obligations.

The Fight Goes On

This year, privacy rights have faced unprecedented attacks from Latin American governments and companies—attacks that the Latin American digital rights community has been instrumental in repelling. In addition to those we've already mentioned, these groups include: the Karisma Foundation in Colombia, which is fighting against facial recognition and CCTV cameras in Colombian subway stations; TEDIC from Paraguay, which raises awareness of surveillance practices such as the use of biometric systems; and international organizations such as ARTICLE 19, with regional offices in Mexico and Brazil, supported by an international office in London. While concerns and actions in Europe and the United States often get the international headlines, local groups in Latin America are doing the vital groundwork of investigating transgressions, lobbying for change, and litigating for justice. We hope that, as the public begins to recognize the growing threats, they will also do more to support organizations doing important work in Latin America. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Terrorism Lawsuits Threaten Lawful Speech: 2018 in Review (Sun, 30 Dec 2018)
One of the most important principles underpinning the Internet is that if you say something illegal, you should be held responsible for it—not the owners of the site or service where you said it. That principle has seen many threats this year—not just in federal legislation, but also in a string of civil lawsuits intended to pin liability on online platforms for allegedly providing material support to terrorists. Several federal trial courts dismissed such suits this year, but some of these cases are on appeal and plaintiffs have filed several new ones. If these suits are successful, they could be detrimental to the Internet: platforms would have little choice but to become much more restrictive in what sorts of speech they allow. Without definitive rulings that these cases cannot stand under existing law, they continue to threaten the availability of open online forums and Internet users’ ability to access information.

That’s why EFF filed legal briefs in 2018 asking two different federal appellate courts to dismiss material support cases against social media platforms. The good news: so far, courts have been quick to toss out these material support lawsuits, including the U.S. Court of Appeals for the Ninth Circuit, the first federal appellate court to hear one.

Although the facts and claims vary, the majority of the cases seek to hold platforms such as Twitter, YouTube, and Facebook liable under the federal Anti-Terrorism Act. The lawsuits usually claim that by allowing alleged terrorists to use their publishing or messaging services, online platforms provided material support to terrorists or aided and abetted their terrorist activities. A key allegation of many of these lawsuits is that the pro-terrorism content posted by particular groups radicalized or inspired the actual perpetrators of the attacks, and thus that the platforms should be liable for the harm suffered by the victims. The facts underlying all of these cases are tragic. Most are brought by victims or family members of people who were killed in attacks such as the 2016 Pulse nightclub shooting in Orlando.

As well-intentioned as these cases are, they pose a threat to the online communities we all rely on. In seeking to hold online platforms liable for what terrorists and their supporters post online—and the violence they ultimately perpetrate—such lawsuits threaten Internet users’ and the platforms’ First Amendment rights. They also jeopardize one of the Internet’s most important laws, Section 230 (47 U.S.C. § 230). Section 230 protects online platforms, in part, from civil lawsuits based on content created or posted by their users. The law is largely responsible for the creation and continued availability of a plethora of online forums and services that host a diverse array of user speech, ensuring that all views—even controversial ones—can be shared and heard. Section 230 lets anyone—regardless of resources, technical expertise, or geography—communicate with others around the world. Although Section 230 bars civil claims against platforms for hosting user-generated content, if the lawsuits brought under the Anti-Terrorism Act succeed in imposing liability on the social media companies, they would open up a huge exception to Section 230 and undermine its legal protections for all online platforms.
That would have dire repercussions: if online platforms no longer have Section 230 immunity for hosting content even remotely related to terrorism, those forums and services will take aggressive action to screen their users, review and censor content, and potentially prohibit anonymous speech. The end result would be sanitized online platforms that would not permit discussion and research about terrorism, a prominent and vexing political and social issue.

Although federal trial courts and one appellate court have largely avoided undermining Section 230 and Internet users’ First Amendment rights, they have not entirely shut the door on these types of lawsuits. The U.S. Court of Appeals for the Ninth Circuit, for example, missed a clear opportunity to rule that Section 230 bars these types of lawsuits. That’s why EFF this year filed two friend-of-the-court briefs in cases before the United States Courts of Appeals for the Second and Sixth Circuits, arguing that Section 230 and the First Amendment prevent lawsuits under the Anti-Terrorism Act that seek to hold online platforms liable for content posted by their users—even if some of those users are pro-terrorism or terrorists themselves. We hope that the courts vindicate Section 230 and the First Amendment in these material support cases. We will continue to monitor these cases and stand up for Internet users’ rights.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

From Encrypting the Web to Encrypting the Net: 2018 Year in Review (Sat, 29 Dec 2018)
We saw 2017 tip the scales for HTTPS. In 2018, web encryption continued to improve. EFF began to shift its focus toward email security, and the security community shifted its focus toward further hardening TLS, the protocol that drives encryption on the Internet.

By default, all Internet traffic, including HTTP, is unencrypted and subject to tampering. A technology called TLS (Transport Layer Security) can provide authenticated encryption and message integrity so that no one can mess with or listen in on your Internet traffic. Since 2010, EFF has been actively campaigning to encrypt the entire web—that is, for websites to adopt HTTPS, which is TLS added to HTTP. Due to the success we’ve seen on the web, EFF is zooming out and tracking encryption of the entire Internet, starting with email. Let’s take a closer look at what happened this year in encrypting not just the web, but the entire Internet!

Continuing the Trend in Encrypting the Web

It has been a landmark year in encrypting the web. As of this writing, 77% of pageloads across the world in Firefox are over HTTPS, and that number looks even higher on Chrome. On the browser side, HTTPS Everywhere continues to see improvements in both user experience and security. With over a million daily active users and over five million downloads just this year, the extension is in a great position to provide more security features to users as HTTPS support continues to rise. The extension provides a more complete and up-to-date dataset of websites that support HTTPS, which can help users navigate more severe security errors and help push insecure sites to make the move to HTTPS through user advocacy. We hope in the next year to provide a platform for users to encourage even more sites to support HTTPS.

Thanks to Let’s Encrypt and Certbot, it’s easier than ever to turn on HTTPS for your website. In February, we were excited that Let’s Encrypt had issued 50 million active certificates. Today, this number has reached 87 million! Certbot operates at a similar scale, with millions of users using Certbot every month to obtain and renew their certificates. And it’s continuing to improve—at the beginning of the year, a new version of ACME (the protocol that drives Let’s Encrypt and Certbot) was released, allowing website owners to obtain wildcard certificates in an easy and automated way.

And it’s not just EFF. The entire ecosystem is working together to make the web more secure. In July, Chrome began marking HTTP sites as “not secure,” leading to a noticeable increase in worldwide HTTPS adoption. Hosting providers like GitHub Pages have started providing Let’s Encrypt certificates too, making “turning on HTTPS” a one-click process for their customers. And these examples are just a couple of small drops in a giant wave of HTTPS. The ecosystem is on board and as excited as we are to make the insecure web a relic of the past.

Onwards, Towards Encrypting the Net

Given the success in encrypting the web, EFF is broadening the scope of its mission to encrypting the entire Internet—starting with email. As of this year, Let’s Encrypt certificates are trusted by all major root programs, meaning they’re trusted by major operating systems and devices, in addition to browsers. We can safely assume that every modern computing device has the means to authenticate a Let’s Encrypt certificate, so let’s get started! This year, EFF rebooted STARTTLS Everywhere, an initiative to track the security of the email ecosystem. According to Google’s Transparency Report, approximately 90% of emails sent to or from Gmail are encrypted using STARTTLS. However, not only is STARTTLS vulnerable to a simple downgrade attack, but email also has no widely used TLS certificate authentication mechanism, which leaves it vulnerable to on-path impersonation attacks as well.
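To see why that downgrade attack works, it helps to look at STARTTLS from the client’s side: the connection starts out in plaintext and is only upgraded to TLS if the server advertises support, so an attacker who strips that advertisement in transit can silently keep the session unencrypted. Here is a minimal sketch using Python’s standard smtplib (the hostname "mx.example.com" is a placeholder, not a real mailserver):

```python
import smtplib
import ssl

# SMTP connections begin unencrypted; the TLS upgrade is opportunistic.
with smtplib.SMTP("mx.example.com", 25, timeout=10) as server:
    server.ehlo()
    if server.has_extn("starttls"):
        # Upgrade the existing plaintext connection to TLS.
        server.starttls(context=ssl.create_default_context())
        server.ehlo()  # re-identify over the now-encrypted channel
        print("Upgraded to", server.sock.version())
    else:
        # A server that never advertises STARTTLS (or an attacker who
        # stripped the advertisement in transit) leaves us here, and
        # mail delivery has traditionally proceeded in plaintext anyway.
        print("No STARTTLS offered; connection stays unencrypted")
```

Policy lists like the one STARTTLS Everywhere maintains, and the MTA-STS standard discussed below, attack exactly this weakness: they let a sender know ahead of time that a domain’s mailservers must offer TLS and present a valid certificate, turning a stripped advertisement into a hard failure rather than a silent fallback.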
Similar to HTTPS Everywhere’s rulesets and the HSTS preload list in modern browsers, we’re maintaining and distributing a list of mailservers’ TLS information. Certbot has also released improvements that make it easier to use with mailserver software. And just a couple of months ago, the Internet Engineering Task Force (IETF) published two RFCs (Requests for Comments, typically documents that describe new Internet standards), MTA-STS and TLSRPT, which had been in the works since 2014. MTA-STS provides a way for mailservers to discover other mailservers’ TLS information, and TLSRPT closes an error-reporting feedback loop that may help reduce breakage from TLS misconfigurations, thus lowering the risk of deploying new security standards.

Improving TLS

In the realm of encrypting the net, 2018 also saw several improvements to TLS itself. The specification for TLS 1.3 has landed, making TLS much faster by drastically shortening the initial handshake, and hardening its security by enabling forward secrecy by default. To work properly, TLS relies on third parties called Certificate Authorities (CAs), like Let’s Encrypt, to behave. Certificate Transparency, a technology that dramatically increases CA accountability and auditability, gained a lot of traction in 2018. Starting in April, Chrome began requiring Certificate Transparency for all newly issued certificates. Let’s Encrypt also rolled out full support by embedding Certificate Transparency proofs in the certificates it issues. Finally, we saw a number of experiments and continuing work on DNS-over-HTTPS, DNS-over-TLS, and Encrypted SNI, which help protect Internet-browsing metadata from being exposed to network eavesdroppers.

We’ve come a long way, but we still have a long way to go. Let’s resolve to close the gap and really get “HTTPS everywhere” next year. Here’s to hoping 2019 will be as fruitful for Internet security as the past couple of years have been for web security.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Congress Censors the Internet, But EFF Continues to Fight FOSTA: 2018 in Review (Sat, 29 Dec 2018)
EFF fought FOSTA in 2018. We fought the bill in Congress and, when the president signed it into law, immediately set our sights on challenging it in court.

The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) was ostensibly passed to fight sex trafficking. The reality, however, is that the law makes sex trafficking victims less safe while also criminalizing the protected speech of those who advocate for, and provide resources to, adult consensual sex workers. It’s the broadest Internet censorship law in more than two decades.

From the time that FOSTA was introduced in Congress (along with its sibling bill SESTA, the Stop Enabling Sex Traffickers Act), tens of thousands of you wrote or called your members of Congress and urged them to reject the bill. You understood that FOSTA would do nothing to stop sex trafficking, but that it would force online platforms to become much more restrictive, silencing a lot of marginalized voices in the process. You weren’t alone: numerous free speech organizations raised concerns about the bill. So did a litany of sex trafficking experts, who pointed out that it would directly put trafficking victims in more danger. Unfortunately, we were all drowned out. At the end of 2017, big Internet companies began to endorse and lobby for the bill, leaving free speech advocates outnumbered. After some minor amendments, the bill passed in March.

It didn’t take long to start seeing the bill’s harmful effects. Websites started removing forums for speech even remotely related to sexual content; Craigslist, for example, removed its entire personals section. Other platforms shut down entirely rather than face FOSTA’s crushing criminal and civil liability. The resulting censorship has acutely harmed sex workers, who relied on the Internet to help screen clients, avoid violence by sharing electronic “bad date” lists, and seek healthcare and other resources. Some sex workers report that FOSTA has forced them back into more dangerous, street-based work. FOSTA also represented the first time Congress repealed portions of the most important law protecting online speech: Section 230. Section 230 is largely responsible for creating the diverse array of Internet platforms and other services that allow anyone to speak, publish, and organize online.

Although EFF and its allies fought hard against FOSTA throughout 2017 and early 2018, we were not able to stop it from becoming law. But the fight to stop FOSTA did not end there. In July, EFF and a team of outstanding lawyers filed a lawsuit challenging FOSTA’s constitutionality on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist. The lawsuit argues that FOSTA silences online speech by muzzling Internet users and forcing online platforms to censor their users, violating the First Amendment in multiple respects. It punishes certain types of speech, including the expression of viewpoints that advocate for the decriminalization of sex work. Despite its supporters’ claims, the law is not narrowly tailored to ban only criminal acts; its prohibitions, many of them undefined, broadly sweep up a host of protected speech.
Further, the terms in the law are so vague that it’s unclear what exactly Congress sought to prohibit, leaving many Internet speakers uncertain about whether what they say creates liability under the law. FOSTA thus also violates the Fifth Amendment’s Due Process Clause because of its vague and undefined terms. And because FOSTA explicitly made Internet speakers and online platforms liable for speech that occurred well before Congress passed the law, it violates the Constitution’s prohibition on ex post facto laws.

The trial court hearing the challenge dismissed the case in late September without addressing whether FOSTA was unconstitutional. We think the decision is wrong and have asked a federal appellate court to reverse the dismissal. The appellate court has scheduled briefing in the case to begin in early 2019, and we look forward to having our day in court to demonstrate that FOSTA should be struck down.

The fight over FOSTA is just one example of how EFF’s work in Congress and its work in the courts complement each other. We lost the fight in Congress, but we are hopeful that constitutionally protected rights will prevail.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018. Related Cases: Woodhull Freedom Foundation et al. v. United States

The Year of the GDPR: 2018’s Most Famous Privacy Regulation in Review (Fri, 28 Dec 2018)
To the extent that 260-page regulations can ever be said to be “famous,” Europe’s General Data Protection Regulation (GDPR) certainly had its moment in the limelight in 2018. When it came into force on May 25, it was heralded by a flurry of emails from tech companies, desperate to re-establish their absolutely bona-fide relationships with your email address before the regulation’s stricter rules around user consent took effect. The barely concealed panic in some corners led to editorials, memes, and even a meditation app that marketed itself (presumably in compliance with the GDPR) by offering to lull its users to sleep with spoken excerpts from the law.

Did the GDPR live up to the year’s hype, good or bad? As Premier Zhou Enlai didn’t quite say about the French Revolution, it’s too early to say. There are plenty of ways that the GDPR can help with defending privacy online, but the real proof of the GDPR’s provisions will be in how they are enforced, and against whom. And those patterns will only emerge as European regulators begin to flex their new powers.

They have quite the backlog already. Hours after the GDPR came into effect, Max Schrems (2016 EFF Pioneer Award winner, and the successful challenger of the EU’s privacy safe harbor with the United States) filed a series of complaints in his home country of Austria. Aimed at Google, Instagram, WhatsApp, and Facebook, the cases revolve around the claim that these services gave customers no real choice in accepting their new privacy policies—which would be a breach of the tougher GDPR rules. In November, Privacy International filed another series of complaints aimed at the practices of Europe’s leading data brokers, credit agencies, and ad-tech companies. It wasn’t just non-profits: the company behind the Brave browser also filed a GDPR complaint in Ireland, challenging the basis of the modern online advertising business. We’re waiting for the results of those complaints, and their inevitable appeals.

Even without key enforcement decisions, the GDPR’s broad popularity has already prompted regulators and lawmakers around the world to increase their oversight of personal data. In Italy, it was competition regulators that fined Facebook ten million euros for misleading its users over its personal data practices. Brazil passed its own GDPR-style law this year; Chile amended its constitution to include data protection rights; and India’s lawmakers introduced a draft of a wide-ranging new legal privacy framework.

The GDPR increases fines and regulators’ ability to intervene against potential privacy violations—but with great power can come great irresponsibility. If you’ve seen how copyright law can be twisted into an engine for censorship and surveillance, it came as no surprise when Romanian authorities attempted to use the GDPR’s wide powers to threaten journalists investigating corruption in the country. The EU body in charge of the GDPR, the European Data Protection Supervisor, has yet to publicly comment on what is happening in Romania, but it’s a vivid reminder that even the most well-intentioned laws can have unimagined consequences.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Pushing Back Against Backdoors: 2018 Year in Review (Fri, 28 Dec 2018)
This wasn’t a great year for those of us whose job it is to defend the use of encryption. In the United States, we heard law enforcement officials go on about the same “going dark” problem they’ve been citing since the late ’90s, but even after all these years, they still can’t get basic facts straight. The National Academy of Sciences was entirely (and unsurprisingly) unhelpful. And in the courts, there was at least some action surrounding encryption, but we don’t know exactly what. The real movement happened on the other side of the Pacific, so we’ll start there.

The Land Down Under—Or the Upside Down?

Long-time readers of this blog will know Australia’s fraught history with attempts to regulate encryption…and math. In mid-2017, then-Prime Minister Malcolm Turnbull said: “The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia.” He made this laughable claim in the context of a proposed ban on end-to-end encryption in his country. Turnbull was forced from office before his Luddite’s dream could become reality, but unfortunately, his dream didn’t fade as quickly as his political fortunes. Late 2018 saw the Australian Parliament pass the Assistance and Access Act into law, with—as EFF’s Danny O’Brien put it—indecent speed and the barest nod to debate.

Based in part on the UK’s Investigatory Powers Act, which became law in 2016, the Assistance and Access Act isn’t an outright ban on encryption. Rather, it gives the government the power to issue secret orders to tech companies and individual technologists to re-engineer software and hardware under their control so that it can be used to spy on their users. Incredibly, and unlike the UK’s Investigatory Powers Act, this includes the power to compel individual network administrators, sysadmins, and open source developers to comply with secret demands, potentially including forcing them to keep their cooperation secret from their managers, lawyers, and executive leadership. Combined with another power claimed by the Australian government—an expanded ability to censor and filter the Internet—we can see a potential dystopic future in the Land Down Under: one where only backdoored communication tools are permitted in Australia, and all other services and protocols face government-mandated blocking and filtering. The only silver lining to the encryption situation in Australia is that the government hasn’t attempted to exercise its new powers…yet.

The DOJ Shoots at Messenger, and Misses

In the United States, in EFF’s own backyard in California, the Department of Justice did something to challenge the use of end-to-end encryption. We’re not exactly sure what, but the one thing we do know about the fight is that we won. According to press reports, the DOJ tried to get a court to order Facebook to do something to enable the wiretapping of encrypted Facebook Messenger voice calls. Because the entire episode occurred under seal, we don’t know the specifics. We know that it involved an investigation into suspected MS-13 gang activity in California’s Central Valley, and the interplay between the Wiretap Act and encrypted VoIP calling. To our knowledge, this hasn’t been done before, and it raises novel questions about modern communication providers’ duties to assist with wiretaps involving encryption. Despite the mystery surrounding the entire episode, one thing is clear: either Facebook won the court battle or the DOJ gave up, and the court didn’t end up ordering Facebook to redesign its systems.
But we’re not going to let the DOJ’s (failed) fight remain secret if we can help it. In November, EFF—along with co-counsel at the ACLU and Stanford—moved the court to unseal and release all court orders and related materials in the sealed Messenger case. A hearing has been set for January 2019, and we’ll keep you updated on the results.

The National Academy of Sciences Didn’t Help

In February 2018, after a two-year effort, the National Academy of Sciences (NAS) released a report attempting to move the encryption debate forward by proposing a “framework for decisionmakers.” We were not impressed. The NAS report conflated the question of whether the government should mandate encryption backdoors with the question of how the government could accomplish such a mandate. The report barely mentioned the benefits of encryption, the civil liberties implications of a ban, or the international implications of U.S. government action in the space. We wish that the NAS had taken it upon itself to address not only how to implement a particular backdoor policy, but also whether to undertake that policy in the first place.

The FBI Can’t Do Basic Arithmetic

In 2018, we learned that the FBI had been fundamentally misleading not only the public but also Congress in its incessant “going dark” rhetoric. For much of 2018, the Bureau had claimed that encryption prevented it from legally searching the contents of nearly 7,800 devices in 2017. But in May, the Washington Post reported that the actual number is far lower. That’s why EFF submitted a FOIA request for records related to the FBI Director’s talking points about the “7,800” unhackable phones and the FBI’s use of outside vendors to bypass encryption.

Looking Forward(?) to 2019

We’d quite obviously be lying if we told you we knew what was going to happen in the encryption debate in 2019: we’re now two years into the Trump Administration, and it has yet to propose any legislation potentially affecting encryption. We’re more than two years past the UK’s passage of its Investigatory Powers Act, and only a matter of weeks past Australia’s passage of its equivalent Assistance and Access Act—but to our knowledge, neither country has attempted to use its new powers. Whatever 2019 brings, and wherever those challenges arise, you can be sure we’ll be on the front lines defending your right to use strong encryption without backdoors.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Victories in the State Legislatures: 2018 In Review (Thu, 27 Dec 2018)
States are often the “laboratories of democracy,” to borrow a phrase from U.S. Supreme Court Justice Louis Brandeis. They can react quickly to technological advances, establish important rights, and sometimes pass laws that serve as templates for others across the country. This year, EFF worked—and fought—alongside state legislators in California and across the country to pass legislation that promotes innovation and digital freedoms.

Increased Transparency into Local Law Enforcement

Thanks to the passage of S.B. 978, California police departments and sheriff’s offices must post their policies and training materials online. This will encourage better and more open relationships between law enforcement agencies and the communities they serve. Californians also now have new rights to access recordings from police-worn body cameras, with the passage of A.B. 748, which EFF supported. Starting in July 2019, the public will be able to access this important transparency resource. This makes it more likely that body-worn cameras will be used as a tool for officer accountability, rather than as a method of police surveillance against the public.

Better Privacy Protections

California law already restricts bars from sharing information collected by swiping your ID. But some companies and police departments believed they could bypass this safeguard as long as IDs were “scanned” rather than “swiped.” A.B. 2769, which EFF supported, closed this loophole, so state law now provides the same privacy protections whether someone is swiping or scanning your card. EFF also supported the data privacy rights of cannabis users through A.B. 2402, which stops cannabis distributors from sharing the personal information of their customers without their consent. The bill also prohibits dispensaries from discriminating against a customer who chooses to withhold that consent.

Protecting Youth Rights

DNA information reveals a tremendous amount about a person, and handing over a sample to law enforcement has long-lasting consequences. Unfortunately, at least one police agency has demanded DNA from youths in circumstances that are confusing and coercive. EFF wrote a letter supporting A.B. 1584, a new law that makes sure kids will have a supportive adult in the room to explain the implications of handing over a DNA sample. With your support, we also persuaded lawmakers that kids in the child welfare and juvenile justice systems need access to the Internet for their education. A.B. 2448 guarantees that access, as well as the right for kids in foster care to use the Internet for social and extracurricular activities. This law protects the rights of some of the state’s most at-risk young people, and it illustrates that if California can promise Internet access to disadvantaged youth, then other states should be able to as well.

Open Access to Government-funded Research

A.B. 2192 was a huge victory, giving everyone access to scholarly and scientific research funded by the state government within a year of the research’s publication date. EFF went to Sacramento to support this bill and explained that it would have at most a negligible financial impact on the state budget. This prompted lawmakers to reconsider the bill after previously setting it aside.

[Embedded video: https://www.youtube.com/embed/s2KxWq2r7GE — served from youtube.com]
EFF would like to see other states adopt similar measures. California itself can take further strides to make research available to the public, and to other researchers looking to advance their work.

We Fought Bad Bills, Too

Fighting “fake news” has become a priority for a lot of lawmakers, but S.B. 1424, a bill EFF opposed and Gov. Jerry Brown vetoed, was not the way to do it. The bill would have created a state advisory committee to recommend ways to “mitigate” the spread of “fake news.” This committee was all too likely to promote new laws restricting the First Amendment rights of Californians. EFF also worked with Senator Robert Hertzberg on California’s new bot-labeling bill, S.B. 1001, which initially included overbroad language that would have swept up bots used for ordinary and protected speech activities. The original bill also created a takedown system that could have been used to censor or discredit important voices. We thank the California legislature for taking the time to think through the issue and avoid the original bill’s unintended negative consequences.

Finally, three cheers for Electronic Frontiers Georgia, one of the members of the Electronic Frontier Alliance, for its key role in defending the rights of independent security researchers and tech users in Georgia. S.B. 315 would have both criminalized most computer security research in Georgia and allowed dangerous “active defense” tactics by tech users against each other. With Electronic Frontiers Georgia and computer security researchers, as well as help from EFF supporters, we successfully persuaded Governor Nathan Deal to veto the bill.

As we look to 2019, we will continue to take up fights for digital rights and to protect innovation in states across the country.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Year in Review: Airport Surveillance Takes Off in a New, Dangerous Direction (Thu, 27 Dec 2018)
In 2018, we learned that expanded biometric surveillance is coming to an airport near you. This includes face recognition, iris scans, and fingerprints. And government agencies aren’t saying anything about how they will protect this highly sensitive information.

This fall, the Transportation Security Administration (TSA) published its Biometrics Roadmap for Aviation Security and the Passenger Experience, detailing plans to work with Customs and Border Protection (CBP) to roll out increased biometric collection and screening for all passengers, including Americans traveling domestically. Basically, CBP and TSA want to use face recognition and other biometric data to track everyone from check-in, through security, into airport lounges, and onto flights. If implemented, there might not be much you can do to avoid it: the Department of Homeland Security (DHS) has said that the only way we can ensure that our biometric data isn’t collected when we travel is to “refrain from traveling.”

The roots of this program go back a few years. In 2016 and 2017, DHS began ramping up its plans to collect face images and iris scans from travelers on a nationwide scale. In pilot programs in Georgia and Arizona in 2016, CBP used face recognition to capture pictures of all travelers boarding a flight out of the country and walking across a U.S. land border, and it compared those pictures to previously recorded photos from passports, visas, and “other DHS encounters.” Now, agencies plan to roll out the program to all international flights and border crossings. They’re also partnering with private airlines and airports to collect and maintain the data. The government has said it will retain photos of U.S. citizens and lawful permanent residents for two weeks, and information about their travel for 15 years; it will retain data on “non-immigrant aliens” for 75 years. There are no restrictions on how long private companies can hold onto the data or what they can do with it.

Flying domestic won’t keep your biometrics out of a database. The TSA roadmap explicitly outlines plans to collect any biometrics the agencies want from all travelers, wherever they use the airport. In the future, their database could be used outside of the airport context—after all, TSA’s PreCheck, as well as Clear (a private company), have already begun using their technology at stadiums to “allow” visitors a faster entry.

It’s unprecedented for the government to collect, store, and share this kind of data, with this level of detail, with this many agencies and private partners. And the risk to all of us is real. India’s Aadhaar biometric database, built to reduce corruption and expanded for use by other public and private groups, keeps getting hacked. Not only is it cheap to buy the information of any of the 1.19 billion people in the database, but the hacks also allow new information to be entered into the database. Rather than increasing security, India’s biometric database created more problems and opportunities for corruption.

This is all particularly shocking when you consider that, at bottom, much of this data is not reliable at all. There are significant accuracy problems with current face recognition software, especially for people who are not white or male. For example, earlier this summer the ACLU published a test of Amazon’s facial recognition program, comparing the official photos of the 535 members of Congress with publicly available mugshots. The ACLU found 28 false matches, even in this relatively small data set. And according to the FAA, roughly 2.5 million passengers fly through U.S. airports every day.
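The scale makes those error rates stark. A back-of-the-envelope sketch, using the figures above and the deliberately simplistic assumption that errors are uniform and independent across travelers:

```python
# Rough base-rate arithmetic for face recognition at airport scale.
# Figures come from this article; real systems will not have uniform,
# independent error rates, so treat the output as an illustration only.
daily_passengers = 2_500_000      # FAA estimate of daily U.S. air passengers

observed_rate = 28 / 535          # ACLU congressional test, roughly 5.2%
generous_rate = 0.02              # an optimistic 2% error rate

for label, rate in (("2% error rate", generous_rate),
                    ("ACLU observed rate", observed_rate)):
    flagged = daily_passengers * rate
    print(f"{label}: ~{flagged:,.0f} travelers misidentified per day")
```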
Even under the optimistic 2% assumption, that works out to roughly 50,000 people misidentified every single day.

These airport biometrics programs threaten privacy on a mass scale. By collecting and retaining face recognition data and partnering with private companies that face no restrictions on data sharing, DHS is laying the groundwork for a vast surveillance and tracking network that could impact all of us for years to come. DHS could soon build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras. TSA should not move forward with this plan. In 2019, EFF will continue fighting to make sure that we are all able to travel safely.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Bloggers and Technologists Whose Voices Are Offline: 2018 in Review (Wed, 26 Dec 2018)
This year, we refocused our attention on Offline, our project that seeks to raise awareness of imprisoned bloggers, digital activists, and technologists, and to provide actions readers can take to support them. Originally launched in 2015, Offline currently features six individuals from four countries whose critical voices have been silenced by their governments.

Take Eman Al-Nafjan, the Saudi Arabian blogger and women’s rights activist who has long been critical of her government’s human rights abuses while living in the country. In May, Al-Nafjan was arrested along with several other women while filming a woman driving a car—just one month before the ban on women driving was officially lifted. A report from Human Rights Watch has found that a number of the women imprisoned in the crackdown have faced torture and sexual harassment in prison. Although Saudi Arabia has always been restrictive of speech, this year has proven truly frightening for human rights defenders. While liberal pundits in the Western media were busy praising Crown Prince Mohammed bin Salman as a reformist, the de facto ruler of the country was busy consolidating power. And now, following reports of torture and the murder of journalist Jamal Khashoggi, we are particularly fearful for the fate of Al-Nafjan and her compatriots. To take action to free Eman Al-Nafjan, click here.

In Egypt, suppression has been the rule for decades, but following the 2013 military coup, journalists and human rights defenders are at greater risk than ever before. This year saw dozens of arrests, including those of activist Amal Fathy and journalist Wael Abbas. Although a date has finally been set for Fathy’s appeal and Abbas was granted conditional release, both were held in pre-trial detention for months and still face a long road to freedom. Amnesty International offers actions you can take for Amal Fathy. PEN International provides a set of actions for Wael Abbas.

Prominent activist Alaa Abd El Fattah was sentenced in 2014 to fifteen years in prison, a sentence that was reduced to five years following a retrial the next year. Supporters all over the world took action for his release, but to no avail. Still, we’re happy to say that in March 2019, Alaa will finally go home to his family...but only during the day. The conditions of his parole require him to sleep in his local police station for the next five years.

[Photo: Alaa poses with his sister Mona while wearing an EFF t-shirt]

We’re thrilled that Alaa will soon be reunited with his family, and we encourage readers to visit 100 Days for Alaa, where they can learn more about his case and read his recent essays on technology and life. And until March, supporters can visit CPJ’s website to send him a postcard.

We turn to Iran, where designer and programmer Saeed Malekpour languishes in Tehran’s infamous Evin prison. Earlier this year, he turned 43—the tenth birthday he has spent behind bars. In October, he suffered a heart attack and was rushed to a hospital, where he was handcuffed to his bed for four days before being returned to prison. According to his sister, he has also suffered kidney stones, prostate issues, and arthritis. To find out how you can support Saeed Malekpour, click here.

Not all news is bad news

It wasn’t only bad news in 2018: In February, Ethiopian journalist Eskinder Nega was freed by the country’s new government after serving six years in prison. His journey hasn’t been easy—not long after his release, he was wrongfully detained for twelve days along with several other writers and journalists.
We’re keeping a close eye on Ethiopia, but we’re thrilled that Nega and his colleagues finally have their freedom. Watch EFF’s Rainey Reitman in conversation with Eskinder Nega.

Palestinian poet Dareen Tatour’s story gives us hope: Although the poet, photographer, and activist served three years of house arrest and another 42 days in prison, she hasn’t been defeated. After her release, she bravely came out as a survivor of rape, gave tough interviews about her experience, and most recently launched an exhibition of her photographs. Tatour still faces challenges: The Israeli government has sought to strip funding from her exhibition and from a play written about her plight, which would effectively censor the works. But Dareen has something that can’t be challenged: her freedom.

In Memoriam

Finally, we wish to remember Bassel (Safadi) Khartabil, the tireless advocate for open culture who was executed in 2015—a fact that was only revealed last year. As with many of the individuals whose cases we highlight, several of us had personal connections to Bassel, having met him at events around the world or corresponded with him over the years.

[Photo: The author with Bassel in Beirut in December 2009]

He was a friend, a sometime contributor to EFF’s work, and an incredible human being whose life is a great loss not just for his family and friends but for the world. Bassel, you are greatly missed.

To learn more about all of these brave individuals, visit Offline.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Europe Speeds Ahead on Open Access: 2018 in Review (Wed, 26 Dec 2018)
Open access is the common-sense idea that scientific research (especially scientific research funded by the government or philanthropic foundations) should be available to the public—ideally with no legal or technical barriers to access and reuse. EFF is a longtime supporter of the open access movement: we think that promoting broad access to knowledge and information helps ensure that everyone can speak out and participate in society.

For over five years now, EFF and our allies in the open access world have been campaigning for the Fair Access to Science and Technology Research Act (FASTR, S. 1701, H.R. 3427). Despite broad support from both parties and barely any opposition from anyone besides major publishers, Congress continues to snooze on FASTR year after year. This year, though, something changed: Europe soared ahead of the United States with Plan S, an initiative to require government-funded research to be made available to the public on the date of publication by the year 2020. Thirteen government agencies that fund research have endorsed Plan S, as well as a few foundations.

Plan S reflects a more aggressive open access policy than FASTR does. FASTR would require government agencies that fund scientific research to require grantees to make their papers available to the public within a year of publication; the original publication can happen in a traditional, closed journal. (Most U.S. government agencies already have that requirement under a 2013 White House memo.) Plan S goes much further, requiring grantees to publish their research in an open access journal or repository from day one. What’s more, grantees must publish their papers under an open license allowing others to share and reuse them. In discussions of open access laws, EFF has long urged lawmakers to consider including open licensing mandates: allowing the public to read the research is a great first step, but allowing the public to reuse and adapt it (even commercially) unlocks its true economic and educational potential. We hope to see more similarly strong open access reforms, both in the U.S. and around the world.

Congress failed to pass FASTR this year, but 2018 did see a smaller legislative win. Buried in the details of a routine funding bill came a welcome reform: Congressional Research Service reports are now officially available to the public. This is a huge step for government transparency—the public should be able to access the government reports that Congress relies on to make its decisions. And in the final days of the 2017-18 session, Congress passed the OPEN Government Data Act, a bill that requires government agencies to share their data in machine-readable formats.

We also gained important ground here in California. EFF helped pass A.B. 2192, a law that makes all peer-reviewed, scientific research funded by the state of California available to the public no later than a year after publication. As we have at the federal level, we urged legislators to consider a stronger bill along the lines of the Plan S model. Regardless, A.B. 2192 is a solid start, and we hope to see other states emulate it. The University of California continued the momentum by announcing that it may cancel its contracts with the notorious publishing giant Elsevier unless the company makes changes to show better support for open publishing.
When we reflected on the state of open access a year ago, we wondered whether the U.S. government was beginning to lose its status as a key player in the fight for open access. In some ways, that seems to have happened in 2018. While Congress dragged its feet on important legislative fixes, the most exciting changes came in Europe and at the state level. But we hope to see this year’s wins breathe new life into open access globally.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Investigative Scoops Worth Rereading: Year in Review 2018 (Tue, 25 Dec 2018)
In an era where political and corporate leaders are attacking the free press as “the enemy of the people,” it’s crucial that we recognize the truth: journalists every day are uncovering stories that protect our rights and hold those in power accountable. Meanwhile, as the media landscape shrinks, non-profits are also stepping in to carry the load. Here are some of the investigative bombshells that we read, re-read, and shared this year. (And a big thank you to our friends and supporters on Twitter who helped us remember all the great scoops from 2018.)

Give Me My Data

Engadget reporters undertook a massive task to find out who controls your online data:

A team of nine Engadget reporters in London, Paris, New York and San Francisco filed more than 150 subject access requests—in other words, requests for personal data—to more than 30 popular tech companies, ranging from social networks to dating apps to streaming services. We reached out before May 25th—when previous laws for data access existed in the EU—as well as after, to see how procedures might have changed.

The result was one of the most comprehensive examinations of the state of play of consumer privacy across the major platforms.

Who Controls Your Data? - Engadget

Georgia On My Fender

One of the nation’s most ubiquitous surveillance technologies—automated license plate readers (ALPRs)—proliferates in the Peach State. Following in the footsteps of EFF’s research in California, reporters from different Atlanta media outlets teamed up to generate the first comprehensive portrait of ALPR data collection across the state. They filed numerous public records requests with jurisdictions deploying the technology, including obtaining two days’ worth of data from the Georgia Department of Public Safety. They created maps of how and where this data was collected and generated a visualization showing how one car can be tracked in near real-time over the course of a single day.

Follow The Trail of a License Plate - Knight Lab
Eyes On The Road - Atlanta Journal-Constitution

Fake Friends on Facebook

EFF has long raised concerns with the practice of law enforcement officials creating fake profiles to infiltrate private groups on social media sites such as Facebook. This year saw incredible reporting on this epidemic, most notably from The Appeal, which revealed how the Memphis Police Department created a “Bob Smith” profile to spy on Black Lives Matter activists. This led EFF to successfully pressure Facebook to send MPD a cease and desist letter and update its online guidelines for law enforcement. Meanwhile, NBC News also probed the issue nationwide, while Gizmodo filed public records requests around the country to gauge how departments are writing policies for these types of covert investigations.

Meet ‘Bob Smith,’ The Fake Facebook Profile Memphis Police Allegedly Used To Spy On Black Activists - The Appeal
Very Few Police Departments Have Rules for Undercover Cops on Facebook
The Wildly Unregulated Practice of Undercover Cops Friending People on Facebook - The Root

Propaganda Bonanza

Visit EFF’s offices and you’ll be amused to find printouts of vintage National Security Agency posters taped to the walls (particularly in the restroom). These propaganda artifacts were obtained and released by GovernmentAttic.org, a transparency organization that does incredible work.
The NSA Just Released 136 Historical Propaganda Posters - Motherboard

Facing Down Amazon

The American Civil Liberties Union kicked off multiple national and local news cycles—and energized an internal worker resistance—when it revealed through public records that Amazon was providing real-time face recognition (“Rekognition”) technology to local police departments. Documents obtained by the ACLU detailed how the program "can identify, track, and analyze people in real time and recognize up to 100 people in a single image" and "quickly scan information it collects against databases featuring tens of millions of faces." EFF called for Amazon to stop providing its technology to law enforcement to power surveillance. We also called on technology companies to adopt a "Know Your Customer" program, and for employees at those companies to advocate for implementing these programs to protect against future corporate involvement in these sorts of government efforts.

Amazon Teams Up With Law Enforcement to Deploy Dangerous New Face Recognition Technology - ACLU of Northern California
I’m an Amazon Employee. My Company Shouldn’t Sell Facial Recognition Tech to Police. - Medium

Power Trip

Scooping the Messenger

In August, Reuters published a bombshell in the battle over encryption: the U.S. Department of Justice was trying to force Facebook to break its Messenger encryption in a sealed court case. As Dan Levine and Joseph Menn reported:

The potential impact of the judge’s coming ruling is unclear. If the government prevails in the Facebook Messenger case, it could make similar arguments to force companies to rewrite other popular encrypted services such as Signal and Facebook’s billion-user WhatsApp, which include both voice and text functions, some legal experts said.

In November, EFF, the ACLU, and Riana Pfefferkorn of Stanford Law School’s Center for Internet and Society teamed up to file a petition asking the court to release all court orders and related materials in the case.

U.S. Government Seeks Facebook Help to Wiretap Messenger - Reuters

FBI Exaggerations

Speaking of encryption: for years, FBI and Justice Department officials have pursued backdoors into the crucial technology that keeps our communications safe. They complain about the problem of “Going Dark,” the idea that encryption prevents them from investigating communications and devices. But it turns out that the FBI egregiously exaggerated in Congressional testimony. As The Washington Post’s Devlin Barrett reported:

The FBI has repeatedly provided grossly inflated statistics to Congress and the public about the extent of problems posed by encrypted cellphones, claiming investigators were locked out of nearly 7,800 devices connected to crimes last year when the correct number was much smaller, probably between 1,000 and 2,000, The Washington Post has learned… The FBI first became aware of the miscount about a month ago and still does not have an accurate count of how many encrypted phones they received as part of criminal investigations last year, officials said. Last week, one internal estimate put the correct number of locked phones at 1,200, though officials expect that number to change as they launch a new audit, which could take weeks to complete, according to people familiar with the work.

In May, EFF filed a Freedom of Information Act request with the FBI to get to the bottom of this misinformation, and we are currently working our way through the tedious FOIA process.
FBI Repeatedly Overstated Encryption Threat Figures to Congress, Public - The Washington Post

Silicon Valley Scandals

It’s been a year of huge scoops revealing how Facebook, Google, and other Silicon Valley companies have routinely failed to protect the privacy of their users (or actively violated it). In fact, we’ve had so many tabs open that we can’t adequately capture them all. However, here are a few that really stuck out:

Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis - New York Times
Facebook Is Giving Advertisers Access to Your Shadow Contact Information - Gizmodo (and just about everything else from Kashmir Hill this year)
As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants - New York Times
Google Tracks Your Movements, Like It or Not - Associated Press
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret - New York Times

Other Excellent Pieces from 2018

Still looking for more to read? Here are a few more pieces that we highly recommend:

Service Meant to Monitor Inmates’ Calls Could Track You, Too - New York Times
Cops Around the Country Can Now Unlock iPhones, Records Show - Motherboard
ICE is about to start tracking license plates across the US - The Verge
The Cambridge Analytica Files - The Guardian

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

EFF's New Logo: 2018 Year in Review (Tue, 25 Dec 2018)
2018 marked the launch of EFF’s new logo, nicknamed “Insider”—the first new logo we’ve had since EFF’s founding in 1990! The logo is the result of approximately a year of development.

In 2017, EFF received a generous offer of a pro bono logo and identity design from Pentagram, a major design firm fronted by an amazing designer named Michael Bierut. Bierut was inspired to make this offer by our work defending a blogger against bogus take-down orders. Having had only one logo for over 20 years, and knowing the excellent work that has made Pentagram a giant of the design world, we were more than thrilled.

This logo is actually a kind of logo system—it can be reconfigured in various ways. It can stack horizontally, vertically, or into a neat “L” shape, and it can expand to include artwork and other text. For our small design team, it has been a pleasure to work with, and a huge improvement over the old logo. Check out this video to see the logo in action:

[Embedded video: https://www.youtube.com/embed/UNctTruLFIc — served from youtube.com]

We’d like to acknowledge the many voices among our supporters who responded to the new look. We heard feedback ranging from “great work!” to “I could have done that in my spare time.” We hope any critics will consider an interesting insight from Bierut about how logo identity really works:

They think they’re judging a diving competition, but actually all these organizations are in swimming competitions. It’s not what kind of splash you make when you hit the water. It’s how long you keep your head above that water.

EFF has a bold, straightforward vision for the future. We love the way this new logo reflects and serves that vision, and we are looking forward to using this identity for years to come. Join EFF and get a shirt with the new logo today!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Expression on the Corporate Web: 2018 Year in Review (Mon, 24 Dec 2018)
If 2017 was the year that corporate platforms were finally forced to recognize their outsized role on the Web, then 2018 should be remembered as the year that many such platforms began to reckon with it. From Facebook finally instituting an appeals process to Tumblr banning adult content, here are some of the ways that corporate platforms tried to take responsibility...with mixed results for freedom of expression.

Facebook improves on accountability

For years, advocates called on Facebook to implement a mechanism to appeal content takedowns...and in May, the company finally took its first steps toward doing so, announcing that appeals would first be rolled out for photos, videos, and posts in two categories. Also in May, the social media giant published its internal content guidelines, offering greater detail on how content decisions are made, and released its first Community Standards enforcement report. Later in the year—following a letter led by several groups, including EFF, and signed by more than 70 civil society organizations from around the world and across the US—the company further expanded its appeals process to include other categories of content.

Sex takes a hit at Tumblr, Facebook, and YouTube

The passage of SESTA/FOSTA in early 2018 ushered in a new threat to free expression online and pushed companies to take sweeping action against certain speech. Although not every policy change can be directly attributed to the law, it’s hard not to see its influence. From Tumblr’s early-December ban on adult content to Facebook’s blunt new policy on sexual solicitation, it’s clear that we’re seeing a chilling effect. Additionally, measures presumably intended to minimize takedowns—such as demonetization on YouTube—appear to have an outsized impact on users whose work deals with sex, such as sexual health educators and LGBTQ+ YouTubers. All in all, the window for sexual expression on the corporate Web narrowed in 2018.

Twitter offers more transparency in time for the holidays

In mid-December, Twitter proffered a holiday gift to users in the form of an expanded transparency report that includes a section on the company’s enforcement of the Twitter Rules. The report shows the number of accounts against which various enforcement actions were taken across six categories of speech—a solid step in the right direction for transparency. Unfortunately, the company’s transparency report also showed an 80% increase in global legal demands for content takedowns, impacting more than twice as many accounts as in the previous reporting period.

What’s yet to come…

2018 marked the inaugural year of our Who Has Your Back? Censorship Edition, in which we rated sixteen platforms across five categories. We look forward to seeing companies take more steps toward accountability and transparency in the new year!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Grassroots Networks Mobilize From Coast-to-Coast to Promote Digital Rights: 2018 Year in Review (Mon, 24 Dec 2018)
The digital rights movement showed its strength this year by projecting influence in jurisdictions across the United States. Community organizations on both coasts, as well as in the Midwest and the South, took action on issues ranging from net neutrality and civilian oversight of local police surveillance to the right to repair and digital security. Among the 84 groups in the Electronic Frontier Alliance (EFA), grassroots community organizations in many cities demonstrably improved public policy, while their allies elsewhere built awareness online as well as in their local communities.

Impacting Public Policy

In Atlanta, Electronic Frontiers-Georgia mobilized to challenge a proposed computer crime law that threatened independent security research performed in the public interest. By chilling legitimate “white hat” security research, the law would have placed millions of Internet users—from students to bankers—at risk by criminalizing important efforts to identify and patch tech vulnerabilities before they can be exploited by malicious hackers. Frequent data breaches have recently hit an expanding array of sectors, from credit rating agencies to dating sites arranging extramarital liaisons. As they consider potential responses, policymakers should weigh predictable consequences they might not intend, and commit to doing no harm. EF-GA ensured that state policymakers heard that message by lobbying the state legislature, attending hearings to demonstrate grassroots concern, organizing and livestreaming a panel discussion, and mobilizing opposition from hundreds of Georgia residents and dozens of computer security professionals, including professors at Georgia Tech. Ultimately, the measure was vetoed by the Governor. Local organizers expect another version to return next year, so they have already begun preparing their response.

Meanwhile, groups in other major cities mobilized to challenge local police surveillance. Chicago advocates defeated a state legislative proposal that would have subjected protesters to drone surveillance. Recognizing that spying on protests threatens values enshrined in both the First Amendment (expression) and Fourth Amendment (privacy), local organizations including Lucy Parsons Labs raised the alarm, leveraging their longstanding work investigating police surveillance through public records requests. Their hard work helped ensure that the bill failed. Organizers have already turned their attention to a local proposal to extend police face surveillance across Chicago into commercial establishments, which would not only undermine privacy but also violate Illinois’ well-established biometric privacy law.

Pursuing similar goals, allied grassroots groups in New York City continued to seek community control over the privacy parameters governing public Wi-Fi kiosks. They also advocated for transparency into NYPD surveillance, which recently expanded with the Department’s acquisition of a fleet of drones. Grassroots community organizations on both coasts secured legal requirements for civilian oversight of police surveillance, including (and beyond) the transparency goals at issue in New York City.
In Oakland and Berkeley, California, as well as Cambridge, Massachusetts, local groups in the EFA mobilized and successfully advocated for these requirements, joining the roughly dozen communities across the country that have adopted similar measures in recent years. EFA allies in other cities, from St. Louis to San Diego, continue to organize support for local civilian oversight.

Finally, EFA groups across California, from Access Humboldt in the state’s far north to techLEAD in San Diego, mobilized in their respective communities to support net neutrality. By lobbying their respective state Assembly members, they helped achieve the passage of S.B. 822, a groundbreaking state law that—if it survives a federal preemption challenge—could set a model for other states.

Organizing on College Campuses

Many EFA groups working to promote digital rights did so from college campuses. For instance, Yale Privacy Lab organized a project to map the location of surveillance cameras around New Haven, CT, while also hosting workshops on applied digital privacy and advocating for campus libraries to host Tor nodes. Across the country, README at UCLA hosted monthly gatherings, including “cryptosocial” events at CRASH Space, a hacker space and EFA ally in Los Angeles. Other student groups, like the Hacking Club at San Francisco State University and Hack UCF at the University of Central Florida, focused on competitive cybersecurity and hacking competitions while sharing digital security tips and practices with classmates and other campus organizations.

Providing Public Information Resources

EFF has compiled several case studies examining highlights from across the Alliance. Many groups in the EFA have organized local workshops to inform their communities, and a handful have posted public resources online from which anyone can learn. Digital security workshops for public audiences have been a focus for grassroots allies from LA Cryptoparty to the CyPurr Collective in Brooklyn. EFA groups in Portland, Seattle, Fresno, Austin, St. Louis, Chicago, Philadelphia, Raleigh-Durham, Baltimore, and Orlando are just some of those who have offered their expertise to their neighbors. Many took advantage of EFF's Security Education Companion, a resource for trainers interested in helping others.

In Atlanta, Electronic Frontiers-Georgia collected videos from its many events at DragonCon, an annual cosplay convention where the group has hosted a track dedicated to digital rights since 2012. This year’s highlights included a session on the “Legal Risks of Security Research,” featuring current and former EFF staff. Groups in the Southwest have also collected video archives of their events: the Phoenix Linux Users Group posted video from a series of workshops exploring an array of advanced digital security topics, and EFF-Austin has likewise collected its events in an online video archive. The Austin group also organized the world’s first “Cyborg Pride Parade” this summer and has begun planning its state lobbying strategy for 2019. Finally, Lucy Parsons Labs in Chicago compiled an online slideshow explaining best practices for leveraging public records requests to fight government secrecy and advance transparency.
Lucy Parsons Labs is led by a diverse group of young technologists and has developed a sophisticated online tool to advance police accountability, while also conducting prolific investigations of local police. One investigation uncovered disturbing connections between the Chicago Police Department’s civil asset forfeiture practices and its funding stream for surveillance equipment used to spy on cellular networks, largely without civilian oversight.

Onward to 2019

The Electronic Frontier Alliance brings together community networks in dozens of cities and towns across the United States. Each of them hosts public events, shares information, and builds local community among Internet users concerned about digital rights. EFF invites anyone seeking mentorship, guidance, or support to find a local grassroots network in their area, or to recruit a handful of nearby allies to form a new group and join the Alliance in the new year.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Patent Progress and Its Discontents: 2018 in Review (Sun, 23 Dec 2018)
In 2018, technologists and users continued to be plagued by abstract, ridiculous software patents. The good news is that there are more ways than ever to fight back against those patents—some of them quite effective. Unfortunately, patent trolls and abusive patent owners are working overtime to knock down those recent improvements and drag the patent system back to the proverbial “bad old days.”

Before the Alice v. CLS Bank decision—four years old as of last June—it could cost millions of dollars just to convince a court to invalidate a single abstract patent. That was true even when those patents clearly described aspects of everyday life, like running a contest, displaying a menu with pictures, or teaching a foreign language. Lobbyists for patent trolls and patent lawyers keep seeking to roll back Alice, promoting terrible legislation like the STRONGER Patents Act. Such proposals would weaken the systems we use to challenge bad patents, hurt U.S. entrepreneurs, and send innovation overseas. We expect bills like these to come back in 2019, and we’ll be ready to fight on behalf of startups and innovators.

Patent owners are pushing to neutralize Alice through the courts as well. The most recent attempt is a case called Berkheimer v. HP, in which a panel of Federal Circuit judges ruled that deciding patent eligibility under Alice can require a full trial. This makes Alice much harder and more expensive to apply and, in our view, is inconsistent with the Supreme Court’s ruling. Last month, we asked the Supreme Court to take up the case and consider overturning Berkheimer.

A second crucial reform that needs defending is the inter partes review system, often abbreviated as IPR, that Congress created in 2012. IPRs allow those accused of patent infringement, or outside groups like EFF, to have an administrative law judge at the Patent Office take a second look at a patent grant. It’s a way of figuring out which patents should have been allowed that’s far less expensive and more efficient than drawn-out court litigation. IPR has been so effective at knocking out bad patents that, perhaps unsurprisingly, the process is under attack. In the most important patent case this year, Oil States Energy Services, LLC v. Greene’s Energy Group, LLC, the U.S. Supreme Court took up arguments that the IPR process violates the U.S. Constitution. No surprise: dozens of patent trolls and heavy patent licensors stepped forward, urging the Supreme Court to throw out IPRs. Together with Public Knowledge, Engine Advocacy, and the R Street Institute, EFF filed a brief [PDF] explaining how IPR is a legitimate exercise of Congressional power. In April, the high court voted 7-2 to uphold IPR, a big relief for those of us seeking a balanced patent system.

2018 also saw progress in stopping venue abuse, in which patent trolls wrangled defendants into far-off, troll-friendly venues like the Eastern District of Texas. Once there, companies accused of infringement couldn’t transfer out, or even convince judges to consider motions under the rules set forth by Alice. At one point, the Eastern District of Texas was home to almost half of all patent lawsuits nationwide. The Supreme Court tightened up this venue loophole last year in a case called TC Heartland v. Kraft Foods. A recent Lex Machina analysis shows the effect: in May of 2017, two judges in the Eastern District of Texas were assigned 35 percent of the nation's patent lawsuits; in the same period of 2018, those two judges received only 13 percent.
That’s still an outsized share for a remote district without much of a technology industry, but it’s a big improvement.

Venue reform, IPR, and the Alice litigation rules are all changes that have made the patent system fairer for everyday people. It was the IPR process that allowed EFF to challenge the so-called “podcasting patent” owned by Personal Audio LLC. This year, we killed off that outrageous patent for good, and its owner can’t threaten podcasters anymore. The Alice decision means we can all stand up against other abusive patent threats, like the one EFF fought off this year, in which a publishing company claimed it owned a patent on teaching language and tried to force our client, a language teacher, to stop providing online lessons.

Some patent-maximalist lobbyists are already talking about the “overreach” of these reforms, but the fact is, they don’t go far enough. Throughout 2018, more than 80 percent of patent lawsuits in the tech sector were filed by patent trolls. Even in the post-Alice era, we’re seeing thousands of lawsuits filed by shell companies that produce nothing but headaches for real inventors. We need to keep moving toward a patent system that considers users, entrepreneurs, and citizens, not just patent owners. That’s what we’ll be fighting for in 2019.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

The Year Without the Open Internet Order: 2018 Year in Review (Sun, 23 Dec 2018)
In the waning hours of 2017, the Federal Communications Commission voted to repeal the 2015 Open Internet Order, ending net neutrality protections for the millions of Americans who support them. The fallout of that decision continued throughout 2018, with attempts to reverse the FCC in Congress, new state laws and governors’ executive orders written to secure state-level protections, court cases, and ever-increasing evidence that a world without the Open Internet Order is simply a worse one. The story of net neutrality has always pitted the greed of the largest Internet Service Providers (ISPs) against the desires of the majority of people, the actual way the Internet is structured, and the ideal of a free and open Internet. Every win this year represented actual people speaking out over big ISP money.

In the States

The so-called “Restoring Internet Freedom Order” didn’t take effect until June 11, 2018, but states began preparing for the FCC’s abdication of oversight over the Internet early on. Before the ink was dry, state leaders committed to standing up for net neutrality. A net neutrality bill, S.B. 822, was introduced on the very first day of the California legislative session. Governor Bullock of Montana issued his executive order only weeks later, in January. By March, Washington’s state legislature had overwhelmingly passed net neutrality legislation and sent it to the Governor. In April, Oregon enacted H.B. 4155, which requires any ISP receiving money from the state to adhere to net neutrality principles.

California’s S.B. 822 turned out to be a particularly contentious battle, as it was the strongest bill moving this year. After passing the California Senate, the bill was gutted in a state Assembly hearing, which removed the strong protections that had made it a net neutrality gold standard. But Californians spoke out en masse, and a potent coalition of firefighters, college students, startups, local ISPs, public interest advocates, and low-income advocates made clear how much support a free and open Internet has. As a result, the final bill retained the ban on blocking, throttling, and paid prioritization—paid prioritization has been a particular target of misleading ISP arguments. The ban on certain kinds of zero rating—the kinds that steer consumers to services the ISP wants them to use rather than giving them choices—also remained. So did the ban on access fees, which means ISPs cannot get around these protections by charging fees at the places where data enters their networks. The result was a bill that passed with bipartisan support and was signed by Governor Jerry Brown in September.

By the end of 2018, six states—Hawaii, New Jersey, New York, Montana, Rhode Island, and Vermont—had executive orders preventing the state from contracting with ISPs that don’t adhere to net neutrality principles. Four—the aforementioned Oregon, California, and Washington, as well as Vermont—had net neutrality laws. Every time the opportunity arose, a bipartisan group of lawmakers voted in favor of net neutrality.

In the Courts

Almost immediately after California’s law was signed, the state was sued by the federal government, along with the major ISPs that supported the repeal. The FCC, in an attempt to keep any state from daring to step into the vacuum it created when it repealed the Open Internet Order, had included language in the “Restoring Internet Freedom Order” that “preempted” states from writing net neutrality laws.
EFF and other legal experts contend that when the FCC gave up its authority to regulate net neutrality, it also gave up the authority to tell the states what to do in that field. Because the fundamental question of whether it is even legal for the FCC to preempt states in this way remains unresolved, California agreed to put enforcement of S.B. 822 on hold until the D.C. Circuit case is resolved. The preemption effort, after all, came in response to a last-minute request from Verizon and other wireless industry players, who even asked the FCC to block state privacy laws. Nothing in federal law gives the FCC statutory authority to block state action.

The central case, Mozilla Corporation v. FCC, also began this year. Mozilla, Vimeo, twenty-two states, and the District of Columbia have sued the FCC, arguing that the “Restoring Internet Freedom Order” is “arbitrary and capricious,” and therefore invalid, because it was based on the FCC being simply wrong about how the Internet works and because the FCC failed to adequately consider all the things it is legally required to consider. EFF filed an amicus brief in this case on behalf of 130 technologists who helped develop core Internet technologies. In the brief, we explained the ways the “Restoring Internet Freedom Order” mischaracterized how broadband Internet access service works; for example, the FCC claimed that because you can download movies using your broadband connection, your ISP is actually the entity responsible for providing those movies—not YouTube or Netflix or whatever video provider actually hosts them. We also explained the importance of net neutrality to speech and innovation. We’ll be watching to see if the FCC repeats these false conclusions about the Internet when the case continues in 2019.
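The technologists’ point is visible at the protocol level: when you stream a movie, the connection that actually delivers the content terminates at the video provider’s servers, while your ISP’s routers simply forward the packets in between. Here is a minimal sketch of that distinction in Python (our own illustration, not language from the brief, with example.com standing in as a hypothetical video host):

    # A broadband user fetching content. The application-layer reply comes
    # from the edge provider's server; the ISP only forwards the packets.
    import socket

    host = "example.com"  # hypothetical stand-in for a host like YouTube or Netflix
    ip = socket.gethostbyname(host)

    with socket.create_connection((ip, 80), timeout=10) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        status_line = s.recv(4096).split(b"\r\n")[0]

    # Prints something like: example.com answered: HTTP/1.1 200 OK
    print(host, "answered:", status_line.decode("latin-1"))

Nothing the ISP operates ever answers the HTTP request; treating the ISP as the provider of the movie confuses the conduit with the content.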
In Congress

Throughout 2018 there remained the possibility that Congress could overturn the FCC’s decision. The Congressional Review Act (CRA) allows a simple majority of Congress to overturn an agency order, provided the vote happens soon after the order is published. In May, the Senate reflected the will of the 86% of Americans who support net neutrality by voting 52-47 for the CRA resolution. It remained for the House of Representatives to follow suit, but although 180 Representatives signed the discharge petition calling for a vote before the end of the year, supporters never reached the 218 signatures required to force one. That said, next year’s Congress could still overturn the FCC with net neutrality legislation in a different form. We will also have to be vigilant as major ISPs push their own fake net neutrality bills through allies in Congress. Undoubtedly they have realized that, unless they can convince Congress to delete non-discrimination from the Communications Act, their days of pushing an anti-net-neutrality agenda in complete defiance of the American public will eventually come to an end.

Other Consequences of the Repeal of the 2015 Open Internet Order

2018 kept illuminating ways the 2015 Open Internet Order protected users beyond the net neutrality rules themselves, in areas such as public safety and competition. In August, it was revealed that Verizon had throttled the Santa Clara fire department’s “unlimited” service during a wildfire. In the midst of fighting the Mendocino Complex Fire, the fire department found its Verizon Internet service slowing to dial-up speeds, and when it called Verizon to ask what was going on, Verizon told it to switch to a plan that cost twice as much. The 2015 Open Internet Order would have let the FCC investigate Verizon’s actions and, if necessary, use its authority to prevent a repeat. Instead, we are left with an overburdened FTC with limited power, which likely cannot penalize Verizon even for conduct for which the company itself admits full fault. The service outages in Florida following the hurricanes (some of which may persist today) likewise reminded the world that an FCC with no authority over broadband companies is reduced to issuing press releases asking them to treat people better. Both situations show that a post-Open Internet Order world has trouble resolving public safety problems. That makes sense: public safety is the role of government, while maximizing profit is the concern of companies.

Fortunately, 2018 didn’t see an avalanche of countries joining the United States in abandoning net neutrality. South Korea contemplated following suit at the behest of its ISPs. EFF testified before the South Korean parliament to explain the level of public opposition American regulators were facing, as well as the dire competitive situation the United States broadband market faces due to a lack of fiber-to-the-home competition (a problem essentially unheard of in South Korea). In light of the circumstances surrounding what is arguably the biggest mistake in Internet policy history, from the national protests to a fire department being unjustly throttled, the South Korean parliament decided to stick with net neutrality.

2018 proved just how important the 2015 Open Internet Order was to protecting net neutrality. It showed states and the Senate stepping up to try to rectify the FCC’s decision. And it started the court battles that will decide the future of the free and open Internet.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.

Can the Government Block Me on Twitter?: 2018 Year in Review (Sat, 22 Dec 2018)
In 2018, federal courts across the country were asked whether members of the public have a First Amendment right to speak on government social media pages. Three of these cases have been bumped up to appellate courts for review, prompting numerous people to write in to EFF and their local papers, and to file public records requests, asking “Can X official block me on Twitter?”

Social media has become so pervasive that government institutions all over the world now use Facebook, Twitter, or other social media pages to announce government services, hold community meetings, and answer questions from their citizens. Every member of the U.S. Senate, and most members of the U.S. House of Representatives, have at least one social media page that they use for official business. But we keep hearing reports that people are being blocked by their elected officials and by government agencies for posting comments the government disagrees with. California Governor Jerry Brown blocked over 1,500 people from his Twitter and Facebook accounts, until a records request from the First Amendment Coalition convinced him to change the practice. The investigative reporting outlet ProPublica has created a guide to help members of the public use transparency laws to see whom local government officials are blocking, and one transparency hobbyist now runs a blog detailing her records requests into which accounts government officials and agencies are blocking across the United States.

So how do traditional speech protections translate to social media pages operated by government officials? The answer depends on the specific facts at issue, but hopefully in 2019 appellate courts will continue the trend of protecting the public’s right to speak to government officials online, in channels created by the government.

The case of Knight First Amendment Institute v. Donald J. Trump is one of the highest-profile of these new social media cases. The Knight First Amendment Institute and a diverse group of journalists, activists, and other individuals are suing President Trump and members of his communications team for blocking the plaintiffs from the President’s Twitter account, @realDonaldTrump. On May 23, 2018, the Southern District of New York declared that the “interactive spaces” of the President’s Twitter account, i.e., the comments below each of Trump’s tweets, are a forum for public speech, and that blocking individuals because the President doesn’t like what they are saying is textbook viewpoint discrimination, a type of unlawful censorship that the First Amendment prohibits. Soon after, the government appealed the decision to the Second Circuit. EFF filed amicus briefs in both the Second Circuit and the New York district court describing the prevalence of government social media accounts and the public’s right to access speech made by government representatives. In 2019, we’ll be watching to see if the Second Circuit follows the lower court’s strong First Amendment decision.

President Trump isn’t the only government executive to block constituents on social media. Kentucky Governor Matt Bevin is notorious for blocking hundreds of constituents on Facebook and Twitter. Two Kentucky residents brought suit, requesting a preliminary injunction (an emergency measure used to stop the government from doing something unlawful before a case is fully argued before a judge).
But on March 30, 2018, a district judge in the Eastern District of Kentucky denied the injunction, finding that Governor Bevin was not engaging in speech discrimination but “merely culling his Facebook and Twitter accounts to present a public image that he desires.” The case is now proceeding on a regular schedule, and although the judge ruled in favor of the Kentucky governor at the injunction stage, he has recently ruled in favor of the plaintiffs’ efforts to collect evidence to prove speech discrimination.

Cases about government attacks on public speech on social media were not restricted to executive-level politicians like the President or state governors. In Texas, EFF is representing People for the Ethical Treatment of Animals (PETA) in a case against Texas A&M University for using tools on its Facebook page to target and block PETA from speaking on that page, shutting down PETA’s advocacy campaign to end a controversial dog experimentation lab at A&M. Texas A&M is the second-largest public school in the country and receives numerous federal research grants. Public universities have long been recognized as government actors under the law, which means that when Texas A&M removes speech from a Facebook page it created and operates as a forum for members of the general public, it is not just blocking people; it is engaging in government censorship. The case raises interesting questions about how machine learning and text recognition tools can be used to censor speech, and we look forward to the case proceeding in the new year.
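To see how blunt such tools can be, consider a simple keyword filter of the kind a page administrator can configure. The sketch below is our own illustration, not Facebook’s code or the specific tool at issue in the case, and the blocklist terms and comments are hypothetical. It shows how a blocklist suppresses an entire viewpoint, sweeping up even speech that merely mentions the disfavored words:

    # Hypothetical sketch of keyword-based comment moderation; not
    # Facebook's implementation or the exact tool at issue in the case.
    import re

    # A blocklist a page administrator might configure to suppress a campaign.
    BLOCKED_TERMS = ["peta", "animal cruelty", "dog lab"]

    def is_hidden(comment: str) -> bool:
        """Hide any comment containing a blocked term, regardless of context."""
        return any(re.search(r"\b" + re.escape(term) + r"\b", comment.lower())
                   for term in BLOCKED_TERMS)

    for comment in [
        "Gig 'em, Aggies!",
        "PETA supporters: ask A&M to close the dog lab.",
        "Proud that our school works to prevent animal cruelty.",
    ]:
        status = "HIDDEN" if is_hidden(comment) else "VISIBLE"
        print(f"{status:8}{comment}")

A filter like this cannot tell advocacy from abuse: the third comment is hidden simply because it contains a blocked phrase, the kind of indiscriminate silencing at the heart of the case.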
As the Texas A&M case shows, censorship often happens at the local level, and we’re seeing numerous cases of local government bodies and officials restricting constituents from speaking on local social media pages. The Fourth Circuit Court of Appeals is now hearing a case in which the chairwoman of a Virginia county’s board of supervisors blocked a constituent, Brian Davison, from her official Facebook page after he criticized the local school board at a town hall meeting. The case raises important questions about when government officials operate accounts as private individuals versus when they operate official public accounts, where any removal of speech or blocking of a person could be unlawful discrimination against that person’s viewpoint.

The Fifth Circuit has also just heard oral argument in a case in which a local police department in Texas deleted a woman’s comment and then blocked her from speaking on its Facebook page because she called police officers “terrorist pigs.” The department’s Facebook page carried a disclaimer that the page was not a public forum and that the department would delete any comment it found “inappropriate,” raising the question: can the government operate a space as a public forum but then simply post a notice saying it isn’t? A ruling in favor of the government could create an uneven playing field in which online public forums receive less protection than physical forums, which is why EFF filed an amicus brief in the case, and why we’ll be following it in 2019.

These censorship cases are popping up across the country and at all levels of government. So far, the early trend in 2018 has been judges extending the speech protections people enjoy in physical spaces to online spaces. We hope that 2019 will settle the questions raised in the cases above and protect the public’s right to speak with and about government officials on social media.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.