Deeplinks

"FREE from Chains!": Eskinder Nega is Released from Jail (Sa, 17 Feb 2018)
Eskinder Nega, one of Ethiopia's most prominent online writers, winner of the Golden Pen of Freedom in 2014, the International Press Institute's World Press Freedom Hero for 2017, and PEN International's 2012 Freedom to Write Award, has finally been set free.

[Image: Eskinder is greeted by well-wishers on his release. Picture by Befekadu Hailu]

Eskinder had been detained in Ethiopian jails since September 2011. He was accused and convicted of violating the country's Anti-Terrorism Proclamation, primarily by virtue of his warnings in online articles that if Ethiopia's government continued on its authoritarian path, it might face an Arab Spring-like revolt. Ethiopia's leaders refused to listen to Eskinder's message. Instead, they decided the solution was to silence the messenger.

Within the last few months, that refusal to engage with the challenges of democracy has led to the inevitable result. For two years, protests against the government have grown in frequency and size. Prime Minister Hailemariam Desalegn sought to reduce tensions by introducing reforms and releasing political prisoners like Eskinder. Despite thousands of prisoner releases, and the closure of one of the country's more notorious detention facilities, the protests continue. A day after Eskinder's release, Desalegn was forced to resign from his position. A day after that, the government declared a new state of emergency.

Even as they come face-to-face with the consequences of suppressing critics like Eskinder, the Ethiopian authorities have pushed back against the truth. Eskinder's release was delayed for days after prison officials repeatedly demanded that he sign a confession falsely claiming he was a member of Ginbot 7, an opposition party banned as a terrorist organization within Ethiopia. Eventually, following widespread international and domestic pressure, Eskinder was released without concession.

Eskinder, who spent nearly seven years in jail, joins a world whose politics and society have been transformed since his arrest. His predictions about the troubles Ethiopia would face if it silenced free expression may have come true, but his views were not perfect. He was, and will be again, an online writer, not a prophet. The promise of the Arab Spring that he identified has descended into its own authoritarian crackdowns. The technological tools he used to bypass Ethiopia's censorship and speak to a wider public are now just as often used by dictators to silence such voices. But that means we need more speakers like Eskinder, not fewer. And those speakers should be listened to carefully, not forced into imprisonment and exile.

New National Academy of Sciences Report on Encryption Asks the Wrong Questions (Fri, 16 Feb 2018)
The National Academy of Sciences (NAS) released a much-anticipated report yesterday that attempts to influence the encryption debate by proposing a "framework for decisionmakers." At best, the report is unhelpful. At worst, its framing makes the task of defending encryption harder. The report conflates the question of whether the government should mandate "exceptional access" to the contents of encrypted communications with the question of how the government could accomplish such a mandate. We wish the report gave as much weight to the benefits of encryption, and to the risks that exceptional access poses to everyone's civil liberties, as it does to the needs—real and professed—of law enforcement and the intelligence community.

From its outset two years ago, the NAS encryption study was not intended to reach any conclusions about the wisdom of exceptional access, but instead to "provide an authoritative analysis of options and trade-offs." This would seem to be a fitting task for the National Academy of Sciences, a non-profit, non-governmental organization chartered by Congress to provide "objective, science-based advice on critical issues affecting the nation." The committee that authored the report included well-respected cryptographers and technologists, lawyers, members of law enforcement, and representatives from the tech industry. It also held two public meetings and solicited input from a range of outside stakeholders, EFF among them.

EFF's Seth Schoen and Andrew Crocker presented at the committee's meeting at Stanford University in January 2017. We described what we saw as "three truths" about the encryption debate:

First, there is no substitute for "strong" encryption, i.e. encryption without any intentionally included method for any party (other than the intended recipient/device holder) to access plaintext, which would allow decryption on demand by the government.

Second, an exceptional access mandate will help law enforcement and intelligence investigations in certain cases.

Third, "strong" encryption cannot be successfully outlawed, given its proliferation, the fact that a large proportion of encryption systems are open source, and the fact that U.S. law has limited reach on the global stage.

We wish the report had made a concerted attempt to grapple with that first truth, instead of confining its analysis to the second and third.

We recognize that the NAS report was undertaken in good faith, but the trouble with the final product is twofold. First, its framing is hopelessly slanted. Not only does the report studiously avoid taking a position on whether compromising encryption is a good idea, its "options and tradeoffs" are all centered around the stated government need of "ensuring access to plaintext." To that end, the report examines four possible options: (1) taking no legislative action, (2) providing additional support for government hacking and other workarounds, (3) a legislative mandate that providers give the government access to plaintext, and (4) mandating a particular technical method for providing access to plaintext. But all of these options, including "no legislative action," treat government agencies' stated need for access to plaintext as the only goal worth studying, with everything else as a tradeoff.
For example, from EFF's perspective, the adoption of encryption by default is one of the most positive developments in technology policy in recent years, because it permits regular people to keep their data confidential from eavesdroppers, thieves, abusers, criminals, and repressive regimes around the world. By contrast, because of its framing, the report discusses these developments purely in terms of criminals "who may unknowingly benefit from default settings" and thereby evade law enforcement. By approaching the question only as one of how to deliver plaintext to law enforcement, rather than approaching the debate more holistically, the NAS does us a disservice. The question of whether encryption should or shouldn't be compromised for "exceptional access" should not be treated as one of several in the encryption debate: it is the question.

Second, although it attempts to recognize the downsides of exceptional access, the report's discussion of the possible risks to civil liberties is notably brief. In the span of only three pages (out of nearly a hundred), it acknowledges the importance of encryption to supporting values such as privacy and free expression. Unlike the interests of law enforcement, which are represented in every section, the risks that exceptional access poses to civil liberties are treated as just one more tradeoff, addressed in a single stand-alone discussion. To emphasize the report's focus, the civil liberties section ends with the observation that criminals and terrorists use encryption to "take actions that negatively impact the security of law-abiding individuals." This ignores the possibility that encryption can both enhance civil liberties and preserve individual safety. That's why, for example, experts on domestic violence argue that smartphone encryption protects victims from their abusers, and that law enforcement should not seek to compromise smartphone encryption in order to prosecute these crimes.

Furthermore, the simple act of mandating that providers break encryption in their products is itself a significant civil liberties concern, totally apart from the privacy and security implications that would result. Specifically, EFF raised concerns that encryption does not just support free expression, it is free expression. Notably absent from the report is any examination of the rights of developers of cryptographic software, particularly given the role played by free and open source software in the encryption ecosystem. It ignores the legal landscape in the United States—one that strongly protects the principle that code (including encryption) is speech, protected by the First Amendment.

The report also underplays the international implications of any U.S. government mandate for U.S.-based providers. Currently, companies resist demands for plaintext from regimes whose respect for the rule of law is dubious, but that will almost certainly change if they accede to similar demands from U.S. agencies. In a massive understatement, the report notes that this could have "global implications for human rights." We wish that the NAS had given this crucial issue far more emphasis and delved more deeply into the question of how, for instance, Apple could plausibly say no to a Chinese demand to wiretap a Chinese user's FaceTime conversations while providing that same capability to the FBI.

In any tech policy debate, expert advice is valuable not only for informing how to implement a particular policy, but also for deciding whether to undertake that policy in the first place.
The NAS might believe that as the provider of “objective, science-based advice,” it isn’t equipped to weigh in on this sort of question. We disagree.
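As an aside for readers unfamiliar with the terminology above: "strong" encryption, as described in the first of the three truths, simply means there is no built-in second path to the plaintext. The sketch below illustrates the idea using the third-party Python cryptography library; it is our own illustrative example, not drawn from the report or from EFF's testimony.

```python
# Minimal sketch of "strong" encryption: only the key holder can decrypt.
# Illustrative only; uses the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # held only by the intended recipient / device holder
ciphertext = Fernet(key).encrypt(b"meet at the usual place")

# The intended recipient, holding the key, recovers the plaintext.
print(Fernet(key).decrypt(ciphertext))

# Anyone else, including the provider or the government, has no built-in way in.
# "Exceptional access" would mean deliberately adding a second path to the plaintext.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: no access to plaintext")
```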

EFF and MuckRock Are Filing a Thousand Public Records Requests About ALPR Data Sharing (Fri, 16 Feb 2018)
EFF and MuckRock have launched a new public records campaign to reveal how much data law enforcement agencies have collected using automated license plate readers (ALPRs) and are sharing with each other. Over the next few weeks, the two organizations are filing approximately 1,000 public records requests with agencies that have deals with Vigilant Solutions, one of the nation's largest vendors of ALPR surveillance technology and software services. We're seeking documentation showing who's sharing ALPR data with whom. We are also requesting information on how many plates each agency scanned in 2016 and 2017 and how many of those plates were on predetermined "hot lists" of vehicles suspected of being connected to crimes. You can see the full list of agencies and track the progress of each request through the Street-Level Surveillance: ALPR Campaign page on MuckRock.

As Easy As Adding a Friend on Facebook

"Joining the largest law enforcement LPR sharing network is as easy as adding a friend on your favorite social media platform." That's a direct quote from Vigilant Solutions' promotional materials for its ALPR technology. Through its LEARN system, Vigilant Solutions has made it possible for government agencies—particularly sheriff's offices and police departments—to grant 24-7, unrestricted database access to hundreds of other agencies around the country.

ALPRs are camera systems that scan every license plate that passes in order to create enormous databases of where people drive and park their cars, both historically and in real time. Collected en masse by ALPRs mounted on roadways and vehicles, this data can reveal sensitive information about people, such as where they work, socialize, worship, shop, sleep at night, and seek medical care or other services. ALPR allows your license plate to be used as a tracking beacon and a way to map your social networks.

Here's the question: who is on your local police department's and sheriff's office's ALPR friend lists? Perhaps you live in a "sanctuary city." There's a very real chance local police are sharing ALPR data with Immigration & Customs Enforcement, Customs & Border Protection, or one of their subdivisions. Perhaps you live thousands of miles from the South. You'd be surprised to learn that scores of small towns in rural Georgia have round-the-clock access to your ALPR data. This includes towns like Meigs, which serves a population of 1,000 and did not even have full-time police officers until last fall.

In 2017, EFF and the Center for Human Rights and Privacy filed records requests with several dozen law enforcement agencies in California. We found that police departments were routinely sharing ALPR data with a wide variety of agencies in ways that may be difficult to justify. Police often shared with the DEA, FBI, and U.S. Marshals—but they also shared with federal agencies with a less clear interest, such as the U.S. Forest Service, the U.S. Department of Veterans Affairs, and the Air Force base at Fort Eustis. California agencies were also sharing with public universities on the East Coast, airports in Tennessee and Texas, and agencies that manage public assistance programs, like food stamps and indigent health care. In some cases, the records indicate the agencies were sharing with private actors. Meanwhile, most agencies are connected to an additional network called the National Vehicle Locator System (NVLS), which shares sensitive information with more than 500 government agencies, the identities of which have never been publicly disclosed.
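Before turning to the documents below, here is a deliberately simplified sketch of why this data is so sensitive. It is our own illustration, not Vigilant Solutions' actual software or data model: every plate read is retained and accumulates into a location history, whether or not the plate is on a hot list.

```python
# Illustrative sketch only (not any vendor's real system): how ALPR reads
# accumulate into a location history and get checked against a "hot list."
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str
    timestamp: datetime
    lat: float
    lon: float
    camera_id: str

hot_list = {"7ABC123"}   # plates flagged as connected to suspected crimes
history = {}             # every read is retained, hit or not

def ingest(read):
    # Each read extends a travel pattern for that plate over time.
    history.setdefault(read.plate, []).append(read)
    if read.plate in hot_list:
        print(f"HIT: {read.plate} at {read.lat},{read.lon} ({read.camera_id})")

ingest(PlateRead("7ABC123", datetime(2017, 6, 1, 8, 15), 34.05, -118.24, "cam-42"))
ingest(PlateRead("5XYZ987", datetime(2017, 6, 1, 8, 16), 34.05, -118.24, "cam-42"))

# Even a plate on no hot list leaves a retained, shareable location record.
print(len(history["5XYZ987"]), "read(s) retained for a plate on no hot list")
```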
Here are the data sharing documents we obtained in 2017, which we are seeking to update with our new series of requests:

Anaheim Police Department
Antioch Police Department
Bakersfield Police Department
Chino Police Department
Clovis Police Department
Elk Grove Police Department
Fontana Police Department
Fountain Valley Police Department
Glendora Police Department
Hawthorne Police Department
Irvine Police Department
Livermore Police Department
Lodi Police Department
Long Beach Police Department
Montebello Police Department
Orange Police Department
Palos Verdes Estates Police Department
Red Bluff Police Department
Sacramento Police Department
San Bernardino Police Department
San Diego Police Department
San Rafael Police Department
San Ramon Police Department
Simi Valley Police Department
Tulare Police Department

We hope to create a detailed snapshot of the ALPR mass surveillance network linking law enforcement and other government agencies nationwide. Currently, the only entity that has the definitive list is Vigilant Solutions, which, as a private company, is not subject to state or federal public records disclosure laws. So far, the company has not volunteered this information, despite reaping many millions in tax dollars. Until it does, we'll keep filing requests.

For more information on ALPRs, visit EFF's Street-Level Surveillance hub.

Federal Judge Says Embedding a Tweet Can Be Copyright Infringement (Fri, 16 Feb 2018)
Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just to embedded tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.

This case began when Justin Goldman accused online publications, including Breitbart, Time, Yahoo, Vox Media, and the Boston Globe, of copyright infringement for publishing articles that linked to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage (the photo was newsworthy because it showed Brady in the Hamptons while the Celtics were trying to recruit Kevin Durant). Goldman said those stories infringed his copyright.

Courts have long held that copyright liability rests with the entity that hosts the infringing content—not with someone who simply links to it. The linker generally has no idea that the content is infringing, and isn't ultimately in control of what content the server will provide when a browser contacts it. This "server test," which originated in a 2007 Ninth Circuit case called Perfect 10 v. Amazon, provides a clear and easy-to-administer rule. It has been a foundation of the modern Internet.

Judge Katherine Forrest rejected the Ninth Circuit's server test, based in part on a surprising approach to the process of embedding. The opinion describes the simple process of embedding a tweet or image—something done every day by millions of ordinary Internet users—as if it were a highly technical process done by "coders." That process, she concluded, put publishers, not servers, in the driver's seat:

[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff's exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result.

She also argued that Perfect 10 (which concerned Google's image search) could be distinguished because in that case the "user made an active choice to click on an image before it was displayed." But that was not a detail the Ninth Circuit relied on in reaching its decision. The Ninth Circuit's rule—which looks at who actually stores and serves the images for display—is far more sensible.

If this ruling is appealed (there would likely need to be further proceedings in the district court first), the Second Circuit will be asked to consider whether to follow Perfect 10 or Judge Forrest's new rule. We hope that today's ruling does not stand. If it did, it would threaten the ubiquitous practice of in-line linking that benefits millions of Internet users every day.

Related Cases: Perfect 10 v. Google
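For readers wondering what embedding actually involves, here is a minimal illustrative sketch (our own example, with placeholder URLs, not material from the case): the publisher's page contains only markup pointing at someone else's server, and the reader's browser fetches the image directly from that server, which is the fact the server test turns on.

```python
# Illustrative sketch of in-line linking: the publisher's server stores only this
# markup. The image bytes are requested by the reader's browser directly from the
# third-party host named in the src attribute; the publisher never serves them.
# (The host and image URL below are placeholders.)
article_html = """
<html>
  <body>
    <p>Our coverage of the photo:</p>
    <!-- In-line link / embed: content hosted and served by someone else -->
    <img src="https://third-party-host.example.com/photos/brady.jpg" alt="embedded photo">
  </body>
</html>
"""

with open("article.html", "w") as f:
    f.write(article_html)

# Under the Ninth Circuit's server test, liability turns on who operates the
# third-party host, not on who wrote this markup.
print("Publisher stores", len(article_html), "bytes of markup and zero image bytes.")
```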

The False Teeth of Chrome's Ad Filter (Fri, 16 Feb 2018)
Today Google launched a new version of its Chrome browser with what it calls an "ad filter"—meaning that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising, one that leaves massive user privacy issues unaddressed.

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating the ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full-page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination.

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition involves giants such as Google, Facebook, and Microsoft, ad trade organizations, adtech companies, and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy users and drive them to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it's the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have plenty of reasons to take action and defend themselves. But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition nor in the three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus.

Stopping the "Biggest Boycott in History"

Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. PageFair, an adtech company that monitors ad blocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers.

Industry actors now understand that ad blocking's momentum is rooted in the industry's own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?
Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet's biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, the filter seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, a move that was widely applauded. Those browsers, however, were not themselves active in advertising. The situation is different with Google, the dominant player in both the ad and browser markets. Google exploiting its browser dominance to shape the conditions of the advertising market raises real concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and the related user protection notifications.

There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and AdNauseam have been excluded from the Chrome Web Store for alleged violations of the Store's rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy

This move from Google will reduce the frequency with which users run into the most annoying ads. Nevertheless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition's members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its ad blocking problem, not about addressing users' concerns.

Chrome and Microsoft Edge are now the last major browsers not to offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome, their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to DoubleClick's cookies. The storage of browser history is part of Sync (which gives users access to their data across devices) and can be disabled. If users want to use Sync but exclude that data from use for ad targeting by Google, they can do so under "Web And App Activity" in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

Customs and Border Protection's Biometric Data Snooping Goes Too Far (Fri, 16 Feb 2018)
The U.S. Department of Homeland Security (DHS), Customs and Border Protection (CBP) Privacy Office, and Office of Field Operations recently invited privacy stakeholders—including EFF and the ACLU of Northern California—to participate in a briefing and update on how CBP is implementing its Biometric Entry/Exit Program. As we've written before, biometric systems are designed to identify or verify the identity of people by using their intrinsic physical or behavioral characteristics. Because biometric identifiers are by definition unique to an individual person, government collection and storage of this data poses unique threats to the privacy and security of individual travelers. EFF has many concerns about the government collecting and using biometric identifiers, and specifically, we object to the expansion of several DHS programs subjecting Americans and foreign citizens to facial recognition screening at international airports. EFF appreciated the opportunity to share these concerns directly with CBP officers, and we hope to work with CBP to allow travelers to opt out of the program entirely. You can read the full letter we sent to CBP here.

Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report (Thu, 15 Feb 2018)
Independent Oversight, Privacy Protections Are Needed

San Francisco, California—Face recognition—fast becoming law enforcement's surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn't commit, says an Electronic Frontier Foundation (EFF) report released today.

Face recognition is rapidly creeping into modern life, and face recognition systems will one day be capable of capturing the faces of people, often without their knowledge, as they walk down the street, enter stores, stand in line at the airport, attend sporting events, drive their cars, and move through other public spaces. Researchers at the Georgetown Law School estimated that one in every two American adults—117 million people—is already in a law enforcement face recognition system.

This kind of surveillance will have a chilling effect on Americans' willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.

Face recognition employs computer algorithms to pick out details about a person's face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases, and to scan public spaces to try to find specific pre-identified targets. But no face recognition system is 100 percent accurate, and false positives—when a person's face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and other ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants. For both reasons, inaccuracies in face recognition systems will disproportionately affect people of color.

"The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results," said EFF Senior Staff Attorney Jennifer Lynch, author of the report. "It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects."

Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.
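To make the discussion of templates and false positives above concrete, here is a deliberately simplified sketch of how template matching works. It is our own illustration, not any vendor's system; real systems use learned embeddings with hundreds of dimensions, but the thresholding logic, and the way a loose threshold sweeps in innocent people, is the same in spirit.

```python
# Simplified sketch of face-recognition template matching (illustrative only).
import numpy as np

def similarity(a, b):
    # Cosine similarity between two face templates: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

suspect_template = np.array([0.9, 0.1, 0.4, 0.3])        # template from a probe photo
mugshot_database = {
    "person_A": np.array([0.88, 0.12, 0.41, 0.29]),      # genuinely similar face
    "person_B": np.array([0.80, 0.25, 0.30, 0.35]),      # different person, similar numbers
    "person_C": np.array([0.10, 0.90, 0.20, 0.70]),
}

THRESHOLD = 0.95  # loosening this returns more "candidates" and more false positives
for name, template in mugshot_database.items():
    score = similarity(suspect_template, template)
    if score >= THRESHOLD:
        # Note: person_B also clears this threshold despite being a different
        # person -- exactly the kind of false positive the report describes.
        print(f"possible match: {name} (score {score:.3f})")
```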
"People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn't have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn't have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities," said Lynch. "Without meaningful legal protections, this is where we may be headed."

For the report:
Online version: https://www.eff.org/wp/law-enforcement-use-face-recognition
PDF version: https://www.eff.org/files/2018/02/15/face-off-report-1b.pdf
One-pager on facial recognition: https://www.eff.org/document/facial-recognition-one-pager

Contact:
Jennifer Lynch, Senior Staff Attorney, jlynch@eff.org

Court Dismisses Playboy's Lawsuit Against Boing Boing (For Now) (Thu, 15 Feb 2018)
In a win for free expression, a court has dismissed a copyright lawsuit against Happy Mutants, LLC, the company behind the acclaimed website Boing Boing. The court ruled [PDF] that Playboy's complaint—which accused Boing Boing of copyright infringement for linking to a collection of centerfolds—had not sufficiently established its copyright claim. Although the decision allows Playboy to try again with a new complaint, it is still a good result for supporters of online journalism and sensible copyright.

Playboy Entertainment's lawsuit accused Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. In a February 2016 post, Boing Boing told its readers that someone had uploaded scans of the photos, noting they were "an amazing collection" reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and a YouTube video—neither of which was created by Boing Boing.

EFF, together with co-counsel Durie Tangri, filed a motion to dismiss [PDF] on behalf of Boing Boing. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The motion noted that another judge in the same district had recently dismissed a case in which Quentin Tarantino accused Gawker of copyright infringement for linking to a leaked script in its reporting. Judge Fernando M. Olguin's ruling quotes the Tarantino decision, noting that:

An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.

Given this standard, the court was "skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement."

From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website. Today's decision leaves Playboy with a choice: it can try again with a new complaint, or it can leave this lawsuit behind. We don't believe there's anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.

Related Cases: Playboy Entertainment Group v. Happy Mutants

Will Canada Be the New Testing Ground for SOPA-lite? Canadian Media Companies Hope So (Wed, 14 Feb 2018)
A consortium of media and distribution companies calling itself "FairPlay Canada" is lobbying for Canada to implement a fast-track, extrajudicial website-blocking regime in the name of preventing unlawful downloads of copyrighted works. The proposal is currently being considered by the Canadian Radio-television and Telecommunications Commission (CRTC), an agency roughly analogous to the Federal Communications Commission (FCC) in the U.S. The proposal is misguided and flawed. We're still analyzing it, but below are some preliminary thoughts.

The Proposal

The consortium is requesting that the CRTC establish a non-profit organization, staffed part-time, that would receive complaints from various rightsholders alleging that a website is "blatantly, overwhelmingly, or structurally engaged" in violations of Canadian copyright law. If a site were determined to be infringing, Canadian ISPs would be required to block access to it. The proposal does not specify how this would be accomplished.

The consortium proposes some safeguards in an attempt to show that the process would be meaningful and fair. It proposes that affected websites, ISPs, and members of the public would be allowed to respond to any blocking request. It also suggests that no blocking request would be implemented unless a recommendation to block were adopted by the CRTC, and that any affected party would have the right to appeal to a court. FairPlay argues the system is necessary because, it claims, unlawful downloads are destroying the Canadian creative industry and harming Canadian culture.

(Some of) The Problems

As Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, points out, Canada saw more investment in film and TV production last year than at any other time in history. And it's not just investment in creative industries that is growing: legal means of accessing creative content are also growing, as Bell itself recognized in a statement to financial analysts. Contrary to the argument pushed by the content industry and other FairPlay backers, investment and lawful film and TV services are growing, not shrinking. The Canadian film and TV industries don't need website blocking.

The proposal would require service providers to "disappear" certain websites, endangering Internet security and sending a troubling message to the world: it's okay to interfere with the Internet, even effectively blacklisting entire domains, as long as you do it in the name of IP enforcement. Of course, blacklisting entire domains can mean turning off thousands of underlying websites that may have done nothing wrong. The proposal doesn't explain how blocking is to be accomplished, but when such plans have been raised in other contexts, we've noted the significant concerns we have about the various technological ways of "blocking" that wreak havoc on how the Internet works. And we've seen how harmful mistakes can be. For example, back in 2011, the U.S. government seized the domain names of two popular websites based on unsubstantiated allegations of copyright infringement. The government held those domains for over 18 months. As another example, one company named a whopping 3,343 websites in a lawsuit as infringing on its trademark and copyright rights. Without any opposition, the company was able to get an order requiring domain name registrars to seize these domains. Only after many defendants had their legitimate websites seized did the court realize that statements the rightsholder made about many of the websites were inaccurate. Although the proposed system would involve blocking (however that is accomplished) rather than seizing domains, the problem is clear: mistakes are made, and they can have long-lasting effects.

But beyond blocking for copyright infringement, we've also seen that once a system is in place to take down one type of content, it will only lead to calls for more blocking, including of lawful speech. This raises significant freedom of expression and censorship concerns.

We're also concerned about what's known as "regulatory capture" with this type of system: the tendency of a regulator to align its interests with those of the regulated. Here, the system would be initially funded by rightsholders, would be staffed "part-time" by those with "relevant experience," and would get work only when rightsholders view it as a valuable system. These structural aspects of the proposal tend to produce regulatory capture. An impartial judiciary that sees cases and parties from across a political, social, and cultural spectrum helps avoid this pitfall.

Finally, we're also not sure why this proposal is needed at all. Canada already has some of the strongest anti-piracy laws in the world. The proposal just adds complexity and strips away some of the protections that a court affords those who may be involved in legitimate business (even if the content owners don't like those businesses).

These are just some of the concerns raised by this proposal. Professor Geist's blog highlights more, and in more depth.

What you can do

The CRTC is now accepting public comments on the proposal, and has already received over 4,000. The deadline is March 1, although an extension has been sought. We encourage any interested members of the public to submit comments to let the Commission know your thoughts. Please note that all comments are made public and require certain personal information to be included.

Let's Encrypt Hits 50 Million Active Certificates and Counting (Wed, 14 Feb 2018)
In yet another milestone on the path to encrypting the web, Let's Encrypt has now issued over 50 million active certificates. Depending on your definition of "website," this suggests that Let's Encrypt is protecting between about 23 million and 66 million websites with HTTPS (more on that below). Whatever the number, it's growing every day as more and more webmasters and hosting providers use Let's Encrypt to provide HTTPS on their websites by default.

[Image: line graph of Let's Encrypt statistics showing certificates active, fully qualified domains active, and registered domains active. Source: https://letsencrypt.org/stats/ as of February 14, 2018]

Let's Encrypt is a certificate authority, or CA. CAs like Let's Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they're actually talking to the site they intended to. One of the things that sets Let's Encrypt apart is that it issues these certificates for free. And, with the help of EFF's Certbot client and a range of other automation tools, it's easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers' offerings.

50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about "total issuance," which refers to the total number of certificates ever issued by Let's Encrypt; that number is around 217 million now.) Relating these numbers to names of "websites" is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.

One way to count is by "fully qualified domains active"—in other words, the number of different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites: while most people would say that eff.org and www.eff.org are the same website, they count as two different names here. Another way to count the number of websites that Let's Encrypt protects is by looking at "registered domains active," of which Let's Encrypt currently has about 26 million. This refers to the number of different registered domains among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where pages under the same registered domain are run by different people with different content, this metric may undercount different sites.

No matter how you slice it, Let's Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let's Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.
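As a rough illustration of the two counting methods described above, here is a short sketch. The domain names are made up, and a real count would consult the Public Suffix List rather than assuming the registered domain is just the last two labels.

```python
# Simplified sketch of the two ways of counting "websites" discussed above.
names_on_active_certs = [
    "eff.org", "www.eff.org", "supporters.eff.org",
    "example.com", "www.example.com",
]

# Each distinct name counts separately: this overcounts "websites."
fqdns_active = set(names_on_active_certs)

# Collapse names to their registered domain (naively, the last two labels):
# this may undercount distinct sites run under one registered domain.
registered_domains_active = {".".join(n.split(".")[-2:]) for n in fqdns_active}

print(len(fqdns_active), "fully qualified domains active")       # 5
print(len(registered_domains_active), "registered domains active")  # 2
```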

The Revolution and Slack (Wed, 14 Feb 2018)
UPDATE (2/16/18): We have corrected this post to more accurately reflect the limits of Slack's encryption of user data at rest. We have also clarified that granular retention settings are only available on paid Slack workspaces.

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company is aimed at serving and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it fulfills neither of these. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace. We know that for many organizations, especially small ones, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form when it's at rest. This protects against someone walking into one of the data centers Slack uses and stealing a hard drive. But Slack does not claim to encrypt that data while it is stored in memory, so it is not protected against attacks or data breaches. This is also not useful if you are worried about governments or other entities putting pressure on Slack to hand over your information.

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant.

Slack's servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it's not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users' communications are readable on Slack's servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests.

When the government comes knocking on a website's door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack's policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox.

Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others' private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement.

While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered "non-content" and is not currently protected by the warrant requirement federally or in most states. If the identity of your organization's membership is sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice.

While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other legal processes contain different kinds of gags ordered by a court, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery.

Government is not the only entity that can seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect "non-content" records, such as membership identities and locations.

A group is only as trustworthy as its members.

Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. If you are using a paid workspace, the lowest-hanging privacy fruit is to change the workspace's retention settings. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries. Unfortunately, this kind of granular retention control is currently only available for paid workspaces.

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users' messages.

The privacy of a Slack workspace also relies on the security of individual members' accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that are using it along with its enterprise customers. And all of us must stand together to push for changes to the law. Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it's time for Slack to step up with its policies.

Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas (Tue, 13 Feb 2018)
Egyptian journalist Wael Abbas holds a special distinction: over the years, he has experienced censorship at the hands of four of Silicon Valley's top companies. Although more extreme, his story isn't so different from that of the many individuals who, following a single misstep or a mistake by a content moderator, find themselves unceremoniously removed from a social platform.

When YouTube was still fairly new, Abbas began posting videos depicting police brutality in his native Egypt to the platform. The award-winning journalist and anti-torture activist found utility in the global platform, which even then had massive reach. One of the videos he posted even resulted in a rare conviction of police officers in Cairo. But in late 2007, he found that his account had been removed without warning. The reason? His content, often graphic in nature, had been receiving large numbers of complaints. Rights activists rallied around Abbas and were able to convince YouTube to restore his account; his archive of videos was eventually restored. YouTube later adjusted its rules to be more permissive of violent content that is documentarian in nature. Around the same time, Abbas' Yahoo! email account was shut down—and later restored—over accusations that he was spamming other users.

More recently, Abbas has faced off with Facebook over an erroneous content decision made by the company. In November 2017, Abbas was issued a 30-day suspension by Facebook for a post in which he named an individual and accused them of running a scam and threatening other people. As a result of the suspension, Abbas was unable to post to Facebook or use Messenger or other platform tools. After we contacted the company, the suspension was reversed and Abbas's access restored.

In another, more recent instance, Abbas had an image removed from Facebook and received only a vague notification stating:

You uploaded a photo that violates our Terms of Use, and this photo has been removed. Facebook does not allow photos that attack an individual or group, or that contain nudity, drug use, violence, or other violations of the Terms of Use. These policies are designed to ensure Facebook remains a safe, secure and trusted environment for all users, including the many children who use the site.

Although Facebook pointed to its policies, it did not identify to Abbas which of his photos had actually violated the Terms of Use, leaving him guessing as to what he'd done wrong. A Facebook spokesperson commented:

In most instances involving content removals, we send people a generic message to let them know that they've violated our Community Standards. We're in the process of trying to be more specific with our language so that people have a better understanding of why we've taken down their content and how can they avoid similar removals in the future.

[Image: Wael Abbas posts on Facebook about his Twitter account being suspended]

Abbas was able to hold on to his Facebook account, but with Twitter, he wasn't so lucky. In December, he was suddenly suspended from the platform without warning or notification. His account, which was verified and had 350,000 followers, was described by Egyptian human rights activist Sherif Azer as "a live archive to the events of the revolution and till today one of few accounts still documenting human rights abuses in Egypt." EFF contacted Twitter about the suspension, but the company did not respond to our query.

Platforms must be accountable to their users

Social media companies took great pride in the role they were said to have played in the 2011 Arab uprisings. But as a recent article from Middle East Eye points out, Egyptians are facing a significant increase in content takedowns on Facebook. The article asks: "Would those social media accounts which supported Egypt's uprisings in 2011 now be shut down?"

In fact, the most famous of those social media accounts—the page entitled "We Are All Khaled Said" that first called for protests on January 25, 2011—was actually shut down by Facebook in 2010, just a few months before the uprising. The page, which was later revealed to have been created by Google executive Wael Ghonim, was removed because Ghonim had been using a fake name, and it was only restored after US-based NGOs stepped in to help. Similarly, Abbas was only able to have his suspension overturned after contacting EFF. Verified Egyptian Reuters journalist Amina Ismail was able to get a Twitter suspension overturned through her contacts. Abbas and Ismail are both high-profile journalists, however—most users don't have access to contacts at Silicon Valley's top tech companies.

Wael Abbas's experience demonstrates the precarity of our online lives and the dire need for platforms to institute transparent practices. As we recently wrote, social media platforms must notify users clearly when they violate a policy and offer a clear path of recourse so that all users have an opportunity to appeal content decisions. Abbas's experience is the tip of the iceberg: for every prominent journalist documenting injustice who manages to get through their filters, how many more have lost the fight against the censors before they had a chance to reach a wider public? It is vital that technology companies recognize the role they play in fostering free expression and act accordingly.

To learn more about our efforts to hold companies accountable on freedom of expression, visit Onlinecensorship.org.

We Don’t Need New Laws for Faked Videos, We Already Have Them (Di, 13 Feb 2018)
Video editing technology hit a milestone this month. The new tech is being used to make porn. With easy-to-use software, pretty much anyone can seamlessly take the face of one real person (like a celebrity) and splice it onto the body of another (like a porn star), creating videos that lack the consent of multiple parties. People have already picked up the technology, creating and uploading dozens of videos on the Internet that purport to involve famous Hollywood actresses in pornography films that they had no part in whatsoever. While many specific uses of the technology (like specific uses of any technology) may be illegal or create liability, there is nothing inherently illegal about the technology itself. And existing legal restrictions should be enough to set right any injuries caused by malicious uses. As Samantha Cole at Motherboard reported in December, a Reddit user named "deepfakes" began posting videos he created that replaced the faces of porn actors with other well-known (non-pornography) actors. According to Cole, the videos were "created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together." Just over a month later, Cole reported that the creation of face-swapped porn, labeled "deepfakes" after the original Redditor, had "exploded" with increasingly convincing results. And an increasingly easy-to-use app had launched with the aim of allowing those without technical skills to create convincing deepfakes. Soon, a marketplace for buying and selling deepfakes appeared in a subreddit, before being taken off the site. Other platforms including Twitter, PornHub, Discord, and Gfycat followed suit. In removing this content, each platform noted a concern that the people depicted in the deepfakes did not consent to their involvement in the videos themselves. We can quickly imagine many terrible uses for this face-swapping technology, both in creating nonconsensual pornography and false accounts of events, and in undermining the trust we currently place in video as a record of events. But there can be beneficial and benign uses as well: political commentary, parody, anonymization of those needing identity protection, and even consensual vanity or novelty pornography. (A few others are hypothesized towards the end of this article.) The knee-jerk reaction many people have towards any new technology that could be used for awful purposes is to try to criminalize or regulate the technology itself. But such a move would threaten the beneficial uses as well, and raise unnecessary constitutional problems. Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. In fact, this is not entirely new territory for our legal framework. The US legal system has been dealing with the harm caused by photo-manipulation and false information in general for a long time, and the principles developed there should apply equally to deepfakes.
What Laws Apply
If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.
On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily. To win a false light lawsuit, a plaintiff—the person harmed by the deepfake, for example—must typically prove that the defendant—the person who uploaded the deepfake, for example—published something that gives a false or misleading impression of the plaintiff in a way that damages the plaintiff's reputation or causes them great offense, that would be highly offensive to a reasonable person, and that caused the plaintiff mental anguish or suffering. It seems that in many situations the placement of someone in a deepfake without their consent would be the type of "highly offensive" conduct that the false light tort covers. The Supreme Court further requires that in cases pertaining to matters of public interest, the plaintiff must also prove an intent that the audience believe the impression to be true. This is the actual malice requirement found in defamation law. False light is recognized as a legal action in about two-thirds of the states. It can be difficult to distinguish false light from defamation, and many courts treat them identically. The courts that treat them differently focus on the injury: defamation compensates for damage to reputation, while false light compensates for being subjected to offensiveness. But of course, a plaintiff could sue for defamation if a deepfake has a natural tendency to damage their reputation. The tort of Intentional Infliction of Emotional Distress (IIED) will also be available in many situations. A plaintiff can win an IIED lawsuit if they prove that a defendant—again, for example, a deepfake creator and uploader—intended to cause the plaintiff severe emotional distress by extreme and outrageous conduct, and that the plaintiff actually suffered severe emotional distress as a result of the extreme and outrageous conduct. The Supreme Court has found that where the extreme and outrageous conduct is the publication of a false statement and the statement is about either a matter of public interest or a public figure, the plaintiff must also prove an intent that the audience believe the statement to be true, an analog to defamation law's actual malice requirement. The Supreme Court has further extended the actual malice requirement to all statements pertaining to matters of public interest. And to the extent deepfakes are sold or the creator receives some other benefit from them, they raise the possibility of right of publicity claims as well by those whose images are used without their consent. Lastly, someone whose copyrighted material is used in a deepfake–either the facial image or the source material into which the facial image is embedded–may have a claim for copyright infringement, subject of course to fair use and other defenses. Yes, deepfakes can present a social problem about consent and trust in video, but EFF sees no reason why the already available legal remedies will not cover injuries caused by deepfakes.

How Have Europe's Upload Filtering and Link Tax Plans Changed? (Di, 13 Feb 2018)
Although we have been opposing Europe's misguided link tax and upload filtering proposals ever since they first surfaced in 2016, the proposals haven't been standing still during all that time. In the back and forth between a multiplicity of different Committees of the European Parliament, and two other institutions of the European Union (the European Commission and the Council of the European Union), various amendments have been offered up in an attempt at political compromise. Unfortunately, the point at which these compromises seem to have landed still poses the same problems as before.
What Has Happened with the Link Tax?
Article 11 is its official designation, but "link tax" is a far better informal description of this proposal, which would impose a requirement for Internet platforms to pay money to news publishers for providing links to news articles, accompanied by a short summary of what they are linking to. This isn't a copyright, because the link tax is paid to the publisher rather than the author, and because it is payable even if the portion of the news article taken isn't copyright-protected, falls within a copyright exception, or is freely licensed. It's unclear why this proposal wasn't abandoned a long time ago. A similar link tax in Spain resulted in the closure of the Spanish version of Google News, a German equivalent has also been deemed a dismal failure, and both small publishers and even a European Commission-funded study have slammed the proposal. Nevertheless, as of February 2018, it remains firmly on the table, with virtually nothing to sweeten the thoroughly rotten deal that it offers to Internet platforms and publishers alike. The most recent attempt at compromise comes in a discussion paper [PDF] from the Bulgarian Council Presidency, prepared as input for a meeting of the Council's Intellectual Property Working Party that was held on February 12. The paper proposes only minor tweaking to the European Commission's original text, such as excluding individual Internet users from liability for the tax, and carving out "individual words or very short excerpts of text" from its scope, but without specifying what "very short excerpts" actually means. The discussion paper also briefly acknowledges the alternative proposal of dropping the link tax altogether, and instead addressing publishers' concerns without creating any new copyright-like impost. This alternative proposal would create a legal presumption that news publishers are entitled to enforce the existing copyrights in news articles written by their journalists. If Internet platforms are reproducing such large parts of news articles that permission from the copyright owner is required, this would enable the publishers to negotiate directly with those platforms to license that use. This is the only sensible compromise that can be made to the Article 11 proposal, but it is one that the Bulgarian Presidency unfortunately gives short shrift.
What Has Happened with Upload Filtering?
The same discussion paper also tinkers around the edges of the upload filtering mandate, without addressing the fundamental dangers that it continues to pose to freedom of expression online. For those who came in late, the European Commission's initial upload filter proposal, formally designated as Article 13, would require Internet platforms to put in place costly and ineffective automatic filters to prevent copyright-infringing content from being uploaded by users, creating a kind of robotic censorship regime.
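To make concrete what an "automatic filter" of this kind entails, here is a minimal, purely illustrative sketch in Python. It assumes a rightsholder-supplied blocklist of exact file hashes; real systems would need fuzzy audio or video fingerprinting, which is far harder and more error-prone, and nothing here corresponds to any actual platform's implementation. The structural point is what matters: to comply, a platform has to run every single upload through the check.

```python
import hashlib

# Hypothetical blocklist of fingerprints supplied by rightsholders.
# This entry is the SHA-256 of the bytes b"test", included only so the
# example is self-contained and runnable.
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Reduce an upload to a comparable fingerprint (exact SHA-256 here;
    a real filter would need fuzzy perceptual matching)."""
    return hashlib.sha256(data).hexdigest()

def handle_upload(data: bytes) -> str:
    """Every upload, infringing or not, passes through this check before
    it can be published."""
    if fingerprint(data) in BLOCKLIST:
        return "blocked"  # no context, no quotation or parody exception, no human review
    return "published"

print(handle_upload(b"test"))                   # blocked: matches the blocklist
print(handle_upload(b"my own original video"))  # published
```

Even this toy version illustrates why such a mandate amounts to general monitoring: there is no way to block the listed works without inspecting every other upload too.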
What has changed since then? Not much. The Bulgarian Presidency proposes being slightly more specific about what kinds of online platforms are the target of the measure ("online content sharing services"). It also proposes introducing a new, expansive definition of "communication to the public," an exclusive right reserved to copyright holders in Europe that had previously been defined only by way of a complicated series of court decisions. By deeming an Internet platform to be engaged in "communication to the public" whenever it allows a user to upload a copyright-protected work for sharing, the Bulgarian Presidency aims to justify excluding that platform from the copyright safe harbor that the existing E-Commerce Directive provides. The only other change worth noting is that the proposal is now more equivocal about whether Internet platforms would actually have to install automated upload filters, or whether it would be sufficient for them to prevent the uploading of copyright-infringing material in some other way. But as European Digital Rights (EDRi) has cogently pointed out, this is a distinction without a difference. To comply with Article 13 and to avoid liability under the E-Commerce Directive (per the Bulgarian Presidency's amendment), platforms are required to "take effective measures to prevent the availability on its services of ... unauthorized works or other subject-matter identified by the rightholders," and if such works do nevertheless appear on the platform, must "act expeditiously to remove or disable access to the specific unauthorized work or other subject matter and ... take steps to prevent its future availability." There is no way in which platforms could possibly comply with this directive other than by agreeing to monitor all of the content they accept, either manually or automatically. By not daring to speak this uncomfortable truth, the Bulgarian Presidency skirts around the fact that such a general monitoring obligation would contravene both Article 14 of the E-Commerce Directive and European human rights law. But that kind of clever circumlocution can't hide the repressive nature of this censorship proposal, and does nothing to improve on the flaws of the original text.
What Can You Do?
The fight against Article 11 and Article 13 is entering its closing days. That makes every voice that we can raise in opposition to these harmful proposals more important than ever before. European voices are best placed to convince European policymakers of the harm that their proposals would wreak upon European businesses and users. Thankfully, our allies in Europe are on the case, and if you are European or have colleagues or friends in Europe, here are the links you need to contact your representatives and speak out against their misguided plans:
Mozilla has put together an awesome call-in tool and response guide, which makes it easy to identify your specific concerns as a technologist, creator, innovator, scientist or librarian. You can also read more on Mozilla's site about how all of these categories of users, and more, are affected by the Article 11 and Article 13 proposals, along with some of the other more obscure (but still important) provisions of the broader Digital Single Market Directive.
A coalition called Create.Refresh has a brilliant, viral campaign that encourages creators to create and share their own works that address the problems inherent in restrictive filtering systems, such as those that Article 13 would effectively mandate.
OpenMedia's Save the Link network has updated its click-to-call website this month with a brand new petition on Article 11 that enables you to identify yourself as a member of one of the impacted groups from a drop-down menu on the new page. If you are a librarian, software developer, creator, researcher, or journalist, you'll be able to demonstrate how the link tax proposals are harmful to you specifically. As you can see, there are many options for you to get involved in this fight—and with the final Committee vote in the European Parliament coming up on March 26-27, now is the best time to do so. If we lose this one, the link tax and upload filtering mandates could be here to stay, and the Internet as we know it will never be the same.

Internet Users Spoke Up To Keep Safe Harbors Safe (Di, 13 Feb 2018)
Today, we delivered a petition to the U.S. Copyright Office to keep copyright's safe harbors safe. We asked the Copyright Office to remove a bureaucratic requirement that could cause websites and Internet services to lose protection under the Digital Millennium Copyright Act (DMCA). And we asked them to help keep Congress from replacing the DMCA safe harbor with a mandatory filtering law. Internet users from all over the U.S. and beyond added their voices to our petition. Under current law, the owners of websites and online services can be protected from monetary liability under the DMCA "safe harbors" when their users are accused of infringing copyright. In order to take advantage of these safe harbors, owners must meet many requirements, including participating in the notorious notice-and-takedown procedure for allegedly infringing content. They also must register an agent—someone who can respond to takedown requests—with the Copyright Office. The DMCA is far from perfect, but provisions like the safe harbor allow websites and other intermediaries that host third-party material to thrive and grow without the constant threat of massive copyright penalties. Without safe harbors, small Internet businesses could face bankruptcy over the infringing activities of just a few users. Now, a lot of those small sites risk losing their safe harbor protections. That's because of the Copyright Office's rules for registering agents. Those registrations used to be valid as long as the information was accurate. Under the Copyright Office's new rules, website owners must renew their registrations every three years or risk losing safe harbor protections. That means that websites can risk expensive lawsuits for nothing more than forgetting to file a form. As we've written before, because the safe harbor already requires websites to submit and post accurate contact information for infringement complaints, there's no good reason for agent registrations to expire. We're also afraid that the expiration requirement will disproportionately affect small businesses, nonprofits, and hobbyists, who are least able to have a cadre of lawyers at the ready to meet bureaucratic requirements. Many website owners have signed up under the Copyright Office's new agent registration system, which is designed to send reminder emails when the three-year registrations are set to expire. While the new registration system is a vast improvement over the old paper filing system, the expiration requirement is unnecessary and dangerous. We explained these problems in our petition, and we also explained how the DMCA faces even greater threats. If certain major media and entertainment companies get their way, it will become much more difficult for websites of any size to earn their safe harbor status. That's because those companies' lobbyists are pushing for a system where platforms would be required to use computerized filters to check user-uploaded material for potential copyright infringement. Requiring filters as a condition of safe harbor protections would make it much more difficult for smaller web platforms to get off the ground. Automated filters are expensive—and not very good. Even when big companies use them, they're extremely error-prone, causing lots of lawful speech to be blocked or removed. A filtering mandate would threaten smaller websites' ability to host user content at all, cementing the dominance of today's Internet giants.
If you run a website or online service that stores material posted by users, make sure that you comply with the DMCA’s requirements. Register a DMCA agent through the Copyright Office’s online system, post the same information on your website, and keep it up to date. Meanwhile, we’ll keep telling the Copyright Office, and Congress, to keep the safe harbors safe.

Imprisoned Blogger Eskinder Nega Won't Sign a False Confession (Di, 13 Feb 2018)
Online publisher and blogger Eskinder Nega has been imprisoned in Ethiopia since September 2011 for the "crime" of writing articles critical of his government. He is one of the longest-serving prisoners in EFF's Offline casefile of writers and activists unjustly imprisoned for their work online. Now a chance he may finally be freed has been thrown into doubt because of the Ethiopian authorities' outrageous demand that he sign a false confession before being released. The Ethiopian Prime Minister, Hailemariam Desalegn, announced in January surprise plans to close down the notorious Maekelawi detention center and release a number of prisoners. The Prime Minister said that the move was intended to "foster national reconciliation." While Ethiopia's own officials have declined to call the recipients of the amnesty "political prisoners," the bulk of the candidates named so far for release are either opposition politicians and activists, or others, like Eskinder, caught up in previous crackdowns on dissent and free speech. Despite the government's apparent desire to use the release to moderate tensions in Ethiopia, prison officials have undermined its message—and Eskinder's chance at freedom—by requiring him to sign a false confession before his release. The document, given to Eskinder without warning last week, included a claim that Eskinder was a member of Ginbot 7, a group the government has previously declared a terrorist organization. Eskinder refused to sign the document, and was subsequently returned to his cell, even as other prisoners were being released. The Committee to Protect Journalists subsequently told Quartz Africa that Eskinder was asked to sign the form a second time over the weekend. EFF continues to follow Eskinder's case closely, and urges the Ethiopian government to live up to its promise of a new era of reconciliation and renewal by returning Eskinder to his friends and family, unconditionally and immediately.

Oregon Steps Up to the Plate on Network Neutrality This Month (Mo, 12 Feb 2018)
It should not be surprising that arguably the biggest mistake in Internet policy history is going to provoke a vast political response. Since the FCC repealed the federal Open Internet Order in December, many states have attempted to fill the void. With a new bill that reinstates net neutrality protections, Oregon is the latest state to step up. Oregon's Majority Leader Jennifer Williamson recently announced her intention to fight to restore much of what the FCC repealed last December under its so-called "Restoring Internet Freedom Order." Her legislation, H.B. 4155, responds to the FCC's decision by requiring any ISP that receives funds from the state to adhere to net neutrality principles—not blocking or throttling content or prioritizing its own content over that of competitors, for example. If you're an Oregonian, tell your state representative to act to restore net neutrality. Oregon is following in what is clearly a trend of state legislatures and executives acting to protect their citizens' digital rights where the federal government has abdicated responsibility. To date, 17 states have introduced network neutrality legislation and four Governors have issued Executive Orders (Montana, New York, New Jersey, and Hawaii). The national response to the FCC's decision to abandon its role as the consumer protection agency overseeing cable and telephone companies is to be expected. It is wildly unpopular with voters of all political leanings; 83% of voters overall, including 3 out of 4 Republican voters, opposed the FCC decision. Yet despite millions of Americans submitting comments to the FCC to oppose the decision, they were promptly ignored in favor of the interests of AT&T, Verizon, and Comcast. Where else should this vast swath of the American public go if not their state and local representation? And while both Verizon and its trade association, the CTIA, made last-minute requests to the FCC to try to prevent state privacy and network neutrality laws, they are not going to be successful. Their problem is that the plan to eviscerate the law that empowers the FCC also disables the agency's ability to block state laws. In other words, they cannot have it both ways. While the FCC's order did contain a lot of words about how states cannot pass their own network neutrality laws, it did so without citing any specific legal authority. We remain skeptical that the FCC itself has that power. And while states still have to navigate the Commerce Clause, EFF has provided guidance on how to do that. Notably, states and local governments, and governors in particular, have caught on to the obvious weakness in the FCC's authority and have acted. EFF will continue working to support the states in their effort to protect a free and open Internet until we are able to fully restore the protections we once had at the federal level.
Take Action
Tell your state representatives to support H.B. 4155 and restore the neutral net

The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data (Fr, 09 Feb 2018)
This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world. The Clarifying Lawful Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement's ability to target and access people's data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access "the contents of a wire or electronic communication and any record or other information" about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider—like Google, Facebook, or Snapchat—to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.[1] Second, the bill would allow the President to enter into "executive agreements" with foreign governments that would allow each government to acquire users' data stored in the other country, without following each other's privacy laws. For example, because U.S.-based companies host and carry much of the world's Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws. This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called "Microsoft Ireland"), also calls into question principles of international law, such as respect for other countries' territorial boundaries and their rule of law. Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.
The CLOUD Act and the US-UK Agreement
The CLOUD Act's proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S. stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States. In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department's revamped bill.
The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ's 2017 bill. None of EFF's concerns have been addressed. The legislation still:
- Includes a weak standard for review that does not rise to the protections of the warrant requirement under the Fourth Amendment.
- Fails to require foreign law enforcement to seek individualized and prior judicial review.
- Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
- Fails to place adequate limits on the category and severity of crimes for this type of agreement.
- Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)
The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the "public" from the unlawful disclosure of their personal communications.
An Expansion of U.S. Law Enforcement Capabilities
The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information – meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States. EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case. When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government's laws. To do so, the company must object within 14 days, and undergo a complex "comity" analysis – a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.
Failure to Support Mutual Assistance Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation. It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important “necessary and proportionate” standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries. While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining. The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system. A growing chorus of privacy groups in the United States opposes the CLOUD Act’s broad expansion of U.S. and foreign law enforcement’s unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities “in the wrong direction, by sacrificing digital rights.” CDT and Access Now also oppose the bill. Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a “good start.” Nor does it do a “remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security.” Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies. Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people’s privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it. [1] The text of the CLOUD Act does not limit U.S. law enforcement to serving orders on U.S. companies or companies operating in the United States. The Constitution may prevent the assertion of jurisdiction over service providers with little or no nexus to the United States. Related Cases:  In re Warrant for Microsoft Email Stored in Dublin, Ireland

IPR Process Saves 80 Companies From Paying For a Sports-Motion Patent (Do, 08 Feb 2018)
The importance of the US Patent Office's "inter partes review" (IPR) process was highlighted in dramatic fashion yesterday. Patent appeals judges threw out a patent [PDF] that was used to sue more than 80 companies in the fitness, wearables, and health industries. US Patent No. 7,454,002 was owned by Sportbrain Holdings, a company that advertised a kind of 'smart pedometer' as recently as 2011. But the product apparently didn't take off, and in 2016, Sportbrain turned to patent lawsuits to make a buck. A company called Unified Patents challenged the '002 patent by filing an IPR petition, and last year, the Patent Office agreed that the patent should be reviewed. Yesterday, the patent judges published their decision, canceling every claim of the patent. The '002 patent describes capturing a user's "personal data," and then sharing that information with a wireless computing device and over a network. It then analyzes the data and provides feedback. After reviewing the relevant technology, a panel of patent office judges found there wasn't much new to the '002 patent. Earlier patents had already described collecting and sharing various types of sports data, including computer-assisted pedometers and a system that measured a skier's "air time." Given those earlier advances, the steps of the Sportbrain patent would have been obvious to someone working in the field. The office cancelled all the claims. That means the dozens of different companies sued by Sportbrain won't have to each spend hundreds of thousands of dollars—potentially millions—to defend against a patent that, the government now acknowledges, never should have been granted in the first place.
A Critical Tool for Innovators
Bad patents like the one asserted by Sportbrain are a drain on the innovation economy, especially for small businesses. But the damage that could be caused by such patents was much worse before the advent of IPRs. The IPR process has proven to be the most effective part of the 2012 America Invents Act. In most cases, the IPR process is far more efficient than federal courts when it comes to evaluating a patent to figure out if it's truly new and non-obvious. IPRs have other advantages for small companies. Often, companies that get sued or threatened by patent trolls will end up paying a licensing fee, even though they don't think the patents are legitimate. Through the IPR process, defendants can band together to file IPRs. That's enabled the success of membership-based for-profit companies like RPX and Unified Patents—in fact, it was member-funded Unified that filed the petition which shut down the Sportbrain Holdings patent. The IPR process also enables non-profits like EFF to fight bad patents. That's how EFF was able to knock out the Personal Audio "podcasting" patent. The petition was paid for by the more than 1,000 donors who gave to our "Save Podcasting" campaign. Last year, EFF's victory in that case was upheld by a federal appeals court. But the IPR process could be in danger. Senator Chris Coons has twice proposed legislation (the STRONG Patents Act and the STRONGER Patents Act) that would gut the IPR system. EFF has opposed these bills. Other opponents of IPRs have taken their complaints to the courts. One company has asked the Supreme Court to declare the process unconstitutional. This case, Oil States, will decide the future of IPRs. We've submitted a brief explaining why we think the process of reviewing patents at the Patent Office is not only constitutional, it's good public policy.
We hope both Congress and the high court see their way to upholding this critical tool that saved 80 companies from damaging litigation—and that was just yesterday. Related Cases:  EFF v. Personal Audio LLC

John Perry Barlow, Internet Pioneer, 1947-2018 (Mi, 07 Feb 2018)
With a broken heart I have to announce that EFF's founder, visionary, and our ongoing inspiration, John Perry Barlow, passed away quietly in his sleep this morning. We will miss Barlow and his wisdom for decades to come, and he will always be an integral part of EFF. It is no exaggeration to say that major parts of the Internet we all know and love today exist and thrive because of Barlow’s vision and leadership. He always saw the Internet as a fundamental place of freedom, where voices long silenced can find an audience and people can connect with others regardless of physical distance. Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'” Barlow’s lasting legacy is that he devoted his life to making the Internet into “a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth . . . a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” In the days and weeks to come, we will be talking and writing more about what an extraordinary role Barlow played for the Internet and the world. We've updated our collection of his work here.  And as always, we will continue the work to fulfill his dream.

Newly Released Surveillance Orders Show That Even with Individualized Court Oversight, Spying Powers Are Misused (Mi, 07 Feb 2018)
Once-secret surveillance court orders obtained by EFF last week show that even when the court authorizes the government to spy on specific Americans for national security purposes, that authorization can be misused to potentially violate other people's civil liberties. These documents raise larger questions about whether the government can meaningfully protect people's privacy and free expression rights under Section 702 of the Foreign Intelligence Surveillance Act (FISA), which permits officials to engage in warrantless mass surveillance with far less court oversight than is required under the "traditional" FISA warrant process. The documents are the third and final batch of Foreign Intelligence Surveillance Court (FISC) opinions released to EFF as part of a FOIA lawsuit seeking all significant orders and opinions of the secret court. Previously, the government released opinions dealing with FISA's business records and pen register provisions, along with opinions under Section 702. Although many of the 13 opinions are heavily redacted—and the government withheld another 26 in full—the readable portions show several instances of the court blocking government efforts to expand its surveillance or ordering the destruction of information obtained improperly as a result of its spying.
Court Rejects FBI Effort to Log Communications of Individuals Not Targeted by FISA Order
For example, in a 40-page opinion issued in 2004 or 2005, FISC Judge Harold Baker rejected the FBI's proposal to log copies of recorded conversations of people who, while not targeted by the agency, were still swept up in its surveillance. This likely occurred when innocent people used the same communications service as the FBI's target, possibly a shared phone line. The opinion demonstrates both the risks of overcollection as part of targeted surveillance and the benefits of engaged, detailed court oversight. Here's how that oversight works: Once the FISC approves electronic surveillance under FISA's Title I, the FBI can record a target's communications, but it must follow "minimization procedures" to avoid unnecessarily listening in on conversations by others who are using the same "facility" (like a telephone line). In this case, however, the FBI employed a surveillance technique that apparently captured a lot of innocent communications. (This is often referred to as "incidental collection" because the recording of these conversations is incidental to spying on the target who uses the same phone line.) Although redactions make it difficult to understand details of the FBI's request to the court, it apparently sought to mark these out-of-scope conversations for later use, which would be inconsistent with the "Standard Minimization Procedures" approved for use in FISA Title I cases. The FBI seems to have presented its request to the FISC as no big deal, with "minimal, if any" impact on the Fourth Amendment. Judge Baker saw it differently. He explained that "it is not sufficient to assert that, because the Standard Procedures already permit the FBI a great deal of latitude, it is reasonable to grant a little more." More fundamentally, the court took the FBI to task for the "surprising occasion" of seeking to expand its use of incidentally collected communications, rather than getting rid of them.
It faulted the FBI for failing to account "for the possibility that overzealous or ill-intentioned personnel might be inclined to misuse information, if given the opportunity." As the court put it, "the advantage of minimization at the acquisition stage is clear. Information that is never acquired in the first place cannot be misused."
NSA Makes Ridiculous Argument to Keep Communications it Obtained Without Court Authorization
Other opinions EFF obtained detail the NSA's unauthorized surveillance of a number of individuals and the government's efforts to hold onto the data despite a FISA court's order that the communications be destroyed. A December 2010 order by FISC Judge Frederick Scullin, Jr. describes how, over a period of between 15 months and three years, the NSA obtained a number of communications of U.S. persons. The precise number of communications obtained is redacted. Rather than notifying the court that it had destroyed the communications it obtained without authorization, the NSA made an absurd argument in a bid to retain the communications: because the surveillance was unauthorized, the agency's internal procedures that require officials to delete non-relevant communications should not apply. Essentially, because the surveillance was unlawful, the law shouldn't apply and the NSA should get to keep what it had obtained. The court rejected the NSA's argument. "One would expect the procedures' restrictions on retaining and disseminating U.S. person information to apply most fully to such communications, not, as the government would have it, to fail to apply at all," the court wrote. The court went on to say that "[t]here is no persuasive reason to give the (procedures) the paradoxical and self-defeating interpretation advanced by the government." The court then ordered the NSA to destroy the communications it had obtained without FISC authorization. But another opinion issued by Judge Scullin in May 2011 shows that rather than immediately complying with the order, the NSA asked the FISC once more to allow it to keep the communications. Again the court rejected the government's arguments. "No lawful benefit can plausibly result from retaining this information, but further violation of law could ensue," the court wrote. The court then ordered the NSA to not only delete the data, but to provide reports on the status of its destruction "until such time as the destruction process has been completed."
If Government Abuse of Surveillance Powers Occurs With Careful Oversight, What Happens Under Section 702?
The new opinions show that even when the FISC judges actually approve targeted surveillance on particular individuals, the government still collects the contents of innocent people's communications in ways that are incompatible with the law. Which raises the question: what is the government getting away with when it engages in surveillance that has even less FISC oversight? Although the opinions discussed above concern FISA's statutory requirements of minimization rather than constitutional limits, these are the sort of concerns that EFF has raised in the context of the NSA's warrantless surveillance under Section 702 of FISA. Unlike FISA Title I, Section 702 does not require the FISC to conduct such detailed oversight of the government's activities. The court does approve minimization procedures, but it does not review targets or facilities, meaning that it has less insight into the actual surveillance.
That necessarily reduces opportunities to prevent overbroad collection or check an intelligence agency’s incremental loosening of its own rules. And, as we’ve seen, it has led to significant “compliance violations” by the NSA and other agencies using Section 702.  All surveillance procedures come with risks, especially with the level of secrecy involved in FISA. Nevertheless, opinions like these demonstrate that detailed court oversight offers the best hope of curtailing these risks. We hope it informs future debate in those areas where oversight is limited by statute, as with Section 702. If anything, the decisions are more evidence that warrantless surveillance must end.  Related Cases:  Significant FISC Opinions

EFF vs IoT DRM, OMG! (Mi, 07 Feb 2018)
What with the $400 juicers and the NSFW smart fridges, the Internet of Things has arrived at that point in the hype cycle midway between "bottom line" and "punchline." Hype and jokes aside, the reality is that fully featured computers capable of running any program are getting cheaper and more powerful and smaller with no end in sight, and the gadgets in our lives are transforming from dumb hunks of electronics to computers in fancy cases that are variously labeled "car" or "pacemaker" or "Alexa." We don't know which designs and products will be successful in the market, but we're dead certain that banning people from talking about flaws in existing designs and trying to fix those flaws will make all the Internet of Things' problems worse. But a pernicious American law stands between the Internet of Defective Things and your right to know about those defects and remediate them. Section 1201 of the Digital Millennium Copyright Act bans any act that weakens or bypasses a lock that controls access to copyrighted works (these locks are often called Digital Rights Management or DRM). These locks were initially used to lock down the design of DVD players and games consoles, so that manufacturers could prevent otherwise legal activities, like watching out-of-region discs or playing independently produced games. Today, these locks have proliferated to every device with embedded software: cars, tractors, pacemakers, voting machines, phones, tablets, and, of course, "smart speakers" used to interface with voice assistants. Corporations have figured out that they can deploy DRM to control how you use your device, and then use DMCA 1201 to threaten competitors whose products unlock legal, legitimate features that benefit you, instead of some company's shareholders. This means that, for example, a printer company can use digital locks to control who can refill your printer-ink cartridges, ensuring that you buy ink from them, at whatever price they want to charge. It means that cellphone manufacturers get to decide who can fix your phone and tractor companies can choose who can fix your tractors. What's worse: companies have exploited DMCA 1201 to attack security researchers who came forward to report defects in their products, arguing that any disclosures of vulnerabilities in the stuff you own might help you break the DRM, meaning that it's illegal to tell you truthful things about the risks you face from your badly secured gadgets. Every three years, the US Copyright Office lets us petition for limited exemptions to this law, and we have been slowly, surely carving out a space for Americans to bypass digital locks in order to use their property in legitimate, legal ways—even if there's some DRM between them and that use. In 2015, we won the right to jailbreak your phones and tablets—to change how they're configured so that you can unlock features that you want (even if the manufacturer doesn't), and remove the ones you don't. We also won an exemption that protects security researchers' right to bypass DRM to investigate and test the security of all sorts of gadgets. Taken together, these two rights—the right to discover defects and the right to change your device configuration—form a foundation on which solutions to the pernicious problems of our vital, ubiquitous, badly secured gadgets can be built. This year, we're liberating your smart speakers: Apple HomePods, Amazon Echos, Google Homes, and lesser-known offerings from other manufacturers and platforms. 
These gadgets are finding their way into our living rooms, kitchens—even our bedrooms and bathrooms. They have microphones that are always on and listening (many of them have cameras, too), and they're connected to the Internet. They only run manufacturer-approved apps, and use encryption that prevents security researchers from investigating them and ensuring that they're working as intended. We've asked the Copyright Office to extend the jailbreaking exemption to cover these smart speakers, giving you the right to load software of your choosing on them—and letting security researchers probe them to make sure they're not sneaking around behind your back. These exemptions include the right to bypass the devices' bootloaders and to activate or disable hardware features. These are rights that you've always had, for virtually every gadget you've ever owned—that is, until manufacturers discovered DMCA 1201's potential to control how you use their products after they become your property. We don't have all the answers about how to make smart speakers better, or more secure, but we are one hundred percent certain that banning people from finding out what's wrong with their smart speakers and punishing anyone who tries to improve them isn't helping. These Copyright Office hearings are important, because they help the Copyright Office understand and acknowledge that DMCA 1201 is causing problems for people who want to do legitimate activities, but the hearings are still grossly insufficient. DMCA 1201 says the Copyright Office can give you the right to use your device in ways that are prevented by DRM, but not the right to acquire a tool to enable you to make that use. Under the DMCA's rules, every person who has the right to bypass DRM is expected to hand-whittle a tool for their own personal use and treat the design of that tool as a matter of strictest secrecy. This is absurd. It's one of the reasons we're suing the U.S. government over the constitutionality of DMCA 1201, with the intention of having a court rule that the law is unenforceable, killing it altogether or sending it back to Congress for a major overhaul that terminates the ability of corporations to use a so-called anti-piracy law to ban activities that have no connection to copyright infringement.

Startup Won't Give In to Motivational Health Messaging's $35,000 Patent Demand (Mi, 07 Feb 2018)
Trying to succeed as a startup is hard enough. Getting a frivolous patent infringement demand letter in the mail can make it a whole lot harder. The experience of San Francisco-based Motiv is the latest example of how patent trolls impose painful costs on small startups and stifle innovation. Motiv is a startup of fewer than 50 employees competing in the wearable technology space. Founded in 2013, the company creates fitness trackers, housed in a ring worn on your finger. In January, Motiv received a letter alleging infringement of U.S. Patent No. 9,069,648 ("the '648 Patent"). The letter gave Motiv two options: pay $35,000 to license the '648 Patent, or face the potential of costly litigation. The '648 Patent, owned by Motivational Health Messaging LLC ("MHM"), is titled "Systems and methods for delivering activity based suggestive (ABS) messages." The patent describes sending "motivational messages," based "on the current or anticipated activity of the user," to a "personal electronic device." It provides examples such as sending the message "don't give up" when the user is running up a hill, or messages like "do not fear" and "God is with you" when a "user enters a dangerous neighborhood." Simply put, the patent claims to have invented using a computer to send tailored messages based on activity or location. While the name "Motivational Health Messaging" may sound new, the actors behind it aren't: the people associated with MHM and its patent overlap with the people associated with notorious patent assertion entities Shipping & Transit, Electronic Communication Technologies, ArrivalStar, and Eclipse IP, who we've written about on numerous occasions. Collectively, these entities have filed over 700 lawsuits, with Shipping & Transit setting the 2016 record for most patent infringement lawsuits filed. Though MHM and its patent may be new, the business model seems to be the same as the other, related entities': make patent infringement demands, often against small businesses, and leverage the high cost of litigation to extract settlements in the $25,000 to $45,000 range. (As of the date of this post, MHM has not yet filed any lawsuits and the related entities have been faring very poorly in court.) Unfortunately, for many small businesses it often makes sense to simply pay for a license instead of spending years tied up in court challenging a patent. Receiving a demand letter frivolously asserting infringement is annoying enough. Even more frustrating is being forced to divert resources away from product development in order to defend against a non-practicing entity with bad patents. Nevertheless, Motiv decided it would not go down without a fight. Motiv retained Rachael Lamkin, who replied with her own letter explaining why Motiv does not infringe, and why MHM's patent is invalid. Lamkin also says that in the event of litigation, Motiv would seek to join the individuals behind MHM to the lawsuit—and make them personally responsible for "any sanction or fee award." The letter laid out in painstaking detail many of the numerous deficiencies in MHM's patent and infringement claim, and refused to pay MHM a cent. The complete set of materials sent to MHM can be found at the end of this post. We hope that MHM does not push ahead with a business model that preys on the vulnerability of small businesses, and only succeeds when undeserved settlements are paid. Patent holders like this take advantage of inefficiencies in our legal system, despite the extreme weakness of their cases.
By publishing Motiv’s response letter and supporting documentation, Motiv and EFF hope that others may benefit and not pay the troll under the bridge. If you have recently been sued or received a demand letter from MHM, contact info@eff.org.
Links to documents and correspondence between Motivational Health Messaging, LLC and Motiv, Inc.:
U.S. Pat. No. 9,069,648
Motivational Health Messaging's demand letter to Motiv
Motiv's response letter to Motivational Health Messaging
Assignment records related to U.S. Pat. 9,069,648
Corporate records related to U.S. Pat. 9,069,648
Patent Office File History related to U.S. Pat. 9,069,648
Patent Office File History for patent application related to U.S. Pat. 9,069,648
Court records from Shipping & Transit, LLC v. Lensdiscounters.com
Court records from Shipping & Transit, LLC v. 1A Auto, Inc.
NIH Paper
Lee prior art and invalidity chart
Kaufman prior art and invalidity chart
Steve prior art and invalidity chart
Christ prior art and invalidity chart
Ferguson prior art and invalidity chart
Chittum prior art and invalidity chart
Dalebout prior art and invalidity chart
Hoffman prior art and invalidity chart
>> mehr lesen

Twilio Demonstrates Why Courts Should Review Every National Security Letter (Mi, 07 Feb 2018)
The list of companies that exercise their right to ask for judicial review when handed national security letter gag orders from the FBI is growing. Last week, the communications platform Twilio posted two NSLs after the FBI backed down from its gag orders. As Twilio’s accompanying blog post documents, the FBI simply couldn’t or didn’t want to justify its nondisclosure requirements in court. This might be the starkest public example yet of why courts should be involved in reviewing NSL gag orders in all cases. National security letters are a kind of subpoena that gives the FBI the power to require telecommunications and Internet providers to hand over private customer records—including names, addresses, and financial records. The FBI nearly always accompanies these requests with a blanket gag order, silencing the providers and keeping the practice in the shadows, away from public knowledge or criticism. Although NSL gag orders severely restrict the providers’ ability to talk about their involvement in government surveillance, the FBI can issue them without court oversight. Under the First Amendment, “prior restraints” like these gag orders are almost never allowed, which is why EFF and our clients CREDO Mobile and Cloudflare have for years been suing to have the NSL statute declared unconstitutional. In response to our suit, Congress included in the 2015 USA FREEDOM Act a process to allow providers to push back against those gag orders. The new process (referred to as “reciprocal notice”) gives technology companies a right to request judicial review of the gag orders accompanying NSLs. When a company invokes the reciprocal notice process, the government is required to bring the gag order before a judge within 30 days. The judge then reviews the gag order and either approves, modifies, or invalidates it. The company can appear in that proceeding to argue its case, but is not required to do so. Under the law, reciprocal notice is just an option. It’s no substitute for the full range of First Amendment protections against improper prior restraints, let alone mandatory judicial review of NSL gags in all cases. Nevertheless, EFF encourages all providers to invoke reciprocal notice because it’s the best mechanism available to Internet companies to voice their objections to NSLs. In our 2017 Who Has Your Back report, we awarded gold stars to companies that promised to tell the FBI to go to court for all NSLs, including giants like Apple and Dropbox. Twilio is the latest company to follow this best practice. It received the two national security letters in May 2017, both of which included nondisclosure requirements preventing Twilio from notifying its users about the government request. And both times, Twilio successfully invoked reciprocal notice, leading the FBI to give permission to publish the letters. This might seem surprising, given that in order to issue a gag, the FBI is supposed to certify that disclosure of the NSL risks serious harm related to an investigation involving national security. But rather than going to court to back up its certification, the FBI backed down. It retracted one of the NSLs entirely, so that Twilio was not forced to hand over any information at all. For the other, the FBI simply removed the gag order, allowing Twilio to inform its customer and publish the NSL. This is not what the proper use of a surveillance tool looks like. Instead, it reveals a regime of censorship by attrition.
The FBI imposes thousands of NSL gag orders a year, and by default, these gag orders remain in place indefinitely. Only when a company like Twilio objects does the government bear even a minimal burden of showing its work. Without a legal obligation to do so in all cases, the FBI can simply hope most companies don’t speak up. That’s why it’s so crucial that companies like Twilio take responsibility and invoke reciprocal notice. Better still, Twilio also published a list of best practices that companies can look to when responding to NSLs, including template language to push back on standard nondisclosure requirements. (Automattic, the company behind WordPress, published a similar template last year.) As the company explained, “The process for receiving and responding to national security letters has become less opaque, but there’s still more room for sunlight.” We couldn’t agree more. Hopefully, if more companies follow the lead of Apple, Dropbox, Twilio, and the others who received stars in our report, the courts and Congress will see the need for further reform of the law.
>> mehr lesen

Fair Use Overcomes Chrysler's Bogus Copyright Notice (Mo, 05 Feb 2018)
If you watched this year’s Super Bowl, you might have seen an advertisement for Dodge Ram featuring a Dr. Martin Luther King, Jr. voiceover. To criticize the ad, and to show how antithetical it was to King’s views, Current Affairs magazine created a new version. The altered version overlays audio from elsewhere in the same speech where King criticizes excessively commercial culture and specifically calls out car ads. Although this is about as clear a fair use as one could imagine, Chrysler responded with a copyright claim. Fortunately, the takedown did not last long. The Streisand Effect quickly kicked into gear and others reposted the video. A copy on Twitter has collected over one million views. The copyright claim was then withdrawn. We reached out to Chrysler, and a spokesperson responded that the video was taken down by YouTube's Content ID system but was restored after Chrysler discovered the error. While we are glad that this video was restored, in many less high-profile cases, automated takedowns are never reviewed or challenged. Many, including the King Center, have commented on how Chrysler came to use a speech that included criticism of car ads in a car ad. Chrysler has defended the ad, saying it had permission from King’s estate. King’s estate partnered with EMI in 2009 to create new “revenue streams” for King’s works and image. But where a use has not been authorized, King’s estate has tended to enforce its rights quite aggressively. It once sued CBS for using a lengthy clip of the “I Have a Dream” speech in a documentary. The estate also exacted an $800,000 payment for “permission” to use King’s words and image on the Martin Luther King Jr. Memorial in Washington. The award-winning movie Selma couldn’t use any of King’s speeches because the rights had been licensed to another studio. Lengthy copyright terms and post-mortem rights of publicity mean that King’s words and image will be fueling EMI’s revenue streams until approximately 2039. Fortunately, fair use offers a counter-balance for the public interest. This is why we can watch Chrysler’s commercial combined with King’s real feelings about car ads. Fair use won the day this time.
>> mehr lesen

BMG v. Cox: ISPs Can Make Their Own Repeat-Infringer Policies, But the Fourth Circuit Wants A Higher "Body Count" (Mo, 05 Feb 2018)
Last week’s BMG v. Cox decision has gotten a lot of attention for its confusing take on secondary infringement liability, but commentators have been too quick to dismiss the implications for the DMCA safe harbor. Internet service providers are still not copyright police, but the decision will inevitably encourage ISPs to act on dubious infringement complaints, and even to kick more people off of the Internet based on unverified accusations. This long-running case involves a scheme by copyright troll Rightscorp to turn a profit for shareholders by demanding money from users whose computer IP addresses were associated with copyright infringement. Turning away from the tactic of filing lawsuits against individual ISP subscribers, Rightscorp began sending infringement notices to ISPs, coupled with demands for payment, and insisting that ISPs forward those notices to their customers. In other words, Rightscorp and its clients, including BMG, sought to enlist ISPs to help coerce payments from Internet users, threatening the ISPs themselves with an infringement suit if they didn’t join in. Cox, a midsize cable operator and ISP, pushed back and was punished for it. Before the suit, Cox had quite reasonably decided to stick up for its customers by refusing to forward Rightscorp’s money demands. Going along would have put Cox’s imprimatur on Rightscorp’s vaguely worded threats. The Digital Millennium Copyright Act safe harbors, which protect ISPs and other Internet services from copyright liability, don’t require ISPs who simply transmit data to respond to infringement notices, much less forward them. Unfortunately, Cox failed to comply with another of the DMCA’s requirements. To receive protection, an ISP must “reasonably implement” a policy for terminating “subscribers and account holders” who are “repeat infringers” in “appropriate circumstances.” Past decisions haven’t defined what “appropriate circumstances” are, but they do make clear that a repeat infringer policy has to be more than mere lip service. Cox’s defense foundered—as many do—on a series of unfortunate emails. As shown in court, Cox employees discussed receiving many infringement notices for the same subscriber, and giving repeated warnings to those subscribers, but never actually terminating them, or terminating them only to reconnect them immediately. The emails painted a picture of a company only pretending to observe the repeat-infringer requirement, while maintaining a real policy of never terminating subscribers. The reason, said the Cox employees to one another, was to eke out a bit more revenue. Despite the emails, BMG’s case had a weakness: the notices from Rightscorp and others were mere accusations of infringement, their accuracy and veracity far from certain. Nothing in the DMCA requires an ISP to kick customers off the Internet based on mere accusations. What’s more, the “appropriate circumstances” for terminating someone’s entire Internet connection are few and far between, given the Internet’s still-growing importance in daily life. As the Supreme Court wrote last year, “Cyberspace . . . in general” and “social media in particular” are “the most important places (in a spatial sense) for the exchange of views.” Even more than a website or social network, an ISP can and should save termination for the most egregious violations, backed by substantial evidence. The Court of Appeals for the Fourth Circuit acknowledged this, to a point.
The court was “mindful of the need to afford ISPs flexibility in crafting repeat infringer policies, and of the difficulty of determining when it is ‘appropriate’ to terminate a person’s access to the Internet.” The court ruled that Cox had lost its safe harbor, not because its termination policy was too lenient, but because it failed to implement its own policy. “Indeed,” wrote the court, “in carrying out its thirteen-strike process, Cox very clearly determined not to terminate subscribers who in fact repeatedly violated the policy.” The court also ruled that “repeat infringer” isn’t limited to those who are found liable by a court. But the court stopped short of holding that mere accusations should lead to terminations. The court pointed to “instances in which Cox failed to terminate subscribers whom Cox employees regarded as repeat infringers” after conversations with those subscribers, implying that they, at least, should have been terminated. The court should have stopped there. Unfortunately, it also pointed to the number of actual terminations Cox carried out—fewer than one per month, compared to thousands of warnings and temporary suspensions—as a factor in denying Cox the safe harbor. That focus on “body counts” ignores the reality that terminating home Internet service is akin to “cutting off someone’s water.” And the court didn’t acknowledge that Cox’s decision to stop accepting Rightscorp’s notices—which included demands for money—protected Cox customers from an exploitative “speculative invoicing” business. So where does this decision leave ISPs? Certainly, they should not repeat Cox’s mistake by making it clear that their termination policy is an illusion. But nothing in the decision forbids an ISP from standing up for its customers by demanding strong and accurate evidence of infringement, and reserving termination for the most egregious cases—even if that makes actual terminations extremely rare. The case isn’t over; losing the DMCA safe harbor doesn’t mean that Cox is liable for copyright infringement by its customers. BMG still needs to show that Cox is liable under the contributory, vicarious, or inducement theories that apply to all service providers. The Fourth Circuit ruled that the jury got the wrong instructions, and that contributory liability requires more than a finding that Cox “should have known” about customers’ infringement. Because of that faulty instruction, the appeals court sent the case back for a new trial. The court’s ruling on inducement liability was confusing, as it seemed to conflate “intent” with “knowledge.” It’s important that the courts treat secondary liability doctrines thoughtfully and clearly, as they have a profound effect on how Internet services are designed and what users can do on them. That’s why, while we expect to see more suits like this, we hope that ISPs will continue to stand up for their users as Cox has in defending this one.
>> mehr lesen

Keep Border Spy Tech Out of Dreamer Protection Bills (Sa, 03 Feb 2018)
UPDATE Feb. 14, 2018: Today, President Trump endorsed Sen. Grassley's bill on border and immigration issues (H.R. 2579). EFF opposes it. Like many of its predecessors, this bill would expand invasive surveillance on Americans and foreigners alike, with biometric screening, social media snooping, drones, and automatic license plate readers.
If Congress votes this month on legislation to protect Dreamers from deportation, any bill it considers should not include invasive surveillance technologies like biometric screening, social media snooping, automatic license plate readers, and drones. Such high tech spying would unduly intrude on the privacy of immigrants and Americans who live near the border and travel abroad.
How We Got Here
In September 2017, President Trump announced that, effective March 2018, his administration would end the Obama administration’s Deferred Action for Childhood Arrivals (DACA) program, which protects from deportation some 800,000 young adults (often called Dreamers) brought to the United States as children. In January 2018, Senate Majority Leader Mitch McConnell (R-KY) promised to hold a vote in February 2018 on an immigration bill that protects Dreamers. In response to this promise, Democratic Party Senators voted with Republican Party Senators to end last month’s government shutdown. That immigration vote could occur as early as next week, before a short-term federal funding law expires on February 8. President Trump’s recent framework for immigration legislation calls for unspecified “technology” to secure the border. That framework also calls for border wall funding, more immigration enforcement personnel, faster deportations, new limits on legal immigration, and a path to citizenship for Dreamers. A bill recently filed by House Judiciary Committee Chair Bob Goodlatte (R-VA) and House Homeland Security Committee Chair Michael McCaul (R-TX) includes a similar blend of immigration policies. This bill (H.R. 4760) may be the vehicle for Sen. McConnell to try to keep his promise of an immigration vote this month. This year’s Goodlatte-McCaul bill includes many high tech border spying provisions recycled from three bills filed last year: S. 1757, S. 2192, and H.R. 3548. EFF opposed these bills, and now opposes the Goodlatte-McCaul bill.
Biometric Screening at the Border
The Goodlatte-McCaul bill (section 2106) would require the U.S. Department of Homeland Security (DHS) to collect biometric information from people leaving the country, including both U.S. citizens and foreigners. The bill also requires collection of “multiple modes of biometrics.” Further, the new system must be “interoperable” with other systems, meaning together the systems can pool ever-larger sets of biometrics gathered for different purposes by different agencies. The bill would codify and expand an existing DHS program of facial recognition screening of all travelers, U.S. citizens and foreigners alike, who take certain flights out of the country. Instead, Congress should simply end this invasive program. Biometric screening is a unique threat to our privacy: it is easy for other people to capture our biometrics, and once this happens, it is hard for us to do anything about it. Once the government collects our biometric data, thieves might steal it, government employees might misuse it, and policy makers might deploy it in new government programs. Also, facial recognition has significant accuracy problems, especially for people of color.
Further, this bill’s border biometric screening must be understood as just the first step towards what DHS is already demanding: biometric screening throughout our domestic airports.
Social Media Snooping on Visa Applicants
The Goodlatte-McCaul bill (section 3105) would authorize DHS to snoop on the social media of visa applicants from so-called “high-risk countries.” This would codify and expand existing DHS and State Department programs of screening the social media of certain visa applicants. EFF opposes these programs. Congress should end them. They threaten the digital privacy and freedom of expression of innocent foreign travelers, and the many U.S. citizens and lawful permanent residents who communicate with them. The government permanently stores this captured social media information in a record system known as “Alien Files.” The government is now trying to build an artificial intelligence (AI) system to screen this social media information for signs of criminal intent. The government calls this planned system “extreme vetting.” Privacy and immigrant advocates call it a “digital Muslim ban.” Scores of AI experts concluded that this AI system will likely be “inaccurate and biased.” Moreover, the bill would empower DHS to decide which countries are “high-risk,” based on “any” criteria it deems “appropriate.” DHS may use this broad authority to improperly target social media screening at nations with majority Muslim populations.
Drone Flights Near the Border
The Goodlatte-McCaul bill (sections 1112, 1113, and 1117) would expand drone flights near the border. Unfortunately, the bill does not limit the flight paths of these drones. Nor does it limit the collection, storage, and sharing of sensitive information about the whereabouts and activities of innocent bystanders. Drones can capture personal information, including faces and license plates, from all of the people on the ground within the range and sightlines of a drone. Drones can do so secretly, thoroughly, inexpensively, and at great distances. Millions of U.S. citizens and immigrants live close to the U.S. border, and deployment of drones at the U.S. border will invariably capture personal information from vast numbers of innocent people.
ALPRs Near the Border
The Goodlatte-McCaul bill (section 2104) would require DHS to upgrade its automatic license plate readers (ALPRs) at the border, and authorize spending of $125 million to do this. It is unclear whether this provision applies only to ALPRs at border crossings, or also to ALPRs at interior checkpoints, some of which are located as far as 100 miles from the border. Millions of U.S. citizens and immigrants who live near the U.S. border routinely drive through these interior checkpoints on their way to work and school, while avoiding any actual passage through the U.S. border itself. The federal government should not subject them to ALPR surveillance merely because they live near the border. ALPRs collect highly sensitive location information. DHS already is using private ALPR databases to locate and deport undocumented immigrants. Likewise, it already is using its own ALPRs at interior checkpoints to enforce immigration laws.
Dreamers and Surveillance
For years, EFF has worked to protect immigrants from high tech spying. For example, we support legislation that would bar state and local police agencies from diverting their criminal justice databases to immigration enforcement.
Some Dreamers fear a similar form of digital surveillance: diversion of the federal government’s DACA database, created to assist Dreamers, to instead locate and deport them. New legislation to protect Dreamers from deportation should not come at the price of other high tech spying on immigrants and others, including biometric screening, social media monitoring, drones, and ALPRs.
>> mehr lesen

Georgia Must Block This Flawed Computer Crime Bill (Fr, 02 Feb 2018)
The State of Georgia must decide: will it be a hub of technological and online media innovation, or will it be the state that criminalized terms of service violations? Will it support security research that makes us all safer, or will it chill the ability of Georgia’s infosec community to identify vulnerabilities that need to be fixed to protect our private information? This is what’s at stake with Georgia’s S.B. 315, and state lawmakers should stop it dead in its tracks. As EFF wrote in its letter opposing the bill, this legislation would hand immense power to prosecutors to go after anyone for “checking baseball scores on a work computer, lying about your age or height in your user profile contrary to a website’s policy, or sharing passwords with family members in violation of the service provider’s rules.” The bill also fails to clearly exempt legitimate, independent security research—such as that conducted by Georgia Tech’s renowned cybersecurity department—from the computer crime law. Georgia already has a robust computer crime statute that covers a wide range of malicious activities online, but S.B. 315 would criminalize simply accessing a computer, app, or website contrary to how the service provider tells you, even if you never cause or intend to cause harm. A violation under S.B. 315 would be classified as “a misdemeanor of a high and aggravated nature,” punishable by a fine of up to $5,000 and 12 months in jail. EFF has long criticized how stretched interpretations of the federal Computer Fraud & Abuse Act have resulted in the prosecution of computer scientists, such as Aaron Swartz. Georgia’s S.B. 315 is even worse in terms of how broadly it may be applied to regular users engaged in benign online behavior. Fortunately, the digital rights community in Georgia is mobilizing. Electronic Frontiers Georgia, an ally in the Electronic Frontiers Alliance network, is speaking out against S.B. 315. Andy Green, an infosec lecturer at Kennesaw State University, is also calling for an overhaul of the bill to ensure computer researchers can carry out their work “without fear of arrest and prosecution.” If Georgia lawmakers want to protect their residents from computer crime, it does not help to open them up to prosecution for the tiniest violation of the fine print in a buried terms of service agreement. And if lawmakers want Georgia to remain a welcoming destination for tech talent who can identify and stop breaches, they should spike S.B. 315 immediately. Read EFF's letter to the Georgia legislature by EFF Staff Attorney Jamie Williams.
>> mehr lesen

Federal Appeals Court Misses Opportunity to Rule that Section 230 Bars Claims Against Online Platforms for Hosting Terrorist Content (Fr, 02 Feb 2018)
Although a federal appeals court this week agreed to dismiss a case alleging that Twitter provided material support for terrorists in the form of accounts and direct messaging services, the court left the door open for similar lawsuits to proceed in the future. This is troubling because the threat of liability created by these types of cases may lead platforms to further filter and censor users’ speech. The decision by the U.S. Court of Appeals for the Ninth Circuit in Fields v. Twitter is good news inasmuch as it ends the case. But the court failed to rule on whether 47 U.S.C. § 230 (known as “Section 230”) applied and barred the plaintiffs’ claims. That’s disappointing. The Ninth Circuit missed an opportunity to rule that one of the Internet’s most important laws bars these types of cases. Section 230 provides online platforms with broad immunity from liability that flows from user speech. By limiting intermediary liability for user-generated content, Congress sought to incentivize innovation in online products and services and thereby create new avenues for online discourse and engagement. Section 230’s value has taken on increasing importance as the current Congress considers substantially weakening the statute. The plaintiffs in Fields filed their lawsuit in an attempt to hold Twitter liable for the deaths of two Americans killed in a 2015 attack in Jordan for which ISIS had taken credit. The plaintiffs claimed that by providing accounts and messaging services to ISIS members and sympathizers, Twitter had provided material support to terrorists in violation of U.S. law. The trial court dismissed the case, ruling that Section 230 barred the claims. The court also ruled that the plaintiffs had not shown that Twitter played a direct role in the Jordan attack. When the plaintiffs appealed, EFF filed a brief in support of Twitter. First, we argued that extending such material support liability to online platforms would threaten Internet users because those platforms would become incentivized to over-censor user content or severely curtail the creation of accounts (or even new products and services) in the first place. Second, we argued that such material support liability would violate online platforms’ First Amendment rights. Finally, we argued that the claims undercut both the letter and spirit of Section 230. The Ninth Circuit affirmed the trial court’s ruling that the plaintiffs had failed to sufficiently allege that Twitter was the “proximate cause” of the attack. This legal concept requires plaintiffs to link the harm they suffered to the actions of defendants. The appeals court wrote that although the plaintiffs’ complaint in Fields established that “Twitter’s alleged provision of material support to ISIS facilitated the organization’s growth and ability to plan and execute terrorist attacks,” the complaint failed to “articulate any connection between Twitter’s provision of this aid and Plaintiffs-Appellants’ injuries.” After tossing out the case on the proximate cause issue, the Ninth Circuit deliberately avoided ruling on the question of whether Section 230 barred the lawsuit regardless of the causation issue. This was a missed opportunity because a definitive ruling on Section 230 would have likely shut down a handful of similar suits currently in other federal courts—or possibly being considered by other parties.
Like in Fields, these lawsuits claim that online platforms such as Twitter, Facebook, and YouTube provided material support to terrorists based on the presence of user-generated content advocating for terrorism, and that this content led to the injuries or deaths of the plaintiffs. Although the ruling in Fields should make it difficult for these cases to proceed, it’s possible that some plaintiffs could write their complaints to address the causation issue identified by the Ninth Circuit. On the other hand, if the appeals court had ruled that Section 230 barred the claims, it would have been a clear indication that these lawsuits are not on sound legal footing—and might have been the end of the line for these types of cases. So, although we’re happy that the plaintiffs did not prevail in this case, we hope that future courts examining this issue will actually rule on Section 230 grounds.
>> mehr lesen

You Can Call the Super Bowl the "Super Bowl" (Fr, 02 Feb 2018)
Are you going to a Big Game party on Sunday? Or perhaps going to watch the pro football championship game? Or take in the majestic splendor of the Superb Owl? You can also just call it by its real name: the Super Bowl. The NFL is infamous for coming down like a ton of bricks on anyone who dares use the actual name for the game in public. And it's also famous for trying to grab control of the names people started using when the NFL’s tactics worked and scared everyone away from saying “Super Bowl.” No matter how hard the NFL tries, it doesn’t own the phrase “The Big Game,” which has been used for longer than there’s been a Super Bowl. But anything that looks like someone making money off of the name will attract the NFL’s attention. In 2007, the NFL put a stop to an Indiana church’s party for a number of reasons, including that the church promoted it as a “Super Bowl bash.” The NFL’s tactics don’t change the fact that you can totally say “Super Bowl.” The NFL has trademarked the terms “Super Bowl” and “Super Sunday,” but that doesn’t mean it actually controls all rights to the phrase. Instinctively, we all know that can’t be how the law works. We see and use trademarked names for things all the time. Grocery stores advertise special deals on Coca-Cola and we put “Windex” on our grocery lists. Commercials namecheck competitors all the time. It doesn’t even make any internal sense. Companies have trademarks so that they can have something that everyone instantly recognizes, not so that they suddenly become Voldemort and can’t be named out of fear. Having a trademark means being able to make sure no one can slap the name of your product onto theirs and confuse buyers into thinking they’re getting the real thing. It also means being able to stop uses of the name that might make someone think there’s an endorsement or sponsorship. If neither of those things happens, you can call the Super Bowl the Super Bowl. The ability to use something’s trademarked name to identify it—even in a commercial—is called “nominative fair use.” Because the trademark is its name. Thankfully, the NFL and the Super Bowl are really good at letting us know who has paid astronomical amounts to get the NFL’s endorsement. Ads end with things like “official vehicle sponsor of the NFL” and there’s a whole page of sponsor names on the Super Bowl’s website. There are so many instantly recognizable ways to know who has partnered with the NFL and who hasn’t that no one can think your party is an official, NFL-sponsored get-together. No one thought that about the one at the church in 2007. The reason no one says “Super Bowl” has nothing to do with the law and everything to do with the massive amount of resources the NFL has brought to bear on the issue. Its pockets are very deep, its will is strong, and its desire for control ravenous. But its scare tactics don’t change the fact that you can totally say “Super Bowl.”
>> mehr lesen

Catalog of Missing Devices Illustrates Gadgets that Could and Should Exist (Fr, 02 Feb 2018)
Bad Copyright Law Prevents Innovators from Creating Cool New Tools
San Francisco - The Electronic Frontier Foundation (EFF) has launched its “Catalog of Missing Devices”—a project that illustrates the gadgets that could and should exist, if not for bad copyright laws that prevent innovators from creating the cool new tools that could enrich our lives. “The law that is supposed to restrict copying has instead been misused to crack down on competition, strangling a future’s worth of gadgets in their cradles,” said EFF Special Advisor Cory Doctorow. “But it’s hard to notice what isn’t there. We’re aiming to fix that with this Catalog of Missing Devices. It’s a collection of tools, services, and products that could have been, and should have been, but never were.” The damage comes from Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), which covers digital rights management software (DRM). DRM was designed to block software counterfeiting and other illegal copying, and Section 1201 bans DRM circumvention. However, businesses quickly learned that by employing DRM they could thwart honest competitors from creating interoperable tools. Right now, that means you could be breaking the law just by doing something as simple as repairing your car on your own, without the vehicle-maker’s pricey tool. Other examples include rightsholders forcing you to buy additional copies of movies you want to watch on your phone—instead of allowing you to rip the DVD you already own and are entitled to watch—or manufacturers blocking your printer from using anything but their official ink cartridges. But that’s just the beginning of what consumers are missing. The Catalog of Missing Devices imagines things like music software that tailors your listening to the audiobook you are reading, or a gadget that lets parents reprogram talking toys to replace canned, meaningless messaging. “Computers aren’t just on our desktops or in our pockets—they are everywhere, and so is the software that runs them,” said EFF Legal Director Corynne McSherry. “We need to fix the laws that choke off competition and innovation with no corresponding benefit.” The Catalog of Missing Devices is part of EFF’s Apollo 1201 project, dedicated to eradicating all DRM from the world. A key step is eliminating laws like DMCA 1201, as well as the international versions of this legislation that the U.S. has convinced its trading partners to adopt.
For the Catalog of Missing Devices: https://www.eff.org/missing-devices
Contact:
Cory Doctorow, EFF Special Advisor, doctorow@craphound.com
Corynne McSherry, EFF Legal Director, corynne@eff.org
>> mehr lesen

The Hypocrisy of AT&T’s “Internet Bill of Rights” (Fr, 02 Feb 2018)
Last week, AT&T decided it’s good business to advocate for an “Internet Bill of Rights.” Of course, that catchy name doesn’t in any way mean that what AT&T wants is a codified list of rights for Internet users. No, what AT&T wants is to keep a firm hold on the gains it has made in the last year at the expense of its customers’ rights. There is nothing in the history—the very recent history—of AT&T to make anyone believe that it has anyone’s actual best interests at heart. Let’s take a look at how this company has traditionally felt about privacy and network neutrality. Few companies have done more to combat privacy and network neutrality than AT&T. It takes an incredible amount of arrogance for AT&T to take out a full-page ad in the New York Times calling for an “Internet Bill of Rights” after spending years effectively waging the most far-reaching lobbying campaign to eliminate every consumer right. In some ways, it should strike you as a type of conqueror’s decree after successfully laying waste to the legal landscape to remake it in its own image. But AT&T’s goal is abundantly clear: it does not like the laws that exist today to police its conduct on privacy and network neutrality, so it wishes to rewrite them while hoping Americans ignore its past actions.
AT&T’s Fight Against Privacy
In 2017, Congress repealed the FCC’s broadband privacy rules. It was easy to be frustrated and angry with the government, but remember, when it happened, AT&T was there arguing that losing your privacy was good for you. In fact, it even argued that you didn’t need to worry because AT&T and other ISPs were still regulated. In its own words: “for example, AT&T and other ISPs’ actions continue to be governed by Section 222 of the Communications Act.” This is deeply ironic: AT&T was arguing that the Communications Act would protect us while simultaneously lobbying the Federal Communications Commission (FCC) to stop applying Section 222 to its broadband service, which is exactly what happened this December when the FCC repealed the 2015 Open Internet Order. AT&T has not stopped there, either. Having won on the national level, it is, right now, using the same double-talk to stop states from passing ISP privacy laws to fill the gap it created. In California, for example, it stated on the record that “AT&T and other major Internet service providers have committed to legally enforceable Privacy Principles that are consistent with the privacy framework developed by the FTC over the past twenty years.” Which is a long way of saying, “There is no need to pass a state law because the Federal Trade Commission can enforce the law on us.” As with the arguments made in front of Congress and the FCC, AT&T both states that there are laws that cover ISPs and that those laws don’t exist. What exactly is AT&T saying about the FTC’s enforcement power — the power it claims obviates the need for state laws — in the courts today? That it is exempt from it. That position comes from litigation known as the FTC v. AT&T Mobility case, which is still ongoing. The core of AT&T's argument is that because its telephone service is a common carrier service, and the FTC is prohibited from regulating common carriers, the entire company is exempt (even though its broadband product is no longer a common carrier service after the repeal of the 2015 Open Internet Order).
It has so far prevailed on that argument in the 9th Circuit, and many proponents of repealing network neutrality incorrectly, though spiritedly, claimed that the decision to end common carrier regulation of broadband would enhance the FTC's power over AT&T. However, AT&T is going so far as to argue – today – that it does not even matter how the FCC regulates ISP broadband; the company is simply, de facto, exempt from FTC power, a point it makes in a footnote in its legal filing. All of this is to say that AT&T is waging a sustained, ongoing war on user privacy. It was AT&T that inserted ads into the traffic of people who used its wifi hotspots in airports. It also used “Carrier IQ,” which gave it the capability to track everything you do, from what websites you visit to what applications you use. It took a class action lawsuit for the carriers to begin backing down from this idea. And if it were not for Verizon getting into legal trouble with the federal government for use of the undeletable “super cookie,” AT&T would have followed suit to get in on the action.
AT&T’s Fight Against Network Neutrality
“Some companies want us to be a big dumb pipe that gets bigger and bigger. No one gets a free ride. Those that want to use this will pay.” - former AT&T CEO in 2006
This famous remark was probably the most straightforward and honest statement the ISP has made in regard to its thoughts on network neutrality. In addition to obviously misconstruing the facts, it’s a manifestation of AT&T’s belief that an open and free Internet is a threat to its bottom line. At each and every iteration of the network neutrality debate before the FCC, AT&T has raised objections to enforcing net neutrality. In filing, after filing, after filing, after filing, AT&T has made arguments against being required to operate in a non-discriminatory manner. That makes sense, since over those years AT&T has violated net neutrality on multiple occasions. Just last year, the FCC determined that AT&T was engaging in discriminatory, anti-competitive practices by zero-rating its own DIRECTV content while simultaneously charging its competitors unfavorable rates to get the same treatment. While FCC Chairman Ajit Pai halted the investigation and rescinded its findings to eliminate their legal impact on behalf of AT&T and Verizon, the facts are indisputable: AT&T was giving away its own video programming for free in order to drive customers to subscribe to DIRECTV, while stifling any competing video streaming services. The Department of Justice under President Trump shared these concerns when it filed its antitrust lawsuit against AT&T to block its acquisition of Time Warner content on the grounds that it would harm online video competition. But that’s just the tip of the iceberg. Back in 2012, AT&T blocked its customers from using FaceTime, Apple’s video chat app, unless they switched to data plans that were generally more expensive. Not only was this a clear case of blocking based on content for purely business reasons, but AT&T also tried to claim that doing so didn’t violate net neutrality. This two-faced argument shows just how far the company is willing to go in its double-speak to get away with violating real net neutrality. If AT&T wants the public to take its “Internet Bill of Rights” advocacy seriously, rather than come across as disingenuous in its public relations campaign, then it needs to actively change how it lobbies Congress and the state legislatures.
Rather than deploying its resources to oppose every effort to restore network neutrality and privacy, it should be supporting those efforts. Until then, this is just another example of a major ISP co-opting a message it fought hard to defeat (and lost), and now pretending to support it in hopes that Internet users look the other way.
>> mehr lesen

How Congress’s Extension of Section 702 May Expand the NSA’s Warrantless Surveillance Authority (Do, 01 Feb 2018)
Last month, Congress reauthorized Section 702, the controversial law the NSA uses to conduct some of its most invasive electronic surveillance. With Section 702 set to expire, Congress had a golden opportunity to fix the worst flaws in the NSA’s surveillance programs and protect Americans’ Fourth Amendment right to privacy. Instead, it reupped Section 702 for six more years. But the bill passed by Congress and signed by the president, labeled S. 139, didn’t just extend Section 702’s duration. It also may expand the NSA’s authority in subtle but dangerous ways. The reauthorization marks the first time that Congress has passed legislation explicitly acknowledging and codifying some of the most controversial aspects of the NSA’s surveillance programs, including “about” collection and “backdoor searches.” That will give the government more legal ammunition to defend these programs in court, in Congress, and to the public. It also suggests ways for the NSA to loosen its already lax self-imposed restraints on how it conducts surveillance.
Background: NSA Surveillance Under Section 702
First passed in 2008 as part of the FISA Amendments Act—and reauthorized last week until 2023—Section 702 is the primary legal authority that the NSA uses to conduct warrantless electronic surveillance against non-U.S. “targets” located outside the United States. The two publicly known programs operated under Section 702 are “upstream” and “downstream” (formerly known as “PRISM”). Section 702 differs from other foreign surveillance laws because the government can pick targets and conduct the surveillance without a warrant signed by a judge. Instead, the Foreign Intelligence Surveillance Court (FISC) merely reviews and signs off on the government’s high-level plans once a year. In both upstream and downstream surveillance, the intelligence community collects and searches communications it believes are related to “selectors.” Selectors are search terms that apply to a target, like an email address, phone number, or other identifier. Under downstream, the government requires companies like Google, Facebook, and Yahoo to turn over messages “to” and “from” a selector—gaining access to things like emails and Facebook messages. Under upstream, the NSA relies on Internet providers like AT&T to provide access to large sections of the Internet backbone, intercepting and scanning billions of messages rushing between people and through websites. Until recently, upstream resulted in the collection of communications to, from, or about a selector. More on “about” collection below. The overarching problem with these programs is that they are far from “targeted.” Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review. In countless ways, Section 702 surveillance violates Americans’ privacy and other constitutional rights, not to mention the rights of the millions of people around the world whose communications privacy is also ignored. This is why EFF vehemently opposed the Section 702 reauthorization bill that the President recently signed into law. We’ve been suing since 2006 over the NSA’s mass surveillance of the Internet backbone and trying to end these practices in the courts.
While S. 139 was described by some as a reform, the bill was really a total failure to address the problems with Section 702. Worse still, it may expand the NSA’s authority to conduct this intrusive surveillance.
Codified “About” Collection
One key area where the new reauthorization could expand Section 702 is the practice commonly known as “about” collection (or “abouts” collection in the language of the new law). For years, when the NSA conducted its upstream surveillance of the Internet backbone, it collected not just communications “to” and “from” a selector like an email address, but also messages that merely mentioned that selector in the message body. This is a staggeringly broad dragnet tactic. Have you ever written someone’s phone number inside an email to someone else? If that number was an NSA selector, your email would have been collected, though neither you nor the email’s recipient was an NSA target. Have you ever mentioned someone’s email address through a chat service at work? If that email address was an NSA selector, your chat could have been collected, too. “About” collection involves scanning and collecting the contents of Americans’ Fourth Amendment-protected communications without a warrant. That’s unconstitutional, and the NSA should never have been allowed to do it in the first place. Unfortunately, the FISC and other oversight bodies tasked with overseeing Section 702 surveillance often ignore major constitutional issues. So the FISC permitted “about” collection to go on for years, even though the collection continued to raise complex legal and technical problems. In 2011, the FISC warned the NSA against collecting too many “non-target, protected communications,” in part due to “about” collection. Then the court imposed limits on upstream, including in how “about” communications were handled. And when the Privacy and Civil Liberties Oversight Board issued its milquetoast report on Section 702 in 2014, it said that “about” collection pushed “the entire program close to the line of constitutional reasonableness.” For its part, the NSA asserted that “about” collection was technically necessary to ensure the agency actually collected all the to/from communications it was supposedly entitled to. In April 2017, we learned that the NSA’s technical and legal problems with “about” collection were even more pervasive than previously disclosed, and it had not been complying with the FISC’s already permissive limits. As a result, the NSA publicly announced it was ending “about” collection entirely. This was something of a victory, following years of criticism and pressure from civil liberties groups and internal government oversight. But the program suspension rested on technical and legal issues that may change over time, not on a change of heart or a controlling rule. Indeed, the suspension is not binding on the NSA in the future, since it could simply restart “about” collection once it figured out a “technical” solution to comply with the FISC’s limits. Critically, as originally written, Section 702 did not mention “about” collection. Nor did Section 702 provide any rules on collecting, accessing, or sharing data obtained through “about” collection. But the new reauthorization codifies this controversial NSA practice.
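To make the to/from versus “about” distinction concrete before turning to the statutory text, here is a minimal, purely hypothetical sketch of selector matching. Everything in it (the selector, the message fields, and the classify function) is invented for illustration and does not describe any actual NSA system; it simply shows why matching on message bodies sweeps in people who are not targets.

```python
# Hypothetical illustration only: how a "to/from" match differs from an
# "about" match under upstream collection. The selector, message fields,
# and classify() function are invented for explanation.

SELECTOR = "target@example.org"  # a tasked selector, e.g. a target's email address

def classify(message):
    """Say how a scanned message relates to the selector."""
    if SELECTOR in (message["to"], message["from"]):
        return "to/from"        # addressed to, or sent by, the selector
    if SELECTOR in message["body"]:
        return "about"          # merely mentions the selector in its text
    return "not collected"

# Two people writing to each other, neither of them a target:
bystander_email = {
    "to": "friend@example.com",
    "from": "you@example.com",
    "body": "You can reach our old landlord at target@example.org.",
}

# Prints "about": the message is swept in even though neither
# correspondent is a target.
print(classify(bystander_email))
```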
According to the new law, “The term ‘abouts communication’ means a communication that contains a reference to, but is not to or from, a target of an acquisition authorized under section 702(a) of the Foreign Intelligence Surveillance Act of 1978.” Under the new law, if the intelligence community wants to restart “about” collection, it has a path to doing so that includes finding a way to comply with the FISC’s minimal limitations. Once that’s done, an affirmative act of Congress is required to prevent it. If Congress does not act, then the NSA is free to continue this highly invasive “about” collection. Notably, by including collection of communications that merely “contain a reference to . . . a target,” the new law may go further than the NSA’s prior practice of collecting communications content that contained specific selectors. The NSA might well argue that the new language allows it to collect emails that refer to targets by name or in other less specific ways, rather than actually containing a target’s email address, phone number, or other “selectors.” Beyond that, the reauthorization codifies a practice that, up to now, has existed solely due to the NSA’s interpretation and implementation of the law. Before this year’s Section 702 reauthorization, the NSA could not credibly argue Congress had approved the practice. Now, if the NSA restarts “about” collection, it will argue it has express statutory authorization to do so. Explicitly codifying “about” collection is thus an expansion of the NSA’s spying authority. Finally, providing a path to restart that practice absent further Congressional oversight, when that formal procedure did not exist before, is an expansion of the NSA’s authority. For years, the NSA has pushed its boundaries. The NSA has repeatedly violated its own policies on collection, access, and retention, according to multiple, unsealed FISC opinions. Infamously, by relying on an unjustifiable interpretation of a separate statute—Section 215—the NSA illegally conducted bulk collection of Americans’ phone records for years. And even without explicit statutory approval, the NSA found a way to create this bulk phone record program and persuade the FISC to condone it, despite having begun the bulk collection without any court or statutory authority whatsoever. History teaches that when Congress gives the NSA an inch, the NSA will take a mile. So we fear that the new NSA spying law’s unprecedented language on “about” collection will contribute to an expansion of the already excessive Section 702 surveillance.
Codified Backdoor Searches
The Section 702 reauthorization provides a similar expansion of the intelligence community’s authority to conduct warrantless “backdoor searches” of databases of Americans’ communications. To review, the NSA’s surveillance casts an enormously wide net, collecting (and storing) billions of emails, chats, and other communications involving Americans who are not targeted for surveillance. The NSA calls this “incidental collection,” although it is far from unintended. Once collected, these communications are often stored in databases which can be accessed by other agencies in the intelligence community, including the FBI. The FBI routinely runs searches of these databases using identifiers belonging to Americans when starting—or even before officially starting—investigations into domestic crimes that may have nothing to do with foreign intelligence issues.
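To illustrate what such a query looks like in principle, here is another small hypothetical sketch. The data store, field names, and query function are invented for explanation and are not a description of any real government database or interface; the point is simply that the search runs against communications that were already collected, with no warrant step anywhere in the flow.

```python
# Hypothetical illustration only: a warrantless query against communications
# that were already collected under Section 702. The store, field names, and
# query function are invented for explanation.

collected_702_messages = [
    {"to": "target@example.org", "from": "someone@example.com",
     "body": "meeting details"},
    {"to": "american@example.com", "from": "target@example.org",
     "body": "a reply that incidentally swept in a U.S. person"},
]

def backdoor_search(store, us_person_identifier):
    """Return every stored message that involves the given identifier."""
    return [
        msg for msg in store
        if us_person_identifier in (msg["to"], msg["from"])
        or us_person_identifier in msg["body"]
    ]

# An analyst could run this while merely assessing a domestic lead, long
# before any "predicated" investigation exists; no warrant appears anywhere.
hits = backdoor_search(collected_702_messages, "american@example.com")
print(len(hits))  # 1: the American's communication is retrieved
```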
As with the initial collection, government officials conduct backdoor searches of Section 702 communications content without getting a warrant or other individualized court oversight—which violates the Fourth Amendment. Just as with "about" collection, nothing in the original text of Section 702 authorized or even mentioned the unconstitutional practice of backdoor searches. While that did not stop the FISC from approving backdoor searches under certain circumstances, it did lead other courts to uphold surveillance conducted under Section 702 while ignoring whether these searches are constitutional. Just as with "about" collection, the latest Section 702 reauthorization acknowledges backdoor searches for the first time. It imposes a warrant requirement only in very narrow circumstances: where the FBI runs a search in a “predicated criminal investigation” not connected to national security. Under FBI practice, a predicated investigation is a formal, advanced case. By all accounts, though, backdoor searches are normally used far earlier. In other words, the new warrant requirement will rarely, if ever, apply. It is unlikely to prevent a fishing expedition through Americans’ private communications. Even where a search is inspired by a tip about a serious domestic crime [.pdf], the FBI should not have warrantless access to a vast trove of intimate communications that would otherwise require complying with stringent warrant procedures. But following the latest reauthorization, the government will probably argue that Congress gave its OK to the FBI searching sensitive data obtained through NSA spying under Section 702, and using it in criminal cases against Americans. In sum, the latest reauthorization of Section 702 is best seen as an expansion of the government’s spying powers, and not just an extension of the number of years that the government may exercise these powers. Either way, the latest reauthorization is a massive disappointment. That’s why we’ve pledged to redouble our commitment to seek surveillance reform wherever we can: through the courts, through the development and spread of technology that protects our privacy and security, and through Congressional oversight.
>> mehr lesen

The State of the Union: What Wasn’t Said (Do, 01 Feb 2018)
President Donald Trump’s first State of the Union address last night was remarkable for two reasons: for what he said, and for what he didn’t say. The president took enormous pride last night in claiming to have helped “extinguish ISIS from the face of the Earth.” But he failed to mention that Congress passed a law at the start of this year to extend unconstitutional, invasive NSA surveillance powers. Before it passed the House and the Senate and received the president’s signature, the law was misrepresented by several members of Congress and by the president himself. On the morning the House of Representatives voted to move the law to the Senate, the president weighed in on Twitter, saying that “today’s vote is about foreign surveillance of foreign bad guys on foreign land.” Make no mistake: the bill he eventually signed—S. 139—very much affects American citizens. That bill reauthorized Section 702, originally enacted as part of the FISA Amendments Act—a legal authority the NSA uses to justify its collection of countless Americans’ emails, chat logs, and browsing history without first obtaining a warrant. The surveillance allowed under this law operates largely in the dark and violates Americans’ Fourth Amendment right to privacy. Elsewhere in his speech, the president trumpeted a future America with rebuilt public infrastructure. He spoke of “gleaming new roads, bridges, highways, railways, and waterways across our land.” What the president didn’t say, again, is worrying. The president failed to mention that the Federal Communications Commission, now led by his personal choice in chairman, made significant steps toward dismantling another public good: the Internet. Last year, the FCC voted to repeal net neutrality rules, subjecting Americans to an Internet that chooses winners and losers, fast lanes and slow ones. The FCC’s order leaves Americans open to abuse by well-funded corporations that can simply pay to have their services delivered more reliably—and quickly—on the Internet, and it creates a system where independent business owners and artists are at a disadvantage in getting their online content seen by others. And the president last night mentioned fair trade deals and intellectual property. He complimented his administration’s efforts in rebalancing “unfair trade deals that sacrificed our prosperity and shipped away our companies, our jobs, and our Nation’s wealth.” He promised to “protect American workers and American intellectual property through strong enforcement of our trade rules.” Trump didn’t mention that the United States' demands for the copyright and patent sections of a renegotiated NAFTA closely mirror those of the TPP, with its unfair expansion of copyright law. It’s ironic that one of the TPP’s most vocal critics would seemingly champion one of its most dangerous components. The president gave Americans a highlight reel last night about his perceived accomplishments. But he neglected to tell the full story about his first year in the White House. As civil liberties are threatened and constitutional rights are violated, EFF is continuing to fight. We are still supporting net neutrality. We are still taking the NSA to court over unconstitutional surveillance. And we are still working to protect and expand your rights in the digital world, wherever the fight may take us.
>> mehr lesen

EFF Asks California Court to Reverse Ruling That Could Weaken Open Records Rules, Impede Public Access to Government Records (Do, 01 Feb 2018)
State agencies in California are collecting and using more data now than ever before, and much of this data includes very personal information about California residents. This presents a challenge for agencies and the courts—how to make government-held data that’s indisputably of interest available to the public under the state’s public records laws while still protecting the privacy of Californians. EFF filed an amicus brief today urging a state appeals court to reverse a San Francisco trial judge’s ruling that would impede and possibly preclude the public’s ability to access state-held data that includes private information on individuals—even if that data is anonymized or redacted to protect privacy. The California Public Records Act (CPRA) has a strong presumption in favor of disclosure of state records. And the California state constitution recognizes that public access to government information is paramount in enabling people to oversee the government and ensure it’s acting in their best interest. But the state constitution also recognizes a strong privacy right, so public access to information must be carefully balanced with personal privacy. To keep records secret, agencies must show that concealment, not transparency, best serves the public interest. This balancing test was at issue in a lawsuit brought by UCLA law professor Richard Sander and the First Amendment Coalition (FAC), who are seeking access to information from the State Bar of California about the race, ethnicity, test scores, and other information of tens of thousands of people who took the state bar exam to become lawyers. The state bar refused to release the data to protect the confidentiality of test-takers, even though no names or personal identifying information would be disclosed. The case is Sander v. State Bar of California. A trial court sided with the bar. The case eventually went all the way to the California Supreme Court, which correctly recognized the strong public interest in disclosing the data so the effect of law school admissions policies on exam performance could be studied. It’s “beyond dispute” that the public has the right to access the information, the court said in a unanimous decision, as long as the identity of individual test takers remained confidential. It sent the case back to the trial court to decide whether and how much material could be released. This is where things took a wrong turn. Sander and FAC presented several possible protocols to protect bar exam takers’ privacy, including three complicated anonymization techniques, but the trial court ruled that, even under these protocols, the data couldn’t be released. The court improperly placed the burden on Sander and FAC to show that there was absolutely no way anyone’s identity could be revealed—including if the anonymized data were combined with other obscure but publicly available personal information. In doing so, the court failed to adhere to the CPRA’s balancing tests, which require the state bar to show that the public interest in protecting the privacy of bar exam takers—even after their data is stripped of identifying information—clearly outweighs the public interest in the data. In a particularly dangerous finding, the court held the CPRA couldn’t require the state bar to apply anonymization protocols because that would constitute creating a “new record” from the existing data. 
However, the CPRA clearly requires agencies to produce as much public information as possible, even if that means using a “surgical scalpel” to separate information that’s exempt from disclosure under the CPRA from non-exempt information. Techniques for protecting exempt information while still releasing otherwise non-exempt government records that are of great interest to the public must evolve as the government’s means of collecting, compiling, and maintaining such records has evolved. Protocols that propose to anonymize data, such as those presented by Sander and FAC, represent one such technique. California courts should not avoid grappling with whether anonymization can protect privacy by dismissing it out of hand as the creation of a “new record.” The California Public Records Act is a vital check on government secrecy. With the explosive growth of government data, particularly law enforcement surveillance data, we can’t stand by while courts sidestep the task of evaluating anonymization protocols that will increasingly play a role in balancing public access rights under the CPRA and laws like it in other states. If upheld, the Sander ruling could weaken the public’s ability to access other electronic records and government data that contains private identifying information. EFF has fought in court to gain access to license plate records indiscriminately collected on millions of drivers by Los Angeles law enforcement agencies. The California Supreme Court ruled that police can’t keep those records secret, paving the way for EFF to analyze how this huge surveillance program works. But the records could identify drivers, so the next step is to figure out how the data can be made public in a redacted or anonymized form to protect drivers’ privacy. We are watching the Sander case closely, and hope the appeals court does the right thing: reverse the trial court’s findings, require it to fully address the proposed anonymization protocols, and properly apply the balancing tests under the CPRA. Related Cases:  Automated License Plate Readers- ACLU of Southern California & EFF v. LAPD & LASD
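The anonymization protocols discussed above need not be exotic. One widely used approach, in the spirit of k-anonymity, generalizes identifying details into coarse buckets and withholds any bucket containing fewer than k records, so that no released row describes a small, identifiable group. The sketch below is a generic, hypothetical illustration in Python; it is not one of the protocols Sander and FAC proposed, and every record in it is invented.

# Generic k-anonymity-style release step; all records are invented.
from collections import Counter

records = [
    {"school": "School A", "exam_year": 1999, "score_band": "140-150"},
    {"school": "School A", "exam_year": 1999, "score_band": "140-150"},
    {"school": "School A", "exam_year": 1999, "score_band": "140-150"},
    {"school": "School B", "exam_year": 2001, "score_band": "150-160"},  # unique, so withheld
]

def releasable(rows, k=3):
    # Keep only rows whose combination of quasi-identifiers appears at least k times.
    key = lambda r: (r["school"], r["exam_year"], r["score_band"])
    counts = Counter(key(r) for r in rows)
    return [r for r in rows if counts[key(r)] >= k]

for row in releasable(records):
    print(row)

Real protocols layer on further safeguards, such as aggregation thresholds, added noise, and review of what auxiliary datasets exist, but the basic operation is a filtering and generalization step over data the agency already holds.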
>> mehr lesen

California’s Senate Misfires on Network Neutrality, Ignores Viable Options (Mi, 31 Jan 2018)
Yesterday, the California Senate approved legislation that would require Internet service providers (ISPs) in California to follow the now-repealed 2015 Open Internet Order. While well-intentioned, the legislators sadly chose an approach that is vulnerable to legal attack. The 2015 Open Internet Order from the Federal Communications Commission provided important privacy and net neutrality protections, such as banning blocking, throttling, and paid prioritization. It is important for states to fill the void left behind by the FCC’s abandonment of those protections. States are constrained, however, because federal policy can override, or “pre-empt,” state regulation in many circumstances. State law that doesn’t take this into account can be invalidated by the federal law. It’s a waste to pass a bill that is vulnerable to legal challenge by ISPs when strong alternatives are available. In a letter to the California Senate, EFF provided legal analysis explaining how the state can promote network neutrality in a legally sustainable way. Unfortunately, SB 460, the legislation approved by the California Senate, is lacking many of the things EFF’s letter addressed. Better Approaches Left Behind by SB 460 Today, California spends hundreds of millions of dollars on ISPs, including AT&T, as part of its broadband subsidy program. The state could require that recipients of that funding provide a free and open Internet, to ensure that taxpayer funds are used to benefit California residents rather than subsidizing a discriminatory network. This is one of the strongest means the state has to promote network neutrality, and it is missing from SB 460. California also has oversight and power over more than 4 million utility poles that ISPs benefit from accessing to deploy their networks. In fact, California is expressly empowered by federal law to regulate access to those poles, and the state legislature can establish network neutrality conditions in exchange for that access. Again, that is not in the current bill passed by the Senate. Lastly, each city negotiates a franchise with the local cable company, and often the company agrees to a set of conditions in exchange for access to valuable, taxpayer-funded rights of way. California’s legislature can directly empower local communities to negotiate with ISPs to require network neutrality in exchange for the benefit of accessing taxpayer-funded infrastructure. This is also not included in the current bill. States Should Put Their Full Weight in Support of Network Neutrality Any state moving legislation to promote network neutrality should invoke all valid authority to do so. At the very least, California should view the additional legal approaches we have recommended as backups, to be relied upon if the current proposal is held invalid by a court. If SB 460’s approach to directly regulating ISPs is found to be invalid, ultimately all the legislation does is require state agencies to contract with ISPs that follow the 2015 Open Internet Order. While an important provision, it could already be required tomorrow with the stroke of a pen under a Governor’s Executive Order, much as Montana and New York have done. And while the 2015 Open Internet Order was a good start, why not bring to bear all the resources a state has to secure such an important principle for Californians? 
EFF hopes that subsequent network neutrality legislation such as Senator Wiener’s SB 822 can cover what is missing in SB 460, or that future amendments in the legislative process can bring the full weight of the state of California to bear in favor of network neutrality. Both options remain available, and it is our hope that California’s legislators understand that the millions of Americans fighting hard to keep the Internet free and open expect the elected officials who side with us to deploy their power wisely and effectively. The importance of keeping the Internet free and open demands nothing less.
>> mehr lesen

Stupid Patent of the Month: Bigger Screen Patent Highlights a Bigger Problem (Mi, 31 Jan 2018)
For more than three years now, we’ve been highlighting weak patents in our Stupid Patent of the Month series. Often we highlight stupid patents that have recently been asserted, or ones that show how the U.S. patent system is broken. This month, we’re using a pretty silly patent in the U.S. to highlight that stupid U.S. patents may soon—depending on the outcome of a current Supreme Court case—effectively become stupid patents for the entire world. Lenovo was granted U.S. Patent No. 9,875,007 [PDF] this week. The patent, entitled “Devices and Methods to Receive Input at a First Device and Present Output in Response on a Second Device Different from the First Device,” relates to presenting materials on different screens. The first claim of the patent is relatively easy to read and understandable, for a patent. What Lenovo claims is: A first device, comprising: at least one processor; storage accessible to the at least one processor and bearing instructions executable by the at least one processor to: receive user input to present an output on a display; and determine a second device different from the first device on which to present the output based at least in part on identification by the first device of the second device as having a relatively larger display on which to present the output than the first device. This claim describes a distinction a child may make, in asking a parent to put something up on the “big screen.” It covers a generic computing device, programmed to make a comparison between the size of display screens, and choose one of the screens based on that comparison, something that any person would know how to do, and any programmer would know how to implement. A review of what happened [PDF] at the Patent Office shows the fine (and trivial) distinctions Lenovo made over what was known in order to claim there was an invention. Lenovo argued that although previous technologies allowed for displaying material on second devices with larger screens, those technologies didn’t do it by identifying second devices by determining the size of the screens. Even if what Lenovo claims is true (though we have doubts that they were the first to do this), we’re not sure why as a public policy matter Lenovo should be entitled to a monopoly on this “invention.” It seems more the product of basic skill and design rather than anything inventive. Generally, people are not supposed to get patents on things that are obvious. It’s quite possible that Lenovo will never assert this patent against anyone, and it will become like many patents, just a piece of paper on a wall. But what if Lenovo decided to assert this patent? We’re highlighting this patent in order to bring attention to the fact that a U.S. Supreme Court case being decided this term could make this patent not just a stupid U.S. patent, but effectively a stupid worldwide patent. Generally, countries’ patent laws only have domestic effect. If you want to have patent protection in the U.S., you need to file for a U.S. patent and show your application complies with U.S. patent law. If you want protection in India, you also need to file in India and show it complies with Indian patent law. There are differences between the patent laws of various countries; some provide more protection, some provide less. That patent laws differ in different countries is generally considered a feature, not a bug. There’s an important exception in the U.S., however, to this general idea that patent rights are limited to a particular country. 
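An aside on the obviousness point above: the comparison claim 1 describes is the kind of thing a beginning programmer could write in a few lines. The sketch below is a purely hypothetical illustration in Python, not Lenovo's implementation; the device names and the Display type are invented, and it simply reads the claim literally as "pick the device with the relatively larger display and send the output there."

# Hypothetical sketch of the behavior claim 1 describes; all names are invented.
from dataclasses import dataclass

@dataclass
class Display:
    name: str
    width_px: int
    height_px: int

    def area(self) -> int:
        return self.width_px * self.height_px

def choose_output(local: Display, candidates: list[Display]) -> Display:
    # Identify a second device with a relatively larger display than the
    # first device; fall back to the local screen if none is bigger.
    bigger = [d for d in candidates if d.area() > local.area()]
    return max(bigger, key=Display.area, default=local)

phone = Display("phone", 1080, 1920)
tv = Display("living-room TV", 3840, 2160)
print(choose_output(phone, [tv]).name)  # prints: living-room TV

With that in mind, back to why a patent like this could soon matter far beyond U.S. borders.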
Under the Patent Act, specifically 35 U.S.C. § 271(f), if combining certain parts would constitute patent infringement in the U.S., then someone who knowingly supplies those same parts with the intention that they be combined abroad is also liable for infringement. Basically, you can’t make parts A and B in the U.S., ship them abroad and tell people to combine them into AB, if you know that you’d infringe a patent in the U.S. if you just combined them into AB in the U.S. and then shipped them abroad. The point of this narrow rule is to prevent people from offshoring the final step of a process, where they’re purposefully doing so in order to evade U.S. patent rights. There is a new case currently pending at the U.S. Supreme Court called WesternGeco v. Ion Geophysical that relates to this fairly narrow law, § 271(f). The question the Supreme Court has been asked is: if someone is liable for patent infringement under § 271(f), can the patent owner recover lost profits relating to the combinations that were made abroad? If confined to § 271(f), the result (whichever way the court rules) would be a fairly narrow decision. Most cases asserting patent infringement are not brought on that basis, and the circumstances that would cause infringement under that section don’t arise very often. But at an earlier stage of the case, the Solicitor General, whose opinion is often given great weight by the Supreme Court, advanced [PDF] a startling idea: patent owners should be able to collect damages for any act that was a foreseeable result of the U.S.-based infringement in every case of patent infringement, not just those brought under § 271(f), even if the act occurred completely abroad. This would result in a dramatic expansion of the scope of U.S. patent remedies, making them, effectively, act the same as worldwide patent rights for any product that has a global market. An example is useful here. Suppose a display systems designer, BobCo, designed a system that it sold to Lenovo’s competitor CompCo in China to include in CompCo’s goods built in China. CompCo sells its goods globally, and some of the goods end up in the U.S., infringing on Lenovo’s patent in the United States. Under the Solicitor General’s view, Lenovo should be able to sue BobCo for violating the U.S. patent, and recover all the profits that BobCo made from CompCo for all the goods sold worldwide—regardless of whether they ever ended up in the U.S. Lenovo could get this recovery despite the fact that all of BobCo’s acts (other than importing a few of the products) occurred abroad. Even though technically only the products that entered the U.S. infringed the U.S. patent, Lenovo’s remedy for violation of those rights would include recovery for all products worldwide. This example is essentially the facts of a case called Power Integrations, where the U.S. Court of Appeals for the Federal Circuit (correctly, in our view), rejected such a broad scope of remedies, stating that U.S. patent laws “do not [] provide compensation for a defendant’s foreign exploitation of a patented invention, which is not an infringement at all.” The Solicitor General is now challenging this rule. It’s not hard to see how the Solicitor General’s rule could interfere with rights held by others abroad—including the rights of the public—and would mean that U.S. patent rights would be effectively exported to other countries. Innovations that are in the public domain in China or Germany, for example, could all of a sudden become more expensive because a U.S. 
patent rights holder gets to impose a cost on goods in those countries because some end up in the U.S. For example, even though Lenovo has applied for a patent in China that is related to the U.S. patent, no patent has (yet) issued. Lenovo’s attempt to get a patent in Germany has also not yet been successful. But under the Solicitor General’s theory, if sales in Germany or China were a foreseeable result of infringing sales in the U.S., Lenovo could impose costs in those countries regardless of whether they actually were entitled to a patent in that jurisdiction. We’re also concerned that such a rule would lead to a race to the bottom: if U.S. courts impose a worldwide patent tax to benefit U.S. patent holders, other countries may try to do the same as well. This could lead to companies being subject to multiple claims of infringement with multiple payments of worldwide damages, again increasing costs to consumers. We’ve now spent years highlighting some pretty silly U.S. patents. We hope the Supreme Court rejects the Solicitor General’s desire to expand the scope of U.S. patent remedies, and refuses to turn stupid U.S. patents into, effectively, stupid worldwide patents.
>> mehr lesen

The End of a Free, Open, and Inclusive Internet? (Di, 30 Jan 2018)
This article was written by Edison Lanza, Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights. In little more than 20 years, the Internet's potential for the exercise of freedoms and for education became evident, as did the impact of social networks and the revolution it brought to commerce, entertainment, and innovation. Of course, a change of this nature also entails challenges, such as the spread of speech that incites violence, risks to privacy, the need for all of humanity to gain access to the network, the spread of fake news, and the role of platforms in the circulation of information. Even so, the benefits and positive impacts of the Internet seemed to justify optimism about the course of the digital revolution. But the end of history, as we know, is not just around the corner. On December 14, 2017, President Donald Trump's administration took a step that has the potential to change the nature of the Internet as a democratizing force: it repealed, at the federal level, the rule that guaranteed net neutrality. That rule, which had been adopted by the Federal Communications Commission (FCC) during President Barack Obama's administration, treated the Internet as a public telecommunications service. Net neutrality prohibited the ISPs (Internet service providers) that offer fixed and mobile broadband from manipulating the flow of Internet traffic in any of the following ways: 1) blocking any legal content or data packet (regardless of its origin, device, or recipient); 2) slowing down some content or applications relative to others; and 3) favoring certain traffic over other traffic by creating faster lanes for some applications in exchange for payment. That decision came after a decade of legal disputes. The telecommunications companies argued that the investments needed to expand Internet access (acquiring spectrum, installing antennas, laying fiber optic directly to the home, and so on) fell on them, yet neutrality prevented them from developing a segmented business model based on offering differentiated access to particular services or applications according to each user's needs. According to this argument, the technology companies, by contrast, enjoyed complete freedom at the Over The Top (OTT) level to drive traffic toward ever more sophisticated applications, which in turn increases the demand for more bandwidth. From Silicon Valley came the reply that the problem was never the principle of net neutrality but the telcos' failure to understand the new economy: after all, they argue, mobile text messaging appeared before Internet applications, yet the telcos could not see what was in front of their eyes; nothing prevented them from developing video on demand, online shopping, or transportation apps, to cite a few examples of innovation. A few weeks ago, in a divided vote, the Republican majority (3 to 2) on the FCC eliminated net neutrality: the Internet is now a private information service, and intermediaries' only obligation is to be transparent about how they manage the network. 
The FCC also relinquished to the Federal Trade Commission its authority to regulate possible monopolies and oligopolies, as well as mergers or acquisitions that would entail excessive levels of concentration on the Internet. Although those who celebrated the measure do not advocate blocking content for political or ideological reasons, they do claim that market forces, once freed from regulation, will give rise to new businesses, attract more investment, and offer better conditions for Internet access. In contrast, a group of 20 scientists and engineers considered the founding fathers of the Internet wrote an open letter to the U.S. Congress stating that the new FCC majority does not understand how the Internet works, and warning of the impact the end of net neutrality would have on innovation and on the right to create, share, and access information online. Arguments aside, from a human rights standpoint the change approved by the FCC raises serious concerns that we Special Rapporteurs for Freedom of Expression have pointed out. The Internet has developed on the basis of certain design principles whose sustained application over time has allowed for a decentralized, open, and neutral environment. The intelligence of the network lies at the edges: value is created by anyone who can connect to the network and upload and share information, ideas, and applications. The Internet's original environment has been key, precisely, to guaranteeing the freedom to seek, receive, and impart information regardless of frontiers, and it undoubtedly had a positive effect on diversity and pluralism. In fact, this characteristic (neutrality) has been elevated to a fundamental principle both in the inter-American human rights system and in the universal system, through various declarations and decisions. It is worth noting that the legal battle to keep the principle in force in the United States is only beginning: the attorneys general of 20 states, among them California and New York, have already filed suit against the new rule with the aim of safeguarding these freedoms, a relevant fact in a country where the First Amendment is taken seriously. Although it is unlikely that the Republican majority will reverse an Executive policy, Congress is studying whether to annul the decision (polls indicate that the idea of a free and open Internet is shared by 70% of the population). Moreover, the states can adopt rules that require observance of the neutrality principle within their jurisdictions; in fact, the state of Montana has just adopted a measure to guarantee net neutrality within its borders. Now, what would happen if the new model prevails? Without a doubt, the decentralized network we know would become a centralized space, with a few intermediaries holding the power to distribute access to applications and content. Processes of concentration and mergers between telecommunications companies and technology companies would likely accelerate, and this could relegate small ventures to a low-quality Internet; in the end, for the ordinary user the Internet could become a fragmented space dominated by a handful of applications. 
Another, technologically more optimistic view suggests that despite this retreat on principles, the network will not change its nature: there will be no rollout of radical censorship by ISPs, and a good portion of consumers, particularly millennials and the generations that follow, will keep demanding access to a complete, open, and neutral Internet. One could argue that some platforms, giants such as Google and Facebook, already have an excessive degree of concentration and can compete with the telecommunications corporations. That is true, but under net neutrality the thousands of sites that make up digital life, drawing on (and serving) the largest networks, coexist in an open ecosystem. Finally, it is worth asking: What impact will this measure have on the rest of the world? In Latin America, Argentina, Brazil, Mexico, and Chile have already advanced laws that guarantee this principle. Will there be a contagion effect in the region? What will the telecommunications companies that operate in the region do? What model will Europe and the Nordic countries follow, some of which have elevated access to a free and open Internet to a constitutional right? And authoritarian governments around the world: will they use the end of net neutrality to justify an even more aggressive policy of blocking and filtering media outlets, websites, and applications that they consider a danger to the survival of their regimes?
>> mehr lesen

California Senate Rejects License Plate Privacy Shield Bill (Di, 30 Jan 2018)
The California Senate has rejected a bill to allow drivers to protect their privacy by applying shields to their license plates when parked. The simple amendment to state law would have served as a countermeasure against automated license plate readers (ALPRs) that use plates to mine our location data. As is the case with many privacy bills, S.B. 712 had received strong bipartisan support since it was first introduced in early 2017. The bill was sponsored by Sen. Joel Anderson, a prominent conservative Republican from Southern California, and received aye votes from Sens. Nancy Skinner and Scott Wiener, both Democrats representing the Bay Area. Each recognized that ALPR data represents a serious threat to privacy, since it can reveal where you live, where you work, where you worship, and where you drop your kids off at school. Law enforcement exploits this data with insufficient accountability measures. It is also sold by commercial vendors to lenders, insurance companies, and debt collectors. Just last week, news broke that Immigration and Customs Enforcement would be exploiting a database of more than 6.5 billion license plate scans collected by a private vendor. This measure was a simple way to empower people to protect information about where they park their cars, be it an immigration resource center, a reproductive health center, a marijuana dispensary, a place of worship, or a gun show. Under lobbying from law enforcement interests, senators killed the bill with a 12-18 vote. Privacy on our roadways is one of the most pressing issues in transit policy. The federal government—including the Drug Enforcement Administration and Immigration and Customs Enforcement—is ramping up its efforts to use ALPR data, including data procured from private companies. Major vulnerabilities in computer systems are revealing how dangerous it can be for government agencies and private data brokers to store our sensitive personal information. If the Senate is going to begin 2018 killing a driver privacy measure, it is incumbent on its members to spend the rest of the year probing the issue to find a new solution. Related Cases:  Automated License Plate Readers (ALPR)
>> mehr lesen

Code Review Isn't Evil. Security Through Obscurity Is. (Di, 30 Jan 2018)
On January 25th, Reuters reported that software companies like McAfee, SAP, and Symantec allow Russian authorities to review their source code, and that "this practice potentially jeopardizes the security of computer networks in at least a dozen federal agencies." The article goes on to explain what source code review looks like and which companies allow source code reviews, and reiterates that "allowing Russia to review the source code may expose unknown vulnerabilities that could be used to undermine U.S. network defenses." The spin of this article implies that requesting code reviews is malicious behavior. This is simply not the case. Reviewing source code is an extremely common practice conducted by regular companies as well as software and security professionals to ensure certain safety guarantees of the software being installed. The article also notes that “Reuters has not found any instances where a source code review played a role in a cyberattack.” At EFF, we routinely conduct code reviews of any software that we elect to use. Just to be clear, we don’t want to downplay foreign threats to U.S. cybersecurity, or encourage the exploitation of security vulnerabilities—on the contrary, we want to promote open-source and code review practices as stronger security measures. EFF strongly advocates for the use and spread of free and open-source software for this reason. Not only are software companies disallowing foreign governments from conducting source code reviews, trade agreements are now being used to prohibit countries from requiring the review of the source code of imported products. The first such prohibition in a completed trade agreement will be in the Comprehensive and Progressive Trans-Pacific Partnership (CPTPP, formerly just the TPP), which is due to be signed in March this year. A similar provision is proposed for inclusion in the modernized North American Free Trade Agreement (NAFTA), and in Europe’s upcoming bilateral trade agreements. EFF has expressed our concern that such prohibitions on mandatory source code review could stand in the way of legitimate measures to ensure the safety and quality of software such as VPN and secure messaging apps, and devices such as routers and IP cameras. The implicit assumption that "keeping our code secret makes us safer" is extremely dangerous. Security researchers and experts have made it explicit time and time again that relying solely on security through obscurity simply does not work. Even worse, it gives engineers a false sense of safety, and can encourage further bad security practices. Even in times of political tension and uncertainty, we should keep our wits about us. Allowing code review is not a direct affront to national security—in fact, we desperately need more of it.
>> mehr lesen

Private Censorship Is Not the Best Way to Fight Hate or Defend Democracy: Here Are Some Better Ideas (Di, 30 Jan 2018)
From Cloudflare’s headline-making takedown of the Daily Stormer last autumn to YouTube’s summer restrictions on LGBTQ content, there's been a surge in “voluntary” platform censorship. Companies—under pressure from lawmakers, shareholders, and the public alike—have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms and hiring more staff to moderate content. They have banned ads from certain sources and removed “offensive” but legal content. These moves come in the midst of a fierce public debate about what responsibilities platform companies that directly host our speech have to take down—or protect—certain types of expression. And this debate is occurring at a time in which only a few large companies host most of our online speech. Under the First Amendment, intermediaries generally have a right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should. To begin with, a great deal of problematic content sits in the ambiguous territory between disagreeable political speech and abuse, between fabricated propaganda and legitimate opinion, between things that are legal in some jurisdictions and not others. Or they’re things some users want to read and others don’t. If many cases are in grey zones, our institutions need to be designed for them. We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against them or flooded across their newsfeeds. We want our elections to be free from manipulation and the speech of women and marginalized communities not to be silenced by harassment. We should all have the ability to exercise control over our online environments: to feel empowered by the tools we use, not helpless in the face of others' use. But in moments of apparent crisis, the first step is always to look for simple solutions. In particular, in response to rising concerns that we are not in control, a groundswell of support has emerged for even more censorship by private platform companies, including pushing platforms into ever-increasing tracking and identification of speakers. We are at a critical moment for free expression online and for the role of the Internet in the fabric of democratic societies. We need to get this right. Platform Censorship Isn’t New, Hurts the Less Powerful, and Doesn’t Work Widespread public interest in this topic may be new, but platform censorship isn’t. All of the major platforms set forth rules for their users. They tend to be complex, covering everything from terrorism and hate speech to copyright and impersonation. Most platforms use a version of community reporting. Violations of these rules can prompt takedowns and account suspensions or closures. And we have well over a decade of evidence about how these rules are used and misused. The results are not pretty. We’ve seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech. 
Platform censorship has included images and videos that document atrocities and make us aware of the world outside of our own communities. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam war photo. These takedowns are sometimes intentional, and sometimes mistakes, but like Cloudflare’s now-famous decision to boot off the Daily Stormer, they are all made without accountability and due process. As a result, most of what we know about censorship on private platforms comes from user reports and leaks (such as the Guardian’s “Facebook Files”). Given this history, we’re worried about how platforms are responding to new pressures. Not because there’s a slippery slope from judicious moderation to active censorship — but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives and enforced by minimally-trained, overworked staff, and hidden algorithms. Doubling down on this approach will not make it better. And yet, no amount of evidence has convinced the powers that be at major platforms like Facebook—or in governments around the world. Instead many, especially in policy circles, continue to push for companies to—magically and at scale—perfectly differentiate between speech that should be protected and speech that should be erased. If our experience has taught us anything, it’s that we have no reason to trust the powerful—inside governments, corporations, or other institutions—to draw those lines. As people who have watched and advocated for the voiceless for well over 25 years, we remain deeply concerned. Fighting censorship—by governments, large private corporations, or anyone else—is core to EFF’s mission, not because we enjoy defending reprehensible content, but because we know that while censorship can be and is employed against Nazis, it is more often used as a tool by the powerful, against the powerless. First Casualty: Anonymity In addition to the virtual certainty that private censorship will lead to takedowns of valuable speech, it is already leading to attacks on anonymous speech. Anonymity and pseudonymity have played important roles throughout history, from secret ballots in ancient Greece to 18th century English literature and early American satire. Online anonymity allows us to explore controversial ideas and connect with people around health and other sensitive concerns without exposing ourselves unnecessarily to harassment and stigma. It enables dissidents in oppressive regimes to tell their stories with less fear of retribution. Anonymity is often the greatest shield that vulnerable groups have. Current proposals from private companies all undermine online anonymity. For example, Twitter’s recent ban on advertisements from Russia Today and Sputnik relies on the notion that the company will be better at identifying accounts controlled by Russia than Russia will be at disguising accounts to promote its content. To make it really effective, Twitter may have to adopt new policies to identify and attribute anonymous accounts, undermining both speech and user privacy. Given the problems with attribution, Twitter will likely face calls to ban anyone from promoting a link to suspected Russian government content. And what will we get in exchange for giving up our ability to speak online anonymously? Very little. 
Facebook for many years required individuals to use their “real” name (and continues to require them to use a variant of it), but that didn’t stop Russian agents from gaming the rules. Instead, it undermined innocent people who need anonymity—including drag performers, LGBTQ people, Native Americans, survivors of domestic and sexual violence, political dissidents, sex workers, therapists, and doctors. Study after study has debunked the idea that forcibly identifying speakers is an effective strategy against those who spread bad information online. Counter-terrorism experts tell us that “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” We need a better way forward. Step One: Start With the Tools We Have and Get Our Priorities Straight Censorship is a powerful tool and easily misused. That’s why, in fighting back against hate, harassment, and fraud, censorship should be the last stop. Particularly from a legislative perspective, the first stop should be looking at the tools that already exist elsewhere, rather than rushing to exceptionalize the Internet. For example, in the United States, defamation laws reflect centuries of balancing the right of individuals to hold others accountable for false, reputation-damaging statements, and the right of the public to engage in vigorous public debate. Election laws already prohibit foreign governments or their agents from purchasing campaign ads—online or offline—that directly advocate for or against a specific candidate. In addition, for sixty days prior to an election, foreign agents cannot purchase ads that even mention a candidate. Finally, the Foreign Agent Registration Act also requires information materials distributed by a foreign entity to contain a statement of attribution and to file copies with the U.S. Attorney General. These are all laws that could be better brought to bear, especially in the most egregious situations. We also need to consider our priorities. Do we want to fight hate speech, or do we want to fight hate? Do we want to prevent foreign interference in our electoral processes, or do we want free and fair elections? Our answers to these questions should shape our approach, so we don’t deceive ourselves into thinking that removing anonymity in online advertising is more important to protecting democracy than, say, addressing the physical violence by those who spread hate, preventing voter suppression and gerrymandering, or figuring out how to build platforms that promote more informed and less polarizing conversations between the humans that use them. Step Two: Better Practices for Platforms But if we aren’t satisfied with those options, we have others. Over the past few years, EFF—in collaboration with Onlinecensorship.org and civil society groups around the world—has developed recommendations to companies aimed at fighting censorship and protecting speech. Many of these are contained within the Manila Principles, which provide a roadmap for companies seeking to ensure human rights are protected on their platforms. In 2018, we’ll be working hard to push companies toward better practices around these recommendations. Here they are, in one place. Meaningful Transparency Over the years, we and other organizations have pushed companies to be more transparent about the speech that they take down, particularly when it’s at the behest of governments. 
But when it comes to decisions about acceptable speech, or what kinds of information or ads to show us, companies are largely opaque. We believe that Facebook, Google, and others should allow truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Users should be told when bots are flooding a network with messages and, as described below, should have tools to protect themselves. Meaningful transparency also means allowing users to see what types of content are taken down, what’s shown in their feed and why. It means being straight with users about how their data is being collected and used. And it means providing users with the power to set limitations on how long that data can be kept and used. Due Process We know that companies make enforcement mistakes, so it’s shocking that most lack robust appeals processes—or any appeals processes at all. Every user should have the right to due process, including the option to appeal a company's takedown decision, in every case. The Manila Principles provide a framework for this. Empower Users With Better Platform Tools Platforms are building tools that let users filter ads and other content, and this should continue. This approach has been criticized for furthering “information bubbles,” but those problems are less worrisome when users are in charge and informed than when companies make these decisions for users with one eye on their bottom lines. Users should be in control of their own online experience. For example, Facebook already allows users to choose what kinds of ads they want to see—a similar system should be put in place for content, along with tools that let users make those decisions on the fly rather than having to find a hidden interface. Use of smart filters should continue, since they help users better choose the content they want to see and filter out the content they don’t want to see. Facebook’s machine learning models can recognize the content of photos, so users should be able to choose an option for "no nudity" rather than Facebook banning it wholesale. (The company could still check that by default in countries where it's illegal.) When it comes to political speech, there is a desperate need for more innovation. That might include user interface designs and user controls that encourage productive and informative conversations; that label and dampen the virality of wildly fabricated material while giving readers transparency and control over that process. This is going to be a very complex and important design space in years to come, and we’ll probably have much more to say about it in future posts. Empower Users With Third-Party Tools Big platform companies aren’t the only place where good ideas can grow. Right now, the larger platforms limit the ability of third parties to offer alternative experiences on the platforms, using closed APIs, blocking scraping and limiting interoperability. They enforce their power to limit innovation on the platform through a host of laws, including the Computer Fraud and Abuse Act (CFAA), copyright regulations, and the Digital Millennium Copyright Act (DMCA). Larger platforms like Facebook, Twitter and YouTube should facilitate user empowerment by opening their APIs even to competing services, allowing scraping and ensuring interoperability with third party products, even up to forking of services. 
Forward Consent Community guidelines and policing are touted as a way to protect online civility, but are often used to take down a wide variety of speech. The targets of reporting often have no idea what rule they have violated, since companies often fail to provide adequate notice. One easy way that service providers can alleviate this is by having users affirmatively accept the community guidelines point by point, and accept them again each time they change. Judicious Filters We worry about filtering technologies that, when implemented by the platform, automatically take down speech, because the default for online speech should always be to keep it online until a human has reviewed it. Some narrow exceptions may be appropriate, e.g., where a file is an exact match of a file already found to be infringing, where no effort was made to claim otherwise and avenues remain for users to challenge any subsequent takedown. But in general platforms can and should simply use smart filters to better flag potentially unlawful content for human review and to recognize when their user flagging systems are being gamed by those seeking to get the platform to censor others. Platform Competition and User Choice Ultimately, users also need to be able to leave when a platform isn’t serving them. Real data portability is key here, and this will require companies to agree to standards for how social graph data is stored. Fostering competition in this space could be one of the most powerful incentives for companies to protect users against bad actors on their platform, be they fraudulent, misleading or hateful. Pressure on companies to allow full interoperability and data portability could lead to a race to the top for social networks. No Shadow Regulations Over the past decade we have seen the emergence of a secretive web of backroom agreements between companies that seeks to control our behavior online, often driven by governments as a shortcut and less accountable alternative to regulation. One example among many: under pressure from the UK Intellectual Property Office, search engines agreed last year to a "Voluntary Code of Practice" that requires them to take additional steps to remove links to allegedly unlawful content. At the same time, domain name registrars are also under pressure to participate in copyright enforcement, including “voluntarily” suspending domain names. Similarly, in 2016, the European Commission struck a deal with the major platforms, which, while ostensibly about addressing speech that is illegal in Europe, had no place for judges and the courts, and concentrated not on the letter of the law, but the companies' terms of service. Shadow regulation is dangerous and undemocratic; regulation should take place in the sunshine, with the participation of the various interests that will have to live with the result. To help alleviate the problem, negotiators should seek to include meaningful representation from all groups with a significant interest in the agreement; balanced and transparent deliberative processes; and mechanisms of accountability such as independent reviews, audits, and elections. Keep Core Infrastructure Out of It As we said last year, the problems with censorship by direct hosts of speech are tremendously magnified when core infrastructure providers are pushed to censor. The risk of powerful voices squelching the less powerful is greater, as are the risks of collateral damage. Internet speech depends on an often-fragile consensus among many systems and operators. 
Using that system to edit speech, based on potentially conflicting opinions about what can be spoken on the Internet, risks shattering that consensus. Takedowns by some intermediaries—such as certificate authorities or content delivery networks—are far more likely to cause collateral censorship. That’s why we’ve called these parts of the Internet free speech’s weakest links. The firmest, most consistent defense these potential weak links can mount is to simply decline all attempts to use them as a control point. They can act to defend their role as a conduit, rather than a publisher. Companies that manage domain names, including GoDaddy and Google, should draw a hard line: they should not suspend or impair domain names based on the expressive content of websites or services. Toward More Accountability There are no perfect solutions to protecting free expression, but as this list of recommendations should suggest, there’s a lot that companies—as well as policymakers—can do to protect and empower Internet users without doubling down on the risky and too-often failing strategy of censorship. We'll continue to refine and critique the proposals that we and others make, whether they're new laws, new technology, or new norms. But we also want to play our part to ensure that these debates aren't dominated by existing interests and a simple desire for rapid and irrevocable action. We'll continue to highlight the collateral damage of censorship, and especially to highlight the unheard voices who have been ignored in this debate—and have the most to lose. Note: Many EFF staff contributed to this post. Particular thanks to Peter Eckersley, Danny O’Brien, David Greene, and Nate Cardozo.
>> mehr lesen

ETICAS Releases First Ever Evaluations of Spanish Internet Companies' Privacy and Transparency Practices (Di, 30 Jan 2018)
It’s Spain's turn to take a closer look at the practices of its local Internet companies, and how they treat their customers’ personal data. Spain's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of ETICAS Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to shine a light on Internet privacy practices in Iberoamerica. The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities (a few months ago Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and Argentinean digital rights group ADC will be releasing a similar study this year). ETICAS surveyed a total of nine Internet companies. These companies’ logs hold intimate records of the movements and relationships of the majority of the population in the country. The five telecommunications companies surveyed—Movistar, Orange, Vodafone-ONO, Jazztel, MásMóvil—together make up the vast majority of the fixed, mobile, and broadband market in Spain. ETICAS also surveyed the four most popular online platforms for buying and renting houses—Fotocasa, Idealista, Habitaclia, and Pisos.com. ETICAS, in the tradition of Who Has Your Back?, evaluated the companies for their commitment to privacy and transparency, and awarded stars based on their current practices and public behavior. Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information it felt appropriate, all of which was incorporated into the final report. This approach is based on EFF’s earlier work with Who Has Your Back? in the United States, although the specific questions in ETICAS’ study were adapted to match Spain’s local laws and realities. ETICAS rankings for Spanish ISPs and phone companies are below; the full report, which includes details about each company, is available at: https://eticasfoundation.org/qdtd ETICAS reviewed each company in five categories: Privacy Policy: whether its privacy policy is linked from the main website, whether it tells users which data are being processed and how long the company stores them, and whether it notifies users when the policy changes. According to law: whether they publish their law enforcement guidelines and whether they hand over data according to the law. Notification: whether they provide prior notification to customers of government data demands. Transparency: whether they publish transparency reports. Promote users’ privacy in courts or congress: whether they have taken a public stand to promote privacy. Conclusion The report includes a chart summarizing the results of the ETICAS survey of the nine Internet companies. Companies in Spain are off to a good start but still have a ways to go to fully protect their customers’ personal data and be transparent about who has access to it. This year’s report shows Telefónica-Movistar taking the lead, followed closely by Orange, but both still have plenty of room for improvement, especially on Transparency Reports and Notification. For 2018, competitors could catch up by providing better user notification of surveillance, publishing transparency reports and law enforcement guidelines, and making their data protection policies clear and public. ETICAS is expected to release this report annually to incentivize companies to improve transparency and protect user data. 
This way, all Spaniards will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope the report will shine with more stars next year.
>> mehr lesen

When Trading Track Records Means Less Privacy (Di, 30 Jan 2018)
Sharing your personal fitness goals—lowered heart rates, accurate calorie counts, jogging times, and GPS paths—sounds like a fun, competitive feature offered by today’s digital fitness trackers, but a recent report from The Washington Post highlights how this same feature might end up revealing not just where you are, where you’ve been, and how often you’ve traveled there, but sensitive national security information. According to The Washington Post report, the fitness tracking software company Strava—whose software is implemented into devices made by Fitbit and Jawbone—posted a “heat map” in November 2017 showing activity of some of its 27 million users around the world. Unintentionally included in that map were the locations, daily routines, and possible supply routes of disclosed and undisclosed U.S. military bases and outposts, including what appear to be classified CIA sites. Though the revealed information itself was anonymized—meaning map viewers could not easily determine identities of Strava customers with the map alone—when read collectively, the information resulted in a serious breach of privacy.  Shared on Twitter, the map led to several discoveries, the report said. “Adam Rawnsley, a Daily Beast journalist, noticed a lot of jogging activity on the beach near a suspected CIA base in Mogadishu, Somalia. Another Twitter user said he had located a Patriot missile system site in Yemen. Ben Taub, a journalist with the New Yorker, homed in on the location of U.S. Special Operations bases in the Sahel region of Africa.” On Monday, according to a follow-up report by The Washington Post, the U.S. military said it was reviewing guidelines on how it uses wireless devices. As the Strava map became more popular, the report said, Internet users were able to further de-anonymize the data, pairing it to information on Strava’s website. According to The Washington Post's follow-up report: “On one of the Strava sites, it is possible to click on a frequently used jogging route and see who runs the route and at what times. One Strava user demonstrated how to use the map and Google to identify by name a U.S. Army major and his running route at a base in Afghanistan.” The media focused on one particular group affected by this privacy breach: the U.S. military. But of course, regular people’s privacy is impacted even more by privacy leaks such as this. For instance, according to a first-person account written in Quartz last year, one London jogger was surprised to learn that, even with strict privacy control settings on Strava, her best running times—along with her first and last name and photo—were still visible to strangers who peered into her digital exercise activity. These breaches came through an unintended bargain, in which customers traded their privacy for access to social fitness tracking features that didn’t exist several years ago. And these breaches happened even though Strava attempted to anonymize its customers’ individual data. That clearly wasn’t enough. Often, our understanding of “anonymous” is wrong—invasive database cross-referencing can reveal all sorts of private information, dispelling any efforts at meaningful online anonymity. While “gamified” fitness trackers, especially ones that have social competition built-in, are fun, they are really just putting a friendly face on big brother. When we give control over our personal data—especially sensitive data such as location history—to third parties, we expect it to be kept private. 
When companies betray that trust by publishing that data, even in “anonymized” form such as the Strava heat map, unintended privacy harms are almost guaranteed. Clearly communicated privacy settings can help us in situations like these, but so can company decisions to better protect the data they publish online.
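To make the cross-referencing problem concrete, here is a minimal, hypothetical sketch of how re-identification works: it joins an “anonymized” activity dataset (routes and visit frequencies, no names) with publicly visible profile information (names and rough home areas). All of the data, field names, and distance thresholds below are invented for illustration; this is not Strava’s data model or any real re-identification tool.

```python
# Hypothetical illustration of re-identification by cross-referencing.
# None of this is real data or any real company's schema.
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# "Anonymized" activity data: no names, just where runs repeatedly start.
anonymous_routes = [
    {"route_id": "r1", "start": (51.5074, -0.1278), "runs_per_week": 5},
]

# Publicly visible profile data gathered from another source.
public_profiles = [
    {"name": "A. Jogger", "home_area": (51.5080, -0.1270)},
    {"name": "B. Cyclist", "home_area": (48.8566, 2.3522)},
]

# Cross-reference: a route that repeatedly starts within a few hundred
# metres of someone's known home area is a strong candidate for that
# person, even though the route data itself carries no name.
for route in anonymous_routes:
    for profile in public_profiles:
        d = km_between(route["start"], profile["home_area"])
        if d < 0.5 and route["runs_per_week"] >= 3:
            print(f"Route {route['route_id']} likely belongs to {profile['name']} "
                  f"({d:.2f} km from their home area)")
```

The same pattern scales to millions of records: the more auxiliary data that is publicly available, the fewer candidates each “anonymous” record can plausibly match.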
>> mehr lesen

It's Time to Make Student Privacy a Priority (Di, 30 Jan 2018)
Last month, the Federal Trade Commission and the U.S. Department of Education held a workshop in Washington, DC. The topic was “Student Privacy and Ed Tech.” We at EFF have been trying to get the FTC to focus on the privacy risks of educational technology (or “ed tech”) for over two years, so we eagerly filed formal comments. We’ve long been concerned about how technology impacts student privacy. As schools and classrooms become increasingly wired, and as schools put more digital devices and services in the hands of students, we’ve been contacted by a large number of concerned students, parents, teachers, and even administrators. They want to know: What data are ed tech providers collecting about our kids? How are they using it? How well do they disclose (if at all) the scope of their data collection? How much control (if any) do they give to schools and parents over the retention and use of the data they collect? Do they even attempt to obtain parental consent before collecting and using incredibly sensitive student data? In the spring of 2017, we released the results of a survey that we conducted in order to plumb the depths of the confusion surrounding ed tech. And as it turns out, students, parents, teachers, and even administrators have lots of concerns—and very little clarity—over how ed tech providers protect student privacy. Drawing from the results of our survey, our comments to the FTC and DOE touched on a broad set of concerns:
- The FTC has ignored our student privacy complaint against Google. Despite signing a supposedly binding commitment to refrain from collecting student data without parental consent beyond that needed for school purposes, Google openly harvests student search and browsing behavior, and uses that data for its own purposes. We filed a formal complaint with the FTC more than two years ago but have heard nothing back.
- There is a consistent lack of transparency in ed tech privacy policies and practices. Schools issue devices to students without their parents’ knowledge and consent. Parents are kept in the dark about what apps their kids are required to use and what data is being collected.
- The investigative burden too often falls on students and parents. With no notice or help from schools, the burden falls on parents and even students to understand the privacy implications of the technology students are using.
- Data use concerns are unresolved. Parents have extensive concerns about student data collection, retention, and sharing.
- Many ed tech products and services have weak privacy policies. For instance, it took the lawyers at EFF months to get a clear picture of which privacy policies even applied to Google’s student offerings, much less how they interacted.
- Lack of choice in ed tech is the norm. Parents who seek to opt their children out of device or software use face many hurdles, particularly those without the resources to provide their own alternatives. Some districts have even threatened to penalize students whose parents refuse to consent to what they believe are egregious ed tech privacy policies and practices.
- Overreliance on “privacy by policy.” School districts generally rely on the privacy policies of ed tech companies to ensure student data protection. Parents and students, on the other hand, want concrete evidence that student data is protected in practice as well as in policy.
- There is an unmet need for better privacy training and education. Both students and teachers want better training in privacy-conscious technology use. Ed tech providers aren’t fulfilling their obligations to schools when they fail to provide even rudimentary privacy training.
- Ed tech vendors treat existing privacy law as if it doesn’t apply to them. Because the Family Educational Rights and Privacy Act (“FERPA”) generally prohibits school districts from sharing student information with third parties without written parental consent, districts often characterize ed tech companies as “school officials.” However, districts may only do so if—among other things—providers give districts or schools direct control over all student data and refrain from using that data for any other purpose. Despite the fact that current ed tech offerings generally fail those criteria, vendors generally don’t even attempt to obtain parental consent.
We believe it is incumbent upon school districts to fully understand the data and privacy policies and practices of the ed tech products and services they wish to use, to demand that ed tech vendors assent to contract terms that are favorable to the school districts and actually protect student privacy, and to be ready not to do business with a company that does not engage in robust privacy practices. While we understand that school budgets are often tight and that technology can actually enhance the learning experience, we urge regulators, school districts, and the ed tech companies themselves to make student privacy a priority. We hope the FTC and DOE listen to what we, and countless concerned students, parents, and teachers, have to say.
>> mehr lesen

ICE Accesses a Massive Amount of License Plate Data. Will California Take Action? (Mo, 29 Jan 2018)
The news that Immigration & Customs Enforcement is using a massive database of license plate scans from a private company sent shockwaves through the civil liberties and immigrants’ rights communities, which are already sounding the alarm about how mass surveillance will be used to fuel deportation efforts. The concerns are certainly justified: the vendor, Vigilant Solutions, offers access to 6.5 billion data points, plus millions more collected by law enforcement agencies around the country. Using advanced algorithms, this information—often collected by roving vehicles equipped with automated license plate readers (ALPRs) that scan every license plate they pass—can be used to reveal a driver’s travel patterns and to track a vehicle in real time. ICE announced the expansion of its ALPR program in December, but without disclosing what company would be supplying the data. While EFF had long suspected Vigilant Solutions won the contract, The Verge confirmed it in a widely circulated story published last week. In California, this development raises many questions about whether the legislature has taken enough steps to protect immigrants, despite passing laws last year to protect residents from heavy-handed immigration enforcement. But California lawmakers should have already seen this coming. Two years ago, The Atlantic branded these commercial ALPR databases “an unprecedented threat to privacy.” Vigilant Solutions tells its law enforcement customers that accessing this data is “as easy as adding a friend on your favorite social media platform.” As a result, California agencies share their data wholesale with hundreds of entities, ranging from small towns in the Deep South to a variety of federal agencies. An analysis by EFF of records obtained from local police has identified more than a dozen California agencies that have already been sharing ALPR data with ICE through their Vigilant Solutions accounts. The records show that ICE, through its Homeland Security Investigations offices in Newark, New Orleans, and Houston, and its Bulk Cash Smuggling Center, has had access to data from more than a dozen California police departments for years. At least one ICE office has access to ALPR data collected by the following police agencies:
- Anaheim Police Department
- Antioch Police Department
- Bakersfield Police Department
- Chino Police Department
- Fontana Police Department
- Fountain Valley Police Department
- Glendora Police Department
- Hawthorne Police Department
- Montebello Police Department
- Orange Police Department
- Sacramento Police Department
- San Diego Police Department
- Simi Valley Police Department
- Tulare Police Department
ICE agents have also obtained direct access to this data through user accounts provided by local law enforcement. For example, an ICE officer obtained access through the Long Beach Police Department’s system in November 2016 and ran 278 license plate searches over nine months. Two CBP officers conducted a further 578 plate searches through Long Beach’s system during that same period. It’s important to note that ALPR technology collects and stores data on millions of drivers without any connection to a criminal investigation. As EFF noted, this data can reveal sensitive information about a person, for example, if they visit reproductive health clinics, immigration resource centers, mosques, or LGBTQ clubs. Even attendees at gun shows have found their plates captured by CBP officers, according to the Wall Street Journal. 
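To illustrate how little analysis it takes to turn raw plate reads into a pattern of life, here is a minimal, hypothetical sketch. The plate number, camera locations, and timestamps below are invented; this is not Vigilant Solutions’ system or data, only a demonstration of the general technique of grouping timestamped location reads.

```python
# Hypothetical ALPR reads: (plate, camera location, timestamp).
# All values are invented for illustration.
from collections import Counter
from datetime import datetime

plate_reads = [
    ("7ABC123", "clinic parking lot", "2018-01-08 09:05"),
    ("7ABC123", "clinic parking lot", "2018-01-15 09:02"),
    ("7ABC123", "clinic parking lot", "2018-01-22 09:08"),
    ("7ABC123", "downtown garage",    "2018-01-10 18:40"),
]

def routine(reads, plate):
    """Count how often one plate appears at each (location, weekday, hour)."""
    pattern = Counter()
    for p, location, stamp in reads:
        if p != plate:
            continue
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        pattern[(location, t.strftime("%A"), t.hour)] += 1
    return pattern

# Repeated reads at the same place and time of week reveal a routine.
for (location, weekday, hour), count in routine(plate_reads, "7ABC123").most_common():
    if count > 1:
        print(f"{location}: seen {count} times on {weekday}s around {hour}:00")
```

Even this toy example shows why collection and retention matter: a handful of reads at the same location and time of week is enough to infer a recurring visit, with no investigation or suspicion attached to the plate.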
Police departments must take a hard look at their ALPR systems and un-friend DHS. But the California legislature also has a chance to offer a defense measure for drivers who want to protect their privacy.
Update: The California Senate voted down S.B. 712 on January 30, 2018.
S.B. 712 would allow drivers to apply a removable cover to their license plates when they are lawfully parked, similar to how drivers are currently allowed to cover their entire vehicles with a tarp to protect their paint jobs from the elements. While this would not prevent ALPRs from collecting data from moving vehicles, it would offer privacy for those who want to protect the confidentiality of their destinations. Before the latest story broke, S.B. 712 was brought to the California Senate floor, where it initially failed on a tied vote, with many Republicans and Democrats—including Sens. Joel Anderson (R-Alpine) and Scott Wiener (D-San Francisco)—joining in support. Unfortunately, several Democrats, such as Senate President Kevin de León and Sen. Connie Leyva, who have positioned themselves as immigrant advocates, voted against the bill the first time around. Others, such as Sens. Toni Atkins and Ricardo Lara, sat the vote out. The Senate has one last chance to pass the bill and send it to the California Assembly by January 31. The bill is urgently necessary to protect the California driving public from surveillance. Californians: join us today in urging your senator to stand up for privacy, not the interests of ICE or the myriad financial institutions, insurance companies, and debt collectors who also abuse this mass data collection.
Note: This post has been updated to include the Bulk Cash Smuggling Center in the list of ways ICE accesses data. 
>> mehr lesen

EFF's Fight to End Warrantless Device Searches at the Border: A Roundup of Our Advocacy (Sa, 27 Jan 2018)
EFF has been working on multiple fronts to end a widespread violation of digital liberty—warrantless searches of travelers’ electronic devices at the border. Government policies allow border agents to search and confiscate our cell phones, tablets, and laptops at airports and border crossings for no reason, without explanation or any suspicion of wrongdoing. It’s as if our First and Fourth Amendment rights don’t exist at the border. This is wrong, which is why we’re working to challenge and end these unconstitutional practices. EFF and the ACLU filed a brief today in our Alasaad v. Nielsen lawsuit to oppose the government’s attempt to dismiss our case. Our lawsuit, filed in September 2017 on behalf of 11 Americans whose devices were searched, takes direct aim at the illegal policies enforced by the U.S. Department of Homeland Security and its component agencies, U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE). In our brief we explain that warrantless searches of electronic devices at the border violate the First and Fourth Amendments, and that our 11 clients have every right to bring this case. This is just the latest action we’ve taken in the fight for digital rights at the border. EFF is pushing back against the government’s invasive practices on three distinct fronts: litigation, legislation, and public education.
A Rampant Problem
Over the past few years there has been a dramatic increase in the number of searches of cell phones and other electronic devices conducted by border agents. CBP reported that in fiscal year 2012 the number of border device searches was 5,085. In fiscal year 2017, the number had increased to 30,200—a six-fold increase in just five years. DHS claims the authority to ransack travelers’ cell phones and other devices and the massive troves of highly personal information they contain. ICE agents can do so for any reason or no reason. Under a new policy issued earlier this month, CBP agents can do so without a warrant or probable cause, and usually can do so without even reasonable suspicion. Also, agents can and do confiscate devices for lengthy periods of time and subject them to extensive examination. These practices are unconstitutional invasions of our privacy and free speech. Our electronic devices contain our emails, text messages, photos and browsing history. They document our travel patterns, shopping habits, and reading preferences. They expose our love lives, health conditions, and religious and political beliefs. They reveal whom we know and associate with. Warrantless device searches at the border violate travelers’ rights to privacy under the Fourth Amendment, and freedoms of speech, press, private association, and anonymity under the First Amendment. These practices have existed at least since the George W. Bush administration and continued through the Obama administration. But given the recent dramatic uptick in the number of border device searches since President Trump took office, a former DHS chief privacy officer, Mary Ellen Callahan, concluded that the increase was “clearly a conscious strategy,” and not “happenstance.” But the U.S. border is not a Constitution-free zone. The Fourth Amendment requires the government to obtain a probable cause warrant before conducting a border search of a traveler’s electronic device. This follows from the U.S. Supreme Court case Riley v. California (2014). The court held that police need a warrant to search the cell phones of people they arrest. 
The warrant process is critical because it provides a check on government power and, specifically, a restraint on arbitrary invasions of privacy. In seeking a warrant, a government agent must provide sworn testimony before a neutral arbiter—a judge—asserting why the government believes there’s some likelihood (“probable cause”) that the cell phone or other thing to be searched contains evidence of criminality. If the judge is convinced, she will issue the search warrant, allowing the government to access your private information even if you don’t consent. Right now, there are no such constraints on CBP and ICE agents—but we’re fighting in court and in Congress to change this.
Litigation
On September 13, 2017, EFF along with ACLU filed our lawsuit, Alasaad v. Nielsen, against the federal government on behalf of ten U.S. citizens and one lawful permanent resident whose smartphones and other devices were searched without a warrant at the U.S. border. The plaintiffs include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. Several are Muslims or people of color. All were reentering the country after business or personal travel when border agents searched their devices. None were subsequently accused of any wrongdoing. Each of the Alasaad plaintiffs suffered a substantial privacy invasion. Some plaintiffs were detained for several hours while agents searched their devices, while others had their devices confiscated and were not told when their belongings would be returned. One plaintiff was even placed in a chokehold after he refused to hand over his phone. You can read the detailed stories of all the Alasaad plaintiffs. In the Alasaad lawsuit, we are asking the U.S. District Court for Massachusetts to find that the policies of CBP and ICE violate the Fourth Amendment. We also allege that the search policies violate the First Amendment. We are asking the court to enjoin the federal government from searching electronic devices at the border without first obtaining a warrant supported by probable cause, and from confiscating devices for lengthy periods without probable cause. In the past year, EFF also has filed three amicus briefs in U.S. Courts of Appeals (in the Fourth, Fifth, and Ninth Circuits). In those briefs, we argued that border agents need a probable cause warrant to search electronic devices. There are extremely strong and unprecedented privacy interests in the highly sensitive information stored and accessible on electronic devices, and the narrow purposes of the border search exception—immigration and customs enforcement—are not served by warrantless searches of electronic data.
Legislation
EFF is urging the U.S. Congress to pass the Protecting Data at the Border Act. The Act would require border agents to obtain a probable cause warrant before searching the electronic devices of U.S. citizens and legal permanent residents at the border. The Senate bill (S. 823) is sponsored by Sen. Ron Wyden (D-OR) and Sen. Rand Paul (R-KY). Rep. Polis (D-CO), Rep. Smith (D-WA), and Rep. Farenthold (R-TX) are taking the lead on the House bill (H.R. 1899). In addition to creating a warrant requirement, the Act would prohibit the government from delaying or denying entry or exit to a U.S. person based on that person’s refusal to hand over a device passcode, online account login credential, or social media handle. You can read more about this critical bill in our call to action, and our op-ed in The Hill. 
Please contact your representatives in Congress and urge them to co-sponsor the Protecting Data at the Border Act.
Public Education
Finally, EFF published a travel guide that helps travelers understand their individual risks when crossing the U.S. border (which includes U.S. airports if flying from overseas), provides an overview of the law around border searches, and offers technical guidance for securing digital data. Our travel guide recognizes that one size does not fit all, and it helps travelers make informed choices regarding their specific situation and risk tolerance. The guide is a useful resource for all travelers who want to keep their digital data safe. You can download our full report as a PDF. Additionally, you can print EFF’s pocket guide to protecting digital privacy at the border.
Related Cases: Alasaad v. Nielsen
>> mehr lesen

EFF and ACLU Ask Court to Allow Legal Challenge to Proceed Against Warrantless Searches of Travelers’ Smartphones, Laptops (Fr, 26 Jan 2018)
Eleven Travelers in Groundbreaking Case Face Substantial Risk of Future Unconstitutional Searches
Boston, Massachusetts—The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) urged a federal judge today to reject the Department of Homeland Security’s attempt to dismiss an important lawsuit challenging DHS’s policy of searching and confiscating, without suspicion or warrant, travelers’ electronic devices at U.S. borders. EFF and ACLU represent 11 travelers—10 U.S. citizens and one lawful permanent resident—whose smartphones and laptops were searched without warrants at the U.S. border in a groundbreaking lawsuit filed in September. The case, Alasaad v. Nielsen, asks the court to rule that the government must have a warrant based on probable cause before conducting searches of electronic devices, which contain highly detailed personal information about people’s lives. The case also argues that the government must have probable cause to confiscate a traveler’s device. The plaintiffs in the case include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. The government seeks dismissal, saying the plaintiffs don’t have the right to bring the lawsuit and the Fourth Amendment doesn’t apply to border searches. Both claims are wrong, EFF and the ACLU explain in a brief filed today in federal court in Boston. First, the plaintiffs have “standing” to seek a court order to end unconstitutional border device searches because they face a substantial risk of having their devices searched again. This means they are the right parties to bring this case and should be able to proceed to the merits. Four plaintiffs already have had their devices searched multiple times. Immigration and Customs Enforcement (ICE) policy allows border agents to search and confiscate anyone’s smartphone for any reason or for no reason at all. Customs and Border Protection (CBP) policy allows border device searches without a warrant or probable cause, and usually without even reasonable suspicion. Last year, CBP conducted more than 30,000 border device searches, more than triple the number just two years earlier. “Our clients are travelers from all walks of life. The government policies that invaded their privacy in the past are enforced every day at airports and border crossings around the country,” said EFF Staff Attorney Sophia Cope. “Because the plaintiffs face being searched in the future, they have the right to proceed with their claims,” said Cope. Second, the plaintiffs argue that the Fourth Amendment requires border officers to get a warrant before searching a traveler’s electronic device. This follows from the Supreme Court’s 2014 decision in Riley v. California requiring that police officers get a warrant before searching an arrestee’s cell phone. The court explained that cell phones contain the “privacies of life”—a uniquely large and varied amount of highly sensitive information, including emails, photos, and medical records. This is equally true for international travelers, the vast majority of whom are not suspected of any crime. Warrantless border device searches also violate the First Amendment, because they chill freedom of speech and association by allowing the government to view people’s contacts, communications, and reading material. “Searches of electronic devices at the border are increasing rapidly, causing greater numbers of people to have their constitutional rights violated,” said ACLU attorney Esha Bhandari. 
“Device searches can give border officers unfettered access to vast amounts of private information about our lives, and they are unconstitutional absent a warrant.” Below is a full list of the plaintiffs along with links to their individual stories, which are also collected here:
- Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student.
- Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients.
- Sidd Bikkannavar is an optical engineer for NASA’s Jet Propulsion Laboratory in California.
- Jeremy Dupin is a journalist living in Massachusetts.
- Aaron Gach is an artist living in California.
- Isma’il Kushkush is a journalist living in Virginia.
- Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida.
- Zainab Merchant, from Florida, is a writer and a graduate student in international security and journalism at Harvard.
- Akram Shibly is a filmmaker living in New York.
- Matthew Wright is a computer programmer in Colorado.
For the brief: https://www.eff.org/document/alasaad-v-nielsen-opposition-motion-dismiss
For more EFF information on this case: https://www.eff.org/cases/alasaad-v-duke
For more ACLU information on this case: https://www.aclu.org/news/aclu-eff-sue-over-warrantless-phone-and-laptop-searches-us-border
For more on privacy at the border: https://www.eff.org/wp/digital-privacy-us-border-2017
Contact:
Sophia Cope, Staff Attorney, sophia@eff.org
Adam Schwartz, Senior Staff Attorney, adam@eff.org
Josh Bell, ACLU Media Strategist, media@aclu.org
>> mehr lesen

Europe's GDPR Meets WHOIS Privacy: Which Way Forward? (Fr, 26 Jan 2018)
Europe's General Data Protection Regulation (GDPR) will come into effect in May 2018, and with it, a new set of tough penalties for companies that fail to adequately protect the personal data of European users. Amongst those affected are domain name registries and registrars, who are required by ICANN, the global domain name authority, to list the personal information of domain name registrants in publicly-accessible WHOIS directories. ICANN and European registrars have clashed over this long-standing contractual requirement, which does not comply [PDF] with European data protection law. This was one of the highest profile topics at ICANN's 60th meeting in Abu Dhabi, which EFF attended last year, with registries and registrars laying the blame on ICANN, either for their liability under the GDPR if they complied with their WHOIS obligations, or for their contractual liability to ICANN if they didn't. ICANN has recognized this and has progressively, if belatedly, been taking steps to remediate the clash between its own rules and the data protection principles that European law upholds.
A Brief History of Domain Privacy at ICANN
ICANN's first step in improving domain privacy, which dates from 2008 and underwent minor revisions in 2015, was to create a very narrow and cumbersome process by which a party bound by privacy laws that conflicted with its contractual requirements could seek an exemption from those requirements from ICANN. Next, in 2015, ICANN commenced a Policy Development Process (PDP) for the development of a Next-Generation gTLD Registration Directory Services (RDS) to Replace WHOIS, whose work remains ongoing, with the intention that this new RDS would be more compatible with privacy laws, probably by providing layered access to registrant data for various classes of authorized users. Meanwhile, ICANN considered whether to limit registrants' access to a privacy workaround that allowed registrants to register their domain via a proxy, thereby keeping their real personal details private. Although it eventually concluded that access to privacy proxy registration services shouldn't be limited [PDF], these don't amount to a substitute for the new RDS that will incorporate privacy by design, because not all registrars provide this option, and those that do may offer it only as an opt-in service or via a third party that charges money for it. Meanwhile, effective July 2017, ICANN amended its contract with registries to require them to obtain the consent of registrants for their information to be listed online. But again, this is no substitute for the new RDS, because consent that is required as a condition of registering a domain wouldn't qualify as "freely given" under European law. ICANN followed up in November 2017 with a statement that it would abstain from taking enforcement action against registries or registrars who provided it with a "compliance model" that sought to reconcile their contractual obligations with the requirements of data protection law.
Three Interim Options
Finally, with the GDPR deadline fast approaching and with the work of the Next-Generation RDS group nowhere near completion, ICANN has issued a set of three possible stop-gap measures for public comment. These three models, based upon legal advice belatedly obtained by ICANN last year [PDF], are intended to protect registries and registrars from liability under the GDPR during the interim period between May 2018 and the final implementation of the recommendations of the Next-Generation RDS PDP. 
In simple terms the three options are:
1. Allowing anyone who self-certifies that they have a legitimate interest in accessing personal data of an individual registrant to do so.
2. Setting up a formal accreditation/certification program under which only a defined set of third-party requestors would be authorized to gain access to individual registrants' personal data.
3. Making personal data of registrants available only under a subpoena or other order from a court or other judicial tribunal of competent jurisdiction.
None of these are perfect solutions for retroactively enforcing new privacy on ICANN's old procedures. In EFF's comments on ICANN's proposals, we ended up supporting the third option; or rather, a variation of it. ICANN's option 3 proposal would require a case-by-case evaluation of each field in each registration to determine whether it contains personal data, which seems impractical. Instead, as with option 2, it should be assumed that the name, phone number, and address fields contain personal data, and these should be withheld from public display.1 (A minimal sketch of this field-level redaction appears at the end of this post.) ICANN's first option, which would allow anyone to claim that they have a legitimate interest in obtaining registrants' personal data, is unlikely to hold water against the GDPR—requesters could simply lie, or may be mistaken about what amounts to a legitimate interest. The second option is likely to be unworkable in practice, especially for implementation in such a short space of time. By requiring ICANN to make a legal evaluation of the legitimate interests of third parties in gaining access to personal information of registrants, ICANN's legal advisers acknowledge that this option would:
"require the registrars to perform an assessment of interests in accordance with Article 6.1(f) GDPR on an individual case-by-case basis each time a request for access is made. This would put a significant organizational and administrative pressure on the registrars and also require them to obtain and maintain the competence required to make such assessments in order to deliver the requested data in a reasonably timely manner."
Moreover, the case most commonly made for third party access to registration data is for law enforcement authorities and intellectual property rights holders to be able to obtain this data. We already have a system for the formal evaluation of the claims of these parties to gain access to personal data; it's the legal system, through which they can obtain a warrant or a subpoena, either directly if they are in the same country as the registry or registrar, or via a treaty such as a Mutual Legal Assistance Treaty (MLAT) if they are not. This is exactly what ICANN's Model 3 allows, and it's the appropriate standard for ICANN to adopt.
Is the Sky Falling?
Many ICANN stakeholders are concerned that access to the public WHOIS database could change. Amongst the most vocal opponents of new privacy protections for registrants are some security researchers and anti-abuse experts, for whom it would be impractical to go to a court for a subpoena for that information, even if the court would grant one. Creating, as Model 2 would do, a separate class of Internet "super-users" who could use their good work as a reason to examine the personal information databases of the registrars seems a tempting solution. 
But we would have serious concerns about seeing ICANN installed as the gatekeeper of who is permitted to engage in security research or abuse mitigation, and thereby to obtain privileged access to registrant data. Requiring a warrant or subpoena for access to personal data of registrants isn't as radical as its opponents make out. A number of registries, including the country-code registries of most European countries (which are not subject to ICANN's WHOIS rules), already operate in this way. Everyone who works with WHOIS data — be they criminals using domains for fraud, WHOIS-scraping spammers, or anti-abuse researchers — is already well aware of these more privacy-protective services. It's better for us all to create and support methods of investigation that accept this model of private domain registration than to open up ICANN or its contracted parties to the responsibility of deciding what they should do if, for example, the cyber-security wing of an oppressive government begins to search for the registration data of dissidents. There are other cases in which it makes sense to allow members of the public to contact the owner of a domain, without having to obtain a court order. But this could be achieved very simply if ICANN were to provide something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant’s actual email address. There's no reason why this couldn't be required in conjunction with ICANN's Model 3, to address the legitimate concerns of those who need to contact domain owners for operational or business reasons, and who for whatever reason can't obtain contact details in any other way. Comments on ICANN's proposals are being received until January 29, and may be sent to gdpr@icann.org. You can read our comment here.
1. There are actually two versions of Model 2 presented; one that would only apply if the registrant, registry, or registrar is in Europe (which is also the suggested scope of Model 1), and the other that would apply globally. Similarly, options are given for Model 2 to apply either just to individual registrants, or to all registrants. Given that there are over 100 countries that have omnibus data protection laws (and this number is growing), many of which are based on the European model, there seems to be little sense for any of the proposals to be limited to Europe. Neither does it make sense to limit the proposals to individual registrants, because even if it were possible to draw a clear line between individual and organizational registrations (it often isn't), organizational registrations may contain personally identifiable information about corporate officers or contact persons.
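As a concrete illustration of the field-level redaction we describe above (our suggested variation on Model 3), here is a minimal sketch in Python. The record layout, field names, and placeholder text are hypothetical, not ICANN's or any registry's actual schema; the point is only that non-personal operational data can stay public while the handful of personal-data fields are withheld.

```python
# Minimal sketch of field-level redaction for public WHOIS output.
# Field names and record layout are hypothetical, for illustration only.

PERSONAL_FIELDS = {
    "registrant_name",
    "registrant_phone",
    "registrant_address",
    "registrant_email",
}

def redact_for_public_display(record):
    """Return a copy of a registration record with personal-data fields
    withheld. The full record stays with the registrar, and third parties
    would obtain it only via a court order (the Model 3 approach)."""
    return {
        field: ("REDACTED FOR PRIVACY" if field in PERSONAL_FIELDS else value)
        for field, value in record.items()
    }

registration = {
    "domain": "example.org",
    "registrant_name": "Jane Registrant",
    "registrant_phone": "+1.5551234567",
    "registrant_address": "1 Example Street, Springfield",
    "registrant_email": "jane@example.org",
    "nameservers": ["ns1.example.org", "ns2.example.org"],
    "created": "2015-06-01",
}

print(redact_for_public_display(registration))
```

A public contact form of the kind suggested above could then sit in front of the withheld email address, forwarding messages to the registrant without ever exposing the underlying personal data.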
>> mehr lesen

Duterte Administration Moves to Kill Free Speech in the Philippines (Fr, 26 Jan 2018)
In a country where press freedom is already under grave threat, the revocation of an independent publication’s license to operate and a proposed amendment to the Bill of Rights are pushing journalists further into the margins. While the Constitution of the Philippines guarantees press freedom and the country’s media landscape is quite diverse, journalists nevertheless face an array of threats. Libel threats and advertising boycotts are common, and the country ranks fifth in the world in terms of impunity for killing journalists. And since the election of President Rodrigo Duterte in 2016, press freedom in the Philippines has taken a further blow. Like President Trump, Duterte enjoys going after individual media outlets that criticize his policies, creating an increasingly chilled atmosphere for the country’s independent journalists and free speech. In an unprecedented move, the Duterte administration’s Securities and Exchange Commission (SEC) revoked the registration of the independent news organization Rappler and ordered it to close up shop. Rappler has been a vocal critic of the Duterte regime and appears to be targeted for its criticism of the current administration, especially when contrasted with how other pro-Duterte bloggers and outlets have been rewarded with government positions or hired as consultants using public funds. The Duterte administration’s SEC claims its decision to revoke Rappler’s registration was based on an alleged violation of the Foreign Equity Restriction in Mass Media by accepting funds from the Omidyar Network, a fund created by eBay founder Pierre Omidyar that has contributed to independent media outlets all over the world, like The Intercept and the International Consortium of Investigative Journalists. The SEC had accepted and approved Rappler’s Philippine Depository Receipt (PDR) for contributions from the Omidyar Network back in 2015. A PDR is a financial instrument that does not give the investor voting rights in the board or a say in the management of the organization. But when President Duterte went after Rappler (as well as broadcast network ABS-CBN) in his July 2017 State of the Nation address, claiming that the company was owned by foreigners, the pressure began to mount. The president later repeated this claim, stating that the company was violating a Constitutional requirement of domestic ownership. Under this increasing pressure from the Duterte administration, the SEC voided the Omidyar PDR last week and revoked Rappler’s Certificate of Incorporation. Rappler expressed dismay at the “misrepresentations, outward lies, and malice contained in criticisms of Rappler” and maintains that it has complied with all SEC regulations and acted in good faith in adhering to all requirements “even at the risk of exposing [its] corporate data to irresponsible hands with an agenda.” Rappler continues to stand firm in its conviction that it is “100% Filipino-owned” and has not violated any Constitutional restrictions in accepting money from foreign philanthropic investors. Rappler intends to contest the SEC’s revocation “through all legal processes available” in its fight for freedom of the press. But the Philippine government is taking things even a step further with a push to mandate “responsible speech.” 
The House of Representatives has moved to amend Article 3, Section 4 of the Constitution's Bill of Rights, which currently states “No law shall be passed abridging the freedom of speech, of expression, or of the press, or the right of the people peaceably to assemble and petition the government for redress of grievances” to read “No law shall be passed abridging the responsible exercise of the freedom of speech, of expression, or of the press, or the right of the people peaceably to assemble and petition the government for redress of grievances.” As opinion writer Ellen T. Tordesillas noted, the move is similar to a 2006 attempt by the government of former president Gloria Macapagal Arroyo. The amendment may have officially come from the House, but the proposal was actually created by a committee under the Office of the President. On a talk show, former solicitor general Florin Hilbay criticized the proposal, stating that “The danger in inserting the word ‘responsible’ is that you’re giving the state power to define responsibility.” We agree. Handing power over to government authorities to determine what is or isn’t "responsible" is always dangerous, and in the case of Duterte—a president who has targeted journalists, drug users, and communists—could prove deadly. We call on the Philippines to respect the fundamental right to freedom of expression and remind the country of its obligations under the International Covenant on Civil and Political Rights, which allows for only narrow legal limitations to the right to freedom of expression. Furthermore, we stand in solidarity with Rappler, the Foundation for Media Alternatives in its Statement on Press Freedom and Free Speech, as well as the journalists, students, bloggers, and local and international advocates who have taken a stand against the Duterte government’s “alarming attempt to silence independent journalism.”
>> mehr lesen

EFF to Court: Don't Let Celebrities Censor Realistic Art (Fr, 26 Jan 2018)
A huge range of expressive works—including books, documentaries, television shows, and songs—depict real people. Should celebrities have a veto right over speech that happens to be about them? A case currently before the California Court of Appeal raises this question. In this case, actor Olivia de Havilland has sued FX asserting that FX’s television series Feud infringed de Havilland’s right of publicity. The trial court found that de Havilland had a viable claim because FX had attempted to portray her realistically and had benefited financially from that portrayal. Together with the Wikimedia Foundation and the Organization for Transformative Works, EFF has filed an amicus brief [PDF] in the de Havilland case arguing that the trial court should be reversed. Our brief argues that the First Amendment should shield creative expression like Feud from right of publicity claims. The right of publicity is a cause of action for commercial use of a person’s identity. It makes good sense when applied to prevent companies from, say, falsely claiming that a celebrity endorsed their product. But when it is asserted against creative expression, it can burden First Amendment rights. Courts have struggled to come up with a consistent and coherent standard for how the First Amendment should limit the right of publicity. California courts have applied a rule called the “transformative use” test that considers whether the work somehow “transforms” the identity or likeness of the celebrity. In Comedy III Productions v. Gary Saderup, the California Supreme Court found that the defendant’s etchings were not protected because they were merely “literal, conventional depictions” of the Three Stooges. In contrast, in Winter v. DC Comics, the same court found comic book depictions of Johnny and Edgar Winter to be protected because they transformatively portrayed the brothers as half-human/half-worm creatures. The transformative use test is deeply flawed. Plenty of valuable speech, such as biographies or documentaries, involves depicting real people as accurately as possible. Why should these works get less First Amendment protection? If the First Amendment requires turning your subject into a half-human/half-worm creature, then the doctrine has gone very badly wrong.
Catherine Zeta-Jones (L) as Olivia de Havilland in Feud
The trial court’s ruling in the de Havilland case, which leaves realistic art about celebrities essentially unprotected, is the logical end-point of the transformative use test. We hope that the drastic result in this case leads California courts to reevaluate free speech limits on the right of publicity. As one judge wrote 30 years ago, no “author should be forced into creating mythological worlds or characters wholly divorced from reality.” As evidence of the importance of the case, amicus briefs were filed by a number of other companies and organizations. The MPAA and Netflix [PDF], the International Documentary Association [PDF], a group including A&E Television, the Reporters Committee for Freedom of the Press, and the First Amendment Coalition [PDF], and a group of law professors [PDF] filed briefs arguing that the First Amendment protects docudramas like Feud. The Screen Actors Guild, on the other hand, filed a brief [PDF] in support of Olivia de Havilland's claim. While it is not unheard of, it is unusual for the MPAA and EFF to be on the same side.
>> mehr lesen