Diego Gomez Finally Cleared of Criminal Charges for Sharing Research
(Thu, 25 May 2017)
In 2011, Colombian graduate student Diego Gomez shared another student’s Master’s thesis with colleagues over the Internet. After a long legal battle, Diego was able to breathe a sigh
of relief today as he was cleared of the
criminal charges that he faced for this harmless act of sharing scholarly research.
Since Diego was first brought to trial, thousands of you have shown your support for him
via our online petition. The petition’s message is simple: open access should be the international default for scholarly publication.
That’s true, but Diego’s story also demonstrates what can go wrong when nations enact severe penalties for copyright infringement. Even if all academic research were published freely
and openly, researchers would still need to use and share copyrighted materials for educational purposes. With severe prison sentences on the line for copyright infringement, freedom
of expression and intellectual freedom suffer.
Diego’s story demonstrates what can go wrong when nations enact severe penalties for copyright infringement.
Diego’s story also serves as a cautionary tale of what can happen when copyright law is broadened through international agreements. The law Diego was prosecuted under was enacted as
part of a trade agreement with the United States. But as is often the case
when trade agreements are used to expand copyright law, the agreement only exported the U.S.’ extreme criminal penalties; it didn’t export our broad fair use provisions. When
copyright law becomes more restrictive with no account for freedom of expression, people like Diego suffer.
Diego was lucky to have the tireless support of local NGO Fundación Karisma, and allies around the world such as
EFF, who brought global attention to the injustice of the criminal accusations against him. However, the prosecutor in the case has appealed the verdict, leaving Diego with possible
liability continuing to hang over his head for an undetermined time to come.
There are also many other silent victims of overzealous copyright enforcement, including those who are constrained from performing useful research, who shut down websites that come
under unfair attack, and who shy away from sharing with colleagues for fear of being targeted with civil or criminal charges.
Please join us today in standing up for open access, standing up for fair copyright law, and standing with Diego.
Take Action: Support Open Access Worldwide
Book Review: The End of Ownership
(Wed, 24 May 2017)
In the digital age, a lot depends on whether we actually own our stuff, and who gets to decide that in the first place.
In The End of Ownership: Personal Property in the Digital Age, Aaron
Perzanowski and Jason Schultz walk us through a detailed and highly readable explanation of exactly how we’re losing our rights to own and control our media and devices, and
what’s at stake for us individually and as a society. The authors carefully trace the technological changes and legal changes that have, they argue, eroded our rights to do as we
please with our stuff. Among these changes are the shift towards cloud distribution and subscription models, expanding copyright and patent laws, Digital Rights Management (DRM), and
use of End User License Agreements (EULAs) to assert all content is “licensed” rather than “owned.” And Perzanowski and Schultz present compelling evidence that many of us are unaware
of what we’re giving up when we “buy” digital goods.
Ownership, as the authors explain, provides a lot of benefits. Most importantly, ownership of our stuff supports our individual autonomy, defined by the authors as our “sense of
self-direction, that our behavior reflects our own preferences and choices rather than the dictates of some external authority.” It lets us choose what we do with the stuff that we
buy – we can keep it, lend it, resell it, repair it, give it away, or modify it, without seeking anyone’s permission. Those rights have broader implications for society as a whole –
when we can resell our stuff, we enable secondary and resale markets that help disseminate knowledge and technology, support intellectual privacy, and promote competition and user
innovation. And they’re critical to the ability of libraries and archives to serve their missions – when a library owns the books or media in its collection, it can lend those books
and media almost without restriction, and it generally will do so in a way that safeguards the intellectual privacy of its users.
These rights, long established for personal property, are safeguarded in part by copyright law’s “exhaustion doctrine.” As the authors make clear, that doctrine, which holds that some
of a copyright holders’ rights to control what happens to a copy are “exhausted” when they sell the copy, is a necessary feature in copyright law’s effort to limit the powers granted
to copyright holders so that overbroad copyright restrictions do not undermine the intended benefit to the public as a whole.
Throughout the book, Perzanowski and Schultz present a historical account of rights holder attempts to overcome exhaustion and exert more control over what people do with their media
and devices. The authors describe book publishers’ hostile, “fearful” response to lending libraries in the 1930s:
…a group of publishers hired PR pioneer Edward Bernays….to fight against used “dollar books” and the practice of book lending. Bernays decided to run a contest to “look for a
pejorative word for the book borrower, the wretch who raised hell with book sales and deprived authors of earned royalties.”…Suggested names included “bookweevil,”…”libracide,”
“booklooter,” “bookbum,” “culture vulture,” … with the winning entry being “booksneak.”
Publishers weren’t alone: the authors show that both record labels and Hollywood studios fought against the rise of secondary markets for music and home video rental, respectively.
Hollywood fought a particularly aggressive battle against the VCR. In the end, the authors note, Hollywood continued to “resist the home video market,” at least until they gained
more control over the distribution technology.
But while historically, overzealous rights holders may have been stymied to some extent by the law’s limitation of their rights, recent technological changes have made their quest far easier.
“In a little more than a decade,” the authors explain, we’ve seen dramatic changes in content distribution, from tangible copies, to digital downloads, to the cloud, and now,
increasingly, to subscription services. These technological changes have precipitated corresponding changes in our abilities to own the works in our libraries. While, as the authors
explain, copyright law has long relied on the existence of a physical copy to draw the lines between rights holders’ and copy owners’ respective rights, “[e]ach of these shifts in
distribution technology has taken us another step away from the copy-centric vision at the heart of copyright law.” Unfortunately, the law hasn’t kept up: “Even as copies
escape our possession and disappear from our experience, copyright law continues to insist that without them, we only have the rights copyright holders are kind enough to grant us.”
Perzanowski and Schultz point to End User License Agreements (EULAs), with their excessive length, one-sided,
take-it-or-leave-it nature, complicated legalese, and relentless insistence that what you buy is only “licensed” to you (not “owned”), as a main culprit behind the decline of
ownership. They provide some pretty standout examples – including EULAs that exceed the lengths of classic works of literature, and those that claim to prevent a startling array
of activity. For the authors, these EULAs
. . . create private regulatory schemes that impose all manner of obligations and restrictions, often without meaningful notice, much less assent. And in the process, licenses
effectively rewrite the balance between creators and the public that our IP laws are meant to maintain. They are an effort to redefine sales, which transfer ownership to the
buyer, as something more like conditional grants of access.
And unfortunately, despite their departure from some of contract law’s core principles, some courts have permitted their enforcement, so long as the license recites the proper magic words.
The authors are at their most poetic in their criticism of Digital Rights Management (DRM) and Section 1201 of the DMCA, perhaps the worst scourges of ownership in the book. As they point out, even
in the absence of restrictive EULA terms, DRM embeds rights holders’ control directly into our technologies themselves – in our cars, our toys, our insulin pumps and heart monitors.
Comparing it to Ray Bradbury’s Fahrenheit 451, they explain:
While not nearly as dramatic as flamethrowers and fighting robot dogs, the unilateral right to enforce such restrictions through DRM exerts many of the types of social control
Bradbury feared. Reading, listening, and watching become contingent and surveilled. That system dramatically shifts power and autonomy away from individuals in favor of retailers
and rights holders, allowing for enforcement without anything approaching due process.
As Perzanowski and Schultz explain, these shifts aren’t just about our relationship to our stuff. They recalibrate the relationship between rights holders and consumers on a broader scale:
When we say that personal property rights are being eroded or eliminated in the digital marketplace, we mean that rights to use, to control, to keep, and to transfer purchases –
physical and digital – are being plucked from the bundle of rights purchasers have historically enjoyed and given instead to IP rights holders. That in turn means that those
rights holders are given greater control over how each of us consume media, use our devices, interact with our friends and family, spend our money, and live our lives. Cast in
these terms, it is clear that there is a looming conflict between the respective rights of consumers and IP rights holders.
The authors repeatedly remind us that who makes the decision between what is owned and what is licensed is crucial – both on the individual and societal scale. When we allow companies
to define when we can own our stuff, through EULAs or Digital Rights Management, we shift crucially important decisions about how our society should work away from legislatures,
courts, and public processes, to private entities with little incentive to serve our interests. And, when we don’t know exactly what we give up when we “buy” digital goods, we’re not
making an informed choice. Further, when we opt for mere access over ownership, our choices have broader societal effects. The more we shift to licensing and subscription models, the
harder it may become for those who would rather own their stuff to exercise that option – stores close, companies shift distribution models, and some works disappear from the market entirely.
In the end, Perzanowski and Schultz leave us with a thread of hope that we still might see a future for ownership of digital goods. They believe that at least some courts and policy
makers, and “[p]erhaps more importantly, readers, listeners, and tinkerers – ordinary people – are expressing their own reluctance to accept ownership as an artifact of some bygone
predigital era.” And they provide a set of arguments and reform proposals to marshal in the fight to save ownership before it’s too late. They lay out an array of technological and
legal strategies to reduce deceptive practices, curb abusive EULAs, and reform copyright law. The most thoroughly developed of these proposes a legislative restructuring of copyright
exhaustion in a flexible, multi-factor format, in part modeled on the United States’ fair use doctrine. It’s a good idea, and it would probably work. But (and the authors
acknowledge this) even modest attempts at reform have failed to garner the necessary support in Congress to move forward. A more ambitious proposal, like this one, seems unlikely in the near term.
Overall, The End of Ownership is a deeply concerning exposition of how we’re losing valuable rights. The questions it raises about whether and how we can preserve the benefits of
ownership in the digital age will likely continue to be relevant even as technology, and the law, evolve. Most critically, it asks us to rethink who we want making the decisions that
shape how we live our lives. While the book tackles complex issues in the relationship between law and technology, it does so in a way that’s accessible and interesting for
lawyers and laypersons alike. The book’s ample real world examples of everything from disappearing e-book libraries, to tractors, dolls, and medical devices resistant to their owners’
control bring home both the impact of abstract legal doctrines and the urgency of their reform.
To learn about some of EFF’s efforts to protect your rights of ownership and autonomy, you can:
read about our Green v. DOJ lawsuit, seeking to overturn the law that gives DRM its legal fangs;
read our amicus brief before the Supreme Court in Lexmark, arguing that patent ownership shouldn’t give ongoing rights to control use of a product that
has been sold; and
read our amicus brief in Goren v. Small Justice, explaining why surprising terms in clickthrough
agreements like EULAs should not override fundamental rights.
Congress’ Imperfect Start to Addressing Vulnerabilities
(Wed, 24 May 2017)
With the global and debilitating WannaCry ransomware attack dominating the news in recent
weeks, it’s increasingly necessary to have a serious policy debate about disclosure and patching of vulnerabilities in hardware and software.
Although WannaCry takes advantage of a complex and collective failure in protecting key computer
systems, it’s relevant to ask what the government’s role should be when it learns about new vulnerabilities. At EFF, we’ve been pushing for more transparency around the decisions
the government makes to retain vulnerabilities and exploit them for “offensive purposes.”
Now, some members of Congress are taking steps towards addressing these decisions with the proposal of the Protecting Our Ability to Counter Hacking—or PATCH—Act (S.1157). The bill, introduced last week by Sens.
Ron Johnson, Cory Gardner, and Brian Schatz and Reps. Blake Farenthold and Ted Lieu, is aimed at strengthening the government’s existing process for deciding whether to disclose
previously unknown technological vulnerabilities it finds and uses, called the “Vulnerabilities
Equities Process” (VEP).
The PATCH Act seeks to do that by establishing a board of government representatives from the intelligence community as well as more defensive-minded agencies like the Departments of
Homeland Security and Commerce. The bill tasks the board with creating a new process to review and, in some cases, disclose vulnerabilities the government learns about.
The PATCH Act is a good first step in shedding some light on the VEP, but, as currently written, it has some shortcomings that would make it ineffective in stopping the kind of
security failures that ultimately lead to events like the WannaCry ransomware attack. If lawmakers really want to deal with the dangers of the government holding on to
vulnerabilities, the VEP must apply to classified vulnerabilities that have been leaked.
The VEP was established in 2010 by the Obama administration and was intended to require government agencies to collectively weigh the costs and benefits of disclosing these
vulnerabilities to outside parties like software vendors instead of holding onto them to use for spying and law enforcement purposes.
Unfortunately, after EFF fought a long FOIA battle to obtain a copy of the written VEP policy document,
we’ve learned that it went largely unused. In the meantime, agencies like the NSA and CIA suffered major thefts of their often incredibly powerful tools. In particular, the 2016 Shadow Brokers leak enabled outsiders to later
develop the WannaCry ransomware using an NSA tool that the agency likened to “fishing with dynamite.”
Lawmakers should be commended for trying to codify and expand the existing process to ensure that the government is adequately considering these risks, and the PATCH Act is a welcome start.
But there are two areas in particular where it needs to go further.
First, as described above, the current bill seems to overlook situations where the government loses control of vulnerabilities that it has decided to retain. As we’ve seen with the
Shadow Brokers leaks, this is a very real possibility, one which even kept the NSA up at night,
according to the Washington Post. Yet the PATCH Act specifically states that a classified vulnerability will not be considered “publicly known” if it has been “inappropriately
released to the public.” That means that a stolen NSA tool can be circulating widely among third parties without triggering any sort of mandatory reconsideration of disclosure to a
vendor to issue a patch. While it might be argued that other provisions of the bill implicitly account for this scenario, we’d like to see it addressed explicitly.
In addition to overlooking situations like the WannaCry ransomware attack, the bill excludes cases where the government never actually acquires information about a vulnerability and
instead contracts with a third-party for a “black box exploit.”
For example, in the San Bernardino case, the FBI reportedly paid a contractor a large sum of money to unlock an iPhone without ever learning details of how the exploit worked. Right now, the government apparently
believes it can contract around the VEP in this way. This raises concerns about the government’s ability to adequately assess the risks of using these vulnerabilities, which is why a
report written by former members of the National Security Council recommended prohibiting non-disclosure agreements with third-parties entirely. At the
very least, we’d like to see the bill bring more transparency to the use of vulnerabilities even when the government itself doesn’t acquire knowledge of the vulnerability.
We hope to see the bill’s authors address these concerns as it moves forward to ensure that all of the vulnerabilities known to the government are reviewed and, where appropriate, disclosed.
Related case: EFF v. NSA, ODNI - Vulnerabilities FOIA
TPP Comes Back From the Dead... Or Does It?
(Wed, 24 May 2017)
Could the Trans-Pacific Partnership (TPP) be coming back from the dead? It is at least a possibility, following the release of
a carefully-worded statement last Sunday from an APEC
Ministerial meeting in Vietnam. The statement records the agreement of the eleven remaining partners of the TPP, aside from the United States which withdrew in January, to "launch a process to assess
options to bring the comprehensive, high quality Agreement into force." This assessment is to be completed by November this year, when a further APEC meeting in Vietnam is to be held.
We do know, however, that not all of the eleven countries are unified in their view about how the agreement could be brought into force. In particular, countries like Malaysia and Vietnam would like to see revisions to the treaty before
they could accept a deal without the United States. This is hardly an unreasonable position, since it was the United States that pushed those countries to accept provisions such as an
unreasonably long life plus 70 year copyright term, which is to no other country's benefit.
Other TPP countries, such as Japan and New Zealand, are keen to bring the deal into force without any renegotiation, which could add years of further delay to the treaty's completion.
Japan also likely fears losing some of the controversial rules that it had pushed for, such as the ban on software source code audits. The country's Trade
Minister, Hiroshige Seko, has been quoted as
saying, "No agreement other than TPP goes so far into digital trade, intellectual property and improving customs procedures."
For now, that remains true; many of the TPP's digital rules are indeed extreme and untested. But for how much longer? Industry lobbyists are pushing for the same digital trade rules
to be included in Asia's Regional Comprehensive Economic Partnership (RCEP) and in a
renegotiated version of the North American Free Trade Agreement (NAFTA). Since RCEP and NAFTA together cover most of the same countries as the TPP, there will be little other
rationale for the TPP to exist if lobbyists succeed in replicating its rules in those other deals.
Free Trade Rules that Benefit Users
It's worth stressing that EFF is not against free trade. If trade agreements could be used to serve users rather than to make their lives more difficult, EFF could accept or even
actively support certain trade rules. For example, last week the Re:Create Coalition, of which EFF is a
member, issued a statement explaining how the inclusion of fair use in trade agreements would make them more balanced than they are now. The complete statement, issued by Re:Create's
Executive Director Joshua Lamel, says:
If NAFTA is renegotiated and if it includes a chapter on copyright, that chapter must have mandatory language on copyright limitations and exceptions, including fair use. The
United States cannot export one-sided enforcement provisions of copyright law without their equally important partner under U.S. law: fair use.
The U.S. should also take further steps to open up and demystify its trade policy-making processes, not only to Congress but also to the public at large, by publishing text
proposals and consolidated drafts throughout the negotiation of trade agreements.
The last paragraph of this statement is key: we can't trust that trade agreements will reflect users' interests unless users have a voice in their development. Whether the TPP comes back into
force or not, the insistence of trade negotiators on a model of secretive, back-room policymaking will lead to the same flawed rules popping up in other agreements, to the benefit of
large corporations and the detriment of ordinary users.
At this point we have no faith that the TPP would be reopened for negotiation in a way that is inclusive, transparent and balanced, and we maintain our outright opposition to the
deal. RCEP is being negotiated in an equally closed process, though we are continuing to lobby negotiators about our concerns with that agreement's IP and Electronic Commerce
chapters. As for NAFTA, we are urging the USTR to heed our
recommendations for reform of the office's practices before negotiations commence.
The death of the TPP didn't mark the end of EFF's work on trade negotiations and digital rights, and its reanimation won't change our course either. No matter where the future of
digital trade rules lie, our approach remains the same: advocating for users' rights, and fighting for the reform of closed and captured processes. Until our concerns are heard and
addressed, trade negotiators can be assured that regulating users' digital lives through trade agreements isn't going to get any easier.
No Evidence that "Stronger" Patents Will Mean More Innovation
(Wed, 24 May 2017)
Push to once again allow abstract patents is misguided
Right now, the patent lobby—in the form of the Intellectual Property Owners Association and the American Intellectual Property Law Association—is demanding “stronger” patent laws. They want to undo
Alice v. CLS Bank and return us to a world where “do it on a
computer” ideas are eligible for a patent. This would help lawyers file more patent applications and patent litigation. But there’s no evidence that such laws would benefit the public
or innovation at all.
One of the primary justifications we hear for why patents are social goods is that they encourage innovation. Specifically, the argument goes, patents incentivize companies and
individuals to invest in costly research and development that they would not otherwise invest in because they know they will be able to later charge supracompetitive prices and recoup
the costs of that development.
Those who want "stronger" patents (i.e. patents that are easier to get and/or harder to invalidate) often use this rationale to justify changing patent laws to make patents more
enforceable. For example, a former Judge on the Court of Appeals for the Federal Circuit recently suggested that "America is in danger because we have strangled our innovation
system" by making it easier to challenge patents and show they never should have been granted. As another example, the Chief Patent Counsel at IBM argued that "The U.S. leads the software industry,
but reductions in U.S. innovation prompted by uncertain patent eligibility criteria threaten our leadership" because "Patents promote innovation."
These arguments all presume that "stronger" patents mean more research and development dollars and thus more innovation. They also presume that if the U.S. doesn't provide "stronger"
patents, innovation will go elsewhere.
But reality is much more complex. As one recent paper put it: "there is little evidence that stronger patent laws result in increases
in [research and development] investments," at least if the yardstick is patent filings. Indeed, "we still have essentially no credible empirical evidence on the seemingly
simple question of whether stronger patent rights – either longer patent terms or broader patent rights – encourage research investments into developing new technologies."
There are also good reasons to think "stronger" patents do not actually spur innovation. Patents are a double-edged sword. Although they may provide some incentive to innovate (even that premise
is unclear), they also create barriers to more innovation. Patents work to prevent the development of follow-on innovation until
that patent expires, delaying innovation that would have occurred, but is prevented by the grant of an artificial, government-backed monopoly.
The problem of patents impeding future innovation is exacerbated in software, where the life cycle is relatively short and innovation tends to move
quickly. When a patent lasts for 20 years, software patents—especially broad and abstract software patents—have the potential to significantly delay the introduction of new
innovations to the market.
Despite no "credible empirical evidence" that recent changes to patent laws, including the limits on patentable subject matter reaffirmed by the U.S. Supreme Court in Alice, have done any harm to the innovation economy or innovation generally, some patent owners have been lobbying Congress to legislate the case away. But doing
so would allow patents on abstract ideas, and risks exacerbating the deadweight loss caused by too much patenting. The proposals are not minor changes. For example, if enacted they
would mean that anything is patentable, so long as it doesn't "exist solely in the human mind," i.e. "do it on a computer." Absent any evidence that this would mean more
innovation, the recent reform proposals seem like little more than a bid by lawyers to create work for themselves.
Those rushing to ratchet up patent rights are doing so with little to no empirical basis that any such change is necessary, and it may actually end up harming the innovation economy.
Congress should think twice before changing patent law so as to make patents even "stronger."
Wikimedia's Constitutional Challenges of NSA Upstream Surveillance Move Forward
(Wed, 24 May 2017)
A court ruling today allowing Wikimedia’s claims challenging the
constitutionality of NSA’s Upstream surveillance to go forward is good news. It shows that the court—the U.S. Court of
Appeals for the Fourth Circuit—is willing to take seriously the impact mass surveillance of the Internet backbone has on ordinary people. Wikimedia's First and Fourth Amendment
challenges will move on to the next phase in the case, Wikimedia Foundation v. NSA.
The news isn't all good: we disagree with the court's decision disallowing Wikimedia's other dragnet collection claims from going forward, and think the dissent got it right. In
Jewel v. NSA, EFF's landmark lawsuit challenging NSA surveillance, the Ninth Circuit Court of Appeals has already
ruled that our claims pass initial
review. The trial court presiding over the case just last week required the government to comply with our
request to provide information about the scope of the mass surveillance. Jewel v. NSA includes specific evidence of a backbone tapping location on Folsom Street in San Francisco presented by former AT&T employee Mark Klein. This level of detail and
description is enough for our claims to move forward even with the Fourth Circuit’s ruling.
Related cases: Wikimedia v. NSA; Jewel v. NSA
Addressing Delays in Democracy.io and the EFF Action Center Message Delivery
(Wed, 24 May 2017)
EFF has identified and addressed the delivery problem, and we extend our deep apologies for the delays to digital activists who use our tools.
We recently became aware that there were significant delays in delivering some of the messages sent to Congress via two of EFF’s open-source messaging tools, Democracy.io and the EFF Action Center. While we have now addressed the problem, we wanted to be transparent with the
community about what happened and the steps we’ve taken to fix it.
The EFF Action Center is a tool people can use to speak out in defense of digital liberty using text prompts from EFF, including letters to Congress that users can edit and customize.
Democracy.io is a free tool that we built for the world based on the same technical backend as our Action Center. It lets users send messages to their members of Congress on any
topic, with as few clicks as possible. The errors we experienced only impacted letters (not petitions, tweet campaigns, or call campaigns) for a number of Representatives and a
handful of Senators. We sincerely apologize to everyone who was affected by this delay.
The issue sprang from the way in which our tools handled CAPTCHAs, a type of service that website owners use to verify that a given user is a human and not a bot. Our tools work by
filling out contact forms on individual congressional websites on behalf of users. When our tool bumps into a CAPTCHA, it takes a snapshot, returns it to the user, and lets the user
give the correct answer to finish filling out the form. Since all of our messages to Congress are submitted by real people, this worked fine for traditional CAPTCHAs. However, a
percentage of Congress members had begun using a more complicated type of CAPTCHA known as reCAPTCHA, which was beyond the
technical abilities of our system.
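The difference between the two CAPTCHA styles can be sketched roughly as follows. This is a minimal, hypothetical illustration in Python, not the actual Democracy.io code (which submits real HTTP forms); the names `fill_form` and `CaptchaChallenge` are invented for the sketch. A traditional image CAPTCHA can be snapshotted and relayed to the human user, while a reCAPTCHA ties the challenge to the visitor's own browser session and so cannot be proxied the same way:

```python
from dataclasses import dataclass


@dataclass
class CaptchaChallenge:
    """Handed back to the front end so the human user can solve it."""
    image_url: str
    pending_fields: dict


def fill_form(fields: dict, form_page: dict):
    """Attempt to submit a congressional contact form on a user's behalf.

    `form_page` stands in for the parsed page of a member's contact form.
    """
    if form_page.get("captcha_type") == "recaptcha":
        # The failure mode described above: the challenge is bound to the
        # visitor's session, so a snapshot shown to a different user
        # cannot yield a valid token. The message must go another route.
        raise RuntimeError("reCAPTCHA cannot be proxied; deliver manually")
    if form_page.get("captcha_type") == "image":
        # Traditional CAPTCHA: snapshot it and defer to the human user,
        # keeping their partially filled form fields alongside it.
        return CaptchaChallenge(form_page["captcha_image"], fields)
    # No CAPTCHA at all: the form can be submitted directly.
    return {"status": "submitted", "fields": fields}
```

Since every message really does originate with a human, relaying the image back to that person is enough for traditional CAPTCHAs; it is only the session-bound reCAPTCHA variant that breaks the pattern.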
Around the same time, we had made some fundamental changes to our error-logging system. As a result, the engineers who staff and maintain Democracy.io stopped receiving notifications of
delivery errors, so we unfortunately missed the fact that a portion of messages were failing.
Some messages are undeliverable due to user data errors, legislators leaving office, or other irresolvable issues. However, we have now successfully re-sent nearly all the deliverable
messages that had been delayed in our system. A very small percentage of messages are still pending, but we will be delivering them over the next few weeks.
In addition to delivering the delayed messages, we’ve made some key infrastructure changes to help prevent problems like this from arising in the future and to mitigate the impact of
any issues that do arise. First, we integrated an experimental API delivery for the House of Representatives called Communicating
with Congress. This implementation has resolved the reCAPTCHA problems we were facing in the House of Representatives. In addition, when someone tries to send a message to one of
the few Senators whose forms we cannot complete, we’ll notify the user in real time and provide a link to the Senator’s website so the user can send a message directly. Finally, we’ve
improved our error logging process so that if another significant delay happens in the future, we’ll know about it right away.
It’s unfortunate and frustrating that many members of Congress have placed digital hurdles on constituent communications. In a more perfect democracy, we think it would be easy for
constituents to simply send an email to their members of Congress and be assured that the message was received and counted. Instead, each member of Congress adopts their own form,
many of them requiring users to provide information like titles, exact street address, topic areas, etc. Users who want to email their Congress members may have to hunt down and
complete forms on three different websites, and they may inadvertently end up on the wrong site.
We believe that the voices of technology users should echo loudly in the halls of Congress and that timely and personal communication from constituents is vital to holding our elected
officials to account. That’s why we built these tools for both the EFF community and wider world. We’re committed to continuing to improve the process of communicating with Congress,
both for EFF friends speaking out in defense of digital rights and for the general public. We hope one day Congress will make it easier for constituents to reach them. Until then,
we’ll do our best to help tech users find a powerful voice. We are sorry that in this instance we fell short of our goal.
Court Orders Government To Provide More Information About Withheld Information in Laura Poitras’ FOIA Lawsuit
(Di, 23 Mai 2017) Laura Poitras—the Academy Award- and Pulitzer Prize-winning documentary filmmaker and journalist behind CITIZENFOUR and Risk—wants to know why she was stopped and detained at the U.S.
border every time she entered the country between July 2006 and June 2012. EFF is representing Poitras in a Freedom of Information Act (FOIA) lawsuit aimed at answering this
question. Since we filed the complaint in July
2015, the government has turned over hundreds of pages of
highly redacted records, but it has failed to provide us with the particular justification for each withholding—as it is required to do. In March, in a win for transparency, a federal
judge called foul and ordered the government to explain with particularity its rationale for
withholding each document.
Poitras travels frequently for her work on documentary films. Between July 2006 and June 2012, she was routinely subject to heightened security screenings at airports around the world
and stopped and detained at the U.S. border every time she entered the country—despite the fact that she is a law-abiding U.S. citizen. She’s had her laptop, camera,
mobile phone, and reporter notebooks seized, and their contents copied. She was also once threatened with handcuffs for taking notes. (The border agents said her pen could be used as
a weapon.) No charges were ever brought against her, and she was never given any explanation for why she was continually subjected to such treatment.
In 2014, Poitras sent FOIA requests to multiple federal agencies for any and all records naming or relating to her, including case files, surveillance records, and counterterrorism
documents. But the agencies either said they had no records or simply didn’t respond. The FBI, after not responding to Poitras’ request for a year, said in May 2015 that it had
located a mere six pages of relevant material but that it was withholding all six because of grand jury secrecy rules.
With EFF’s help, Poitras ultimately filed a lawsuit against the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence. In
the months following the filing of the lawsuit, the government discovered and released over 1,000 pages of responsive records, some of which were on display at the Whitney Museum in New York last year as part of Poitras’ Astro Noise exhibit. But most of these records are highly redacted, so while Poitras now has some information about why she was stopped, the details remain unclear. And the government failed to provide a clear rationale for why withholding the redacted information was justified.
Court to Government: “Try Again”
We argued in a motion for summary judgment filed last fall that the government had failed to meet its burden of justifying its continued withholding of information. In an order issued
last month, the Honorable Ketanji Brown Jackson agreed with us. As the court explained, the government “describes in great detail the government’s general reasons for withholding
entire categories of information, but does not connect these generalized justifications to the particular documents that are being withheld in this case in any discernable fashion.”
She noted that instead of providing a complete list of “document-specific justifications,” the government provided a list with “only some of the records that the agency has withheld”
and even then failed to “explain the reasons that the particular exemption is being asserted with respect to any document[.]”
The court didn’t grant our motion for summary judgment, but it did order the government to go back and try again—i.e., provide both us and the court with a list describing each
document redacted or withheld, noting the FOIA exemption(s) that the government thinks apply to the document, and explaining the “particularized reasons that the government believes
that the asserted exemption applies to the particular document at issue.”
It’s clear the judge isn’t planning to just rubber stamp the government’s assertions. Forcing the government to justify its vast withholding of documents in this case is a win for transparency. We will post updates on the case as it proceeds and as we continue our fight to shed more light on the government’s unjust and potentially chilling treatment
of a journalist.
Judge Orders Government to Provide Evidence About Internet Surveillance
(Di, 23 Mai 2017)
We're finally going to get some honesty on how the NSA spies on innocent Americans' communications.
A federal judge late last week in Jewel v. NSA, EFF’s landmark case against mass surveillance, ordered [PDF] the government to provide all relevant evidence necessary to prove or deny that plaintiffs were subject to NSA surveillance via tapping into the Internet backbone. This includes surveillance conducted since 2008 pursuant to Section 702 of the FISA Amendments Act, which is up for renewal this year. It also includes surveillance conducted between 2001 and 2008 pursuant to the President’s Surveillance Program.
In 2016, the Court ordered that the plaintiffs could seek discovery. After over a year of government stonewalling, the Court has now ordered the government to comply with a narrowed set of discovery requests by August 9, 2017. The
discovery is aimed at whether plaintiffs' communications were subject to the mass NSA program tapping into the Internet backbone called Upstream. The court also ordered the
government to file as much of its responses as possible on the public court docket.
The Jewel v. NSA case continues to mark the first time the NSA has been ordered to respond to civil discovery about any of its mass surveillance programs. Since the first EFF case against NSA mass surveillance was launched in 2006, the government has abandoned or dramatically reduced three of the four key programs addressed by the lawsuit: Internet metadata collection; mass collection of telephone records under Section 215 of the Patriot Act, which was ended by passage of the USA Freedom Act in 2015; and full-content “about” searching of information collected from the Internet.
What's left, at least that the public is aware of at this time, is the interception and use of communications flowing over the Internet backbone at key junctures. Thanks to the new
order, the U.S. government will, for the first time, have to answer to privacy concerns about the remaining Internet surveillance methods and their impact on Americans.
The NSA must tell the Court whether its 702 Upstream surveillance touches the communications of millions of Americans.
It’s been a long, slow road, but the NSA has been forced to reduce its mass spying in the United States in major ways. This has come through a combination of litigation
pressure, ongoing activism and public concern, technological efforts to encrypt more of the Internet, Congressional pressure, and a steady stream of information coming out about its
activities including from government investigations spurred by whistleblowers like Edward Snowden and Mark Klein. EFF will continue to push forward with the litigation and all of
EFF's other efforts until all Americans who rely on the Internet can feel safe that they can communicate online without NSA having broad access to their communications.
San Francisco attorney Richard Wiebe argued the matter for the plaintiffs. Also assisting EFF with the case are attorneys from the firm Keker, Van Nest & Peters, as well as Thomas Moore III, James Tyre, and Aram Antaramian.
Illinois Advances “Right to Know” Digital Privacy Bills
(Di, 23 Mai 2017)
EFF supports Illinois legislation (SB 1502 and HB 2774) that would empower people who visit commercial websites and online services to learn what personal information the site and service operators collected from them, and which
third parties the operators shared it with. EFF has long supported such “right to know” legislation, which
requires company transparency and thereby advances digital privacy.
As we explain in our support letter:
Many operators of commercial websites and online services collect from their visitors a tremendous amount of highly personal information. This can include facts about our health,
finances, location, politics, religion, sexual orientation, and shopping. Many operators share this information with third parties, including advertisers and data brokers. This
information has great financial value, so pressure to collect and share it will continue to grow.
This is a profound threat to our privacy. We live more and more of our lives online. The aggregation of our myriad clicks can turn our lives into open books. Our sensitive
personal information, pooled into ever-larger reservoirs, can be sold to the highest bidder, stolen by criminals, and seized by government investigators.
Many people would like to protect their own privacy, by making informed choices about which websites and online services to visit. Some sites and services are more respectful of
visitors’ privacy, and others are less so.
But all too often, such attempts at privacy self-help are stymied by the lack of available information about what personal information a website is collecting and sharing.
SB 1502 and HB 2774 would even the playing field. They would ensure that people can obtain the information they need to make fact-based decisions about where they want to spend
their time online.
These bills would not restrict how any website or online service gathers or shares information. Operators can keep doing exactly what they are doing – they just have to be more
transparent about it.
In April, the Illinois Senate passed SB 1502, and the Illinois House Committee
on Cybersecurity passed HB 2774. We
thank the lead legislative sponsors, Sen. Michael Hastings and Rep. Arthur Turner. We also thank the bills’
proponents, including the ACLU of Illinois, the Digital Privacy Alliance, the Illinois Attorney General, Illinois PIRG, the Office of the Cook County Sheriff, and the Privacy
Read EFF’s full letter to the Illinois legislature.
New Twitter Policy Abandons a Longstanding Privacy Pledge
(Di, 23 Mai 2017)
Twitter has announced that it will no longer honor the Do Not Track (DNT) browser privacy setting. Instead, the company is switching to the Digital Advertising Alliance's toothless and broken self-regulatory program. At the same time,
the company is taking the opportunity to introduce a new tracking option and two new targeting options, all of which are set to “track and target” by default. These are not the
actions of a company that respects people’s privacy choices.
Twitter implements various methods of tracking, but one of the biggest is the use of Tweet buttons, Follow buttons, and embedded Tweets to record much of your browsing history. When
you visit a page that contains one of these, your browser makes a request to Twitter’s servers. That request contains a header that tells Twitter which web site you visited. By setting
a unique cookie, Twitter can build a profile of your browsing history, even if you aren’t a Twitter user. When Twitter rolled out this tracking, it was the first major social network
to do so; at the time, Facebook and Google+ were careful not to use their social widgets for tracking, due to privacy concerns. Twitter sweetened their new tracking initiative for
privacy-aware Internet users by offering Do Not Track support. However, when the
other social networks quietly followed in Twitter's footsteps, they decided to ignore Do Not Track.
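The tracking mechanism described above can be modeled in a few lines. This is a simplified sketch with hypothetical data, not Twitter's actual server code: each embedded-button request carries a Referer header naming the page the visitor is on, and a unique cookie ties those requests together into a browsing profile.

```python
def record_widget_request(profiles, cookie_id, referer):
    """Append the referring page to the profile keyed by the
    visitor's unique tracking cookie."""
    profiles.setdefault(cookie_id, []).append(referer)
    return profiles

profiles = {}
# The same visitor (cookie "abc123") loads Tweet buttons on three
# unrelated sites; each widget load reveals the page being read,
# whether or not the visitor has a Twitter account.
for page in ["https://news.example/article1",
             "https://health.example/condition",
             "https://shop.example/cart"]:
    record_widget_request(profiles, "abc123", page)
```

The point is that the profile accumulates passively: the visitor never clicks the button, yet the server learns every page on which the widget appears.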
Now, Twitter proposes to abandon the Do Not Track standard and use the “WebChoices” tool, part of the self-regulatory program of the Digital Advertising Alliance (DAA). This program is toothless because the only choice it allows users is to opt out of “customizing ads,” when most people actually want to opt out of tracking. Many DAA participants, including Twitter, continue to collect your information even if you opt out, but will hide that fact by only showing you untargeted ads. This
is similar to asking someone to stop openly eavesdropping on your conversation, only to watch them hide behind a curtain and keep listening.
Also, WebChoices is broken; it’s incompatible with other privacy tools, and it requires constant vigilance to use. It relies on setting a third-party opt-out cookie on 131
different advertising sites. But doing this is incompatible with one of the most basic browser privacy settings: disabling third party cookies. Even if you allow third party cookies,
your opt-out only lasts until the next time you clear cookies, another common user strategy for protecting online privacy. And new advertising sites are created all the time. When the
132nd site is added to WebChoices, you need to go back and repeat your opt-out, which, unless you follow the advertising press, you won’t know to do.
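The fragility described above comes from where the opt-out is stored. A minimal sketch (a simplified model, not the DAA's actual implementation; the site names are hypothetical): the opt-out lives in third-party cookies, one per ad network, so clearing cookies silently erases it.

```python
def opt_out_everywhere(cookie_jar, ad_sites):
    """Set an opt-out cookie for each known advertising site."""
    for site in ad_sites:
        cookie_jar[site] = "opted_out"

# One opt-out cookie per network — 131 of them at present.
ad_sites = ["ads%d.example" % i for i in range(131)]
cookie_jar = {}
opt_out_everywhere(cookie_jar, ad_sites)

# The user clears cookies to protect their privacy...
cookie_jar.clear()
# ...and every opt-out is gone until they repeat the whole process,
# including for any new ad network added in the meantime.
```

Disabling third-party cookies outright, a basic browser privacy setting, prevents the opt-out cookies from being set at all, which is why the scheme conflicts with other privacy tools.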
These problems with DAA's program are why Do Not Track exists. It’s simple, compatible with other privacy measures, and works across browsers.
Twitter knows the difference between a real opt-out and a fake one: for years, it has implemented DNT as a true "stop tracking" option, and you can still choose that option under the
"Data" section of Twitter's settings, whether you are a Twitter user or not. However, if you use the new DAA opt-out that Twitter
plans to offer instead of DNT, the company will treat that as a fake opt-out: Twitter keeps tracking, but won't show you ads based on it.
What can you do as an individual to protect yourself against Twitter's tracking? First, follow our guide to disable the settings. Second, install Privacy Badger, EFF's browser extension that, in addition to setting DNT, attempts to automatically detect and block third-party
tracking behavior. Privacy Badger also specifically replaces some social network widgets with non-tracking static versions.
We urge Twitter to treat both DNT and the DAA opt-out as a true "stop tracking" option.
Supreme Court Ends Texas’ Grip On Patent Cases
(Mo, 22 Mai 2017)
Today the Supreme Court issued a decision that will have a massive impact on patent troll
litigation. In TC Heartland v. Kraft Foods, the court ruled that patent owners can sue corporate defendants
only in districts where the defendant is incorporated or has committed acts of infringement and has a regular and established place of business. This means that patent trolls can no
longer drag companies to distant and inconvenient forums that favor patent owners but have little connection to the dispute. Most significantly, it will be much harder for trolls to
sue in the Eastern District of Texas.
For more than ten years, patent troll litigation has clustered in the Eastern District of Texas (EDTX). Patent trolls began to flock there
when a judge created local patent rules that were perceived as friendly to patent owners.
The court required discovery to start almost right away and did very little to limit costs (which were borne much more heavily by operating companies because they have more
documents). Cases also tended not to be decided by summary judgment and went to trial more quickly.
These changes led to a stunning rise in patent trolling in EDTX. In 1999, only 14 patent cases were filed in
the district. By 2003, the number of filings had grown to 55. By 2015, it had exploded to over 2500 patent suits, mostly filed by trolls. Patent litigation grew so much in EDTX that
it became part of the local economy. In addition to providing work for the
local lawyers, it generated business for the hotels, restaurants, and printers in towns like Marshall and Tyler.
Although the TC Heartland case will have a big impact on EDTX, the case involved a suit filed in the District of Delaware and the legal question was one of statutory
interpretation. Prior to 1990, the Supreme Court had held that in patent cases,
the statute found at 28 U.S.C. § 1400 controlled where a patent case could be filed. However, in 1990 in a case
called VE Holding, the Federal Circuit held that a small technical amendment to another venue statute—28 U.S.C. § 1391—effectively overruled that line of cases. VE Holding meant that companies that sold products nationwide could be sued in any federal court in the country on charges of patent infringement, regardless of how tenuous the connection to that court. Today’s decision overrules VE Holding and restores venue law to how it was: corporate patent defendants can only be sued where they are incorporated or where they allegedly infringe the patent and have a regular and established place of business.
Together with Public Knowledge, we filed an amicus brief urging the Supreme Court to hear this case, and once it did, another brief urging it to overrule VE Holding. We explained that venue law is concerned with fairness and
that forum shopping in patent cases has had very unfair results, especially in EDTX. While the Supreme Court reached the result we hoped for, the court did not discuss these policy
issues (it also showed little interest in the policy debate during the oral argument in the case). The court approached the case as a pure question of statutory interpretation and
ruled 8-0 that the more specific statute, 28 U.S.C. § 1400, controls where a patent case can be filed.
While today’s decision is a big blow for patent trolls, it is not a panacea. Patent trolls with weak cases can, of course, still file elsewhere. The ruling will likely lead to a big
growth in patent litigation in the District of Delaware where many companies are incorporated. And it does not address the root cause of patent trolling: the thousands of overbroad and vague software patents that
the Patent Office issues every year. We will still need to fight for broader patent reform and defend good decisions like the Supreme Court’s 2014 ruling in Alice v. CLS Bank.
Online Censorship and User Notification: Lessons from Thailand
(Mo, 22 Mai 2017)
For governments interested in suppressing information online, the old methods of direct censorship are getting less and less effective.
Over the past month, the Thai government has made escalating attempts to suppress critical information online. In the last week, faced with an embarrassing video of the Thai King, the government ordered Facebook to geoblock over 300 pages on the platform and even threatened to shut Facebook down in the country. This is on top of last
month's announcement that the government had banned any
online interaction with three individuals: two academics and one journalist, all three of whom are political exiles and prominent critics of the state. And just today, law
enforcement representatives described their efforts to target those who simply
view—not even create or share—content critical of the monarchy and the government.
The Thai government has several methods at its disposal to directly block large volumes of content. It could, as it has in the past, pressure ISPs to block websites. It could also
hijack domain name queries, making sites harder to access. So why is it negotiating with Facebook instead of just blocking the offending pages itself? And what are Facebook’s
responsibilities to users when this happens?
HTTPS and Mixed-Use Social Media Sites
The answer is, in part, HTTPS. When HTTPS encrypts your browsing, it doesn’t just protect the contents of the communication between
your browser and the websites you visit. It also protects the specific pages on those sites, preventing censors from seeing and blocking anything “after the slash” in a URL. This
means that if a sensitive video of the King shows up on a website, government censors can’t identify and block only the pages on which it appears. In an HTTPS world that makes such
granularized censorship impossible, the government’s only direct censorship option is to block the site entirely.
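The distinction above, between what a censor can and cannot see, can be sketched concretely. This is an illustrative simplification: it ignores DNS-over-HTTPS and encrypted SNI, and simply splits a URL into the hostname (exposed by DNS lookups and the TLS Server Name Indication) and the path (inside the encrypted channel).

```python
from urllib.parse import urlparse

def censor_view(url):
    """Return what an on-path censor can observe for a given URL."""
    parts = urlparse(url)
    return {
        "visible_host": parts.hostname,  # exposed via DNS and TLS SNI
        # With HTTPS, everything "after the slash" is encrypted.
        "hidden_path": "(encrypted)" if parts.scheme == "https" else parts.path,
    }

view = censor_view("https://www.facebook.com/some-critical-page")
# The censor can block facebook.com wholesale, but it cannot
# single out "/some-critical-page" for blocking.
```

With plain HTTP, by contrast, the full path travels in the clear, which is what made page-level filtering possible in the first place.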
That might still leave the government with tenable censorship options if critical speech and dissenting activity only happened on certain sites, like devoted blogs or message boards.
A government could try to get away with blocking such sites wholesale without disrupting users outside a certain targeted political sphere.
But all sorts of user-generated content—from calls to revolution to cat pictures—are converging on social media websites like Facebook, which members of every political party use and
rely on. This brings us to the second part of the answer as to why the government can’t censor like it used to: mixed-use social media sites. When content is both HTTPS-encrypted and
on a mixed-use social media site like Facebook, it can be too politically
expensive to block the whole site. Instead, the only option left is pressuring Facebook to do targeted blocking at the government’s request.
Government Requests for Social Media Censorship
Government requests for targeted blocking happen when something is compliant with Facebook’s community guidelines, but not with a country’s domestic law. This comes to a head when
social media platforms have large user bases in repressive, censorious states—a dynamic that certainly applies in Thailand, where a military dictatorship shares its capital city with
a dense population of Facebook power-users and one of the most Instagrammed locations on earth.
In Thailand, the video of the King in question violated the country’s overbroad lese majeste defamation laws, which prohibit insulting or criticizing the monarchy in any way. So the Thai government requested that Facebook remove it—along with hundreds of other pieces of content—on legal grounds, and made
an ultimately empty threat to shut down the platform in
Thailand if Facebook did not comply.
Facebook did comply, geoblocking over 100 URLs for which it received warrants from the Thai government. This may not be surprising; although the government is likely not going to
block Facebook entirely, they still have other ways to go after the company, including threatening any in-country staff. Indeed, Facebook put itself in a vulnerable position when it
inexplicably opened a Bangkok office during high political tensions after the 2014 military coup.
Platforms’ Responsibility to Users
If companies like Facebook do comply with government demands to remove content, these decisions must be transparent to their users and the general public. Otherwise, Facebook's
compliance transforms its role from a victim of censorship, to a company pressured to act as a government censor. The stakes are high, especially in unstable political environments
like Thailand. There, the targets of takedown requests can often be journalists, activists, and dissidents, and requests to take down their content or block their pages often
serve as an ominous prelude to further action or targeting.
With that in mind, Facebook and other companies responding to government requests must provide the fullest legally permissible notice to users whenever possible. This
means timely, informative notifications, on the record, that give users information like what branch of government requested to take down their content, on what legal grounds, and
when the request was made.
Facebook seems to be getting better at this, at least in Thailand. When journalist Andrew MacGregor Marshall had some of his content geoblocked in January, he did not receive consistent notice. Worse, the page that his readers in Thailand saw when they tried to access his post implied that the block was an error, not a deliberate act of government-mandated removal.
More recently, however, we have been happy to see evidence of Facebook providing more detailed notices to users, like one that exiled dissident Dr. Somsak Jeamteerasakul received and then shared online.
In an ideal world, timely and informative user notice can help power the Streisand effect: that is, the dynamic in which
attempts to suppress information actually backfire and draw more attention to it than ever before. (And that’s certainly what’s happening with the video of the King, which has garnered countless international media headlines.) With
details, users are in a better position to appeal to Facebook directly as well as draw public attention to government targeting and censorship, ultimately making this kind of
censorship a self-defeating exercise for the government.
In an HTTP environment where governments can passively spy on and filter Internet content, individual pages could disappear behind obscure and misleading error messages. Moving to an
increasingly HTTPS-secured world means that if social media companies are transparent about the pressure they face, we may gain some visibility into government censorship. However, if
they comply without informing creators or readers of blocked content, we could find ourselves in a much worse situation. Without transparency, tech giants could misuse their power not
only to silence vulnerable speakers, but also to obscure how that censorship takes place—and who demanded it.
Have you had your content or account removed from a social media platform? At EFF, we’ve been shining a light on the expanse and breadth of content removal on
social media platforms with OnlineCensorship.org, where we and our partners at Visualising Impact collect your stories about content and
account deletions. Share your story here.
No Hunting Undocumented Immigrants with Stingrays
(Sa, 20 Mai 2017)
In the latest sign of mission creep in domestic deployment of battlefield-strength surveillance technology, U.S. Immigration and Customs Enforcement (ICE) earlier this year used a
cell site simulator (CSS) to locate and arrest an undocumented immigrant, according to a report yesterday by The Detroit News.
CSSs, often called IMSI catchers or Stingrays, masquerade as cell phone towers and trick our phones into connecting to them so police can track down a target. EFF has long opposed
CSSs. They are a form of mass surveillance, forcing the phones of
countless innocent people to disclose information to the police, in violation of the Fourth Amendment. They disrupt cellular communications, including 911 calls. They are deployed disproportionately within communities of color and poorer neighborhoods. They exploit
vulnerabilities in the cellular communication system that government should fix instead of exploit.
Police said they needed CSSs to fight terrorism. Instead, police use CSSs to locate
low-level offenders, such as a suspect who stole $60 of food from a
restaurant delivery employee.
Now we fear that ICE may be routinely using CSSs to hunt down people whose only offense is to unlawfully enter or remain in the United States. ICE has spent over $10 million to
purchase 59 CSSs, according to a recent Congressional report.
In the first quarter of 2017, ICE arrested nearly 11,000 undocumented immigrants with no criminal record, more than double the
number from the first quarter of 2016. And yesterday, The Detroit News reported that ICE used a CSS to locate and arrest an undocumented immigrant.
It is good news that ICE obtained a warrant before using its
CSS to find this immigrant, in accordance with a change in DHS and DOJ policies in 2015. It is also a welcome sign that a bipartisan Congressional report in December 2016 called for federal legislation requiring a
warrant for CSS use by law enforcement. But a warrant alone is not enough.
If permitted at all, government use of CSSs should be strictly limited to addressing serious violent crime. Few law enforcement spying technologies are a greater threat to digital
liberty: by their very nature, CSSs seize information from all of the people who happen to be nearby. So government should be barred, for example, from using CSSs to hunt down traffic
scofflaws, petty thieves, and undocumented immigrants.
Notably, the federal eavesdropping statute limits police use of that surveillance technology to certain enumerated
crimes. Because CSSs conduct general searches, any such enumeration for CSSs must be even narrower, and limited to serious violent crimes.
Finally, if government is allowed to use CSSs, there must be other safeguards, too. Government should be limited to using CSSs to acquire location information, and forbidden from
using CSSs for other purposes, such as acquiring communications content. An Illinois statute enacted in 2016 contains this limit. Also, government should be required to minimize
the capture of information from people who are not the target of investigation, and to immediately destroy all data that does not identify the target. A U.S. Magistrate Judge’s
order in 2015 contains this limit.
Too often, government deploys powerful spying technologies against vulnerable groups of people, including immigrant communities, as well as racial, ethnic, and religious minorities.
EFF has long opposed this. We thus oppose using CSSs to hunt down undocumented immigrants, or
anyone else who is not a serious violent threat to public safety.
How to Opt Out of Twitter's New Privacy Settings
(Sa, 20 Mai 2017)
Twitter has notified users of a new privacy policy that will take effect June 18.
Contrary to the inviting “Sounds good” button to accept the new policy and get to tweeting, the changes Twitter has made around user tracking and data personalization do not sound good for user privacy. For example, the company will now record and store non-EU users’ off-Twitter web browsing history for up to 30 days, up from 10 days under the previous policy.
Worst of all, the “control over your data” promised by the pop-up is on an opt-out basis, giving users choices only after Twitter has set their privacy settings to invasive defaults.
Instead, concerned users have to click “Review settings” to opt out of Twitter’s new mechanisms for user tracking. That will bring you to the “Personalization and Data” section of
your settings. Here, you can pick and choose the personalization, data collection, and data sharing you will allow—or, click “Disable all” in the top-right corner to opt out entirely.
You can also reach these options through your account settings: select “Privacy and safety” on the left, and then click “Edit” next to “Personalization and data.”
While you’re at it, this is also a good opportunity to review, edit, and/or remove the data Twitter has collected on you in the past by going to the “Your Twitter data” section of your settings.
Twitter has stated that these granular settings are intended to replace Twitter’s reliance on
Do Not Track. However, replacing a standard cross-platform choice with new, complex options buried in the settings is not a fair
trade. Although “more granular” privacy settings sound like an improvement, they lose their meaning when they are set to privacy-invasive selections by default. Adding new tracking
options that users are opted into by default suggests that Twitter cares more about collecting data than respecting users’ choice.
As USTR Takes Office, EFF Sets Out Our Demands on Trade Transparency
(Thu, 18 May 2017)
The new U.S. Trade Representative, Robert Lighthizer, took office this week. EFF has written him a letter to let him know that we'll be holding him to the commitments that he made during his confirmation hearing about improving the transparency
and inclusiveness of the USTR's notoriously closed and opaque trade negotiation practices. Our letter, which you can download in full below, reads in part:
The American people’s dissatisfaction with trade deals of the past, such as NAFTA, does not merely lie in their effects on the American manufacturing sector and its workers.
Another of the key mistakes of previous U.S. trade policy, we respectfully submit, has been the closed and opaque character of trade negotiations. ...
Absent meaningful reforms that allow the public to see what is being negotiated on their behalf, and to participate in developing trade policy proposals, the public will reject
new agreements just as they rejected failed agreements of the past, such as the Trans-Pacific Partnership and the Anti-Counterfeiting Trade Agreement.
Conversely, given a real voice in trade policy development, there is the potential for trade agreements of the future to become more inclusive, better informed, and more
popular—all of which are essential if America is to retain and strengthen its global economic leadership in the digital age.
Tech industry groups the Internet Association [PDF],
the Computer and Communications Industry Association (CCIA) and the
Internet Infrastructure Coalition (i2Coalition) [PDF], have also sent letters to the new USTR. In addition to addressing how America's future trade agreements should
address tech policy issues, the CCIA and i2Coalition letter addresses the need for greater transparency in trade negotiations, stating "we encourage you to maintain as much
transparency in trade negotiations as is reasonably possible. More open negotiation processes will contribute to increased support for the trade agenda."
House and Senate Democrats have reportedly delivered the same
message [paywalled] to Ambassador Lighthizer during his first week in office, urging that the renegotiation of NAFTA—which officially launched today—be made more transparent than the
negotiations of its failed predecessor, the TPP.
To further reinforce this message, EFF has gone even further—taking out a paid advertisement in POLITICO magazine's Morning Trade newsletter which runs all this week. It directs to a
new page of EFF's website that is specifically targeted at D.C.'s trade community. You can see a copy of the banner graphic that we've used
for that campaign to the side.
Will any of this make a difference? We certainly hope so, but we're not counting on it. That's why, in case Ambassador Lighthizer fails to heed our message, we'll also be supporting
new legislation to be introduced in Congress to force the USTR to implement the necessary reforms. One way or another, the long overdue reform of trade negotiation processes has to
happen, and we're committed to seeing it through.
Dear FCC: We See Through Your Plan to Roll Back Real Net Neutrality
(Thu, 18 May 2017)
Pretty much everyone says they are in favor of net neutrality–the idea that service providers shouldn’t engage in data discrimination, but should instead remain neutral in how they
treat the content that flows over their networks. But actions speak louder than words, and today’s action by the FCC speaks volumes. After weeks of hand-waving and an aggressive
misinformation campaign by major telecom companies, the FCC has taken the first concrete step toward dismantling the net neutrality protections it adopted two years ago.
Specifically, the FCC is proposing a rule that would reclassify broadband as an “information service” rather than a “telecommunications service.” FCC Chairman Ajit Pai claims that
this move would protect users, but all it would really do is protect Comcast and other big ISPs by destroying the legal foundation for net neutrality rules. Once that happened, it
would only be a matter of time before your ISP had more power than ever to shape the Internet.
Here’s why: Under the Telecommunications Act of 1996, a service can be either a “telecommunications service” that lets the subscriber choose the content they receive and send without
interference from the service provider; or it can be an “information service,” like cable television, that curates and selects what subscribers will get. “Telecommunications services”
are subject to nondiscrimination requirements–like net neutrality rules. “Information services” are not.
For years, the FCC incorrectly classified broadband access as an “information service,” and when it tried to apply net neutrality rules to broadband providers, the courts struck them
down. Essentially, the D.C. Circuit court explained that the FCC can’t
exempt broadband from nondiscrimination requirements by classifying it as an information service, but then impose those requirements anyway.
The legal mandate was clear: if we wanted meaningful open Internet rules to pass judicial scrutiny, the FCC had to reclassify broadband as a telecom service. Reclassification also
just made sense: broadband networks are supposed to deliver information of the subscriber’s choosing, not information curated or altered by the provider.
It took an Internet uprising to persuade the FCC to reclassify. But in the end we succeeded: in
2015 the FCC reclassified broadband as a telecom service. Resting at last on a proper legal foundation, its net neutrality rules finally passed judicial scrutiny [PDF].
Given this history, there’s no disguising what the new FCC majority is up to. If it puts broadband back in the “info service” category and then tries to appease critics by adopting
meaningful net neutrality rules, we’ll be in the same position we were three years ago: Comcast will take the FCC to court–and Comcast will win. It’s simple: you can’t reclassify
and keep meaningful net neutrality rules. Reclassification means giving ISPs a free pass for data discrimination.
Chairman Pai’s claim that this move is good for users because it will spur investment in broadband infrastructure is a cynical one at best. Infrastructure investment has gone up since
the 2015 Order, ISP profits are growing exponentially, and innovation and expression are flourishing.
At the same time, too many Americans have only one choice for high-speed broadband. There are good reasons to worry about FCC regulatory overreach in many contexts, but the fact is the U.S.
broadband market is now excessively concentrated and lacks real choice, and there are few real options to prevent ISPs from abusing their power. In this environment, repealing the
simple, light-touch rules of the road we just won would give ISPs free rein to use their position as Internet gatekeepers to funnel customers to their own content, thereby distorting
the open playing field the Internet typically provides, or charge fees for better access to subscribers. Powerful incumbent tech companies will be able to buy their way into the fast
lane, but new ones won’t. Nor will activists, churches, libraries, hospitals, schools or local governments.
We can’t let that happen. So, Team Internet, we need you to step up once again and tell the FCC that it works for the American people, not Comcast, Verizon, or AT&T. Go to
dearfcc.org and tell the FCC not to undermine real net neutrality protections.
Contact the FCC Now
The FCC Needs to Cut Through The Noise and Listen to The Public’s Support for Net Neutrality
(Thu, 18 May 2017)
The Federal Communications Commission’s vote tomorrow will be a step towards undermining the rules that protect Internet users from data discrimination by their ISPs. These net
neutrality rules, though not perfect, have broad support from the public. But FCC Chairman Ajit Pai seems to be preparing to dismiss and ignore the wishes of ordinary Internet users
by forcing us to use a broken and discredited online comment filing system.
It’s been a sad few weeks for the FCC’s IT department. Following Last Week Tonight host John Oliver’s segment on net
neutrality, in which the comedian called on viewers to defend net neutrality protections by filing comments, the FCC’s comment system was disabled. The agency’s Chief Information
Officer claimed that the system had been targeted in a distributed denial-of-service
attack, bombarding it with traffic and making it difficult to file comments. But despite requests from the public and
members of Congress, the FCC hasn’t given any details about the supposed attack or why it concluded that the system was attacked at all, rather than simply being overwhelmed by
the number of comments it received.
Following that initial problem, the FCC’s site reportedly received more than 58,000
nearly identical comments containing names and addresses that appeared to be taken from a marketing database. These comments, which seemed to be fraudulent, supported Chairman Pai’s
gutting of net neutrality. To date, the FCC hasn’t said what it’s doing to safeguard its comment system and make it ready to handle the thousands, even millions, of public comments
it’s likely to receive after tomorrow’s formal vote.
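The tens of thousands of near-identical comments were spotted because they shared a common template with only the name and address swapped. A minimal sketch of that kind of template detection follows; the `--` signature delimiter and the sample comments are hypothetical, and real analyses of the FCC docket used fuzzier text matching, but the normalize-then-count idea is the same.

```python
import re
from collections import Counter

def normalize(comment):
    """Strip the signature (here assumed to follow a '--' delimiter),
    lowercase, and collapse whitespace, so comments differing only in
    the signer's name and address map to the same template."""
    body = comment.split("--")[0]
    return re.sub(r"\s+", " ", body.lower()).strip()

def template_counts(comments):
    """Count how many comments share each normalized template."""
    return Counter(normalize(c) for c in comments)

comments = [
    "I support Chairman Pai's plan! -- Alice, 12 Oak St",
    "I support Chairman Pai's plan! -- Bob, 99 Elm Ave",
    "Please keep the 2015 Open Internet Order in place.",
]
counts = template_counts(comments)
template, n = counts.most_common(1)[0]
print(n, "copies of template:", template)  # the two astroturfed comments collapse to one template
```

A high count for a single template is not proof of fraud on its own, but it flags batches of comments worth checking against the marketing database they appeared to come from.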
What’s so important about maintaining ECFS and actually hearing the opinions expressed by ordinary Internet users there? Taking comments from the public is not merely a tradition -
it’s a key safeguard for democracy. Independent agencies like the FCC have vast rule-making powers. In many areas, they have more practical power over our lives than Congress does,
because Congress doesn’t have the capacity or expertise to create the detailed rules that govern telecommunications and other industries.
Unlike Congress, independent agencies aren’t elected by the people—they’re run by boards that are filled by presidents and congressional leaders. They can’t be voted out of office
(except indirectly as their members are replaced by future presidents). Because they’re not held accountable through the political process, agencies are required by law to accept and
consider public comments before making major changes to the rules. If the FCC responds to attacks on its public comment system not by defending the system, but by discounting and
ignoring public opinion expressed through that system, then the agency is answerable to no one. (In theory, Congress could step in and pass new laws concerning net neutrality, but
meaningful action by Congress is unlikely this year).
Digital democracy is not easy. The FCC can’t just count comments for and against net neutrality as though they were ballots in a ballot box. But neither can Chairman Pai ignore the
opinions of Internet users in the U.S., the majority of whom want to keep being protected against data discrimination by ISPs like Comcast, AT&T, and Verizon. Letting those users
be blocked, drowned out by bots, or ignored when they express their opinions on net neutrality is no way to begin.
You can submit comments to the FCC through EFF’s commenting tool at dearfcc.org. We will work to get your comments through and make your voice heard.
Recording Industry Claims Imaginary Value Gap as a Bigger Threat Than Piracy
(Thu, 18 May 2017)
One of the most significant events that took place at this month's meeting of the World Intellectual Property Organization (WIPO), that
EFF attended, wasn't part of the meeting's formal agenda. It came at a
side-meeting organized by the International Federation of the Phonographic Industry (IFPI), an affiliate of the Recording Industry Association of America (RIAA). At that meeting,
IFPI panelist David Price made the startling admission that copyright infringement is no longer the recording industry's biggest concern.
Apparently, the industry's biggest concern is no longer those who distribute music illegally for free. It's platforms like YouTube that do pay copyright holders, but don't pay
enough. According to the IFPI, YouTube's reliance on the U.S. DMCA and Europe's E-Commerce Directive to allow it to host user-uploaded music videos has created a "value gap"
that deprives the recording industry of royalties they believe should be theirs. The sudden elevation of this supposed "value gap" above the bugaboo of piracy is all the more
surprising because the term didn't even exist until about 2016, when it was
created out of whole cloth as a device to explain why copyright holders should be entitled to a larger slice of Internet platform revenues.
Interestingly, Price and his co-panelists at the WIPO event admitted that there ought to be free music services for those who don't wish to pay. Currently, YouTube provides this free
service for millions of users around the world. It pays royalties to copyright holders for doing so, even for user-uploaded content, where the copyright owner can be identified using
ContentID fingerprint matching. (The law doesn't require YouTube to do this, although plans are afoot
in Europe to change this.) ContentID has serious problems,
including imposing advertising and monetization on critical videos that are clear fair uses, against the wishes of video creators. But in the right circumstances, it also provides an
important revenue stream for recording artists.
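ContentID's matching step can be illustrated with a toy sketch: fingerprint a reference track, fingerprint an upload, and score the overlap. The chunk-hashing below is a deliberate simplification of my own devising; real systems like ContentID use robust perceptual fingerprints that survive re-encoding, pitch shifts, and background noise, which exact hashes do not.

```python
import hashlib

def fingerprint(samples, chunk=4):
    """Hash fixed-size chunks of a signal into a set of fingerprints."""
    fps = set()
    for i in range(0, len(samples) - chunk + 1, chunk):
        window = tuple(samples[i:i + chunk])
        fps.add(hashlib.sha256(repr(window).encode()).hexdigest()[:16])
    return fps

def match_score(upload_fps, reference_fps):
    """Fraction of the reference's fingerprints found in the upload."""
    if not reference_fps:
        return 0.0
    return len(upload_fps & reference_fps) / len(reference_fps)

# Toy "reference track" and an upload embedding it amid other material.
reference = [1, 2, 3, 4, 5, 6, 7, 8]
upload = [9, 9, 9, 9] + reference + [0, 0, 0, 0]

score = match_score(fingerprint(upload), fingerprint(reference))
print(score)  # 1.0: every reference chunk appears in the upload
```

When the score for some registered work crosses a threshold, the platform can then route advertising revenue from the upload to that work's rightsholder, which is the revenue stream described above.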
The record labels' contention is that YouTube streaming depresses the rates that subscription-based music streaming services, such as Spotify, are willing to pay for streaming
licenses. That's an interesting theory, but research released by Google casts significant doubt on
it. At least according to the Google-sponsored research, YouTube actually diverts users not from other paid services, but from infringement. Were YouTube to go away, 85% of views
would simply disappear, or would move to lower-value alternatives such as illegal file sharing.
Just as the entertainment industry's war against "piracy" harmed users, through the ratcheting up of enforcement measures and the banning of technological tools, so too the new war against user-generated content platforms will also have
harmful effects. That's because the legal foundation of user-generated content platforms, the copyright safe harbor that lies at the heart of the DMCA's Section 512 and the E-Commerce
Directive, doesn't only facilitate the sharing of music, but also all of the other speech and innovation that happens on those platforms. Entertainment industry-driven attacks on that
foundation, such as Europe's mandatory upload filtering plan, and
proposals to replace Section 512 in the U.S. with a filtering mandate, could have
significant negative impacts on the viability of online content platforms, and on the rights of their users. The greatest impacts will be on platforms that are much smaller than
YouTube, and on new entrants.
During IFPI's presentation, we asked them directly about the desired "end game" of their opposition to the safe harbor protections that YouTube and similar platforms enjoy. While they
denied that their goal was to dismantle copyright safe harbor protection altogether, there was no doubt that they are serious in their intent to prevent YouTube from taking advantage
of it. That inevitably means eliminating the DMCA and E-Commerce Directive safe harbor rules that millions of other websites, both commercial and noncommercial, rely upon today, and
replacing them with mandatory filtering rules.
It's all rather ironic given that the IFPI acknowledges that streaming services, including YouTube, have led the recording industry to a resurgence of profitability in the past two years. If safe harbor rules have
now eclipsed infringement as the biggest threat to the recording industry, and the industry can still earn record profits even so, it's difficult to see how scrapping those
rules could possibly be warranted.
Nominate a 2017 Pioneer!
(Wed, 17 May 2017) Nominations are now open for EFF's 26th Annual Pioneer Awards, to be presented this fall in San Francisco. EFF
established the Pioneer Awards in 1992 to recognize leaders who are extending freedom and innovation in the realm of technology. The nomination window will be open until 11:59pm PDT
on May 23, 2017. You could nominate the next Pioneer Award winner today!
What does it take to be a Pioneer? Nominees must have contributed substantially to the health, growth, accessibility, or freedom of computer-based communications. Their contributions
may be technical, social, legal, academic, economic or cultural. This year’s Pioneers will join an esteemed group of past award winners that includes the late visionary activist Aaron
Swartz; open source pioneer Limor "Ladyada" Fried; and the documentarian and journalist Laura Poitras and Glenn Greenwald, among many remarkable activists, entrepreneurs, public
interest attorneys, and others.
2016 Pioneer Award winners & EFF Executive Director Cindy Cohn. Photo by Alex
The Pioneer Award ceremony depends on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the Pioneer
Awards, please email email@example.com.
Remember, nominations are due no later than 11:59pm PDT on Tuesday, May 23! After you nominate your favorite contenders, we hope you will consider joining us this fall in San
Francisco to celebrate the work of the 2017 winners. If you have any questions or if you'd like to receive updates about the event, including ticket information, please email
Nominate your favorite digital rights hero now!
RCEP's Digital Trade Negotiations Remain Shrouded in Secrecy
(Tue, 16 May 2017)
From May 2-12, the Philippines hosted the 18th round of negotiations of the Regional Comprehensive Economic Partnership (RCEP), a TPP-like trade agreement covering ten
members of the Association of Southeast Asian Nations (ASEAN) and six partner countries – China, India, Japan, Australia, New Zealand and South Korea. Access to the negotiators was
extremely limited, with the negotiations themselves taking place behind closed doors. The non-availability of an agenda or confirmation of meetings and limited access to negotiators
were amongst the factors constraining civil society organisations' (CSOs) engagement.
For example, EFF organised a dinner presentation on May 9 for IP negotiators, with panelists from Public Citizen, Sinar Project, La Trobe University and Third World Network. Although
the event drew a handful of negotiators from four of the partner countries along with an ASEAN representative, it transpired that it had been scheduled at the same time as a private
RCEP event of which we hadn't been informed. Given the high interest in the RCEP and its impact on the rights of citizens across Asia, it is a pity that groups like EFF are forced to
bear the costs of reaching out to negotiators, and that negotiators show such little inclination to engage with us when we do.
Unfortunately, this is a familiar story for the hardy few civil society activists who have been covering this neglected trade deal. Few of the negotiating states have convened
national consultations, held public hearings, or initiated an on-the-record public notice and comment process. There has also been no official release of the chapters and textual
proposals related to rules that are being tabled. Given that the negotiations are closed to the public, we do not know what text is currently being deliberated on by the negotiators
and/or the consensus on provisions among states.
Secrecy in negotiations and lack of information is a common feature in free trade agreement negotiations. In the past, CSOs have had to resort to guerilla tactics to intervene and
defeat similar agreements such as the Trans-Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership
(TTIP). Yet, just as with those better-known trade-deals, the potential significance of RCEP is immense, and so too are the dangers it could pose to Internet users if the negotiators
fail to take their interests into account.
Digital Rights and RCEP
Similar to the TPP, RCEP includes provisions dealing with intellectual property (IP), e-commerce, investment, goods, services, telecommunications, and competition. The 16
Asian countries negotiating RCEP cover 12% of world trade and represent nearly half of the global population. If ratified, the RCEP will not only be the first trade agreement for
the digital economy but will also set the rules for trade across Asia over the next decade. While not all institutional consequences of the partnership can be fully known in advance, much
will depend on how the negotiation develops.
RCEP's e-commerce provisions will likely deal with cross-border information flows, data localization, legal immunity of intermediaries and requirements concerning disclosure of source
code that have not been tested elsewhere. We have also raised concerns that the provisions included under the leaked IP chapter, notably on enforcement in the digital environment and
the failure to include a fair use exception, may end up expanding the digital divide. RCEP attempts to enshrine stringent obligations for the protection of broadcasters that
remain controversial and are currently still under negotiation at WIPO. None of these problems would have come to light if earlier drafts of the agreement had not been leaked.
There has been a recent push to raise awareness of the RCEP with CSOs conducting strategy meetings and organizing weeks before the negotiations kicked off in Manila. Many CSOs also
organised activities parallel to the negotiations, grouped under the #NoRCEP week of action. On May 10, members of the People Over Profit network staged a protest action, inside the
convention centre where the negotiators were meeting with stakeholders, demanding a stop to the negotiations. RCEP will impact developers and startups, small and medium enterprises
that create goods and services for an increasingly global market. The right trade policy environment, one that accounts for diverse national contexts and encourages innovation, is
critical for the growth and development of the region.
The next round of negotiations is set to take place in Hyderabad, India in July this year. Hoping to address the lack of representation of views included in the process and reflect on some
of the concerns raised, EFF will facilitate engagement between negotiators and affected stakeholders at a public meeting in Hyderabad. In the meantime, we maintain our call for ASEAN
and the RCEP member states, many of which have complained about their lack of representation in US led trade agreements, to improve on the broken process that resulted in the failure
of the TPP, and create avenues for meaningful consultation and participation from stakeholders.
EFF expresses its appreciation to Sze Ming Tan of Sinar Project, who presented our materials at the Manila event and provided logistical
support for the event.
Why the Patching Problem Makes us WannaCry
(Tue, 16 May 2017)
Over the weekend a cyber attack known as "WannaCry"
infected hundreds of computers all over the world with ransomware (malware which encrypts your data until you pay a ransom, usually in Bitcoin). The attack takes advantage of an
exploit for Windows known as "EternalBlue" which was in the possession of the NSA and, in mid-April, was made public by a group known as "The Shadow
Brokers." Microsoft issued a patch for the vulnerability on March 14 for all supported versions of Windows (Vista and later). Unfortunately at the time the attack started
many systems were still unpatched and legacy Windows systems such as Windows XP and Windows Server 2003 were left without a patch for the vulnerability. Since the attack began
Microsoft has issued a patch for Windows XP and Windows Server 2003 as well.
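A first line of defense for administrators is simply checking whether the relevant update is installed. The sketch below queries installed Windows hotfixes via `wmic qfe` and looks for MS17-010-related KB IDs; the KB numbers listed are illustrative examples rather than an exhaustive, version-complete list, and `wmic` is deprecated on recent Windows (PowerShell's `Get-HotFix` is the modern equivalent).

```python
import subprocess

# KB numbers associated with the MS17-010 fix vary by Windows version;
# these are illustrative examples, not an exhaustive list.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012598"}

def installed_hotfixes(raw_output):
    """Parse the output of `wmic qfe get HotFixID` into a set of KB IDs."""
    return {line.strip() for line in raw_output.splitlines()
            if line.strip().startswith("KB")}

def is_patched(hotfixes, required=MS17_010_KBS):
    """True if any of the relevant KBs is present."""
    return bool(hotfixes & required)

if __name__ == "__main__":
    try:
        # Windows-only: query installed updates.
        out = subprocess.run(["wmic", "qfe", "get", "HotFixID"],
                             capture_output=True, text=True).stdout
        print("MS17-010 patch found:", is_patched(installed_hotfixes(out)))
    except FileNotFoundError:
        print("wmic not available (non-Windows host)")
```

Of course, as the rest of this article argues, the hard problem is not detecting a missing patch on one machine but rolling patches out across large fleets without breaking them.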
Certainly, some of the blame falls on the NSA, which developed EternalBlue and then lost control of it. But these attacks are a complex failure for which there is plenty of blame to
go around. The WannaCry ransomware attacks demonstrate that patching large, legacy systems is hard. For many kinds of systems, the existence of patches for a vulnerability is no
guarantee that they will make their way to the affected devices in a timely manner. For example, many Internet of Things devices are unpatchable, a fact that was exploited by the Mirai Botnet. Additionally, the majority of Android devices are no longer supported by Google or
the device manufacturers, leaving them open to exploitation by a "toxic hellstew" of known vulnerabilities.
Even for systems that can be patched, applying patches to large enterprise or government
systems in a timely manner is notoriously difficult. Enterprise and government systems can rarely afford the potential downtime that goes along with a software patch or upgrade.
As one researcher put it, "enterprises often face a stark choice with security patches: take the risk of being knocked off the air by hackers, or take the risk of knocking yourself off the air."
This attack raises two extremely important areas of research: writing software that is less prone to the most common security vulnerabilities (such as by using memory safe languages,
formal verification techniques, etc.), and solving the patching problem.
Reportedly about 90 percent of all spending on cyber programs is dedicated to offensive efforts,
leaving a mere 10 percent for defense. During his candidacy, President Trump expressed tremendous concern about national cybersecurity weaknesses, stating "the scope of our
cybersecurity problem is enormous. Our government, our businesses, our trade secrets and our citizens’ most sensitive information are all facing constant cyberattacks…."
If the Trump administration is serious about improving cybersecurity, it should place a greater emphasis on funding defensive security research. Research into defensive methods
and better strategies for patching systems is less sexy than over-hyped zero-day vulnerabilities or imaginary "cyber-missiles," but it is the surest path to a more secure internet for everyone.
Secret New European Copyright Proposal Spells Disaster for Free Culture
(Mon, 15 May 2017)
EFF has learned about a new proposal for European law that takes aim at online streaming services, but which will strike a serious blow to creators and their fans. The proposal, which
would effectively ban online streaming services from hosting works under free licenses, could spell an end to services like the Luxembourg-based Jamendo that offers access to free music online, and raise new barriers to offering freely-licensed works on other streaming platforms.
This is all part of Europe's proposed new Digital Single Market Directive,
which is presently doing the rounds of the three European institutions (the European Commission, European Parliament, and Council of the European Union) that will have to reach
agreement on its final text. As part of this process, proposals for amendment to the Commission's original draft are coming up from several of the committees of the European
Parliament. We've previously sounded the alarm about other aspects of this Directive, including its misguided link tax and plans for an upload filtering mandate, both of which are the subject of ongoing compromise negotiations.
But this latest amendment proposal, coming out of left field, would be added to another section of the Directive, which proposes to ensure fair remuneration to authors for the use of
their works, an objective that EFF otherwise supports. The Parliamentary committee leading the negotiations is the Legal Affairs (JURI) committee, but other committees are preparing
opinions on the draft and can also propose their own amendments to it. This proposal has come from the Committee on Culture and Education (CULT). Although the text of the proposal is
not available online, as it is under discussion by the Rapporteur and Shadow Rapporteurs of the CULT behind closed doors, EFF has obtained a copy, which says:
Member States shall ensure that, when authors and performers transfer or assign the right of making available to the public of their works or other subject-matter for online
on-demand services, they retain the right to obtain fair remuneration derived from the direct exploitation of their works present in the catalogue of those services.
The right of an author or performer to obtain fair remuneration for the making available of his/her work as described in paragraph 1 cannot be waived.
In short, this creates what amounts to a tax on copyright works made available on online streaming services, payable to the collecting societies that administer copyright on behalf of
authors and performers (though the tax itself is separate from the copyright holder's economic rights). The tax cannot be waived by the authors or performers themselves, which means
that even if they want to make their works available for streaming online for free, the law would tie their hands and prohibit this. The streaming site would still be required to set
aside money for "fair remuneration" of the authors and performers, whether they want this or not.
The proposal seems to be modeled on a similar amendment that was introduced in Chile last year, and which unfortunately passed soon
after we wrote about it, without any substantive debate. It's not unusual for measures such as this to pop up in Europe or America after a smaller country adopts them. The recording
industry's IP maximalist agenda is a global one, and it often makes sense for them to establish a precedent somewhere else in the world where resistance to their proposals may be
weaker, before pushing it out to larger economies.
This amendment would eliminate one of the few advantages that small and independent artists enjoy in promoting their work online—the ability to make it available for free. For some
such artists, the free online availability of their work builds up a fan base to support future licensing deals, concert tours, and merchandise sales. Others may release some or all
of their work for free for non-economic reasons, such as to communicate a message, or simply for the love of their art. Certainly, not all artists do this. But the law as it
exists at present at least offers them a choice. Either they can license their work to streaming platforms for money, or they can make it available to such platforms for free. But if
this amendment passes, that choice will be taken away from them.
The losers from this proposal are fourfold. Perhaps the biggest losers are the creators themselves, who will face new barriers between their art and their fans and collaborators. The
streaming services will also lose out, as they will face higher expenses and will no longer be able to operate non-commercially even if they only carry freely licensed content. Fans,
of course, will suffer because of the reduced legal availability of free music and video online. And even the copyright industry will suffer, as the increased costs of legal streaming
services may cause creators and fans to shift back to peer to peer file sharing, where copyright infringing works are also exchanged.
Since this proposal enjoys the support of a majority of the European political groups in the CULT, if nothing changes then it is very likely to pass that committee at least. The next
meeting of the Shadow Rapporteurs is on Tuesday, May 16, so we have no time to waste in sounding the alarm about how misguided and destructive this amendment is. A list of the CULT
members who are considering the proposal can be found here, complete with email and social media contact details.
EFF's European supporters are urged to contact their representatives with a simple message: to oppose any amendment to the Digital Single Market Directive that would create a new
unwaivable right to fair remuneration on online streaming platforms. The future of free culture in Europe depends upon it.
California Authorities Are Failing to Track and Prevent Abuse of Police Databases
(Mon, 15 May 2017)
Police in California have your data literally at their fingertips.
They can sit at a computer terminal or in their squad car and check your DMV records, your criminal records, your parking citations, any restraining orders you’ve filed or have been
filed against you. They can search other state databases and even tap into the FBI’s trove. If you’ve got a snowmobile, they can look up that registration too. Much of this personal
data they can access through a smartphone app.
Is there a name for this information network? Yes, it’s really boring: the
California Law Enforcement Telecommunications System (CLETS). Most people pronounce it “Clets.”
Do police abuse their access to CLETS? You betcha. For example, they’ve used it to stalk their ex-partners,
gain advantage in
custody proceedings, and screen potential online
dates. In one of the worst incidents, an officer allegedly attempted to leak records on
witnesses to family of a convicted murderer. According to the latest data, 2016 was a record-breaking year: California hit a statewide, all-time high for police discipline
involving CLETS; meanwhile the Oakland Police Department broke an all-time record for individual law enforcement agencies.
Is anybody doing anything about CLETS misuse? Yes and no. Certainly EFF has been making noise about privacy violations involving CLETS. The government, not so much.
For years, we’ve pushed for better data to track when California cops misuse CLETS data. We have filed request after request for misuse data under the California Public Records Act.
We’ve sent letters, met with staff, assisted journalists, and spoken up during public meetings to demand state officials overseeing these databases take some sort of action. This is
the third report we've published on misuse data.
Yet state officials have made zero progress in addressing widespread database misconduct. No hearings on misuse have been held, no disciplinary actions have been taken, and the horror
stories continue to mount.
Who are these state officials? Get ready for another boring acronym: the CLETS Advisory Committee (CAC). Yes, CAC is an acronym containing an acronym. Most people pronounce it "Cack."
CAC was created by the California legislature decades ago to oversee CLETS as part policy body, part disciplinary board. It comes under the California Department of Justice and
works hand in hand with CADOJ’s Criminal Justice Information Services department. CAC has 11
members, with more than half being appointed by special interest groups that lobby for law enforcement and municipalities. That means CAC is controlled by groups that are
predisposed to support—not punish—their members. As a result, the body has gone out of its way to pass policies that police ask for, while simultaneously
taking a largely hands-off approach to discipline.
It used to be that CADOJ and CAC investigated violations, but several years back they handed off that responsibility to the individual agencies that subscribe to CLETS. Nowadays
each of those agencies is required to file disclosures about each investigation they conduct, including an annual summary for CAC to review. Then CAC decides whether further
administrative action is necessary.
Or at least that’s how it’s supposed to work. CAC has not even looked at the misuse data in years, and consequently, they’ve taken no action whatsoever against anyone or any
department—not even a “don’t do it again” warning letter.
What’s even worse is that they’ve been remarkably lax about whether agencies need to file anything about CLETS violations at all. This year some of the state’s largest law enforcement
agencies failed to file the mandated paperwork. Meanwhile, agencies that do report often list investigations as "pending" but never follow up with the eventual outcome as required.
So, when EFF obtained the latest round of misuse data, we knew it would be bad. But we also knew it would be incomplete—the tip of a very large, blue iceberg.
Download the 2016 CLETS misuse data. Previous data available: 2011-2014 (zip) and 2015
Screen grabs from official CLETS training videos. Source: Lemoore Police Department
What the Misuse Data Told Us
Police agencies disclosed that a total of 159 misuse investigations were launched in 2016. Of those, 117 investigations found that police had in fact abused CLETS. Another 39 cases
were listed as pending conclusion. That leaves just three cases in which the investigation cleared the officer.
Let’s focus on those 117 cases of confirmed misuse. They represent a 14.5% increase over misuse in 2015, and a 50% increase over 2011.
In 27 cases, the misuse was so severe that the offending police officer either resigned or was terminated. Three cases resulted in a misdemeanor conviction, and three cases resulted
in a felony conviction.
In 24 cases, no action was taken to discipline the offending officer at all. In 28 cases, the result was “counseling.” Another 21 mystery cases were listed as “other” action having
been taken, leaving the public in the dark.
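For readers who want to double-check these figures, here is a minimal sketch in Python. Note that the 2015 and 2011 baselines are inferred by working backwards from the reported percentage increases; they are not stated in the disclosures themselves.

```python
# Sanity-check the 2016 CLETS misuse figures reported above.
total_investigations = 159   # misuse investigations launched in 2016
confirmed_misuse = 117       # investigations that confirmed abuse
pending = 39                 # investigations still listed as pending

# Whatever remains is the number of investigations that cleared the officer.
cleared = total_investigations - confirmed_misuse - pending
print(cleared)  # 3

# The article reports a 14.5% increase over 2015 and a 50% increase over
# 2011; working backwards gives the implied baselines (an inference, since
# the agency disclosures are incomplete).
implied_2015 = round(confirmed_misuse / 1.145)  # roughly 102 confirmed cases
implied_2011 = round(confirmed_misuse / 1.50)   # 78 confirmed cases
print(implied_2015, implied_2011)  # 102 78
```

The small number of cleared cases is striking on its own: when an agency investigates one of its own officers for database misuse, the investigation almost always confirms it.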
When we opened the data file, two agencies immediately jumped out as repeat offenders.
First, there was the Oakland Police Department, which, for the first time since we've been collecting data, actually turned in its disclosures. That's the good news.
The bad news is that they reported 17 cases of CLETS misuse—the highest number for any agency in at least seven years. These are likely related to the ongoing, expansive scandal in which at least one OPD officer is accused of
providing CLETS records to a teenage sex worker whom he—and many other officers—allegedly sexually exploited.
The head of OPD’s internal affairs department filed the hard copy of the disclosure with CADOJ. However, when we called OPD’s public affairs division, a spokesperson challenged the
numbers, saying that only one misuse case was found in 2016, while the remaining 16 are still pending. That's still bad, and possibly even worse if it turns out OPD provided wildly
inaccurate data to CADOJ.
The Yuba County Probation Department—a very small agency in central-northern California—also drew our attention. In 2015, they broke the record with 15 violations of CLETS policy, all
of which resulted in only “counseling” for the officers who broke the rules. CADOJ ignored our request for a public hearing on this. Facing no action to deter further violations, Yuba
reported another six cases of misuse in 2016—again with counseling as the only outcome.
What was missing from the data also jumped out at us. The Los Angeles Police Department for the seventh year in a row filed no misuse disclosures with the state. Typically, the
San Diego County Sheriff’s Office conducts more investigations into CLETS misuse than any other agency. This time, they did not file anything at all.
Oversight on Hold
Will this be the year CADOJ and the CLETS Advisory Committee finally step up to protect our privacy? Probably not. In December, CADOJ failed to produce historical misuse
statistics as requested by CAC, so the committee agreed to postpone discussion until its next meeting. However, since CAC reduced its meetings to the statutory minimum of two per
year, it won’t meet again until this summer. The year will be half over and, if the trend continues, many, many more people will have had their privacy invaded by misbehaving
One thing you can count on: EFF will continue to pressure these state officials, and if we can’t get them to do their jobs, then it’s time for the legislature to find someone else who
A Note on CLETS and the California Values Act
EFF has fielded a lot of questions recently about CLETS as legislators consider S.B. 54, the California Values Act. The bill, among other measures to protect immigrants, would limit the
federal government’s access to California’s law enforcement databases for the purposes of immigration enforcement. CLETS would clearly fall into that category.
During the bill-making process, S.B. 54 was amended to allow immigration officials to access criminal history information via CLETS. EFF is very concerned that this CLETS provision
would create a backdoor to the very data the bill was designed to protect. While S.B. 54 may still protect some Californians’ data accessible through CLETS from immigration officials,
implementation of those protections would require an oversight body with the motivation to enforce the law.
That oversight body would be—you guessed it—the same CLETS Advisory Committee that refuses to take any action on database abuse by police officers. In fact, several of the
organizations that have seats on CAC—including the California Peace Officers’ Association and the California State Sheriffs’ Association—are actively lobbying against S.B. 54.
EFF supports S.B. 54 and believes it will do much to protect the data of California
residents. However, we hope that as lawmakers build a firewall against data misuse by the feds, they take a close look at the officials who would be watching the CLETS gateway.
European Publishing Lobby Forces Compromise on Marrakesh Treaty
(Fri, 12 May 2017)
The Marrakesh Treaty to Facilitate Access to Published Works
for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled was one of the most fiercely contested treaty negotiations at the World Intellectual Property Organization
(WIPO). Representatives of publishers and other copyright holder groups spent years unashamedly lobbying against an instrument that would
provide access to the written word to blind and other print disabled users. Despite their efforts to derail the negotiations, the treaty was finally agreed in 2013, and came into force last year.
But that wasn't the end of it. An important step towards the realization of the treaty's benefits is the implementation of the treaty by the countries where the books for adaptation
into accessible formats are published. It happens that a large proportion of those books, especially those in French (which is spoken in many parts of Africa) and in Spanish (spoken
throughout Latin America), originate from Europe. Therefore many blind and print disabled users have eagerly awaited Europe's implementation of the Marrakesh Treaty to unlock its many benefits.
Publishers as well have been keenly aware of the importance of Europe's implementation of the treaty. They have been lobbying European lawmakers to implement it in the narrowest way
that the treaty allows. This week, a breakthrough was reached when lawmakers from the three European institutions (the European Parliament, the Council of the European Union, and the
European Commission) reached a compromise on the text of the Directive that will implement the treaty.
The main sticking points were whether the Directive would require those who adapt works into accessible formats to pay compensation to the publishers of the original works, whether
there should be a ban on creating accessible copies of works when copies are also commercially available, and whether only "authorized entities" would be permitted to create
accessible-format works. On most of these issues the interests of blind and print disabled users have prevailed, with one exception: Individual European countries may require that
publishers be paid compensation when adaptations of works are made by authorized entities such as charities and libraries in that country. Recital 11 of the text of the compromise
Directive summarizes the effect of this:
Member States should only be allowed to provide for compensation schemes regarding the permitted uses of works and other protected subject-matter by authorised entities. In order
to avoid burdens for beneficiary persons, prevent barriers to the cross-border dissemination of accessible format copies and excessive requirements on authorised entities, it is
important that the possibility for Member States to provide for such compensation schemes is limited.
Compensation schemes should therefore not require payments by beneficiary persons. They should only apply to uses by authorised entities established in the territory of the Member
State providing for such a scheme and they should not require payments by authorised entities established in other Member States or third countries that are parties to the
Marrakesh Treaty. ... Account should also be taken of the particular circumstances of each case, resulting from the making of a particular accessible format copy. Where the harm
to a rightholder would be minimal, no obligation for payment of compensation may arise.
It would have been better if the Directive had simply ruled out the need for payment of compensation for the adaptation of works for blind and print disabled users. In almost all
cases, adapting copyright works for the blind is undertaken from a motive of compassion, not profit. Indeed, if there were profit in it, blind users would not be suffering the "book
famine" that results in them having access to only 1% of published books in accessible formats in poor countries, and only 7% in rich countries.
Nevertheless, the implementing Directive will not impose payment conditions on foreign entities or those from other EU member states, which will likely mean that most of the
adaptation of works for blind and print disabled users will be conducted in countries that do not impose a requirement of compensation. Even works that are meant for users within such
a country will likely be imported from overseas. The right to import adapted works from other countries is a key feature of the Marrakesh Treaty, and a feature that the European
Directive will preserve.
Overall then, despite being somewhat tarnished by the self-interested demands of publishers, the overdue implementation of the Marrakesh Treaty in Europe is to be welcomed. Its
success affirms the consensus of WIPO member states that international law on copyright shouldn't be in the service of copyright holders alone, but needs to reflect a balance of
interests of creators and users, including disadvantaged users such as those who are blind, vision impaired, and print disabled.
Oakland City Council Committee Advances Measure to Require Transparency and Public Process for Surveillance Tech
(Fri, 12 May 2017)
On May 9, the Public Safety Committee of the Oakland City Council voted unanimously to approve a
proposed “Surveillance and Community Safety Ordinance.” The measure, passed on to the
Council by the city’s Privacy Advisory Commission, is modeled on a law
enacted in spring 2016 by Santa Clara County and could set a new standard for municipal reforms seeking transparency, oversight, and accountability to restrain otherwise unchecked surveillance.
Once approved by the full Council, the ordinance will require the Oakland Police Department to seek City Council approval before adopting or deploying new surveillance technologies.
The measure will also provide community members with an opportunity to comment on such proposals, and the use policies for these technologies, before the City Council makes its decision.
Importantly, these requirements will apply to any surveillance platform, even ones that have yet to be developed and might not emerge for several years. The measure’s
device-neutral requirements for transparency and public process will ensure local democratic control over the adoption and use of powerful spying technologies into the future.
Supporters of the measure packed the council room on Monday, and spanned a number of organizations across the community representing a variety of constituencies and perspectives.
Brian Hofer, chair of the city’s privacy advisory commission and a member of Oakland Privacy (which participates in the Electronic Frontier Alliance) said:
Unfettered surveillance doesn't just waste public money and abuse our civil liberties. It endangers lives. Trump has access to tools that would make the Stasi and KGB envious. We must institutionalize limits to surveillance, prohibit secret uses, require
maximum oversight and transparency, and impose penalties for misconduct.
Catherine Crump, co-director of UC Berkeley's Center for Law and Technology has similarly emphasized that the problem inheres
in secrecy, and that public process can help prevent potential violations of rights and liberties.
Several advocates addressed the discriminatory impact of surveillance. For instance, Tracy Rosenberg of the Media Alliance noted, “Without lifting the veil of secrecy surrounding use of surveillance technologies upon vulnerable groups, we
cannot have truly safe communities. This ordinance is all about genuine public safety – for all of us who live, work in, and visit Oakland.” Christina Sinha, who co-leads
the National Security and Civil Rights Program of Asian Americans Advancing Justice, also suggested that the ordinance could
help support the rights of marginalized communities.
EFF Grassroots Advocacy Coordinator Camille Ochoa reminded Council members, “Effective policing can only be built upon trust. Trust is fostered when we build processes that are
transparent and responsive to the will of the people. This ordinance is a step in the right direction.”
Having gained the committee’s approval, the ordinance will next go to the full Council to consider before a vote later this year on a date to be determined.
California Assembly Considers Bill to Protect Data from ICE
(Thu, 11 May 2017)
Local and state governments regularly collect personal information about us and store it in databases–often without our knowledge and consent. Even when government has a seemingly
benevolent purpose for doing so, government all too often reuses that data in a manner that hurts us.
Given Pres. Donald Trump’s promise to deport millions of
immigrants, and a surge in immigration
enforcement against people not engaged in criminal activity, we fear
California databases will be misused to target immigrant communities. Many state and local government agencies have databases that the federal government might try to use to
identify, locate, and deport immigrants.
That’s why EFF supports S.B. 54, the California Values Act. This bill would bar California law enforcement agencies (including
state, local, and school police) from sharing their databases for purposes of immigration enforcement.
In April, the California Senate approved S.B. 54 and sent it on to the California
Assembly. EFF has renewed its support for S.B. 54. We hope you will, too.
Support S.B. 54
In Providence, Policymakers Delay Visionary Local Civil Rights and Civil Liberties Reforms
(Wed, 10 May 2017)
Recent events in Providence, RI demonstrate both how a sustained grassroots campaign can create opportunities for civil rights and civil liberties, and also how quickly those opportunities
can be derailed by institutional actors. While the latest City Council decision delayed reform efforts and frustrated community members, policymakers will return in a few weeks
to the crucial questions they deferred.
After three years of advocacy uniting communities across Providence, the City Council on April 20 voted unanimously to adopt a set of groundbreaking protections for civil rights and civil
liberties, including digital civil liberties. The proposed Community Safety Act (CSA) would, among other things, require police to justify any instance of targeted electronic surveillance, protect the
rights of residents to observe and record police
activities, and ensure due process protections for individuals otherwise arbitrarily included in gang databases.
The CSA has been championed by a broad local coalition, including Rhode Island Rights, a member of the Electronic Frontier Alliance.
Within a week of its April 20 vote, however, increasingly strident objections by the local police union drove the Council to reverse itself on April 27, deciding by a vote of 9-5 to table the ordinance until
June 1. The Council’s April 27 vote effectively placed on hold a wide-ranging reform measure it had unanimously supported only a week before, deferring to forthcoming recommendations
by a working group created by the Council to suggest potential amendments.
A letter from the Providence Fraternal Order of Police to the Council the day before the April 27
vote reveals the chasm separating the perspectives of the police union from those of residents responding to the needs of their communities. It reflects an attitude of entitlement among public safety officials who seem
to view civil rights and civil liberties as impediments to their work, rather than the defining cornerstones of the society they pledge themselves to serve and protect.
In the letter, the executive board of the Providence Fraternal Order of Police express incredulity at the prospect that community members would feel the need to be kept safe
from police, overlooking years of continuing controversy inflamed by recurring incidents of arbitrary and unaccountable police violence across the country.
As a matter of unfortunate fact, law-abiding Americans do increasingly feel the need to be kept safe from police. That’s why tens of thousands have taken to the streets
responding to incidents of police violence. That’s also why local legislators around the country are taking action to ensure transparency and enable civilian oversight of police, impose limits on the use of surveillance devices, and refine procedures for seemingly "routine" searches to buttress fundamental constitutional protections that have been
widely eroded in practice.
Negotiations with the working group will continue over the course of this month, until the Council revisits the CSA and the working group's recommendations on June 1. Providence
community members will discover then whether their elected leaders answer to them, or instead to groups representing the police. For his part, Mayor Jorge Elorza has reiterated his intent to sign the proposed Community Safety Act into law should the
Council ultimately stand by its prior decision.
California Senate Committee Votes Against Privacy for Our Travel Patterns
(Wed, 10 May 2017)
The Electronic Frontier Foundation and the ACLU of California joined forces with California State Sen. Joel Anderson (R-Alpine) on Tuesday to testify in favor of S.B. 712, a bill that would have allowed drivers to cover their license plates when parked in
order to protect their travel patterns from private companies operating automated license plate readers (ALPRs).
The Senate Transportation and Housing Committee heard testimony on how private ALPR companies are collecting massive amounts of data on innocent people's driving patterns and selling
it for profit. Despite learning how this data may be misused to target vulnerable communities by the federal government, a Democratic majority voted to kill the bill 6-5.
The bill would have adjusted current law, which allows drivers to cover their entire vehicles (for example with a tarp), so that a driver can cover just a portion: the plate. Police
would still have the ability to lift the cover to inspect the plate, and since the measure only applied to parked vehicles, it would not have affected law enforcement's ability to
collect data on moving vehicles.
Here's the text of EFF's opening testimony from the hearing:
Mr. Chair and Members.
My name is Dave Maass, and I represent the Electronic Frontier Foundation, a sponsor of S.B. 712. EFF is a
non-profit organization that defends civil liberties as the world becomes a more digital place.
I am a researcher who investigates police technology. My previous work has resulted in agencies fixing insecure surveillance cameras, a federal fraud investigation into
child-safety software, and increased disclosure of misuse of police databases.
Since November, not a week has gone by when I haven’t been asked the same questions: How do we protect our communities from being targeted? More chillingly, they ask: Do we need
to start building a new Underground Railroad?
I immediately think about the massive amount of data being collected by automated license plate readers operated by private companies: billions and billions of data points mapping
out our travel patterns. These companies rent this data to law enforcement but they also sell it to the private sector. Lenders examine travel patterns before approving a loan.
Insurers look at travel patterns before quoting a rate. Collections agencies use it to hunt down debtors.
A user could easily key in the address of a mosque, an immigration law clinic, or an LGBT health center to reveal whole networks of vulnerable communities. A user could program the
system to identify associates and get real time alerts about a driver’s whereabouts.
The California Constitution is supposed to protect us from these invasions of our privacy.
In 1972, voters agreed that we have an inalienable right to pursue and obtain privacy. Your predecessors in the legislature explicitly stated this amendment would protect us from
computerized mass surveillance by police and private companies.
SB 712 allows Californians to cover our plates when our vehicles are lawfully parked. This is a balanced approach that would not affect how police use ALPR technology to monitor moving vehicles.
Today you are voting on whether we can exercise our constitutional right to privacy against advanced surveillance systems logging our travel patterns. Thank you for this
opportunity. I respectfully ask for your aye vote.
These senators voted in favor of the legislation: Sens. Anthony Cannella (R-Ceres), Ted Gaines (R-El Dorado), Mike Morrell (R-Rancho Cucamonga), Nancy Skinner (D-Berkeley), and Scott
Wiener (D-San Francisco). EFF thanks these lawmakers for their support for motorists’ location privacy.
Voting in opposition were: Sens. Ben Allen (D-Santa Monica), Toni Atkins (D-San Diego), Jim Beall (D-San Jose), Mike McGuire (D-Healdsburg), Richard Roth (D-Riverside), and Bob
Wieckowski (D-Fremont). Several cited vague public safety and parking enforcement concerns.
Some of these senators acknowledged the threat to our privacy caused by ALPR companies and suggested that different, perhaps more robust legislation was necessary. EFF looks forward
to taking these senators at their word and pursuing further privacy protections next session.
The full hearing, including EFF's rebuttal to law enforcement arguments, is available on YouTube.
EFF Statement on the Troubling Firing of FBI Director Comey
(Wed, 10 May 2017)
The FBI is the country’s top law enforcement agency and serves the public, not the president. As defenders of the rule of law, we have deep concerns about President Trump's firing of
FBI Director James Comey. We disagreed with the director on many issues, including his consistent push for backdoors into our electronic communications and devices and a general
weakening of encryption, which is crucial to protecting Americans' privacy and security. But we are deeply troubled about Director Comey’s termination and what it says about the
independence of the office and its ability to conduct fair investigations, including into threats to our digital security and the integrity of our elections. The next FBI director
must be a strong, independent voice for the Constitution and the public interest.
The Fight Against General Warrants to Hack Rages On
(Tue, 09 May 2017)
The federal government thinks it should be able to use one warrant to hack into an untold number of computers located anywhere in the world. But EFF and others continue to make the
case that the Fourth Amendment prohibits this type of blanket warrant. And courts are starting to listen.
Last week, EFF pressed its case against these broad and unconstitutional warrants in arguments before a federal
court of appeals in Boston, Massachusetts. As we spelled out in a brief filed earlier this year, these warrants fail to
satisfy the Fourth Amendment’s basic safeguards.
The case, U.S. v. Levin, is one of hundreds of prosecutions resulting from the FBI’s 2015 seizure and operation of a child pornography site “Playpen.” While running the site,
the FBI used malware—or a “Network Investigative Technique” (NIT), as they euphemistically call it—to infect computers used to visit the site and then identify those visitors. Based
on a single warrant, the FBI ended up hacking into nearly 9,000 computers located in at least 26 different states and over 100 countries around the world.
But that’s unconstitutional. One warrant cannot allow law enforcement to hack into thousands of computers wherever they are in the world. As law enforcement defended these blanket
hacking warrants and pushed for federal rule changes to allow them—and as Congress stood by
and idly let this rule change go into effect—we’ve been fighting in court to make sure that the Fourth Amendment’s protections don’t disappear as law enforcement begins to rely on
hacking more and more.
And there are signs that courts are beginning to recognize the threats to privacy these warrants pose. Earlier this year, a federal magistrate judge in Minnesota found [PDF] that the warrant the FBI relied on in the Playpen case—the same warrant we were arguing
against in Levin—violated the Fourth Amendment.
In the February report, Magistrate Judge Franklin Noel described how the government’s NIT fails the Fourth Amendment’s requirement that warrants describe a particular place to be
searched, agreeing with arguments we’ve made to courts in other Playpen prosecutions. The warrant in this case fails to satisfy that requirement because, at the time the warrant was
issued, “it is not possible to identify, with any specificity, which computers, out of all of the computers on earth, might be searched pursuant to this warrant,” Noel wrote.
He also explained how the warrant essentially flips the Fourth Amendment’s particularity requirement on its head, searching and then identifying specific computers instead of
identifying specific computers and then searching them. “Only with [information gathered through the use of malware] could the Government begin to describe with any particularity the
computers to be searched; however, at that point, the computer had already been searched.”
It’s encouraging that courts are beginning to agree with arguments from us and others that these warrants far exceed the Fourth Amendment’s limits on government searches.
As the Playpen prosecutions begin to work their way up to the courts of appeals, the stakes become higher. The decisions these courts reach will likely shape the contours of our
constitutional protections for years to come. We’ve filed briefs in every appeal so far, and we’ll continue to make the case that unfamiliar technology and unsavory crimes can’t
justify dispensing with the Fourth Amendment’s requirements altogether.
The FCC Pretends to Support Net Neutrality and Privacy While Moving to Gut Both
(Tue, 09 May 2017)
FCC Chairman Ajit Pai has proposed a plan to eliminate net neutrality and privacy for broadband subscribers. Of
course, those protections are tremendously popular, so Chairman Pai and his allies have been forced to pay lip service to preserving them in “some form.” How do we
know it’s just lip service? Because the plan Pai is pushing will destroy the legal foundation for net neutrality. That’s right: if Pai succeeds, the FCC won’t have the legal authority
to preserve net neutrality in just about any form. And if he’s read the case law, he knows it.
Let’s break it down.
The FCC’s Proposal Makes It Impossible to Enforce Core Net Neutrality Requirements
Under the Telecommunications Act of 1996, a service can be either a “telecommunications service,” like telephone service, that lets the subscriber choose the content they
receive and send without interference from the service provider, or it can be an “information service,” like cable television or the old Prodigy service, that curates and selects what
content channels will be available to subscribers. The 1996 law provided that “telecommunications services” are governed by “Title II” of the Communications Act of 1934, which
includes nondiscrimination requirements. “Information services” are not subject to Title II’s requirements.
Under current law, the FCC can put either label on broadband Internet service – but that choice has consequences. For years, the FCC incorrectly classified broadband access as
an “information service,” and when it tried to impose even a weak version of net neutrality protections, the courts struck them down. Essentially, the D.C. Circuit court explained [PDF] that it would be inconsistent for the FCC to
exempt broadband from Title II’s nondiscrimination requirements by classifying it as an information service, but then impose those requirements anyway.
The legal mandate was clear: if it wanted meaningful open Internet rules to pass judicial scrutiny, the FCC had to reclassify broadband service under Title II. It was also clear
to neutral observers that reclassification just made sense. Broadband looks a lot more like a “telecommunications service” than an “information service.” It entails delivering
information of the subscriber’s choosing, not information curated or altered by the provider.
It took an Internet uprising to persuade the FCC that
reclassification made practical and legal sense. But in the end we succeeded: in 2015, at the end of a lengthy rulemaking process, the FCC reclassified broadband as a Title II
telecommunications service and issued net neutrality rules on that basis. Resting at last on a proper legal foundation, those rules finally passed judicial scrutiny [PDF].
But now, FCC Chairman Ajit Pai has proposed to reverse that decision and put broadband back under the regime for “information services” – the same regime that we already know
won’t support real net neutrality rules. Abandoning Title II means the end of meaningful, enforceable net neutrality protections, paving the way for companies like Comcast or Time
Warner Cable to slice up your Internet experience into favored, disfavored, and “premium” content.
Title II Is Not Overly Burdensome, Thanks to Forbearance
While we are on the subject of the legal basis for net neutrality, let’s talk about the rest of Title II. Net neutrality opponents complain that Title II involves a host of
regulations that don’t make sense for the Internet. This is a red herring. The FCC has used a process called “forbearance” – binding limits on its power to use parts of Title II – to
ensure that Title II is applied narrowly and as needed to address harms to net neutrality and privacy. So when critics of the FCC’s decision to reclassify tell horror stories about
the potential excesses of Title II, keep in mind that those stories are typically based on powers that the FCC has expressly disavowed, like the ability to set prices for broadband service.
What is more, Title II offers more regulatory limits than the alternative of treating broadband as an information service, at least when it comes to net neutrality. Where Title
II grants specific, clear, and bounded powers that can protect net neutrality, theories that do not rely on Title II have to infer powers that aren’t clearly granted to the FCC. As proponents of limited regulation, we find these theories concerning. The
proper way to protect neutrality is not to expand FCC discretion by stretching the general provisions of the Telecommunications Act (an approach already rejected in court), but to use
a limited subset of the clear authorities laid out in Title II.
The FTC Cannot Adequately Protect the Privacy of Internet Subscribers
Reclassifying broadband as an information service not subject to Title II also creates yet another mess for subscriber privacy. The FCC crafted good rules for Internet privacy,
but Congress just rejected them. Congress did, however, leave in place the FCC’s underlying authority to protect privacy under Title II, leaving privacy in limbo. Abandoning Title II for
broadband altogether would mean that the FCC no longer has much of a role to play in protecting broadband privacy – and it’s not clear who will fill the gap.
Some have looked to the FTC to take up the mantle, but just last year AT&T persuaded a federal appeals court that, as a company that also
owned a telephone business, the FTC had no power over any aspect of AT&T. That precedent covers the entire west coast and leaves millions of Americans without recourse for
privacy violations by their Internet service provider. And there’s no doubt that AT&T and others will try to extend that precedent across the country.
Even without this precedent, the FTC’s enforcement authority here targets deceptive trade practices. The agency will only take action if a company promises one thing and then does another; if your broadband provider never promises to protect your privacy in the first place, the FTC can’t help you.
Tell the FCC to Keep Title II and Not Undermine Net Neutrality
The FCC is now accepting comments on its plan. Make yourself heard via DearFCC.org.
>> read more
Danger Ahead: The Government’s Plan for Vehicle-to-Vehicle Communication Threatens Privacy, Security, and Common Sense
(Mon, 08 May 2017)
Imagine if your car could send messages about its speed and movements to other cars on the road around it. That’s the dream of the National Highway Traffic Safety Administration
(NHTSA), which thinks of Vehicle-to-Vehicle (V2V) communication technology as the leading solution for reducing accident rates in the United States. But there’s a huge problem: it’s
extremely difficult to have cars “talk” to each other in a way that protects the privacy and security of the people inside them, and NHTSA’s proposal doesn’t come close to successfully
addressing those issues. EFF filed public comments with both NHTSA and the FTC explaining why NHTSA needs to go back to the drawing board—and spend
some serious time there—before moving forward with any V2V proposal.
NHTSA’s V2V plan involves installing special devices in cars that will broadcast and receive Basic Safety Messages (BSMs) via short-range wireless communication channels. These messages will include information about a vehicle’s speed, brake status, etc.
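The real on-air format is defined by the SAE J2735 standard; the sketch below, with hypothetical field names and values, just illustrates the kind of data a BSM carries:

```python
from dataclasses import dataclass

# Simplified, hypothetical sketch of the kind of data a Basic Safety
# Message carries; the real format is defined by SAE J2735 and has
# many more fields than shown here.
@dataclass
class BasicSafetyMessage:
    temporary_id: int    # rotating identifier tied to the current certificate
    latitude: float      # GPS position
    longitude: float
    speed_mps: float     # speed in meters per second
    heading_deg: float   # direction of travel
    brake_active: bool   # brake status

msg = BasicSafetyMessage(
    temporary_id=0x5A17,
    latitude=37.8044, longitude=-122.2712,
    speed_mps=13.4, heading_deg=92.0, brake_active=False,
)
```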
But one big problem is that by broadcasting unencrypted data about themselves at all times, cars with these devices will be incredibly easy to track. All you would need is a device
that could intercept these messages. NHTSA is aware of this huge privacy problem and tried to develop a plan to make it harder to link V2V transmissions with particular vehicles,
while still including enough information for the receiver to be able to trust a message’s content. But NHTSA’s plan—which involves giving each car 20 rotating cryptographic certificates per week to be distributed and managed by a complicated public key infrastructure (PKI)—didn’t achieve either objective.
One of the fundamental problems with NHTSA’s plan is that assigning each vehicle a mere 20 identities over the course of an entire week will do the opposite of protecting privacy; it
will give anyone who wishes to track cars a straightforward way to do so. NHTSA proposes that a car’s certificate change every five minutes, rotating through the complete batch of 20
certificates once every 100 minutes. The car would get a new batch of 20 certificates the next week. As we explained in our comments, while a human being might find it confusing or burdensome to remember 20 different
identities for the same vehicle, a computer could easily analyze data collected via a sensor network to identify a vehicle over the course of one day. It would then be able to
identify and track the vehicle for the rest of the week via its known certificates. The sensor network would have to complete this same process every week, for every new batch of
certificates, but given how simple the process would be, this wouldn’t present a true barrier to a person or organization seeking to track vehicles. And because human mobility
patterns are “highly unique,” it would be easy—in the case of a vehicle used in its ordinary way—to recognize and
track a vehicle from week to week, even as the vehicle’s list of 20 assigned certificates changed.
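The weakness of the rotation scheme described above can be sketched in a few lines (certificate IDs and the sensor's linking rule are hypothetical, but the 5-minute/20-certificate schedule is NHTSA's):

```python
# Hypothetical sketch: under NHTSA's proposal a vehicle rotates through
# its batch of 20 weekly certificates, changing every 5 minutes and
# cycling through the whole batch every 100 minutes.
def cert_for_minute(vehicle_certs: list, minute: int) -> str:
    return vehicle_certs[(minute // 5) % len(vehicle_certs)]

# One vehicle's weekly batch (certificate IDs are made up).
certs = [f"cert-{i:02d}" for i in range(20)]

# A single roadside sensor watching one stretch of road sees a different
# certificate every 5 minutes, but a trivial rule -- "certificates
# observed in immediate succession at the same sensor belong to the same
# vehicle" -- links them into one pseudonym set.
observed = [cert_for_minute(certs, m) for m in range(100)]  # 100 minutes
linked = set(observed)

print(len(linked))  # all 20 certificates linked after one 100-minute cycle
```

A real sensor network would need slightly more work to handle moving vehicles, but as the comments argue, nothing about the task is hard for a computer.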
NHTSA seems to presume that no one will make long-term, systematic efforts to track vehicles. But this presumption is incredibly naïve. We have learned the hard way that both the
government and private companies will go to great lengths to track vehicles—just look at the proliferation of Automated License Plate Readers (ALPR). V2V will make these tracking
efforts easier, by making it significantly cheaper to get more reliable information about a vehicle’s whereabouts, more of the time, in more situations, in a clandestine manner,
without requiring a line-of-sight to a vehicle’s license plate.
There are other fundamental problems with NHTSA’s plan. First, NHTSA proposes the creation of a new public key infrastructure (PKI) to solve a problem that PKIs simply cannot—and were
never intended—to solve, demonstrating a serious misunderstanding of the technology. The sole purpose of a distributed PKI system is to determine who or what produced a validly
signed message. A PKI system, for example, cannot establish that the content of the message is “safe” and truthful and therefore the reliable basis for decisions. But NHTSA’s
plan suggests that use of a PKI would enable vehicles to assess whether messages it receives are “safe” in this way. This will create widespread—and potentially quite
dangerous—confusion about the level of confidence that should be placed in the contents of a validly signed message. NHTSA’s failure to understand the inherent limits of PKIs is a serious cause for concern.
Second, NHTSA’s envisioned PKI is much larger and more complicated than anything in existence, yet it fails to account for known—and considerable—technical challenges presented
by smaller systems. For instance, in the WebPKI—used to distribute and manage HTTPS certificates for websites—it has proven
extraordinarily difficult to phase out cryptographic algorithms after they are discovered to be insecure. It took four years, for instance, to phase out (or deprecate) the hash algorithm SHA-1. NHTSA
proposes a more complicated PKI, which would issue orders of magnitude more certificates, without even attempting to address the fundamental functional challenges that we already know exist in far smaller systems.
Third, NHTSA’s plan for dealing with “bad actors” is to revoke their certificate and push out certificate revocation lists (CRLs) to all vehicles participating in the system. But this
just won’t work. Not only will after-the-fact revocation be too late to prevent the first—and potentially catastrophic—attack, but sending out CRLs to every single vehicle
participating in the system would take a tremendous amount of data. CRL distribution in the WebPKI has received widespread criticism for being extremely traffic-intensive and
inefficient, and the CRLs NHTSA envisions would be orders of magnitude larger than the largest CRLs used in the WebPKI—we’re talking gigabytes of data being distributed to each
car in the United States on a regular basis.
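A back-of-envelope sketch, using round and admittedly hypothetical numbers, shows how quickly a nationwide CRL reaches gigabyte scale:

```python
# Back-of-envelope sketch with round, hypothetical numbers: how big does
# a nationwide CRL get if even a small fraction of devices is revoked?
us_vehicles = 260_000_000     # roughly the size of the US registered fleet
revoked_fraction = 0.01       # suppose only 1% of devices are ever revoked
certs_per_vehicle = 20        # the weekly batch in NHTSA's proposal
bytes_per_entry = 32          # one hash or serial number per revoked cert

crl_bytes = us_vehicles * revoked_fraction * certs_per_vehicle * bytes_per_entry
print(f"CRL size: {crl_bytes / 1e9:.1f} GB")  # about 1.7 GB, pushed to every car
```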
What’s more, the plan opens cars up to an entirely new surface of attack while failing to address serious security
concerns, putting people at risk of potentially grave harm.
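Returning to the PKI point above: a valid signature establishes origin, not truth. A minimal sketch, with symmetric HMAC standing in for the proposal's public-key signatures (the key and payload are hypothetical):

```python
import hashlib
import hmac

# Minimal sketch: symmetric HMAC stands in for the proposal's public-key
# signatures (the key and payload below are hypothetical). The point is
# that a valid signature proves which device sent a message, not that
# the message's content is true.
key = b"device-signing-key"

def sign(payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(payload), signature)

# A compromised but still-certified device signs a false claim...
bogus = b"speed=0;brake=on"  # the car is actually doing 60 mph
signature = sign(bogus)

# ...and the signature verifies anyway: validity is not truthfulness.
print(verify(bogus, signature))  # True
```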
And the plan makes absolutely no sense from a cost-benefit perspective. Because V2V only works when lots of cars have
the devices, it will take a great deal of time and money—$33 billion to $75 billion over the course of 15 years—before there is any payoff in terms of increased safety. And by
that time, given the exponential rate of technological development in mobile data networks alone, it’s likely the technology will be obsolete. Automobile manufacturers have come out
in support of the proposal, arguing that over $1 billion has already been invested in V2V. But
that’s not a reason to keep moving ahead with a flawed idea, especially when so much more will need to be spent. As laid
out in detail by Brad Templeton, Chairman Emeritus of EFF’s Board and a developer of and commentator on self-driving cars, “[V2V’s] cost is high, and those resources could be much
better spent.” For this reason alone, NHTSA needs to move on from its outdated and backwards-looking proposal.
We’ve made sure both NHTSA and the FTC heard our concerns loud and clear. Since submitting comments to the FTC, we’ve been invited to participate in a workshop in June dedicated to examining the privacy and security
issues posed by connected vehicles. Senior Staff Technologist and former autonomous vehicle researcher Jeremy Gillula will
be there to explain why the FTC should put the brakes on NHTSA’s misguided and potentially disastrous plan.
EFF is not alone in our concern over NHTSA’s V2V plan. Many other organizations have filed comments expressing
their own concerns with the troubling proposal. We hope NHTSA heeds these warnings—for the good of all of us.
>> read more
Intel's Management Engine is a security hazard, and users need a way to disable it
(Mon, 08 May 2017)
Intel’s CPUs have another Intel inside.
Since 2008, most of Intel’s chipsets have contained a tiny homunculus computer called the “Management Engine” (ME). The ME is a largely undocumented master
controller for your CPU: it works with system firmware during boot and has direct access to system memory, the screen, keyboard, and network. All of the code inside the ME is secret,
signed, and tightly controlled by Intel. Last week, a vulnerability in the Active Management Technology (AMT) module in some Management Engines left many machines with Intel CPUs disastrously vulnerable to remote and local attackers. While AMT can be disabled, there is presently no way to disable or limit the Management Engine in general. Intel urgently needs
to provide one.
This post will describe the nature of the vulnerabilities (thanks to Matthew Garrett for documenting them well), and the
potential for similar bugs in the future. EFF believes that Intel needs to provide a minimum level of transparency and user control of the Management Engines inside our computers, in
order to prevent this cybersecurity disaster from recurring. Unless that happens, we are concerned that it may not be appropriate to use Intel CPUs in many kinds of critical infrastructure.
What is AMT? How is it vulnerable?
On many Intel chips, the Management Engine is shipped with the AMT module installed. It is intended to allow system administrators to remotely control the machines used by an
organization and its employees. A vulnerability announced on May 1 allows an
attacker to bypass password authentication for this remote management module, meaning that in many situations remote attackers can acquire the same capabilities as an organization’s
IT team, if active management was enabled and provisioned.1
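The bypass, as publicly described by the security researchers who analyzed CVE-2017-5689, was a length-dependent comparison: the server checked only as many bytes of the client's authentication digest as the client supplied. A minimal Python sketch of that bug class (not Intel's actual code, which is C):

```python
# Sketch of the bug class behind the AMT bypass (CVE-2017-5689), as
# publicly described by security researchers: the server compared the
# client's authentication digest using only as many bytes as the client
# supplied, like strncmp(expected, supplied, strlen(supplied)) in C.
def flawed_compare(expected: str, supplied: str) -> bool:
    return expected[: len(supplied)] == supplied

# An empty response compares zero bytes and therefore "matches":
print(flawed_compare("4f2aa9c1", ""))    # True  -- authentication bypassed
print(flawed_compare("4f2aa9c1", "xx"))  # False -- wrong bytes still fail
```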
Once they have AMT access, attackers can interact with the screen or console as if the user were doing so themselves. Attackers can also boot arbitrary OSes, install a new OS, and (with some work)
steal disk encryption passwords.2
Not every machine is susceptible to the attack. For it to work, AMT has to have been both enabled and provisioned (commonly AMT is enabled but not provisioned by
default). Once provisioned, AMT has a password set, listens for network packets, and controls the system in response to them.3 AMT can be provisioned by default if the vendor used a feature called “Remote Configuration” with OEM Setup; by a user with administrative access, either interactively or with a USB stick during system boot; or (via the LMS vulnerability) by unprivileged users on Windows systems with LMS installed. Macs have MEs, but don’t ship with AMT at all. The password
protection is crucial for machines with AMT provisioned, but this week’s vulnerability allowed it to be bypassed.
How can users protect themselves?
Many organizations will need to take steps to protect themselves by ensuring that AMT is disabled in their BIOS and LMS is not installed, or by updating Intel firmware.
Unfortunately, even if AMT is currently disabled, that doesn’t mean an attack was never possible—an attacker might have disabled AMT after concluding the attack, to close
the door on their way out.
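When provisioned, AMT listens on TCP ports 16992 (HTTP) and 16993 (HTTPS), as noted in footnote 3. A quick sketch for probing a machine from elsewhere on the network (the address below is a placeholder, and a closed port is no guarantee the firmware is patched):

```python
import socket

# Sketch: when AMT is provisioned it listens on TCP ports 16992 (HTTP)
# and 16993 (HTTPS). Probing those ports from *another* machine on the
# network gives a quick hint of exposure; the host below is a
# placeholder, and a closed port is no guarantee the firmware is patched.
def amt_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (16992, 16993):
    state = "OPEN (investigate)" if amt_port_open("192.168.1.10", port) else "closed"
    print(f"port {port}: {state}")
```

Note that probing from the affected machine itself may miss AMT, since the Management Engine handles these ports below the operating system; check from a second machine on the same network.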
But troublingly, AMT is only one of many services/modules that come preinstalled on Management Engines. The best recommendation we can make for addressing this vulnerability today is
to disable that specific AMT module, because Intel doesn’t provide any way to generally limit the power of the ME. But vulnerabilities in any of the other modules could be as bad, if
not worse, for security. Some of the other modules include hardware-based authentication code and a
system for location tracking and remote wiping of laptops for anti-theft purposes. While these may be
useful to some people, it should be up to hardware owners to decide if this code will be installed in their computers or not. Perhaps most alarmingly, there is also reportedly a DRM module that is actively working
against the user’s interests, and should never be installed in an ME by default.
For expert users on machines without Verified Boot, a Github project called me_cleaner exists and can be used to disable a Management
Engine. But be warned: using this tool has the potential to brick hardware, and interested parties should exercise caution before attempting to protect their systems. A real solution
is going to require assistance from Intel.
What Intel needs to do to fix this mess
Users need the freedom to choose what they want running on their system, and the ability to remove code that might contain vulnerabilities. Because the Management Engine only runs
code modules signed by Intel, this means having a way to disable the ME or reflash it with minimal, auditable firmware. While Intel may put a lot of effort into hunting for security
bugs, vulnerabilities will inevitably exist, and having them lurking in a highly privileged, low level component with no OS visibility or reliable logging is a nightmare for defensive
cybersecurity. The design choice of putting a secretive, unmodifiable management chip in every computer was terrible, and leaving their customers exposed to these risks without an
opt-out is an act of extreme irresponsibility.
What would be best for users and for the public’s ability to control machines that they have purchased would be for Intel to provide official support for reducing the attack surface
to limit the potential harm of the ME.
So we call upon Intel to:
Provide clear documentation for the software modules that are preinstalled on various Management Engines. What HECI commands provide a full list of the installed modules/services? What are the interfaces to those services?
Provide a way for their customers to audit ME code for vulnerabilities. That is presently impossible because the code is kept secret.
Offer a supported way to disable the ME. If that’s literally impossible, users should be able to flash an absolutely minimal, community-auditable ME firmware image.
On systems where the ME is an essential requirement for other security features that are important to some users (like Boot Guard), offer an additional option of a near-minimal,
community-auditable ME firmware image that performs these security functions, and nothing else. Or alternatively, a supported way to build and flash firmware images where the user can
inspect and control which services/modules are present, in order to manage security risks from those modules.
Until Intel takes these steps, we have reason to fear that the undocumented master controller inside our Intel chips could continue to be a source of serious vulnerabilities in
personal computers, servers, and critical cybersecurity and physical infrastructure. Intel needs to act quickly to provide the community with an auditable solution to these threats.
Correction 2017-05-12: Intel has contacted us with two corrections to the details of this post. (1) Management Engines are not physically located on the
CPU die itself, but in other parts of
Intel's chipsets; (2) the LMS-based local privilege escalation was a second consequence of the first code vulnerability, rather than a second vulnerability or bug of its own. We
have accordingly edited the language of this post in a couple of places, but do not believe these updates affect its conclusions.
1. A second consequence of this vulnerability allowed local, non-administrator users of
Windows systems to provision AMT, if a Windows component called Local Manageability Service (LMS) is installed (whether LMS is installed is typically up to the hardware
manufacturer — for instance Lenovo would decide whether or not to include LMS on a Thinkpad by default). This second consequence allows non-admin users or compromised accounts to
take complete control of those machines by provisioning AMT with settings of their choice.
2. AMT access is not the same as running arbitrary ME code, so attackers can’t access system memory directly; they have
to use the console, VNC, or boot OS images to accomplish their goals.
3. If provisioned, AMT listens on ports 16992 and 16993. Often this would only be on a physical Ethernet connection, but
provisioning can also enable AMT over WiFi (once an OS is running, AMT over WiFi requires OS support).
>> read more
Launching DearFCC: The Best Way to Submit Comments to the FCC about Net Neutrality
(Mon, 08 May 2017)
John Oliver wants you to contact the FCC about net neutrality. Our new tool makes it
easy to contact the FCC and helps you craft unique comments with just a few clicks.
FCC Chairman Ajit Pai has made a dangerous proposal to destroy the FCC’s net neutrality rules—the very
same rules that keep Internet providers like Comcast, Verizon, and AT&T from choosing which websites you can and can’t access and how fast those websites will load. But before he
can enact this terrible plan, he has to make the proposal publicly available and accept comments from regular people about how it would affect them. That’s where you come in.
Today, we’re launching a new tool that will help you craft a unique comment to the FCC: DearFCC.org. Using custom-generated text, we help Internet
users develop and submit personal comments to the official docket with just two clicks.
We launched a similar tool in 2014 to help users have a voice, and over a million people used DearFCC to speak out. Now we need your help to defend that victory.
Net neutrality—the right to access all Internet content freely without your Internet provider slowing down or even blocking content at its whim—is fundamental to our democracy. As
communities across the United States fight to speak out on contentious political issues, the citizenry needs to know that government-subsidized monopolies like Comcast, AT&T and
Verizon aren’t dictating which websites we can access. The clear, light-touch rules enacted by the FCC in 2015 are the Internet’s best hope for ensuring we have a free and open Internet.
Let’s send Chairman Pai a message: this is our Internet and we’ll fight to protect it.
Contact the FCC Now
>> read more
Community Control of Police Spy Tech in Oakland
(Sat, 06 May 2017)
Oakland could become the next community in California to adopt an open and rigorous vetting process for police surveillance technology.
All too often, government executives unilaterally decide to adopt powerful new surveillance technologies that invade our privacy, chill our free speech, and unfairly burden
communities of color. These intrusive and proliferating tools of street-level surveillance include drones, cell-site simulators,
surveillance cameras, and automated license plate readers.
On Tuesday, Oakland’s Public Safety Committee will vote on the “Surveillance and Community Safety Ordinance,” a breakthrough slate of transparency requirements drafted by the city’s two-year-old
Privacy Advisory Commission. EFF strongly supports these reforms. We sent a letter backing the ordinance last week, and we testified before the Privacy Commission earlier this year.
Under the proposed ordinance, the power to decide whether or not to adopt new surveillance technologies would rest with the Oakland City Council. Most importantly, the ordinance would
provide the general public with an opportunity to comment on proposed surveillance technologies, and the use policies for these technologies, before the City Council decides whether to
adopt them. This will ensure community control over decision-making about these powerful spying technologies.
EFF commends Oakland Privacy for leading this critical effort in Oakland. EFF has supported similar efforts for Santa Clara County, BART, and Palo Alto.
If you live in California, email your state legislator to support S.B. 21, a similar bill that would mandate, on a statewide basis, community control of
police surveillance technology.
>> read more
EFF, Sen. Anderson Sponsor California License Plate Privacy Legislation
(Fri, 05 May 2017)
Sacramento—The Electronic Frontier Foundation (EFF) and Sen. Joel Anderson (R-Alpine) have introduced a California bill to protect drivers’ privacy by allowing them to cover their
license plates while parked to avoid being photographed by automated license plate readers (ALPRs).
The legislation will be considered by the California Senate Transportation and Housing Committee on Tuesday, May 9, 2017. EFF Investigative Researcher Dave Maass will testify as a
witness in support of the bill.
Under current law, Californians can cover their entire vehicles—including the plates—when lawfully parked. The proposed bill, S.B. 712, would clarify that California drivers can cover
just the plate under the same circumstances. Law enforcement officers would still have the authority to lift the cover to inspect a license plate.
ALPRs are high-speed cameras that photograph the license plates of any vehicles that pass within view and convert the plate scans into machine-readable information. GPS coordinates
and time stamps are attached to the data, which is uploaded to a searchable central database. Depending on the database, this information may be accessed by a variety of sectors,
including law enforcement, the insurance industry, and debt collectors. In aggregate, this data can reveal sensitive, private location information about innocent people, such as their travel patterns, where
they sleep at night, where they worship, when they attend political protests or gun shows, and what medical facilities they visit.
The bill would allow vehicle owners to shield their license plates from ALPRs mounted on police cars or vehicles operated by private surveillance companies that cruise down streets
and in parking lots photographing licenses of parked cars. These companies often offer services such as the ability to predict a driver’s movements or to identify a driver’s
associates based on vehicles regularly found parked near each other.
“Californians deserve a way to protect themselves from the data miners of the roadway—automated license plate reader companies,” said Maass. “This bill doesn’t put a new burden on law
enforcement or businesses, but rather gives members of the public who aren’t breaking the law a way to ensure they’re not being spied on once they’ve legally parked their car.”
If the information is breached, accessed by unauthorized users, or sold publicly, ALPR data has the potential to put people in real danger, such as making domestic violence victims’
travel patterns available to their ex-partners. Law enforcement officials should also support this bill, since ALPR data can also reveal information about the home lives of officers
or their meetings with witnesses. People could protect themselves when they visit sensitive locations, such as political rallies and protests.
“State law allows for fully covered vehicles if law enforcement can lift the cover to read the license plate and registration,” Sen. Anderson said. “S.B. 712 would specifically allow
for partially covering vehicles including the license plate only.”
Who: Dave Maass, Electronic Frontier Foundation Investigative Researcher
When: Tuesday, May 9, 1:30 pm
Where: California State Capitol, Room 4203
10th and L Streets
Sacramento, CA 95814
Text of the legislation: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180SB712
EFF’s Support Letter: https://www.eff.org/document/sb-712-support-letter
EFF's Second letter on the Constitutional right to privacy: https://www.eff.org/document/effs-second-letter-sb-712
Official S.B. 712 Fact Sheet: https://www.eff.org/document/sb-712-fact-sheet
For more on ALPRs: https://www.eff.org/deeplinks/2017/04/four-flavors-automated-license-plate-reader-technology
>> read more
Actually, Congress Did Undermine Our Internet Privacy Rights
(Fri, 05 May 2017)
Don't listen to the telecom lobby. Congress' vote to repeal the Federal Communications Commission's (FCC) broadband privacy rules has a profound impact on your online privacy rights.
According to those who supported the repeal, the rules never took effect (they were scheduled to do so throughout 2017), so the repeal doesn't change anything. You hear it from the
likes of AT&T as well as lawmakers like
Senator Jeff Flake (R-AZ), the author of the legislation who was asked about it at a recent town
hall. You are hearing it now in state legislatures that are working diligently to
fix the gap Congress created.
But that argument is meant to distract you from the real issue: you had a legal right to privacy from your broadband provider, and when Congress repealed the broadband privacy rules
using the Congressional Review Act (CRA), Congress diminished that right and may have hamstrung the FCC from enforcing it in the future.
Here are the facts.
The FCC’s Broadband Privacy Rules Were Based on a Law Passed by Congress
All regulations passed by federal agencies must be founded in laws passed by Congress. In essence, a regulation from a federal agency is supposed to be a means of enforcing the law.
Here, the underlying law is Section 222 of the Communications Act (under Title II of the Communications Act). Congress
created Section 222 in 1996 as a means to protect our privacy from telecommunications carriers who have unique access to our communications and personal information. There was a
window of time when Section 222 did not apply to broadband companies, but as a matter of law today it does. When you look at what the House and Senate said about the law
when they passed it, it is clear Congress intended Section 222 to create an affirmative right to privacy in our communications.
The CRA repeal had a direct effect on Section 222. Obviously if the ISPs spent close to $8 million lobbying Congress to pass it, it must have had some impact. Here is what their money
bought. Before the broadband privacy repeal, Internet providers had an obligation to follow all of the legal duties and responsibilities that protect our right to online
communications privacy per Section 222 through FCC enforcement. But when Congress utilized the CRA to repeal the FCC’s broadband privacy rules, they effectively told the FCC “you
can’t enforce the law in this specific way.” There was a lot to like in the now repealed privacy rules, but now that Congress has prohibited the FCC from enforcing those rules (or
passing “substantially similar” rules) as a matter of federal law, it is basically up to the states to step in to fully restore our privacy rights until a new federal law is passed or
the courts minimize the impact of the repeal.
Some More History on Section 222 in Terms of Broadband Privacy
From 1996 until 2005, Section 222 applied to telephone service and DSL broadband. It was unclear how the law applied to cable modems because the FCC had not explicitly decided how to
classify broadband Internet through cable, though cable companies were generally regulated as television providers by the FCC. In an attempt to resolve this discrepancy and harmonize
the application of the law, the FCC embarked on a long and ultimately failed journey to classify broadband service as an “information service” under Title I while still retaining
oversight akin to that for Title II telecommunications carriers through a now defunct legal theory known as ancillary jurisdiction.
This means that even after cable modems were classified as “information services” in 2002, and DSL in 2005, the FCC still believed it had authority over broadband companies to do things like enforce network
neutrality—and did so during a Republican administration. However, Comcast defeated
the “ancillary jurisdiction” theory in the courts and Verizon later defeated the FCC again, ensuring that anything classified as an
“information service” under Title I is no longer subject to any meaningful consumer protections (this is also why Comcast, Verizon and AT&T want to be classified as information
services today). If the FCC wanted to retain consumer protection authority over broadband companies, it needed to re-evaluate how it applied the law.
As the high speed broadband market became less competitive and given the dramatic power Internet providers can wield over how we use the web, EFF and others strongly advocated that
the FCC put broadband back under Title II of the Communications Act so that the agency could
enforce simple, light-touch regulations to protect privacy and net neutrality. The FCC (and federal courts) agreed, and in 2015, in a victory for fans of Internet freedom, the
Commission re-classified broadband providers as telecommunications carriers under Title II. That means the law Congress passed in 1996 to protect our communications privacy, Section
222 of the Communications Act, once again clearly applied to all broadband Internet providers.
And This is What Congress Took Away
The FCC’s now-repealed and prohibited privacy rules divided Internet subscribers’ personal information into three distinct categories, each requiring broadband companies to get
different types of consent from their customers before they could use or disclose that information. Those categories were “sensitive,” “non-sensitive,” and “exceptions to consent.”
Sensitive information, including browsing history, app usage data, and the contents of communications, was given the highest protection. Before they could legally use that information
for anything other than providing Internet service, your Internet provider needed your explicit opt-in consent.
The FCC agreed with privacy advocates including EFF that carriers have a legal duty under Section 222(a) of the
Communications Act to protect the "confidentiality of proprietary information of...customers." The now repealed privacy rules were the FCC’s attempt to define the
contours of that legal duty. The other category of information that was subject to opt-in consent was “customer proprietary network information” (CPNI), defined as "information
that relates to the quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service subscribed to by any customer...made available to
the carrier by the customer solely by virtue of the carrier-customer relationship." The full list of what the FCC considered CPNI can be found in paragraphs 58 to 84 of the now repealed Report and Order.
To reiterate, all of these consumer protections listed above are now prohibited as a matter of law and the FCC is not allowed to interpret and enforce the communications privacy law
in this way at this time. That is, in essence, what Congress took away with its CRA repeal.
The Cable and Telephone Industry are Not Done Eliminating our Rights
Now that they have a law, Comcast, AT&T, and Verizon are coming in for the final blow against both privacy and Internet freedom. FCC Chairman Pai recently released his plan to reclassify broadband Internet providers as Title I
information services. Make no mistake, such a plan will not only end Internet freedom by drastically enhancing the power of Comcast, AT&T, and Verizon to dictate the future of
the Internet, but it will assure that any vestiges of privacy protections that remain under a neutered Section 222 are completely removed. Worse yet, the plan ignores the obvious gap
in consumer protection that has existed for telephone companies ever since the 9th Circuit Court of Appeals found that, in the western United States, common carriers are exempt from FTC enforcement as well.
We must put a stop to this plan. We came very close to stopping the broadband privacy repeal, and now we have to redouble our efforts, recruit more of our friends, and tell Washington D.C. that we value a free and open Internet that is protected by law.
California: Let’s End Unchecked Police Surveillance
(Tue, 02 May 2017)
All surveillance is political.
Nowhere is this more evident than on the local level when law enforcement acquires new surveillance technology. Too often, the political process advantages police over the public
interest. In California, a new bill—S.B. 21—offers the rare opportunity to shift the
balance in favor of privacy.
Take Action: Californians, tell your state senator to vote in favor of S.B. 21.
Police know that once they acquire a new spy gadget or system, it’s difficult for elected officials to take it away, lest they seem “soft on crime” during the next election.
When police do seek approval of privacy-invasive technology, law enforcement agencies often provide only the barest amount of information to policymakers and the public about how
a system works and how they intend to use it. Sometimes police officials will avoid the approval process altogether by purchasing equipment with asset forfeiture slush funds, by
having them purchased for them by outside nonprofits, or by accepting free trials from vendors.
In the worst cases, police have abused this technology for their own political goals. In Calexico, for example, the U.S. Department of Justice
found that police had quietly acquired $100,000 worth of high tech spy gear—GPS trackers, surreptitious audio and video recording devices, tactical binoculars, and even a special
backpack for conducting field surveillance. These officers then used this technology to spy on elected officials and members of the public who filed police complaints, allegedly with
the motive of extortion. The DOJ also criticized Calexico for approving body cameras, automated license plate
readers, and a city-wide video system "before implementing the essential fundamentals of policing."
Police should not have unilateral power to decide which privacy invasions are in the public interest.
It’s time for Californians to seize back control of these surveillance technologies. Police should not have unilateral power to decide which privacy invasions are in the public
interest. All surveillance technologies must go through a public process in which citizens and elected officials have a chance to decide the limits of high-tech policing, including
whether to acquire and use new spying tools in the first place.
EFF urges the California legislature to pass S.B. 21, a surveillance technology reform bill introduced by State Sen. Jerry Hill. This legislation would require that police
departments, before acquiring or using new spying technology, obtain approval in advance to do so from an elected board during a public hearing. When police obtain such approval, they
must also get approval of a use policy that includes privacy safeguards.
The bill would require a biennial transparency report on surveillance technology in which an agency must disclose information such as the total cost for each surveillance technology,
the number of times each technology was used, how effective the technology was, instances in which technology was shared with another agency, and instances in which the technology was
used in violation of department policy.
The bill would also allow individuals to sue an agency if they’ve been harmed by a violation of the legislation.
S.B. 21 follows on the heels of similar legislation passed in Santa Clara
County and an ordinance approved by the City of Oakland’s Privacy Advisory
Commission. The legislation also strengthens previous laws passed by Sen. Hill,
S.B. 34 and S.B. 741, which require government agencies to publish privacy policies for automated license plate readers and cell-site simulators respectively.
California should end unconstrained police surveillance. There is a clear way to defend against secret acquisition and arbitrary use of policing technologies that invade the privacy
of thousands of innocent people per usage: pass bills like S.B. 21 to ensure the law is on our side.
TAKE ACTION: Email your California state senator today to pass S.B. 21.
The WIPO Broadcasting Treaty Would be a Body Blow for Online Video
(Tue, 02 May 2017)
This week EFF is in Geneva, at the Thirty-Fourth session of the Standing Committee on Copyright and Related Rights (SCCR) of the World
Intellectual Property Organization (WIPO), to oppose a Broadcasting Treaty that could limit the use of video
online. Ahead of this meeting, word was that delegations would be pushing hard to have a diplomatic conference to finalize the treaty scheduled at WIPO's October Assembly. In
combination with initial uncertainty about whether the new United States administration would be maintaining its opposition to a diplomatic conference, we knew that it was important
for EFF to be there to speak up for users.
The Broadcasting Treaty proposal simply doesn't make sense. It proposes to create a new layer of rights over material that has been broadcast over the air or over cable, in addition
to any underlying copyrights over such material. Such rights would increase the cost and complexity of licensing broadcast content for use online, and create new and artificial
barriers to the reuse of material that isn't protected by copyright at all, such as governmental and public domain works.
We'd like to be able to tell the delegates what we think about this, and normally we would be able to do that, because WIPO generally allows NGOs to deliver statements in its plenary
meeting sessions. However, this meeting has a new chair, whose working style involves fewer plenary sessions and more informal sessions in which NGOs are only permitted to listen,
not to speak or report. As a result, it's likely we won't have the opportunity to address delegates until later in the week, if at all. But here is what we plan to say:
This week the Standing Committee has worked very hard to move the negotiations over the Broadcasting Treaty towards a diplomatic conference. Yet it appears to us that
disagreements continue to exist at such a fundamental level, extending to the very objects of the treaty, that agreement remains unattainable.
The closest this Committee has ever come to agreement was when it narrowed the Treaty to cover only the protection of traditional broadcasting organizations from broadcast signal
piracy. But as soon as the discussion is broadened to include transmissions over computer networks or post-fixation rights, it inevitably falls apart.
This is because there is no logic in granting exclusive rights to broadcasting organizations over Internet transmissions, without granting similar rights to other online video
platforms. And if that is done, the new layers of rights and rightsholders will increase the complexity and risk of licensing video content, raising costs and barriers to
innovation that outweigh any possible benefit to broadcasters.
We also have specific concerns that the current chairman's draft moves the proposal in the wrong direction, by eliminating previous text on limitations and exceptions, entrenching
the inimical effects of Technological Protection Measures that criminalize fair use and innovation, and proposing a 50 year term -- 30 years longer than the term of protection
under the Rome Convention.
More fundamentally, this treaty creates an unnecessary impediment to the legitimate reuse of broadcast material that is in the public domain, with little corresponding benefit. In
our view the committee's time would be better invested by removing this item from the agenda to make more room for other relevant issues, such as the analysis of copyright related
to the digital environment.
Congress Should Leave Alice Alone
(Tue, 02 May 2017)
Overturning the Supreme Court Decision Would Allow Abstract Patents to Hurt Innovation
One of the most important cases to cut back on the availability of vague, abstract patents was the 2014 decision Alice v. CLS Bank. In Alice, the U.S. Supreme Court reaffirmed the long-standing law
that patents could not be granted on "laws of nature, natural phenomena, and abstract ideas." The decision reinvigorated the use of 35 U.S.C. § 101 to invalidate abstract patents based on the fact that they claim unpatentable subject matter.
Alice was a watershed moment. In the decades before Alice, the Court of Appeals for the Federal Circuit—the court that hears all patent appeals—had consistently expanded
the scope of patentable subject matter. The case law had reached the point where it seemed that so long as something was done "automatically," anything could be patented, including business methods, investment strategies, and patenting itself.
Since Alice, lower courts have routinely invalidated some of the worst abstract and vague patents. We've highlighted many of these abstract patents in our Stupid Patent of the Month series. Other examples include the patent on a "picture
menu" that was used to sue over 70 companies, and the patent on using labels to store information in a data structure that, on being invalidated as abstract, ended an astonishing 168 cases.
Recently, we've heard that certain patent owners are lobbying Congress to modify 35 U.S.C. § 101 and
legislatively overrule Alice. Many of these advocates like to claim that the software industry and innovation have
been seriously harmed by Alice. But what has really happened?
Currently, five of the top 10
companies by market capitalization are information technology focused, a significant shift from ten years ago, when only Microsoft made the cut. Tesla, which famously announced it
was abandoning its patents, is now the highest-valued U.S. car maker. The 2017 Silicon Valley Report from Joint Venture Silicon Valley noted “seven straight years of economic expansion” in the Bay
Area, a region known for its innovation.
Smaller innovators are also going strong. The Kauffman Index of Startup Activity shows a sharp increase in activity between 2014, the year Alice was decided, and 2016.
Employment in the innovation and information products field in Silicon Valley grew by 5.2% between 2015 and 2016, more than any other category, and
venture capital investment remains strong. Thus if Alice
were in fact "decimating" the industry, as one judge on the Federal Circuit predicted, there is little evidence of it. To be clear, this isn’t to
say that Alice is the only reason the industry is thriving, but it is a reminder that software patents and the software industry are not the same thing.
Not only do current trends in the industry show that Alice did not harm the technology sector, but past trends confirm it. When the Federal Circuit dramatically expanded the
scope of patentable subject matter, first in 1994 and again in 1998, there is no indication the shift provided additional stimulus to the already growing economy. Indeed, there is evidence
that patenting has little effect on innovation. A 2014 Congressional Budget Office report noted that "the large increase in patenting activity since 1983 may have
made little contribution to innovation," and in fact, "the proliferation of low-quality patents" were working to prevent small innovators from easily entering the market.
Alice has not harmed the technology industry and the argument for overturning it just isn't based in fact. If anything the evidence shows abstract patents do more to harm the
technology industry than help it. Alice is working to rid the system of vague and overbroad abstract patents, without any serious negative effect on the technology sector, and
should remain the law.
Limitations of ISP Data Pollution Tools
(Tue, 02 May 2017)
Republicans in Congress recently voted to repeal the FCC’s broadband privacy rules. As a result, your Internet provider may be able to sell sensitive information like your browsing
history or app usage to advertisers, insurance companies, and more, all without your consent. In response, Internet users have been asking what they can do to protect their own data
from this creepy, non-consensual tracking by Internet providers—for example, directing their Internet traffic through a VPN or Tor. One idea to combat
this that’s recently gotten a lot of traction among privacy-conscious users is data pollution tools: software that fills your browsing history with visits to random websites in order
to add “noise” to the browsing data that your Internet provider is collecting.
One of the goals of this post is to dispel misconceptions about the problems users may think these tools solve.
We’ve seen this idea suggested several times, and we’ve received multiple questions about how effective it would be and whether or not it protects your privacy, so we wanted to
provide our thoughts. Before we begin, however, we want to note that several
seasoned security professionals have already weighed in on the effectiveness and risks
involved in using these tools.
While we want to be optimistic and encourage more user-friendly technology, it’s important to evaluate new tools with caution, especially when the stakes are high. Additionally, one
of the goals of this post is to dispel misconceptions about the problems users may think these tools solve.
Limitations of ISP Data Pollution Tools
After reviewing these sorts of tools, we’ve come to the conclusion that in their current form, these tools are not privacy-enhancing technologies, meaning
that they don’t actually help protect users’ sensitive information.
To see why, let’s imagine two possible scenarios that could occur if your browsing history were somehow leaked.
First, imagine the tool visited a website you don’t want to be associated with. Many data pollution tools try to prevent this by blacklisting certain potentially inappropriate words
or websites (or only searching on whitelisted websites) and relying on Google’s SafeSearch feature. However, even with these
protections in place, the algorithm could still visit a website that might not be embarrassing for everyone, but could be embarrassing for you (say, a visit to an employment website
when you haven’t told your employer you’re thinking of leaving). In this case, it might be difficult to prove it was the automated tool and not you who generated that traffic.
Second, sensitive data is still sensitive even when surrounded by noise. Imagine that your leaked browsing history showed a pattern of visits to websites about a certain health
condition. It would be very hard to claim that it was the automated tool that generated that sort of traffic when it was in fact you.
It’s reasonable to assume that whoever is analyzing this data will put some effort into filtering out noise when looking for trends—after all, this is a standard industry-wide
practice when doing data analysis on large data sets. This doesn’t necessarily mean that the data analysis will always beat the noise generation, but it’s still an important factor to
consider. Likewise, layering noise onto a prominent pattern will not make that pattern any
less prominent. Additionally, your Internet provider may already have years of data about your browsing habits from which it can extrapolate to help with its noise-filtering.
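The filtering intuition above can be illustrated with a toy simulation (all site names and numbers here are hypothetical, and real analysis would be far more sophisticated): genuine browsing concentrates repeat visits on a handful of domains, while a naive pollution tool scatters one-off visits across many random domains, so even a trivial frequency filter recovers the real pattern.

```python
import random
from collections import Counter

# Toy model: a real user revisits a few domains; a naive noise
# generator visits many random domains exactly once each.
random.seed(0)
real_sites = ["health-forum.example", "news.example", "mail.example"]
real_visits = [random.choice(real_sites) for _ in range(200)]   # repeat visits
noise_visits = [f"random-{i}.example" for i in range(800)]      # one-off visits

history = real_visits + noise_visits

# A trivial "analyst" filter: keep only domains visited more than once.
counts = Counter(history)
recovered = {site for site, n in counts.items() if n > 1}

print(sorted(recovered))  # the real browsing pattern survives the noise
```

Even though noise outnumbers real traffic four to one here, the simplest possible filter isolates the sensitive pattern, which is why adding noise on top of a prominent pattern does not hide it.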
Even if these specific problems were solved, we would still be reluctant to say that data pollution software could successfully protect your privacy. That’s because this kind of
traffic analysis is an active area of research, and there aren’t any well-tested large scale
models to show that these techniques work yet.
In other words, there are currently too many limitations and too many unknowns to be able to confirm that data pollution is an effective strategy at protecting one’s privacy. We’d
love to eventually be proven wrong, but for now, we simply cannot recommend these tools as an effective method for protecting your privacy.
Changing Internet Provider Behavior is a Worthy Goal, but Your Energy is Better Spent Calling Congress
Data pollution tools aren’t likely to succeed at their other primary goal besides protecting privacy: convincing Internet providers to stop mining our data to sell targeted ads. The
theory here is that if enough people used these tools, then the vast majority of browsing data Internet providers collected would be inaccurate. Inaccurate data is worthless for
targeting ads, so there would no longer be any monetary incentive for Internet providers to try to sell targeted ads—and thus no incentive to keep collecting browsing data in the first place.
Unfortunately, a huge fraction of customers would have to be using data pollution tools for them to have an impact on major Internet providers’ bottom lines. And while it's wonderful
to imagine the majority of Internet users up in arms and installing one of these projects, it'd be as useful (if not more so) for all these users to call their lawmakers directly and convince them to pass privacy-protecting
legislation instead. In fact, it would probably take far fewer people to get Congress to change its mind than it would to affect a large Internet provider’s bottom line.
Culture Jamming for the Web
With all of that said, these tools could potentially be effective at one thing: confusing your Internet provider’s ad-targeting algorithms and making the ads they show you less
relevant. If this sort of culture jamming appeals to you, then these tools could help you accomplish that. Just keep in mind that you’ll have to rely on other techniques to protect your privacy from your Internet provider, and
that to really achieve the sort of change we need, we also need to take the time to talk to our lawmakers and make our voices heard directly. Only through a
combination of activism, technology, and legislation will we truly be able to protect our privacy online.
What Don’t You Want the NSA to Know About You?
(Tue, 02 May 2017)
For years, U.S. government surveillance of innocent Americans has been a topic of heated debate, especially for those in the tech community.
With Congress gearing up for a fight over the 2017 reauthorization of a surveillance authority that lets the NSA spy on innocent Americans without a warrant—Section 702, enacted as
part of the FISA Amendments Act—that debate is sure to rage on in the coming months.
So we sent reporter David Spark to the RSA Conference in San Francisco, California in February to ask one simple question: What don’t you want the NSA to know about you?
The answers spanned the spectrum, from emails, to phone calls, to web browsing records, to financial information, to information about individuals’ children, to nothing.
Some got philosophical. “Everyone says, ‘I have nothing to hide,’ and that’s not the point,” one attendee told us. “The point is that I want to control what people know about me.”
Others turned the question on its head, asking instead why the NSA is conducting surveillance on Americans. “I don’t think their charter is to spy on Americans, so why are they?” one attendee asked.
And some got blunt. One attendee said he assumes the NSA already knows a lot about him. “It scares me and offends me,” he said.
If the warrantless spying on Americans scares and offends you, contact your representatives in
Congress and tell them to pull the plug on Section 702 surveillance. And watch the video to see other RSA Conference attendees’ responses.
Special thanks to David Spark (@dspark) and Spark Media Solutions
for their support and production of this video. The background music heard at the end—the song Hydrated—is licensed CC BY-NC-SA 3.0 by Kronstudios. EFF
original work (i.e., everything but the background music heard at the end) is licensed CC BY 4.0.
Courts Must Allow Online Platforms to Defend Their Users' Free Speech Rights, EFF Tells Court
(Tue, 02 May 2017)
Online platforms must be allowed to assert their anonymous users’ First Amendment rights in court, EFF argued in a brief filed Monday in a California appellate court.
The case, Yelp v. Superior Court, concerns whether online review website Yelp has the legal right to appear in court and make arguments on behalf of its users.
Courts across the country have increasingly recognized that online platforms do have the right to argue for their users’ free speech rights, particularly when private litigants or
government officials seek to learn the speakers’ identities.
A California trial court, however, ruled in December 2016 that Yelp could not assert a user’s First Amendment rights after the platform received a subpoena seeking the identity of a
Yelp user that a plaintiff alleged had defamed him and his business.
But as EFF’s brief [.pdf] argues, online platforms have both a
legal right and an important role to play in asserting their users’ free speech rights.
Besides anonymous speakers directly challenging the legal demands to unmask them, online platforms are increasingly asserting their users’ rights in
court. Platforms assert their users’ rights for a variety of reasons, including deterring frivolous efforts to unmask speakers and upholding their own views on the
importance of free speech. They also seek to make their platforms hospitable to important speech that may only be offered under the veil of anonymity. Simply put, many online
platforms recognize that maintaining the robust forum their users rely upon requires having their users’ backs.
The trial court’s ruling is dangerous, EFF argues, because it “threatens to undermine online platforms’ standing to assert their users’ First Amendment rights and thereby erode the
ability for the Internet to serve as a forum for anonymous speakers.”
If platforms do not have a legal right to stand up for their users, “defense of these vexatious requests will fall solely to users themselves, many of whom may not know their rights
or may otherwise not be in a position to fight for them,” EFF’s brief argues.
Chinese Government and Hollywood Launch Snoop-and-Censor Copyright Filter
(Mon, 01 May 2017)
Two weeks ago the Copyright Society of China (also known as the China Copyright Association) launched its new 12426 Copyright Monitoring Center,
which is dedicated to scanning the Chinese Internet for evidence of copyright infringement. This frightening panopticon is said to be able to monitor video, music and images found on "mainstream audio and video sites and graphic portals, small
and medium vertical websites, community platforms, cloud and P2P sites, SmartTV, external set-top boxes, aggregation apps, and so on."
When it finds content that matches material submitted to it by a copyright holder, the Center provides them with a streamlined
notification and takedown machine, from the issuance of warning notices through to the provision of mediation services. The Center's technology service provider also provides platforms with filtering technology that can allow infringing materials to be blocked from upload or download to
begin with, obviating the need for a separate takedown procedure.
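The matching technology behind such filters is proprietary, but the basic mechanism of fingerprint-based upload blocking can be sketched as follows. This is a deliberately simplified illustration: it uses exact cryptographic hashes, whereas real systems use perceptual fingerprints that also match transcoded or lightly edited copies, and all names and data here are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Reduce a file to a short identifier for matching.
    (Real filters use fuzzy perceptual fingerprints, not SHA-256.)"""
    return hashlib.sha256(data).hexdigest()

# Rightsholders submit reference copies; the platform keeps only fingerprints.
blocklist = {fingerprint(b"copyrighted-video-bytes")}

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches submitted material."""
    return fingerprint(data) not in blocklist

print(allow_upload(b"copyrighted-video-bytes"))  # matches a reference: blocked
print(allow_upload(b"original-home-video"))      # no match: allowed
```

Note what this design cannot do: the fingerprint match happens before any human sees the upload, so the filter has no way to recognize fair use, quotation, or public domain status, which is exactly why mandatory upload filtering raises the censorship concerns described above.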
The Copyright Society of China, which instituted the 12426 initiative, is formally a private association, and lists amongst the venture's partners American media companies such as
21st Century Fox and Warner Bros. On the other hand, the Society is headed by a representative of the National Congress of the Communist Party of China, and includes within its mission "to provide technical support for the government to carry out network copyright supervision according to law."
The 12426 service utilizes proprietary commercial technology for its copyright monitoring, and much of the same
technology is used by Chinese Internet companies for complying with Chinese government mandates for political censorship. For example, earlier this month it came to light that the
Chinese government, at least at a provincial level and possibly at a national level, requires every provider of non-residential public Wifi hotspots to monitor and record their users' activity. This is in addition
to the well-documented surveillance and censorship of Chinese online platforms such as Weibo and WeChat.
Copyright Surveillance and Censorship Closer to Home
Although this stifling surveillance machine is a human rights crisis in its own right for China's 720 million Internet users, it also provides a cautionary tale for the West, where
copyright holder lobbyists are advocating for very similar filtering and surveillance mechanisms to be made mandatory. In that sense, China's copyright surveillance machine of today
may warn of the European or American Internet of tomorrow.
For example, it is thanks to the copyright industry that German Wifi network operators are required to password-protect their networks to
prevent them being used by anonymous users (who might infringe copyright). And in Europe right now, a current proposal would require user content platforms to filter uploads
for material that copyright holders claim to be infringing.
This European proposal would put into place exactly the same kind of filtering that China's copyright surveillance industry provides today, repurposing technologies that the
authoritarian regime also uses for the repression of political dissent. And this kind of repurposing goes both ways—technologies and legal processes developed in the first instance
for copyright enforcement are also misused for political censorship and repression.
Another uncomfortable similarity between the Chinese Internet censorship regime and developments in the West is the close intermingling of public and private initiatives. Just as the
Chinese Communist Party sits at the head of the Copyright Society of China, so too the heavy hand of government can be found behind many notionally self-regulatory industry schemes
from North America and Europe that aim to address copyright
infringement. These government-led arrangements, that we call Shadow Regulation, are notoriously lacking in
transparency, accountability, and user participation. The above mentioned European proposal, which pushes platforms and copyright owners into "voluntary" agreements concerning upload
filtering, is a textbook case in point.
The announcement of China's government-linked 12426 Copyright Monitoring Center is absolutely chilling. It is just as chilling that the governments of the United States and Europe are
being lobbied by copyright holders to follow China's lead. Although this call is being heard on both sides of the Atlantic, it has gained the most ground in Europe, where it needs to
be urgently stopped in its tracks. Europeans can learn more and speak out against these draconian censorship demands at the Save the Meme campaign.
Post-TPP Special 301 Report Shows How Little Has Changed
(Mon, 01 May 2017)
Last Friday the United States Trade Representative (USTR) released the 2017 edition of its
Special 301 Report [PDF], which the USTR issues each year to "name and shame" other countries that the U.S. claims should be doing more to protect and enforce their
copyrights, patents, trademarks, and trade secrets. Most of these demands exceed those countries' legal obligations, which makes the Special 301 Report an instrument of political
rhetoric, rather than a document with any international legal status.
Last year's Special 301 Report included 45 references to the Trans-Pacific Partnership, which was at the time soon expected to become the
jewel in the USTR's crown. This year, following the TPP's humiliating defeat, it is not mentioned in the Special 301 Report even once. Indeed, not only has the TPP been expunged from
the text as if it never happened at all, but the USTR has also finally ceased touting the Anti-Counterfeiting Trade Agreement (ACTA), another dead IP treaty that it had nonetheless
included as a supposed global standard in its previous Special 301 Reports.
Instead, the USTR reports on its work at the World Trade Organization (WTO), which has opened up as a possible new front for the USTR to push former TPP standards. The Special
301 Report scolds certain countries for "server localization mandates, cross-border data flow restrictions, [and] programs to support only local data hosting firms," all of which were
concerns previously addressed by the TPP, and now proposed for resolution at the WTO. Whether the WTO has the appetite to address such issues, however, remains to be seen; we'll know
more following its Ministerial Conference in Buenos Aires in December this year.
Other than the omission of the TPP shibboleth, it's surprising how little else has changed in this year's Special 301 Report compared to last year's. In fact, this is the first time ever that the exact
same 11 countries have been nominated for the Priority Watch List as last year, along with the exact same list of 23 countries for the regular Watch List. The topics on copyright that
are treated in the Special 301 Report are also a repetition of last year, including complaints about stream ripping, mod chips, and media players that are configured to access
infringing streams. China, in particular, is singled out for criticism in this regard:
China remains a leading source and exporter of systems that facilitate copyright piracy, including websites containing or facilitating access to unlicensed content, and illicit
streaming devices configured with apps to facilitate access to such websites. These systems also include devices and methods that facilitate the circumvention of technological
protection measures, which enable the delivery of services via the cloud and protect video games and other licensed content.
In addition to this, the USTR continues to complain about countries that fail to adequately protect trademarks used in domain names, and India in particular is criticized for "the
issuance of problematic guidelines that appear to restrict the patentability of computer implemented inventions."
None of these complaints have any legal basis. The technologies mentioned in the paragraph about China all have substantial non-infringing uses, such as the use of circumvention tools
for backup, archival, compatibility, and repair. The question of how and to what extent trademarks should be protected in domain names is a question for multi-stakeholder bodies such
as ICANN and its national-level equivalents, not for governmental horse-trading. And India's position on computer implemented inventions (which prohibit computer software from being
patented per se, but allow software in combination with new hardware to be patented) is broadly in line with similar policies held in Europe and elsewhere.
Then again, nobody in the know ever read the Special 301 Report expecting it to be legally accurate. Rather, it's just a document used to threaten other countries into submission to
unilateral U.S. demands. And with the demise of the TPP, those threats are now emptier than ever before.
Hearing Wednesday: EFF Argues Against Massive Government Hacking in ‘Playpen’ Case
(Mon, 01 May 2017)
FBI Used One Warrant to Infiltrate Thousands of Computers
Boston – On Wednesday, May 3, at 9:30 am, the Electronic Frontier Foundation (EFF) will argue that an FBI search warrant used to hack thousands of computers around the world was unconstitutional.
The hearing in U.S. v. Levin at the United States Court of Appeals for the First Circuit stems from one of the many cases arising from a controversial investigation into “Playpen,” a child pornography website. The precedent set by the
Playpen prosecutions is likely to impact the digital privacy rights of Internet users for years to come.
During the investigation, the FBI secretly seized the servers running the Playpen site and continued to operate them for two weeks. The bureau allowed thousands of images to be
downloaded while distributing malware to website visitors. With that malware, the FBI hacked into over 8,000 devices in countries around the globe—all on the basis of a single search warrant.
However, because the government was running the Playpen site, it was already in possession of information about visitors and their computers. Rather than taking the necessary steps to
obtain narrow search warrants, the FBI instead sought a single, general warrant to authorize its massive hacking operation, violating the Fourth Amendment. In Wednesday’s hearing, EFF
Senior Staff Attorney Mark Rumold will argue as amicus, urging the court to send a clear message that a
vague search warrant is not enough to satisfy the privacy protections enshrined in the Constitution.
U.S. v. Levin
Wednesday, May 3
United States Court of Appeals for the First Circuit
John Joseph Moakley U.S. Courthouse
1 Courthouse Way
Boston, MA 02210
The End of the NSA's ‘About’ Searches Is Just the Beginning
(Sat, 29 Apr 2017)
The NSA is stopping its use of one controversial surveillance technique that impacts Americans' privacy.
Make no mistake. This is good news for anyone who wants government surveillance to follow the law. But there’s much more to be done to rein in unconstitutional spying.
As first reported by The New York Times today and confirmed by the agency itself, the NSA will no longer conduct "about"
searches: searches of the full content of Internet communications, including those to and from innocent Americans, that are "about" a foreign intelligence target's email address or
other identifier, meaning they merely mention it. The NSA said the changes were the result of "inadvertent compliance incidents," or violations of court-imposed restrictions.
These searches happen as part of the NSA’s Upstream program, through which the agency taps directly into the Internet backbone to seize and search Internet traffic. The U.S.
government has claimed these warrantless searches of Americans' email are allowed under Section 702, enacted as part of the FISA Amendments Act, which is set to expire at the end of this year.
In the NSA’s own words:
“NSA will no longer collect certain internet communications that merely mention a foreign intelligence target. … Instead, NSA will limit such collection to internet communications
that are sent directly to or from a foreign target.
Even though NSA does not have the ability at this time to stop collecting ‘about’ information without losing some other important data, the Agency will stop the practice to reduce
the chance that it would acquire communications of U.S. persons or others who are not in direct contact with a foreign intelligence target.
Finally, even though the Agency was legally allowed to retain such ‘about’ information previously collected under Section 702, the NSA will delete the vast majority of its
upstream Internet data to further protect the privacy of U.S. person communications.”
For nearly a decade, EFF has argued in court that these and other warrantless searches and seizures through Upstream are
unconstitutional. Although today's announcement is a welcome one, the NSA has demonstrated, time and time again, that it will only institute meaningful reforms after it gets caught in
serious and repeated violation of the law.
We demand better from our country’s intelligence community. With the looming sunset of Section 702, Congress is in the perfect position to demand more too, starting with a full and
public explanation of the scope of Section 702 surveillance, including the long-overdue accounting of how many Americans have been impacted by NSA surveillance.
When it comes to reforms, Congress should codify the changes the NSA announced today. If “about” searches are so privacy-invasive for innocent Americans, they should be explicitly
prohibited by law.
But that’s not the only way Congress can work to reduce the risk of collecting information about innocent people. Lawmakers should also curtail surveillance programs under Section 702,
including by limiting collection to information about true national security concerns instead of allowing the programs to collect the much broader category of “foreign intelligence
information.” Lawmakers should also work to reduce “incidental collection,” or the collection of communications to and from Americans who interact with individuals located outside of
the United States.
And that’s just on the intelligence collection side. Congress should limit what the intelligence community can do with information that has been collected under Section 702. One
obvious move would be to close the “backdoor search loophole,” or the gap in
privacy protections that allows the FBI to search for information about Americans in databases containing information collected under Section 702 without getting a warrant. Efforts to
close this loophole have been widely supported on the Hill in the past and should be included in any reform package Congress considers this year.
Outside of what information is collected and how it’s used, lawmakers should push for increased transparency into and oversight of the intelligence community’s use of Section 702.
That includes things like declassifying more information about the NSA’s surveillance programs, letting companies publish more specific information about the government requests they
receive for customer data, and making it easier for Americans to bring lawsuits against the U.S. government if they feel their constitutional privacy protections have been violated.
The NSA’s announcement today is a win for constitutional privacy protections, for those of us fighting unlawful surveillance in the courts, and for anyone who pushed for surveillance
reform by signing a petition, contacting their lawmakers, or otherwise voicing their concerns about warrantless spying on innocent Americans.
With the 702 reauthorization debate set to unfold in the coming weeks and months, we need to tell
Congress to keep fighting to rein in this warrantless spying.
Tell Congress: Pull the Plug on Internet Spying Programs.
Stupid Patents of the Month: Taxi Dispatch Tech
(Fri, 28 Apr 2017)
With all the attention ride-sharing has been getting lately, some might
think Uber and Lyft were highly inventive apps. But according to at least one company, the apps are just highly infringing. Who’s right? Probably neither.
Hailo Technologies, LLC (“Hailo”) has recently sued both Uber and Lyft, alleging they infringed Hailo’s taxi dispatch patent, U.S.
Patent No. 5,973,619 (“the ’619 patent”). The patent claims a method for a “computer system” that: (1) displays a list of transportation options; (2) asks the customer for a
number of passengers; (3) shows destinations graphically; (4) displays the approximate fare; (5) calls a selected taxi company up for a ride; and (6) gives an estimated arrival time.
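To see just how routine the claimed steps are, here is a minimal Python sketch of the ’619 patent's six-step method. Everything in it is invented for illustration — the function name, the fare table, the fixed arrival estimate — none of it comes from Hailo's products or the patent itself:

```python
# Hypothetical sketch (not Hailo's code): the six steps claimed by the
# '619 patent, expressed as ordinary, decades-old programming constructs.

def request_ride(companies, fares, passengers, destination):
    """Walk through the '619 patent's claimed steps with plain data structures."""
    # (1) display a list of transportation options
    options = sorted(companies)
    # (2) ask the customer for a number of passengers (passed in as an argument here)
    # (3) "show destinations graphically" -- reduced here to echoing the destination
    # (4) display the approximate fare, assuming one cab per four riders
    fare = fares[destination] * max(1, passengers // 4)
    # (5) call a selected taxi company for a ride (pick the first option)
    selected = options[0]
    # (6) give an estimated arrival time (a fixed guess in this sketch)
    eta_minutes = 10
    return {"company": selected, "fare": fare, "eta": eta_minutes}
```

Nothing here requires more than lists, dictionaries, and arithmetic — the kind of bookkeeping any dispatch system of the era already performed.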
A few months ago, Hailo also sued a few other companies for infringing a different patent, U.S. Patent No. 6,756,913 (“the
’913 patent”), which claims a method for keeping track of available taxis on the road. More specifically, it claims a method where a computer (1) determines if a taxi is free (i.e.
currently has no rider); and if free (2) sends the current location of the taxi to the taxi dispatch server.
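The ’913 claim is, if anything, even simpler. A hedged sketch of its two steps, with every name hypothetical and a plain list standing in for the "dispatch server":

```python
# Hypothetical sketch (invented names throughout): the two steps claimed by
# the '913 patent -- check whether a cab is free, and if so report where it is.

def report_if_free(taxi, dispatch_log):
    """(1) Determine if the taxi currently has no rider; (2) if free,
    send its current location to the dispatch server (a list here)."""
    if taxi["rider"] is None:  # step 1: is the taxi free?
        # step 2: report the taxi's current location to dispatch
        dispatch_log.append((taxi["id"], taxi["location"]))
        return True
    return False
```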
Both of Hailo’s patents date to the late 1990s. That is, the patents claim these inventions didn’t exist (or weren’t obvious) at that time. Except a brief Internet search shows that
similar taxi dispatch technology not only existed, but was widely used. Two reports from the Department of Transportation from 1991 and 1992 describe the state of
“computer dispatch” technology at that time, and show many of the claimed features of the ’619 and ’913 patents. Another
report, from 1995, has even more detail about various taxi dispatch technologies. For example, on page 115 the report details a product called “MT GU,” an automated call box that
allows customers to order “one or several taxis”, specify “the taxi desired” (including getting a larger van), and provides the waiting time. The MT GU system seems to describe many,
if not all, of the features in the system claimed in the ’619 patent, and predates it by several years.
So there’s good reason to think that the inventions claimed in the two patents were not actually novel or nonobvious when the patent applications were filed. But will any of that
matter? Patents, once issued, are presumed valid. In order for a patent to be declared invalid in court, a challenger must show “clear and convincing evidence” of invalidity. When the
argument for invalidity is based on prior art, this can be an expensive and time-consuming process, often costing in the hundreds of thousands, if not millions, of dollars. Thus even
if these patents are in fact invalid and never should have issued, due to the cost of litigation courts often never decide the issue.
An alternative to court exists in the form of inter partes
review at the Patent Office. This allows the Patent Office to take a second look at claims in a patent, and declare them unpatentable under a more lenient “preponderance of
the evidence” standard. But this procedure, although cheaper than court, is still relatively expensive. One study estimated costs through appeal at $350,000.
Given the costs of litigation in court or at the Patent Office, a patent owner can sue on a “presumed valid” patent and use the threat of fees and costs to get an undeserved
settlement. When a company does nothing else (meaning, it doesn’t have a real business other than litigation) we call those companies “patent trolls.”
Hailo strikes us as pretty trollish. As noted, the patents in question seem weak at best, and Hailo doesn’t seem to be seriously using the “inventions” in any event. In
its complaint against Uber, Hailo states that it is an app maker. But its website,
www.bring.bikes, was registered only 10 days before it sued Uber and Lyft. Confusingly enough, there is
another company named “Hailo” that actually does make a taxi hailing app. Even more confusing: “Hailo” the patent owner says it
does business under the name “Bring,” but does not appear to be associated with another company called Bring that is an actual operating business.
This “Hailo,” by contrast, seems focused on litigation. A recently filed document attaches
the agreement assigning the ’913 patent from its original owner to Hailo. The
contract is replete with references to patent enforcement and litigation. And in an earlier
complaint, Hailo listed its business address as that of a law firm, and one of its members, 2S Ventures, has been associated with at least one entity that has filed over 20
lawsuits (login req.), a typical litigation pattern for a patent troll.
Whether or not Hailo is a practicing company, these are weak patents that deserve serious challenge. Sadly, that’s unlikely to happen, which is why stupid patents like these should never have issued in the first place.
Who Defends Your Data in Brazil? Second Annual Report Shows Improvement in Telecommunications Privacy
(Thu, 27 Apr 2017)
Translation by: Ana Luiza Araujo
Today, InternetLab, one of Brazil's leading independent Internet policy research centers, released its 2017 report on local telecommunications companies and how they handle their customers' private information. "Quem defende seus dados?" ("Who defends your data?") seeks to encourage companies to compete for users by showing which ones commit to protecting their customers' privacy and data. To that end, InternetLab evaluated the policies of the most important Brazilian telecommunications companies to assess their commitment to user privacy when the state requests their customers' personal information.
This report is part of a South American initiative by the continent's leading digital rights groups to shed light on Internet policy practices in the region, modeled on EFF's annual "Who Has Your Back" report. Last week, Paraguay's TEDIC and Chile's Derechos Digitales released their respective reports. Digital rights groups in Colombia, Mexico, and Argentina will publish similar studies soon.
InternetLab selected the Internet providers that, according to data published by ANATEL (the National Telecommunications Agency) in October 2016, account for at least 10% of all Internet access in Brazil, whether via broadband or mobile telephony. "Quem defende seus dados?" thus covers a set of companies responsible for 90% of Brazil's Internet connections: NET, Oi, and Vivo (broadband) and Claro, Oi, TIM, and Vivo (mobile Internet). Together, these companies' records hold intimate information about the movements and relationships of nearly every citizen in the country.
InternetLab developed its own methodology to capture Brazil's social and legal specificities, focusing on (1) public commitment to compliance with the law; (2) adoption of pro-user practices and policies; and (3) transparency about practices and policies. The report promotes transparency and best practices in the field of privacy and data protection, empowering Internet users by educating them about their choices as consumers.
Each company was evaluated in six categories:
Information on data processing: Does the Internet provider give clear and complete information about data collection, use, storage, processing, and protection?
Information on the conditions for handing data to state agents: Does the Internet provider promise to hand over subscriber records and connection logs only under a judicial order, and subscriber records, upon request, only to competent administrative authorities?
Defense of user privacy in the courts: Has the Internet provider judicially challenged abusive data requests or legislation it considers to invade its users' privacy?
Pro-privacy public positions: Has the Internet provider taken public positions on bills and public policies that affect user privacy, defending provisions that improve the protection of this right?
Transparency report on data requests: Does the company publish transparency reports stating how many times it received data requests from state authorities and how many times it complied?
User notification: Does the company notify users when it receives requests for their data?
Details on each category are available on the website: http://quemdefendeseusdados.org.br/
Below is the ranking of the Brazilian telecommunications companies:
Since InternetLab's first report, there have been signs of improvement. This year, Vivo was the only company to receive a full star for informing its customers about data protection practices, and also for publishing a transparency report. These were the first full stars in those categories. In addition, InternetLab gave full stars to Claro, Oi, and TIM for fighting for their users' rights in the courts; last year, only TIM had earned the full star. The mobile divisions of Vivo and TIM vied for first place, both with 3 ¾ stars.
However, in 2017 no company received a full star for committing to disclose personal data and connection logs only in the face of a judicial order or, in the case of subscriber data, upon a request from the competent administrative authorities. Last year, InternetLab had given full stars to two companies in the earlier version of this category. And once again, no company earned credit for notifying its customers of government data requests.
Despite unquestionable progress, there is still significant room for improvement. InternetLab invites the companies to develop privacy policies so that users can understand how their personal data is processed, as required by the Marco Civil da Internet, and how Internet providers handle information demands from the government. InternetLab also encourages the companies to use the "press rooms" on their websites to list their actions in defense of privacy and data protection in the courts and in public debates. Finally, InternetLab also encourages companies to publish transparency reports and to adopt user notification practices.