Leaks Show Europe's Attempts to Fix the Copyright Directive Are Failing
(Fri, 16 Nov 2018)
The EU’s “Copyright in the Digital Single Market Directive” is closer than ever to becoming law in 28 European countries, and the deep structural flaws in its most controversial
clauses have never been more evident.
Some background: the European Union had long planned on introducing a new copyright directive in 2018, updating the previous directive from 2001. The EU's experts weighed a number of
proposals, producing official recommendations on what should (and shouldn't) be included in the new directive, and meeting with stakeholders to draft language suitable for adoption
into the EU member states' national laws.
Two proposals were firmly rejected by the EU's experts: Article 11, which would limit who could link to news articles and under which circumstances; and Article 13, which would force
online platforms to censor their users' text, video, audio, code, still images, etc., based on a crowdsourced database of allegedly copyrighted works.
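To make the mechanism concrete, here is a minimal sketch of the kind of upload filter Article 13 contemplates. It is a deliberate simplification: real filters use fuzzy perceptual fingerprinting (which introduces both false negatives and false positives), not exact hashes, and the blocklist contents here are entirely hypothetical.

```python
import hashlib

# Hypothetical blocklist: hashes of works claimed by rightsholders.
# Note there is no verification that the claimants actually hold
# the rights -- the database is crowdsourced from the claims themselves.
claimed_works = {
    hashlib.sha256(b"some copyrighted clip").hexdigest(),
}

def allowed(upload: bytes) -> bool:
    """Return False when the upload matches a claimed work."""
    return hashlib.sha256(upload).hexdigest() not in claimed_works

print(allowed(b"original home video"))    # True: no claim matches
print(allowed(b"some copyrighted clip"))  # False: matches a claim
```

Even in this toy form, the design problem is visible: whatever lands in `claimed_works` gets blocked automatically, so a false or abusive claim censors lawful speech with no human review.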
But despite the EU's expert advice, these clauses were re-introduced at the last minute, at a stage in the directive's progress where they would be unlikely to receive scrutiny or
debate. Thankfully, after news of the articles spread across the Internet, Europe’s own voters took action and one million Europeans wrote to their MEPs to demand additional debate.
When that debate took place in September, a divided opposition to the proposals
allowed them to continue on to the next phase.
Now, the directive is in the final leg of its journey into law: the "trilogues," where the national governments of Europe negotiate with the EU's officials
to produce a final draft that will be presented to the Parliament for a vote.
The trilogues over the new directive are the first in EU history where the public are allowed some insight into the process, thanks to a European Court of Justice ruling that allows
members of the European Parliament to publicly disclose the details of the trilogues. German Pirate Party MEP Julia Reda has been publishing regular updates from behind the trilogues' closed doors.
It's anything but an orderly process. A change in
the Italian government prompted the country to withdraw its support for the directive. Together with those nations that were already unsure of the articles, this means that
there are enough opposing countries to kill the directive. However, the opposition remains divided over tactics, which means the directive is still proceeding through the trilogues.
The latest news is a leaked set of proposed revisions to the
directive, aimed at correcting the extraordinarily sloppy drafting of Articles 11 and 13.
These revisions are a mixed bag. In a few cases, they bring much-needed clarity to the
proposals, but in other cases, they actually worsen the proposals—for example, the existing language holds out the possibility that platforms could avoid using automated copyright
filters (which are viewed as a recipe for disaster by the
world's leading computer scientists, including the inventors of the web and the Internet's core technologies). The proposed clarification eliminates that possibility.
To get a sense of how not-ready-for-action Articles 11 and 13 are in their current form, or with the proposed revisions from the trilogues, have a look at the proposals from
the Don't Wreck the Net coalition, which combines civil society groups and a variety of small and large platforms from
the US and the EU, who have produced their own list of defects in the directive that must be corrected before anyone can figure out what the articles mean, let alone try to obey them.
Here are a few:
Make it explicit that existing liability protections, such as those in the E-Commerce Directive, remain in place even under Article 13.
Clearly define what is meant by “appropriate and proportionate,” as it provides absolutely no guidance to service providers and is left wide open for litigation and abuse.
Clarify which “service providers” Article 13 applies to in much more detail. This includes a clear definition of “public access to large amounts of works.” What is “large”?
There should be clear and significant penalties for providing false reports of infringement.
Copyright holders should be required to help platforms identify specific cases of infringement to be addressed, rather than requiring service providers to police every corner of their platforms.
There need to be clear exceptions for sites that make a good faith effort to comply, but that inadvertently allow some infringement to slip through on their platforms.
There should be required transparency reports on how Article 13 is being used, including reports on abusive claims of infringement.
We're disappointed to see how little progress the trilogues have made in the months since they disappeared behind their closed doors. The
proponents of Articles 11 and 13 have had years to do their homework and draft fit-for-purpose rules that can be parsed by governments, companies, and users, but instead they've
smuggled a hastily drafted, nebulous pair of dangerous proposals into a law that will profoundly affect the digital lives of more than 500 million Europeans. The lack of progress
since suggests that the forces pushing for Articles 11 and 13 have no idea how to fix the unfixable, and are prepared to simply foist them on the EU, warts and all.
EFF and MuckRock Release Records and Data from 200 Law Enforcement Agencies' Automated License Plate Reader Programs
(Thu, 15 Nov 2018)
EFF and MuckRock have filed hundreds of public records requests with law enforcement agencies around the country to reveal how data collected from automated license plate readers
(ALPR) is used to track the travel patterns of drivers. We focused exclusively on departments that contract with surveillance vendor Vigilant Solutions to share data between their systems.
Today we are releasing records obtained from 200 agencies, accounting for more than 2.5 billion license plate scans in 2016 and 2017. This data is collected regardless of whether the vehicle or its owner or driver is suspected of involvement in a crime. In fact, the records show that 99.5% of the license plates scanned were not under suspicion at the time they were collected.
On average, agencies are sharing data with a minimum of 160 other agencies through Vigilant Solutions’ LEARN system, though many agencies are sharing data with over 800 separate agencies.
Click below to explore EFF and MuckRock’s dataset and learn how much data these agencies are collecting and how they are sharing it. We've made
the information searchable and downloadable as a CSV file. You can also read the source documents on DocumentCloud or track our ongoing requests.
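Since the dataset is released as a CSV, the headline statistics can be reproduced with a few lines of Python. The rows below are hypothetical stand-ins shaped like the released data (the real column names may differ); the point is how the 99.5% figure falls out of total scans versus hot-list hits.

```python
import csv
import io

# Hypothetical rows shaped like the released dataset: one row per
# agency, with total plate scans ("detections") and hot-list "hits".
sample = """agency,detections,hits
Agency A,1200000,6000
Agency B,800000,4000
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total = sum(int(r["detections"]) for r in rows)
hits = sum(int(r["hits"]) for r in rows)
print(f"{total:,} scans; {100 * (1 - hits / total):.1f}% not on any hot list")
```

With these toy numbers, 10,000 hits out of 2,000,000 scans leaves 99.5% of plates unmatched to any hot list, mirroring the proportion in the real records.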
Read the Report and Explore the Data
DATA DRIVEN: HOW COPS ARE COLLECTING AND SHARING OUR TRAVEL PATTERNS USING AUTOMATED LICENSE PLATE READERS
Honoring the 2018 Pioneer Award Winners and John Perry Barlow
(Wed, 14 Nov 2018)
EFF’s annual Pioneer Awards Ceremony recognizes extraordinary individuals for their commitment and leadership in extending
freedom and innovation on the electronic frontier. At this year’s event held on September 27 in San Francisco, EFF rededicated the Pioneer Awards to EFF co-founder and Grateful Dead
lyricist John Perry Barlow. Barlow’s commitment to online freedom was commemorated by dubbing the Pioneer Awards statuette the “Barlow.” EFF welcomed keynote speaker Daniel Ellsberg,
known for his work in releasing the Pentagon papers, to help award the very first Barlows. This year's honorees were fair use champion Stephanie Lenz, European Digital Rights leader
Joe McNamee, and groundbreaking content moderation researcher Sarah T. Roberts.
Read the transcript of the full 2018 Pioneer Awards Ceremony here.
The evening kicked off with EFF Executive Director Cindy Cohn, who had the honor of renaming the Pioneer Award the “Barlow” to pay tribute to Barlow’s work creating EFF and his role
in establishing the movement for Internet freedom. “Barlow was one of the first people to see the potential of the Internet as a place of freedom where voices long silenced could find
an audience and people could connect, regardless of physical distance,” Cohn said. (If you’re an award winner and reading this,
you’ll be happy to know she also gave the green light to previous award winners to retroactively call their awards the Barlow.)
Cindy Cohn dedicates the Pioneer Awards to EFF co-founder John Perry Barlow.
Cohn introduced two of Barlow’s daughters, Anna and Amelia Barlow (known affectionately as the Barlowettes), to the stage to share some words. Anna paid tribute to her father’s talents and
his ability to weave two worlds together, and shared a video of him speaking about the necessity of the physical world to provide a framework for love, illustrating his theory of life.
Amelia centered on gratitude, sharing funny anecdotes about her father’s ancestral connections to early America. She emphasized the importance of the perceptivity, respect, and wisdom needed in the information era to carry her father’s legacy forward, and in an emotional moment told the room, “I really feel like those people are you.” She continued, “Maybe we all will be guided by the wisdom of those who have come before us and not forget what is true as a means of seeking a beautiful future with the long view, the long game, and all beings in mind.”
Amelia Barlow addresses the audience during the 'Barlow' dedication.
Cohn introduced Daniel Ellsberg, Barlow’s friend and board member of the Freedom of the Press Foundation. Ellsberg’s release of the Pentagon papers in 1971 exposed U.S. criminality in
Vietnam at great personal risk to himself, and he has since tirelessly supported whistleblowers and worked to shed light on government surveillance. Cohn highlighted Ellsberg’s
understanding of how national security can affect the psyche of a government official as secrecy becomes more than a job, but an identity. This makes it even more difficult for
whistleblowers to step forward. “I can honestly say that without you as a role model for breaking out of the secrecy cult, the NSA's mass surveillance programs would still likely be a
secret to this day,” she said as she thanked him for his service.
Daniel Ellsberg and Cindy Cohn
Ellsberg took the stage to a standing ovation, and shared his impressions of the Supreme Court hearings. “I believed Anita Hill then. I believe Christine Blasey Ford now,” he said. He
told the story of how he first met Barlow and Barlow called him a “revelationary,” a term, he mused humorously, that was “a lot better than ‘whistleblower.’”
A high point was hearing Ellsberg call the Pioneer Awards the most exciting day of his life, as he was finally able to meet Chelsea Manning, who was in attendance that evening. He
joked he had missed her many times, once seeing the back of her head. “But I waited 39 years for her to appear in this world,” he said before continuing on to detail the significance
of the documents she leaked. He went on to praise both Manning and Edward Snowden: “I have often said that I identify more with them as revelationaries than with any other people in the world.”
"Here's the great thing about the choice to become an advocate: anyone can make it.”
EFF Legal Director Corynne McSherry introduced honoree Stephanie Lenz. Lenz became a fair use hero when, with the assistance of EFF, she sued Universal Music for sending her a takedown notice under the Digital Millennium Copyright Act for a 29-second YouTube video of her kids dancing to Prince’s song “Let’s Go Crazy,” even though her video
was legitimate fair use. The fight has taken ten years to win, but Lenz never gave up. “Stephanie Lenz is not most people. She decided to take another course. She decided to fight
back,” said McSherry. In doing so, Lenz became a voice for thousands of users who have had their work taken down unfairly – and she made history. Lenz encouraged the audience to all
become activists, “I could've chosen silence. I chose speech. Here's the great thing about the choice to become an advocate: anyone can make it.”
Corynne McSherry and Stephanie Lenz, winner for her fight for fair use.
Danny O’Brien introduced the next honoree, Joe McNamee, and humorously praised his humility, stating that McNamee only agreed to accept the award if he could do so on behalf of his
colleagues. True to his word, O’Brien presented the award to McNamee and the European community. “Anyone know who that guy is?” quipped McNamee.
Danny O'Brien and Joe McNamee, winner for his work with European Digital Rights.
McNamee is Executive Director of European Digital Rights, Europe’s association for organizations supporting online freedom. From his home base of Brussels, he pioneered digital rights
advocacy in Europe with his work in net neutrality and General Data Protection Regulation or GDPR, and notably, was a centralizing force for diverse groups from politicians to
activists to come together. McNamee shared his concern for the copyright directive and the problems that arise from companies implementing policies on a global level. He also asked
the audience to go home and watch the video of Taren Stinebrickner-Kauffman speaking on the banality of evil
during her acceptance speech for Aaron Swartz’s posthumous Pioneer Award, “And when you watch that video, be outraged that it's truer today than it was a few years ago. Be proud that
you're part of a community that does not accept this banality. And be energized by your outrage to fight the good fight.”
EFF Director for International Freedom of Expression Jillian York presented the evening's last award over live video from Thessaloniki, Greece, to Sarah T. Roberts, who was there keynoting a conference on content moderation. Roberts spoke of the hidden labor and experiences of content moderation workers – work that is largely invisible – and hoped the
award would help “elevate the profile and elevate the experience to these workers that have been hidden for so long.” Roberts’ research on content moderation has been vital in
documenting and informing how social media companies use low-wage laborers to the detriment of free expression and the health and well-being of the screeners.
Jillian C. York and Sarah T. Roberts, winner for her commercial content moderation work, LIVE FROM GREECE!
We are deeply grateful to Anna Barlow, Amelia Barlow, Daniel Ellsberg, and all of this year’s honorees for their contributions in the digital world and far beyond. This was truly an
ideal group to rededicate the Pioneer Awards Ceremony to a visionary like John Perry Barlow.
Awarded every year since 1992, EFF’s Pioneer Awards Ceremony recognizes the leaders who are extending freedom and innovation on the electronic frontier. Honorees are nominated by the
public. Previous honorees have included Aaron Swartz, Douglas Engelbart, Richard Stallman, and Anita Borg. Many thanks to the sponsors of the 2018 Pioneer Awards Ceremony: Anonyome
Labs; Dropbox; Gandi.net; Ridder, Costa & Johnstone LLP; and Ron Reed. If you or your company are interested in learning more about sponsorship, please contact email@example.com.
The 2018 Barlows revealed!
The Supreme Court Should Confirm, Again, that Abstract Software Patents Don’t Need a Trial to be Proved Invalid
(Wed, 14 Nov 2018)
This year, we celebrated the fourth anniversary of the
Supreme Court’s landmark ruling in Alice v. CLS Bank. Alice made clear that generic computers do not make
abstract ideas eligible for patent protection. Following the decision, district courts across the country started rejecting ineligible abstract patents at early stages of litigation.
That has enabled independent software developers and small businesses to fight meritless infringement allegations without taking on the staggering costs and risks of patent
litigation. In other words, Alice has made the patent system better at doing what it is supposed to do: promote technological innovation and economic growth.
Unfortunately, Alice’s pro-innovation effects are already in danger. As we’ve explained before, the Federal Circuit’s decision in Berkheimer v. HP Inc. turns Alice upside-down by treating the legal question of
patent eligibility as a factual question based on the patent owner’s uncorroborated assertions. That will just make patent litigation take longer and cost more because factual
questions generally require expensive discovery and trial before they can be resolved.
Even worse, Berkheimer gives patent owners free rein to actually create factual questions because of its emphasis on a patent’s specification. The specification is the part
of the patent that describes the invention and the background state of the art. The Patent Office generally does not have the time or resources to verify whether
every statement in the specification is accurate. This means that, in effect, the Berkheimer ruling will allow patent owners to create factual disputes and defeat summary
judgment by inserting convenient “facts” into their patent applications.
If permitted to stand, the decision will embolden trolls with software patents to use the ruinous cost of litigation to extract settlement payments for invalid patents—just as they
did before Alice. Unfortunately, district courts and patent examiners are already relying on Berkheimer to allow patents that should be canceled under Alice
to survive in litigation or issue as granted patents. Berkheimer is good news for patent trolls, but it’s bad news for those most vulnerable to abusive litigation
threats—software start-ups, developers, and users.
Now that the Federal Circuit has declined rehearing en banc (with all active judges participating in the decision), only the Supreme Court can prevent Berkheimer’s
errors from turning Alice into a dead letter. That’s why EFF, together with the R Street Institute, has
filed an amicus brief [PDF] urging the Supreme Court to grant certiorari, and fix
yet another flawed Federal Circuit decision.
Our brief explains that Berkheimer is wrong on the law and bad for innovation. First, it exempts patent owners from the rules of federal court litigation by permitting them
to rely on uncorroborated statements in a patent specification to avoid speedy judgment under Alice. Second, it conflicts with Supreme Court precedent, which has never
required factfinding when deciding the legal question of patent eligibility. Third, it threatens to undo the innovation, creativity, and economic growth that Alice has made
possible, especially in the software industry,
because Alice empowers courts to decide patent eligibility without factfinding or trial.
We hope the Supreme Court grants certiorari and confirms that patent eligibility is a legal question that courts can answer, just as it did in Alice.
Federal Researchers Complete Second Round of Problematic Tattoo Recognition Experiments
(Tue, 13 Nov 2018)
Despite igniting controversy over ethical lapses and the threat to civil liberties posed by its tattoo recognition experiments the first time around, the National
Institute of Standards and Technology (NIST) recently completed its second major project evaluating software designed to reveal who we are and potentially what we believe based
on our body art.
Unsurprisingly, these experiments continue to be problematic.
The latest experiment was called Tatt-E, which is short for “Tattoo Recognition Technology Evaluation.” Using tattoo images collected by state and local law enforcement from
incarcerated people, NIST tested algorithms created by the state-backed Chinese Academy of Sciences and by MorphoTrak, a subsidiary of the French corporation Idemia.
According to the Tatt-E results, which were published in
October, the best-performing tattoo recognition algorithm, from MorphoTrak, had 67.9% accuracy in matching separate images of a tattoo to each other on the first try.
NIST further tested the algorithms on 10,000 images downloaded from Flickr users by Singaporean researchers, even though this was not part of the original scope of Tatt-E. These tests showed significantly improved accuracy, as high as 99%.
Tattoo recognition technology is similar to other biometric technologies such as face recognition or iris scanning: an
algorithm analyzes an image of a tattoo and attempts to match it to a similar tattoo or image in a database. But unlike other forms of biometrics, tattoos are not only a physical
feature but a form of expression, whether it is a cross, a portrait of a family member, or the logo for someone’s favorite band.
Since 2014, the FBI has sponsored NIST’s tattoo recognition project to
advance this emerging technology. In 2016, an EFF investigation
revealed that NIST had skipped over key ethical oversight processes and privacy protections with its earlier experiments called Tatt-C, which is short for the Tattoo Recognition Challenge. This
experiment promoted using tattoo recognition technology to investigate people’s beliefs and memberships, including their religion. The more recent Tatt-E, however, did not test for “tattoo similarity”—the ability to
match tattoos that are similar in theme and design, but belong to different people.
A database of images captured from incarcerated people was provided to third parties—including private corporations and academic institutions—with little regard for the privacy
implications. After EFF called out NIST, the agency retroactively altered its presentations and reports, including eliminating problematic information and replacing images of inmate
tattoos in a “Best Practices” poster with topless photos of a researcher with marker drawn all over his body. The agency also pledged to implement new oversight procedures.
However, transparency is lacking. Last November, EFF filed suit against NIST and the FBI
after the agencies failed to provide records in response to our Freedom of Information Act requests. So far the records we have freed have revealed how the FBI is seeking to develop
a mobile app that can recognize the meaning of tattoos
and the absurd methods NIST
used to adjust its “Best Practices” documents. Our lawsuit continues, as the agency continues to withhold records and has redacted much of what it has released.
Tatt-E was the latest set of experiments conducted by NIST. Unlike Tatt-C, which involved 19 entities, only two entities chose to participate in Tatt-E, each of
which has foreign ties. Both the Chinese Academy of Sciences and MorphoTrak submitted six algorithms for testing against a dataset of tattoo images provided by the Michigan State
Police and the Pinellas County Sheriff’s Office in Florida.
MorphoTrak’s algorithms significantly outperformed the Chinese Academy of Sciences’, which may not be surprising since the company’s software has been used with the Michigan
State Police’s tattoo database for more
than eight years. Its best algorithm could return a positive match within the first 10 images 72.1% of the time, and that number climbed to 84.8% if researchers
cropped the source image down to just the tattoo. The accuracy in the first 10 images increased to 95% if they used the infrared spectrum. In addition, the Chinese Academy of Sciences' algorithms performed poorly with tattoos on dark skin, although skin tone did not make much of a difference for MorphoTrak’s software.
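The figures NIST reports (67.9% on the first try, 72.1% within the first 10 candidates) are rank-k hit rates, a standard metric in biometric retrieval. A minimal sketch of how such a metric is computed, using entirely made-up toy data:

```python
def rank_k_accuracy(results, k):
    """Fraction of queries whose true match appears among the top k
    candidates. `results` maps each query to the 1-based rank at which
    its correct match was returned, or None if it was never returned."""
    hits = sum(1 for rank in results.values()
               if rank is not None and rank <= k)
    return hits / len(results)

# Toy data: the rank at which each query's matching tattoo came back.
toy = {"q1": 1, "q2": 3, "q3": None, "q4": 12}
print(rank_k_accuracy(toy, 1))   # 0.25: only q1 matched on the first try
print(rank_k_accuracy(toy, 10))  # 0.5: q1 and q2 within the top 10
```

This is why the reported accuracy climbs as k grows: the correct match only has to appear somewhere in a longer candidate list.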
One of the more concerning flaws in the research is that NIST did not document “false positives.” This is when the software says it has matched two tattoos, but the match turns
out to be in error. Although this kind of misidentification has been a perpetual problem with face
recognition, the researchers felt that it was not useful to the study. In fact, they suggest that false positives may have “investigative utility in operations.” While they
don’t explain exactly what this use case might be, from other documents produced by NIST we can infer they are likely discussing how similar tattoos on different people could
establish connections among their wearers.
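Measuring what NIST omitted is not hard, which makes the omission more striking. Given ground truth for which comparisons truly involve the same tattoo, the false positive rate is a few lines of code; the data below is a toy illustration, not NIST's.

```python
def false_positive_rate(pairs):
    """pairs: (algorithm_declared_match, actually_same_tattoo) tuples.
    FPR = wrongly declared matches / all truly non-matching pairs."""
    false_pos = sum(1 for declared, same in pairs if declared and not same)
    negatives = sum(1 for _, same in pairs if not same)
    return false_pos / negatives if negatives else 0.0

# Toy comparisons; a real evaluation would need ground truth for
# every candidate pair the algorithms returned.
pairs_toy = [(True, True), (True, False), (False, False), (False, False)]
print(round(false_positive_rate(pairs_toy), 3))  # 0.333
```

A system can post a high rank-10 hit rate and still misidentify people at a troubling rate; without this number, the Tatt-E results say nothing about how often the software would link the wrong person to a tattoo.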
While Tatt-E was supposed to be limited to images collected by law enforcement, NIST went a step further and used the Nanyang Technological University Tattoo Database, which was
compiled from images taken from Flickr users, for further research. With this dataset, the Chinese Academy of Sciences' algorithms performed better, hitting as high as 99.3% accuracy.
No matter the accuracy in identification, tattoo recognition raises serious concerns for our freedoms. As we’ve already seen, improperly interpreted tattoos have been used to
brand people as gang members and fast track them for
deportation. EFF urges NIST to make Tatt-E its last experiment with this technology.
NIST Tattoo Recognition Technology Program FOIA
EFF, Human Rights Watch, and Over 70 Civil Society Groups Ask Mark Zuckerberg to Provide All Users with Mechanism to Appeal Content Censorship on Facebook
(Tue, 13 Nov 2018)
World’s Freedom of Expression Is In Your Hands, Groups Tell CEO
San Francisco—The Electronic Frontier Foundation (EFF) and more than 70 human and digital rights groups called on Mark Zuckerberg today to add real transparency and accountability to
Facebook’s content removal process. Specifically, the groups demand that Facebook clearly explain how much content it removes, both rightly and wrongly, and provide all users
with a fair and timely method to appeal removals and get their content back up.
While Facebook is under enormous—and still mounting—pressure to remove material that is truly threatening, without transparency, fairness, and processes to identify and correct
mistakes, Facebook’s content takedown policies too often backfire and silence the very people that should have their voices heard on the platform.
Politicians, museums, celebrities, and other high profile groups and individuals whose improperly removed content can garner
media attention seem to have little trouble reaching Facebook to have content restored—they sometimes even receive an apology. But the average user? Not so much. Facebook only allows
people to appeal content decisions in a limited set of circumstances, and in many cases, users have absolutely no option to appeal. Onlinecensorship.org, an EFF project for users to report takedown notices, has collected reports of hundreds of unjustified takedown incidents where
appeals were unavailable. For most users, content Facebook removes is rarely restored, and some are banned from the platform for no good reason.
EFF, Article 19, the Center for Democracy and Technology, and Ranking Digital Rights wrote directly to
Mark Zuckerberg today demanding that Facebook implement common sense standards so that average users can
easily appeal content moderation decisions, receive prompt replies and timely review by a human or humans, and have the opportunity to present evidence during the review process. The
letter was co-signed by more than 70 human rights, digital rights, and civil liberties organizations from South America, Europe, the Middle East, Asia, Africa, and the U.S.
“You shouldn’t have to be famous or make headlines to get Facebook to respond to bad content moderation decisions, but that’s exactly what’s happening,” said EFF Director for
International Freedom of Expression Jillian York. “Mark Zuckerberg created a company that’s the world’s premier communications platform. He has a responsibility to all users, not just
those who can make the most noise and potentially make the company look bad.”
In addition to implementing a meaningful appeals process, EFF and partners called on Mr. Zuckerberg to issue transparency reports on community standards
enforcement that include a breakdown of the type of content that has been restricted, data on how the content moderation actions were initiated, and the number of decisions that were
appealed and found to have been made in error.
“Facebook is way behind other platforms when it comes to transparency and accountability in content censorship decisions,” said EFF Senior Information Security Counsel Nate Cardozo.
“We’re asking Mr. Zuckerberg to implement the Santa Clara Principles, and release actual numbers detailing how often
Facebook removes content—and how often it does so incorrectly.”
“We know that content moderation policies are being unevenly applied, and an enormous amount of content is being removed improperly each week. But we don’t have numbers or data that
can tell us how big the problem is, what content is affected the most, and how appeals were dealt with,” said Cardozo. “Mr. Zuckerberg should make transparency about these decisions,
which affect millions of people around the world, a priority at Facebook.”
EFF to U.S. Department of Commerce: Protect Consumer Data Privacy
(Tue, 13 Nov 2018)
On Friday, November 9, 2018, EFF submitted a letter in
response to the U.S. Department of Commerce's request for comment on "Developing the Administration's Approach to Consumer Privacy," urging the agency to consider any future policy
proposals in a users' rights framework. We emphasized five concrete recommendations for any Administration policy proposal or proposed legislation regarding the data privacy rights of consumers:
Requiring opt-in consent to online data gathering
Giving users a “right to know” about data gathering and sharing
Giving users a right to data portability
Imposing requirements on companies for when customer data is breached
Requiring businesses that collect personal data directly from consumers to serve as “information fiduciaries,” similar to the duty of care required of certified professionals such as doctors and lawyers
But, to be clear, any new federal data privacy regulation or statute must not preempt stronger state data privacy rules. For example, on June 28, California
enacted the California Consumer Privacy Act (A.B. 375) (“CCPA”). Though
there are other examples, the CCPA is the most comprehensive state-based data privacy law, and while it
could be improved, its swift passage highlights how state legislators
are often in the best position to respond to the needs of their constituents. While baseline federal privacy legislation would benefit consumers across the country, any federal
privacy regulation or legislation that preempts and supplants state action would actually hurt consumers and prevent states from protecting the needs of their constituents.
It is also important that any new regulations must be judicious and narrowly tailored, avoiding tech mandates and expensive burdens that would undermine competition—already a problem
in some tech spaces—or infringe on First Amendment rights. To accomplish that, policymakers must start by consulting with technologists as well as lawyers. Also, one size does not
fit all: smaller entities should be exempted from some data privacy rules.
Principles for Corporate Platforms in the Gig Economy
(Wed, 07 Nov 2018)
From ride-hailing platforms like Lyft and Uber, to sites like Airbnb, FlipKey, or VRBO that enable people to rent out properties, the so-called sharing or gig economy is expanding and
disrupting industries from hotels to taxis. Cities across the U.S.—and the rest of the world—are facing a daunting array of
regulatory challenges in responding to their growth.
Earlier this year, a new class of companies prompted controversies by flooding several cities with personal scooters available on a short-term rental basis. Advances in battery
technology recently made the business model of companies like Bird and Lime profitable. In the absence of regulation, the companies deployed thousands of scooters to selected cities across the world. A resulting backlash included municipal regulation, a recent class action lawsuit, and even acts of vandalism ranging from burying scooters at sea to lighting them on fire.
The ongoing saga of scooter-sharing services suggests a foreseeable pattern: future advances in various technologies will generate opportunities for innovation. Users will clamor for
services that platforms will rush to offer, often without a recognition of the legal or economic externalities seemingly exogenous to their business model. Recognizing how this
pattern is poised to recur across contexts, we’re publishing a few suggested guidelines for companies creating the gig economy and the cities increasingly facing pressure to regulate them.
Location privacy and user security
Many platforms that enable users to share vehicles, from cars to scooters, collect precise location data when the vehicles are picked up, dropped off, and also continuously during
their operation. This data collection carries a series of risks.
The most obvious among them is a threat to user privacy. If a user of a scooter service rides one to a mental health provider, or a reproductive services clinic, collecting their trip
data and linking it to their other trip history or even an “anonymous” unique identifier could be personally invasive.
Beyond privacy, overbroad (or overly specific) data collection can also compromise user security, especially to the extent rider trip location data is either disclosed by a company or
stolen by malicious actors online. For instance, disclosure of the locations at which a ride-hailing service user is most frequently picked up could conceivably place their life in
danger, if they had been the target of domestic violence or private harassment.
Protective policies available to platforms
We recommend that platforms minimize the scope of their data collection, and also seek opportunities to transform data sets from individual data points into aggregate statistics.
Limiting the specificity of data that is collected presents a further opportunity. For instance, a ride-hailing company might store only the zip code of the pick-up and destination of
a ride once the ride is complete, rather than the precise pick-up address or destination. On the other hand, a data-set may be re-identifiable despite attempts at obfuscation, so this
is not an effective strategy standing alone.
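As a concrete illustration of the limited-specificity idea above, here is a minimal sketch of coarsening a pick-up location before storage. The coordinates, precision level, and function name are hypothetical, not any platform's actual practice:

```python
# Minimal sketch: coarsen a precise GPS fix before storing it.
# Two decimal places is roughly 1 km of precision near the equator.
def coarsen(lat, lon, decimals=2):
    """Return a rounded (lat, lon) pair suitable for aggregate statistics."""
    return round(lat, decimals), round(lon, decimals)

# Hypothetical precise pick-up point in San Francisco.
pickup = (37.775123, -122.418456)
print(coarsen(*pickup))  # (37.78, -122.42)
```

As the paragraph above warns, coarsened locations can still be re-identifiable when combined with other trip data, so this technique complements, rather than replaces, limits on collection and retention.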
Beyond limiting what data platforms collect, companies should also work to prevent disclosure, in two predictable dimensions. First, when receiving government requests for
information, platforms should protect the rights of their users by insisting upon valid, narrow legal process with an opportunity for judicial review,
and by providing notice of disclosure to the subject user whenever possible.
Whatever data they do collect might be attractive to malicious actors such as hackers, organized crime, and even foreign state intelligence agencies. Accordingly, platforms should
also anticipate attacks and potential data breaches by taking affirmative steps to guard their systems against malicious attacks that could place sensitive and revealing user data at risk.
Opportunities for Cities
Some cities, like Los Angeles, are creating proactive regulatory structures to govern
the activities of gig economy platforms in their areas, and some companies are
emerging to serve their needs in managing data from multiple platforms. Just like private platforms that collect or retain user data, municipal regulators should consult
cybersecurity experts to ensure that their systems do not become honey pots that attract malicious actors and eventual data breaches.
Other ways platforms can protect privacy—or cities can insist that they do—include setting time limits beyond which they will not retain user data, and
specifying parameters governing the purposes for which any particular data may be used. For instance, a ride-hailing company could seek to maximize its revenue by disclosing users’
locations to third party advertisers for the purposes of serving them advertisements relevant to their precise location at any given moment, or instead decide to use location data
only for the purposes of determining which driver may be closest at the time a ride is requested.
Fast and Loose or Slow and Steady?
The history of the Internet is rife with examples of innovations that create opportunities that, in turn, displace established industries. On the one hand, we enthusiastically
encourage innovation and eagerly welcome new tools that can empower users. They reflect how code can effectively create law and policy, even absent a government action.
On the other hand, governments—including local and municipal governments—are within their rights to ensure that companies respect the rights of others, including non-users who might
share infrastructure with users of gig economy platforms. Where opportunities for technology meet burdens imposed by regulation, we encourage platforms to comply with those
regulations and seek their adjustment through formal democratic channels, rather than exploiting their ability to move faster than regulators can respond and risk alienating
regulators, policymakers, and residents.
Sometimes those channels can present opportunities for platform users to be engaged as participants in a regulatory process. In San Francisco, Airbnb mobilized hundreds of its
user-hosts in 2015 and 2016 to oppose a proposed
local regulation that would have capped short-term rentals at 60 days per year. While the controversial measure was ultimately defeated, the resulting policy framework was at least
the subject of deliberation and public debate—unlike the effectively lawless regime that prompted New York City to recently regulate
Uber & Lyft with caps on the number of vehicles, as well as new disclosure requirements.
These struggles are bound to proliferate going forward, as more and more cities address more and more externalities created by more and more platforms bringing services to market. By
keeping principles like user privacy, user security, and regulatory compliance in mind, companies can ensure that their future innovations generate more opportunity, and less backlash.
If A Pre-Trial Risk Assessment Tool Does Not Satisfy These Criteria, It Needs to Stay Out of the Courtroom
(Tue, 06 Nov 2018)
Algorithms should not decide who spends time in a California jail. But that’s exactly what will happen under S.B. 10, a new law slated to take effect
in October 2019. The law, which Governor Jerry Brown signed in September, requires the state’s criminal justice system to replace cash bail with an algorithmic pretrial risk
assessment. Each county in California must use some form of pretrial risk assessment to categorize every person arrested as a “low,” “medium,” or “high” risk of failing to appear for
court, or committing another crime that poses a risk to public safety. Under S.B. 10, if someone receives a “high” risk score, the person must be detained prior to arraignment, effectively placing crucial decisions about a person’s freedom into the hands of companies that make assessment tools.
Some see risk assessment tools as being more impartial than judges because they make determinations using algorithms. But that assumption ignores the
fact that algorithms, when not carefully calibrated, can cause the same sort
of discriminatory outcomes as existing systems that rely on human judgement—and even make new, unexpected errors. We doubt these
algorithmic tools are ready for prime time, and the state of California should not have embraced their use before establishing ways to scrutinize them for bias, fairness, and accuracy.
EFF in July joined more than a hundred advocacy groups to urge jurisdictions in California and across the country already
using these algorithmic tools to stop until they have considered the many risks and consequences of their use. Our concerns are now even more urgent in California, with less than a year to
implement S.B. 10. We urge the state to start working now to make sure that S.B. 10 does not reinforce
existing inequity in the criminal justice system, or even introduce new disparities.
This is not a merely theoretical concern. Researchers at Dartmouth College found in January that one widely used tool, COMPAS, incorrectly classified black
defendants as being at risk of committing a misdemeanor or felony within 2 years at a rate of 40%, versus 25.4% for white defendants.
There are ways to minimize bias and unfairness in pretrial risk assessment, but it requires proper guidance and oversight. S.B. 10 offers no guidance
for how counties should calculate risk levels. It also fails to lay out procedures to protect against unintentional, unfair, biased, or discriminatory outcomes.
The state’s Judicial Council is expected to post the first of its rules mandated by S.B. 10 for public comment within the
coming days. The state should release information—and soon—about the various algorithmic tools counties can consider, for public review. To date, we don’t even have a list of the
tools up for consideration across the state, let alone the information and data needed to assess them and safeguard against algorithmic bias.
We offer four key criteria that anyone using a pretrial risk assessment tool must satisfy to ensure that the tool reduces existing inequities in the
criminal justice system rather than reinforces them, and avoids introducing new disparities. Counties must engage the public in setting goals, assess whether the tools they are
considering use the right data for their communities, and ensure the tools are fair. They must also be transparent and open to regular independent audits and future correction.
Policymakers and the Public, Not Companies, Must Decide What A Tool Prioritizes
As the state considers which tools to recommend, the first step is to decide what its objective is. Is the goal to have fewer people in prisons? Is it
to cut down on unfairness and inequality? Is it both? How do you measure if the tool is working?
These are complex questions. It is, for example, possible to optimize an algorithm to maximize “true positives,” meaning to correctly identify those
who are likely to fail to appear, or to commit another dangerous crime if released. Optimizing an algorithm that way, however, also tends to increase the number of “false positives,”
meaning more people will be held in custody unnecessarily.
It’s also important to define what constitutes success. A system that recommends detention for everyone, after all, would have both a 100% true
positive rate and a 100% false positive rate—and would be horribly unjust. As Matthias Spielkamp wrote for the
MIT Technology Review: “What trade-offs should we make to ensure justice and lower the massive social costs of incarceration?”
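The trade-off described above can be made concrete with a toy calculation. The labels below are fabricated for illustration and do not come from any real assessment tool:

```python
# Toy illustration of the true-positive / false-positive trade-off.
# Label 1 = person would fail to appear (or reoffend); 0 = would not.
def rates(predictions, labels):
    """Return (true positive rate, false positive rate)."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    return tp / positives, fp / negatives

labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # fabricated outcomes

# "Detain everyone" catches every true risk, and jails every safe person.
print(rates([1] * 10, labels))  # (1.0, 1.0)
# "Release everyone" makes no false positives, and catches no one.
print(rates([0] * 10, labels))  # (0.0, 0.0)
```

Any real policy falls between these two extremes, which is why deciding which kind of error to tolerate is a policy choice rather than a purely technical one.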
Lawmakers, the courts, and the public—not the companies who make and sell algorithmic tools—should decide together what we want pretrial risk
assessment tools to prioritize and how to ensure that they are fair.
The Data and Assumptions Used to Develop the Algorithm Must Be Scrutinized
Part of the problem is that many of these pretrial risk assessment tools must be trained by examining existing data. But the assumptions a developer
makes when creating an assessment don’t always apply to the communities upon which they are used. For example, the dataset used to train a machine-learning algorithm might not be
representative of the community that will eventually use the risk assessment. If the risk assessment tool was developed with bad training data, i.e. it “learned” from bad data, it
will produce bad risk assessments.
How might the training data for a machine-learning algorithm be bad?
For example, the rate of re-arrest of released defendants could be used as a way to measure someone’s risk to public safety when building an algorithm.
But does the re-arrest rate actually tell us about risk to public safety? In fact, not all jurisdictions define
re-arrest in the same way. Some include only re-arrests that actually result in bail revocation, but some include traffic or misdemeanor
offenses that don’t truly reflect a risk to society.
Training data can also often be “gummed up by our own systemic biases.” Data collected by the Stanford Open Policing Project shows that officers’ own biases cause them to
stop black drivers at higher rates than white drivers and to ticket, search, and arrest black and Hispanic drivers during traffic stops more often than whites. Using a rate of arrest
that includes traffic offenses could therefore introduce more racial bias into the system, rather than reduce it.
Taking the time to clean datasets and carefully vet tools before implementation is necessary to protect against unfair, biased, or discriminatory outcomes.
Fairness and Bias Must Be Considered and Corrected
Beyond examining the training data algorithms use, it’s also important to understand how the algorithm makes its decisions. The fairness of any
algorithmic system should be defined and reviewed before implementation as well as throughout the system’s use. Does an algorithm treat all groups of people the same? Is the system
optimizing for fairness, for public safety, for equal treatment, or for the most efficient allocation of resources?
Biased decision-making is a trap that both simple and complicated algorithms can fall into. Even a tool using carefully vetted data can produce
unfair assessments if it focuses too narrowly on a single measure of success. (See, for example, Goodhart's Law.) Algorithmic systems used in
justice, education policy, insurance, and lending have
exhibited these problems.
It’s important to note that simply eliminating race or gender data will not make a tool fair because of the way machine learning algorithms process
information. Sometimes machine learning algorithms will make prejudiced or biased decisions even if data on demographic categories is deliberately excluded—a phenomenon called
“omitted variable bias” in statistics. For example, if a system is asked to predict a person’s risk to public safety, but lacks information about their access to supportive resources,
it could improperly learn to use their postal code as a way to determine their threat to public safety.
In this way, risk assessment can use factors that appear neutral—such as a person’s income level—but produce the same unequal results as if they had
used prohibited factors such as race or sex.
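A toy demonstration of this proxy effect follows; all records, postal codes, and labels below are fabricated for illustration:

```python
# Fabricated records: (postal_code, protected_group, past_label).
# The protected group is shown only to make the disparity visible;
# the "model" below never reads it.
records = [
    ("94100", "A", 1), ("94100", "A", 1), ("94100", "A", 0),
    ("94200", "B", 0), ("94200", "B", 0), ("94200", "B", 1),
]

def majority_label(code):
    """Predict the majority past label observed for a postal code."""
    labels = [y for c, _, y in records if c == code]
    return int(sum(labels) * 2 >= len(labels))

# Postal code alone reproduces the group-level disparity exactly:
print(majority_label("94100"))  # 1: everyone from group A's area is flagged
print(majority_label("94200"))  # 0: everyone from group B's area is cleared
```

Even though the protected attribute was excluded, the predictions split perfectly along group lines because postal code encodes the same information.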
Automated assessments can also fail to take important, but less obvious, information about people’s lives into account—reducing people to the sum of
their data and ignoring their humanity. A risk assessment may not, for example, consider familial relationships and responsibilities. A person who is the primary
caregiver for a sick relative may be at significantly higher risk of failing to appear in court without purposely absconding. If these familial relationships are not considered, then
the system may conflate such life circumstances with a risk of flight—leading to inaccurate, potentially biased, and discriminatory outcomes.
There are sensible solutions to address omitted variable bias, and they must be applied properly to offset existing biases inherent in the training data.
The Public and Independent Experts Must Be Informed and Consulted
Any government decision to adopt a system or tool that uses algorithmic decision-making is a policy decision—whether the system is being used for
pretrial risk assessment or to determine whether to cut people off from healthcare—and the public needs to be able to hold the government
accountable for those decisions. Thus, even when decision makers have thought through the steps we’ve outlined as they choose vendors, it’s equally vital that they let the public and
independent data scientists review them.
Developers must be upfront about how their tools work, so that courts, policy makers, and the public understand how tools fit their communities. If
these tools are allowed to be a “black box”—a system or device that doesn’t reveal how it reaches its conclusions—then they rob the public of its right to understand what the
algorithm does and to test its fairness and accuracy. Without knowing what goes into the black box, it’s hard to assess the fairness and validity of what comes out of it.
The public must have access to the source code and the materials used to develop these tools, and the results of regular independent audits of the
system, to ensure tools are not unfairly detaining innocent people or disproportionately affecting specific classes of people.
Transparency gives people a way to measure progress and ensure government accountability. As Algorithm Watch says, “The fact that most [algorithmic
decision making] procedures are black boxes to the people affected by them is not a law of nature. It must end.”
California Needs To Address These Issues Immediately
As California looks to implement S.B. 10, it should not rely on vendor companies’ marketing promises. We urge the state to vet thoroughly any algorithmic tools considered—and enable independent experts and auditors to do the same. There must be
thorough and independent evaluations of whether the tools up for consideration are fair and appropriate.
Any recommendation to take away someone’s liberty must receive immediate human review. These considerations should have been baked into S.B. 10 from
the start. But it is critical that California satisfy these four criteria now, and that policymakers across the country considering similar laws build these critical safeguards
directly into their legislation.
The Unresolved Issue of Verizon Throttling Santa Clara’s Fire Department Shows Why ISPs Need Rules
(Mon, 05 Nov 2018)
In August, it was revealed that Verizon throttled the wireless broadband services of fire fighters in the middle of a state emergency and spent four weeks debating with the local fire
department while trying to upsell them a more expensive plan. This week, the Santa Clara Board of County Supervisors held a hearing to review what happened during the
worst fire in the state’s history. The hearing revealed that, despite Verizon’s statements to the contrary, nothing about this public safety issue has been resolved.
EFF testified at the hearing to explain
the legal and technical issues at the center of the conflict and to assist the county in its review. In addition, EFF believes Verizon’s conduct would have subjected them to
penalties under the 2015 Open Internet
Order, because the actions by Verizon’s sales team appeared to be unjust and unreasonable under Title II of the Communications Act. Verizon have fully admitted that they
were at fault and wrong to have engaged in the conduct and have offered to end some of the business practices that led to the problem for the west coast states and Hawaii.
That said, the fact that the debate has centered so much on what Verizon is willing to do is part of the problem.
It is not, and should not be, a corporation’s job to figure out the balancing act between public safety and its profits.
During the hearing, Verizon made it very clear that they regretted what happened and that it should not have happened. To their credit, they have made a series of proposals to
eliminate throttling of a handful of public safety entities and are appearing to end the practices that led to the Santa Clara fire incident. But it became apparent in the
hearing that Verizon is ill-equipped to do the deep thinking on what is the best outcome for public safety purposes. The fact that they had wireless broadband plans that throttled
fire departments to 1/200th of their broadband speed—basically reducing them to dial-up speeds—has made it painfully obvious that they do not think proactively about public safety.
And we shouldn’t be asking them to. They only get to make these mistakes because the FCC completely abandoned its duties to oversee and regulate the industry. Verizon, as a for-profit
corporation, has a legal duty to its shareholders to maximize the value of its broadband Internet access product. As we’ve explained in detail before, the FCC used to have a duty to
promote public interests—such as the balance between public safety and monetization of a service—and it used to have the power to enforce protections of those interests. The
so-called “Restoring Internet Freedom Order” represents the FCC walking away from those responsibilities.
As a result, there is no federal entity with the duty to establish policies that promote the public safety of all Americans for broadband. In the total absence of rules and public
policy, nothing serves as a counterweight to the profit motive. Therefore it is not surprising that months after the fact Verizon and the fire department still have not
reached an agreement that works for public safety. We are literally asking Verizon to figure out how to make less money and are hoping the internal political calculus of a corporate
behemoth will result in a net good for community safety. That is an absurd idea.
There is no good reason for throttling a fire department during an emergency. And yet, the hearing showed that the problem remains.
Verizon’s decision to throttle a public safety customer to 1/200th of their original wireless broadband speed had nothing to do with network management and everything to do
with making a customer pay more for the service.
Reasonable network management is a general exemption from net neutrality rules because the FCC and any network engineer knows there are times when your network is overloaded and you
have to make decisions on bandwidth allocations. But whenever the network is not overloaded, there are zero technical reasons to ration or reduce consumption, because you have plenty
of capacity for the delivery of your service. During the hearing, a Verizon representative briefly attempted to conflate data caps and overage fees with network management. But that
was abandoned as an argument, likely because those two things aren’t actually necessary for network management.
Supervisor Chavez articulated a useful analogy to the freeway system and managing car traffic. Congestion is basically the rush hour part of traffic, but when the roads are empty and
it's 3 am we likely do not have any reason to worry about traffic jams.
The fact that Verizon appears to have mostly ended the practice of throttling for a handful of public safety entities indicates that this was all the product of corporate
decision-making focused on profits. Chair Simitian has given Verizon until December 1st to come up with a solution or he will be forced to take action, which likely means
further debates and legislation in Sacramento in 2019.
As a reminder, this is why Verizon asked the FCC to preempt state laws at the same time it
was asking the FCC to stop overseeing them as an industry. There is little value to companies like Verizon, Comcast, and AT&T escaping federal oversight if states are
empowered to fill the void and regulate them in the absence of the FCC. This is where that void leaves us: a for-profit company getting to make public safety determinations.
What to Do When Voting Machines Fail
(Mon, 05 Nov 2018)
With Election Day just hours away, we are seeing reports across the country that electronic voting machines are already inaccurately recording votes, and questions are being raised about potential foreign interference after 2016.
While the responsibility to deal with these issues falls to state election officials, here is a quick guide for how to respond to some issues on Election Day, along with a handy
resource from our friends at Verified Voting indicating what equipment is used in each polling place across the nation.
866-OUR-VOTE: If you experience voting machine glitches, see voters being turned away from the polls, or run into other issues, report them to the nonpartisan Election
Protection network. This is the only way that we can spot patterns, put pressure on election officials to respond and, in the long run, make the case for paper ballots and risk
limiting audits.
If Voting Machines Are Down
Since the first electronic voting machines were introduced, security experts have warned that they pose a risk of interference or simple malfunction that cannot be easily detected or
corrected. If someone hacks the machines, they hack the vote. If the machines fail, the vote is wrong. The fix is clear: all elections must include paper backups and a settled-on
process for real risk limiting audits.
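The core of such an audit is a public, reproducible random draw of paper ballots to check by hand against the machine totals. The sketch below shows only that sampling step; the seed, ballot count, and sample size are hypothetical, and a real risk-limiting audit method such as BRAVO derives its sample size statistically from the reported margin:

```python
import random

def sample_ballots(ballot_ids, n, seed):
    """Draw n distinct ballots; a published seed makes the draw verifiable."""
    rng = random.Random(seed)
    return sorted(rng.sample(ballot_ids, n))

# Hypothetical precinct with 1,000 paper ballots; auditors hand-count 10.
sample = sample_ballots(list(range(1, 1001)), n=10, seed=20181106)
print(sample)  # the same 10 ballot numbers every time the seed is reused
```

Because the draw is deterministic given the seed, observers can re-run it themselves and confirm that officials examined the right ballots.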
If voting machines are down, you should ask for an emergency paper ballot. Do not simply accept that you cannot vote—broken machines should not result in disenfranchisement.
Call 866-OUR-VOTE and report the problem. Try to report as precisely as possible—which machine, what problem, what were you told by the election officials, etc. The
Election Protection Coalition will then have the information they need to work with election officials, or, if necessary, go to court, to try to make sure precincts that experienced
delays can keep polls open longer or that sufficient emergency ballots are available.
If you’re an election official in a state where some voting machines aren’t working, you should cooperate with efforts to keep the polls open late so everyone has a chance to vote. If machines are not
working properly, allow voters to use emergency ballots. If machines are not readily rebooted or recalibrated, call for delivery of sufficient emergency ballots. Do not attempt major
repairs on Election Day or ask voters to wait more than a few minutes before offering them emergency ballots. Keep any emergency ballots in a secure location.
Keep in mind that emergency ballots must be handled differently than provisional ballots and are always counted. Use of emergency ballots helps ensure that voters are not
disenfranchised due to equipment problems.
If There Is Evidence of Hacked or Malfunctioning Voting Machines
Some ordinary malfunctions can be addressed on Election Day. Machines can be rebooted or recalibrated if the touch screen appears to be misaligned. Other small fixes are possible. Larger
fixes should not be attempted on Election Day, however.
There is only one remedy for a potentially hacked or seriously malfunctioning voting system: a paper-based risk limiting audit or a full or partial recount.
What is not acceptable when a voting system is acting suspiciously is a phony “machine recount.” This is when the system is just instructed to recount the electronic data. While some
officials have called this a “recount,” it’s not, since it will just replicate the problem. Insist that your election officials, up to and including your Secretary of State, conduct a
real hand recount of the paper records when they are available.
If you’re an election official: pull any machine that is acting strangely out of service and offer emergency ballots. And remember, machine recounts are not recounts. They just
replicate whatever problem occurred in the first place. Recounting the paper is the only real recount.
If There Are No Paper Records
Some states continue to use voting machines that don’t provide a voter verified paper record. If there is credible evidence that voting machines have been hacked or are malfunctioning
in such a state, the state should pursue (or at least allow) a forensic analysis of all potentially affected machines, looking for signs of malware or tampering at all stages of the process.
If you’re an election official: Preserve evidence (voting machines, SD cards or USB drives, any ballot creation software and computers used to provision the voting machines). Keep
machines powered on and disconnected from cellular modems or the Internet to preserve the contents of their RAM as much as possible.
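Preserving removable media usually also means recording a cryptographic hash at the time of collection, so the preserved copy can later be shown to be byte-for-byte unchanged. A minimal sketch of that step (the file name is a hypothetical stand-in for a real image of seized media):

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a disk image (or any evidence file) without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# "sdcard.img" is a hypothetical image of a seized SD card; here we create a
# stand-in file so the example is runnable.
with open("sdcard.img", "wb") as f:
    f.write(b"example image contents")

# Record this hash alongside the evidence; anyone can later re-hash the image
# to verify that it has not been altered since collection.
print(sha256_file("sdcard.img"))
```

In practice the hash is taken from the original media through a write-blocker before imaging, and recorded in the chain-of-custody log.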
While this forensic investigation could turn up something, a failure to find anything is not conclusive. That’s because many kinds of malware can clean up after themselves. This is why
it is so critical to ensure that all elections include voter verified paper trails. Ask your state Secretary of State to ensure the next election includes paper trails and ask your
senators to support the PAVE Act.
Stay Involved: November 7 and Beyond
As should be clear by now, these suggestions are simply Election Day triage. They may help in some situations, but may not in many more. The real pathway to securing our elections
starts on November 7—when we have to begin to take election system security seriously and deploy voter verified paper ballots, risk limiting audits and an approach that ensures
protections every step of the way from registration to the last vote count. Securing elections is a national security issue of the first order. It’s time we started treating it that way.
EFF Unveils Virtual Reality Tool To Help People Spot Surveillance Devices in Their Communities
(Mon, 05 Nov 2018)
Law Enforcement’s Deployment of High-Tech Spying Tools On the Rise
San Francisco—The Electronic Frontier Foundation (EFF) launched a virtual reality (VR)
experience on its website today that teaches people how to spot and understand the surveillance technologies police are increasingly using to spy on communities.
“We are living in an age of surveillance, where hard-to-spot cameras capture our faces and our license plates, drones in the sky videotape our streets, and police carry mobile
biometric devices to scan people’s fingerprints,” said EFF Senior Investigative Researcher Dave Maass. “We made our ‘Spot the Surveillance’ VR tool to help people recognize these
spying technologies around them and understand what their capabilities are."
Spot the Surveillance, which works best with a VR headset but will also work on standard browsers, places users in a 360-degree street scene in San Francisco. In the scene, a young
resident is in an encounter with police. Users are challenged to identify surveillance tools by looking around the scene. The experience takes approximately 10 minutes to complete.
The surveillance technologies featured in the scene include a body-worn camera, automated license plate readers, a drone, a mobile biometric device, and pan-tilt-zoom cameras. The
project draws from years of research gathered by EFF in its Street-Level Surveillance project, which shines a light on how police use, and
abuse, technology to spy on communities.
Created by EFF’s engineering and design team, the Spot the Surveillance VR experience can be found at https://eff.org/spot-vr.
“One of our goals at EFF is to experiment with how emerging online technologies can help bring about awareness and change,” said EFF Web Developer Laura Schatzkin, who coded the
project. “The issue of ubiquitous police surveillance was a perfect match for virtual reality. We hope that after being immersed in this digital experience users will acquire a new
perspective on privacy that will stay with them when they remove the headset and go out into the real world.”
The current version is now being made publicly available for user testing, as part of the Aaron Swartz Day and
International Hackathon festivities. EFF will be conducting live demonstrations of the project at the event on Nov. 10-11 at the Internet Archive in San Francisco.
Swartz, the brilliant activist and Internet pioneer, was facing a myriad of federal charges for downloading scientific journals when he took his own life in 2013.
EFF seeks user feedback and bug reports, which will be incorporated into an updated version scheduled for release in Spring 2019. The VR project was supported during its development
through the XRstudio residency program at Mozilla. The project was also made possible with the support of a 2018 Journalism 360 Challenge grant. Journalism 360 is a global network of storytellers accelerating the
understanding and production of immersive journalism. Its founding partners are the John S. and James L. Knight Foundation, Google News Initiative, and the Online News Association.
For access to the VR experience and instructions on its use:
For details on the Aaron Swartz International Hackathon events at the Internet Archive, including talks by EFF Executive Director Cindy Cohn, International Director
Danny O’Brien, and Senior Investigative Researcher Dave Maass:
For more on Street Level Surveillance:
Join Us For the Sixth Annual Aaron Swartz Day This Weekend at The Internet Archive
(Mon, 05 Nov 2018) Join EFF and others on November 10 and 11 to celebrate the sixth annual Aaron Swartz Day, with a weekend of lectures, a virtual reality fair and a hackathon. This weekend we’ll join
our friends at the Internet Archive in celebrating Aaron’s work as activist, programmer, entrepreneur, and political organizer.
Aaron’s life was cut short in 2013, after he was charged under the notoriously draconian Computer Fraud and Abuse Act for systematically downloading academic journal articles from the
online database JSTOR. Despite the fact that the CFAA was originally enacted to stop malicious computer break-ins, federal prosecutors have for years stretched the law beyond its
original intent, instead pushing for heavy penalties for any behavior they don't like that involves a computer.
This was the case for Aaron, who was charged with eleven counts under the CFAA. Facing decades in prison, Aaron took his own life at the age of 26. He would have turned 32 this week,
on November 8.
This weekend, you can help carry on the work that Aaron started. The hackathon, hosted at the Internet Archive, will focus in part on SecureDrop, a system that Aaron helped build for
anonymous whistleblower document submission. It is now maintained by the Freedom of the Press Foundation. This year’s Aaron Swartz Day will also feature a virtual reality fair,
showcasing virtual, augmented, and mixed reality work and several talks about the ways that artists, archivists, and programmers can use these new technologies. EFF will be
demonstrating our own virtual reality "Spot the Surveillance" project, which teaches
people how to identify the various spying technologies that police may deploy in communities.
If you’re not a programmer, you can also volunteer your time to help with research projects on police surveillance and other topics, or user testing for VR projects.
This year’s event also includes an impressive slate of speakers, including Chelsea Manning, journalist Barrett Brown, Freedom of the Press Foundation director Trevor Timm, and Aaron
Swartz Day organizer Lisa Rein. EFF’s Executive Director Cindy Cohn, International Director Danny O’Brien, and Senior Investigative Researcher Dave Maass will also speak.
At EFF, we have worked to continue the spirit of Aaron’s legacy. We have pushed to reform the CFAA in Congress, including through Rep. Zoe Lofgren’s proposed “Aaron’s Law,” and have
advocated for a narrow interpretation of the law in courts across the country.
We’ve also sought to support Aaron’s mission to expand access to research online. In California, we successfully pushed for the passage of A.B. 2192, which requires that any academic paper that received state government funding be published in
an open-access journal within a year of its publication. To our knowledge, California is the only state to adopt an open access bill this comprehensive.
We continue to urge Congress to pass FASTR, the Fair Access to Science and Technology Research Act (S.1701, H.R. 3427), which would require every federal agency that spends more than $100 million on research grants to adopt an open access policy.
These open access laws represent an important first step to making information more available to everyone.
We hope to see some of you at this weekend’s Aaron Swartz Day celebration. For more information about the hackathon, and to buy tickets to the weekend’s events or just the opening night
party, please visit the event
page. Those who’d like to volunteer or need a free ticket are welcome to email firstname.lastname@example.org.
To support EFF’s efforts on open access and CFAA reform, visit https://supporters.eff.org/donate.
Snowden Files Declaration in NSA Spying Case Confirming Authenticity of Draft Inspector General Report Discussing Unprecedented Surveillance of Americans, Which He Helped Expose
(Sat, 03 Nov 2018)
EFF filed papers with the court in its long-running Jewel v. NSA
mass spying case today that included a surprising witness: Edward Snowden. Mr. Snowden’s short declaration confirms that a document relied upon in the case, a draft NSA Inspector General Report from 2009
discussing the mass surveillance program known as Stellar Wind, is actually the same document that he came upon during the course of his employment at an NSA contractor. Mr. Snowden
confirms that he remembers the document because it helped convince him that the NSA had been engaged in illegal surveillance.
Mr. Snowden’s declaration was presented to the court because the NSA has tried to use a legal technicality to convince the court to disregard the document. The NSA has refused to
authenticate the document itself. This is important because documents gathered as evidence in court cases generally must be authenticated by whoever created them or has personal
knowledge of their creation in order for a court to allow them to be used. The NSA is claiming that national security prevents it from saying to the court what everyone in the world
now knows: that in 2009 the Inspector General of the NSA drafted a report discussing the Stellar Wind program. The document has been public now for many years, has never been claimed
to be fraudulent, and was the subject of global headlines at the time it was first revealed. Instead of acknowledging these obvious facts, the NSA has asserted that the plaintiffs may
not rely upon it unless it is confirmed to be authentic by someone with personal knowledge that it is.
Enter Mr. Snowden. The key part of his five-paragraph declaration states:
During the course of my employment by Dell and Booz Allen Hamilton, I worked at NSA facilities. I had access to NSA files and I became familiar with various NSA
documents. One of the NSA documents I became familiar with is entitled ST-09-0002 Working Draft, Office of The Inspector General, National Security Agency, dated March 24,
2009. I read its contents carefully during my employment. I have a specific and strong recollection of this document because it indicated to me that the government had
been conducting illegal surveillance.
The government took a similar unfounded position with regard to another document – an Audit Report by the NSA in response to a secret FISA Court
Order – that it produced to the New York Times in response to a Freedom of Information Act request. David McCraw, the Vice President and Deputy General Counsel of the New York Times,
provided a simple declaration to authenticate that document.
“Everyone knows that the government engages in these surveillance techniques, since they now freely admit it. The NSA’s refusal to formally ‘authenticate’ these long-public documents
is just another step in its practice of falling back on weak technicalities to prevent the public courts from ruling on whether our Constitution allows this kind of mass surveillance
of hundreds of millions of nonsuspect people,” said Cindy Cohn, EFF’s Executive Director.
Mr. Snowden’s and Mr. McCraw’s declarations are part of EFF’s final submission to the court to establish that its clients have “standing” to challenge the mass spying because it is more
likely than not that their communications were swept up in the NSA’s mass surveillance mechanisms. These include telephone records collection, Internet metadata collection, and the
upstream surveillance conducted, in part, at the AT&T Folsom Street Facility in San Francisco. Mr. Snowden’s declaration joins those of three additional technical experts and another
whistleblower whose declarations were filed in September. The court has not set a hearing date for the matter.
EFF Asks California Supreme Court to Hear Case on Anonymization and the Ability to Access Data Under the California Public Records Act
(Fri, 02 Nov 2018)
EFF is filing an amicus letter in support of a petition for review,
asking the California Supreme Court to overturn a harmful appellate court
decision in Sander v. State Bar of California that could prevent people from requesting public records from databases that contain private information, even if the
requesters specifically ask for private information to be anonymized.
The First District Court of Appeal issued an opinion in August that effectively rewrites the California Public Records Act (CPRA) in a way that could limit Californians’ access
to the vast amount of public data that state and local agencies are generating on our behalf. The court ruled that requiring the State Bar of California to de-identify personal
information that could be linked to specific bar applicants creates a “new record” because it requires the State Bar to “recode its original data into new values.”
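"Recoding data into new values" of this kind is a routine data-handling practice: direct identifiers are replaced with consistent pseudonyms, so the records can still be analyzed in aggregate without revealing who they describe. A minimal sketch of the idea (the field names and salting scheme are illustrative, not the State Bar's actual method):

```python
import hashlib
import secrets

# A random salt kept secret by the agency prevents anyone from re-deriving
# pseudonyms simply by hashing guessed names.
SALT = secrets.token_bytes(16)

def pseudonymize(value: str) -> str:
    """Recode an identifying value into a stable, opaque token."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

records = [
    {"name": "Jane Roe", "exam_score": 152, "admitted": True},
    {"name": "John Doe", "exam_score": 139, "admitted": False},
]

# The same input always maps to the same token, so aggregate analysis
# (pass rates by group, score distributions) still works on the release.
released = [{**r, "name": pseudonymize(r["name"])} for r in records]
print(released)
```

The non-identifying fields pass through untouched; only the values that could be linked back to a specific person are recoded.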
The petition raises “an important question of law” that the California Supreme Court must settle: does anonymization of public data amount to a creation of new records under the CPRA?
If the appellate court’s opinion becomes the standard across California, the holding would undermine the purpose of the CPRA: the right to access government data and records in order
to understand what the government is doing and allow oversight to prevent government inefficiencies or malfeasance. This is especially important today as modern governments generate
and consume vast amounts of digital data about members of the public.
This case has been at the California Supreme Court before. Last time, the court acknowledged how useful this data is to the public, saying: “it seems beyond dispute that the public has a legitimate interest in whether different groups of
applicants, based on race, sex or ethnicity, perform differently on the bar examination and whether any disparities in performance are the result of the admissions process or of other
factors.” But when the case proceeded to trial, the superior court mistakenly placed the
burden on the petitioners to show it was possible to de-identify this data. Under the CPRA, however, when the government refuses to disclose records requested by the public, the agency
must show the court that it is not possible to release the data and protect private information at the same time.
Interestingly, another superior court in California has ruled the opposite way, preserving the ability to access data by requiring government agencies to anonymize sensitive
information under the CPRA. In Exide Technologies v. California Department of Public
Health, the superior court in Contra Costa allowed protocols for de-identification that require the agency to manipulate existing public records to produce information. In that
case, the court ruled that the state must share the investigations of blood lead levels in a format that serves the public interest in government transparency (by disclosing
non-exempt information) while at the same time protecting the privacy interests of individual lead-poisoning patients (by withholding exempt information). The court recognized the
state legislature’s finding that “knowledge about where and to what extent harmful childhood lead exposures are occurring in the state could lead to the prevention of these exposures,
and to the betterment of the health of California's future citizens.”
This uncertainty of how to treat data disclosure and anonymization creates a need for the California Supreme Court to settle how agencies should handle sensitive digital information
under the CPRA. The more data that the state collects from and about members of the public, the more important access to this data and oversight of agency practices becomes. But
saying that good data-management practices like anonymization and de-identification create “new records” that can’t be disclosed compromises California’s fundamental commitment to
transparency, accountability, and access to public information.
SOPA.au: Australia is the Testbed for the World's Most Extreme Copyright Blocks
(Fri, 02 Nov 2018)
It's been three years since Australia adopted a national copyright blocking
system, despite widespread public outcry over the
abusive, far-reaching potential of the system, and the warnings that it would not achieve its stated goal of preventing copyright infringement.
Three years later, the experts who warned that censorship wouldn't drive people to licensed services have been vindicated. According to the giant media companies who drove the
copyright debate in 2015, the national censorship system has not convinced Australians to pay up.
But rather than rethink their approach -- say, by bringing Australian media pricing in line with the prices paid elsewhere in the world, and by giving Australians access to the same
movies, music and TV as their peers in the US and elsewhere -- Australia's Big Content execs have demanded even more censorship powers, with less oversight, and for more
sites and services.
The current Australian censorship system allows rightsholders to secure court orders requiring the country's ISPs to block sites whose "primary purpose" is "to infringe, or to
facilitate the infringement of, copyright (whether or not in Australia)."
Under the new proposal, rightsholders will
be able to demand blocks for sites whose "primary effect" is copyright infringement. What's more, rightsholders will be able to secure injunctions against search engines,
forcing them to delist search-results that refer to the banned site.
Finally, rightsholders will be able to order blocks for sites, addresses and domains that provide access to blocked sites, without going back to court.
Taken together, these new measures, combined with the overbroad language from 2015, are a recipe for unbridled, unstoppable censorship -- and it still won't achieve the goals
of copyright law.
The difference between "primary purpose" and "primary effect"
The 2015 censorship system required rightsholders to prove that the targeted sites were designed for copyright infringement: advertised for the purpose of breaking the law.
But the new system only requires that sites have the "primary effect" of copyright infringement, meaning that businesses will have to police the way that the public behaves
on their platforms in order to avoid a ban.
In thinking about the "primary effect" test, it's worthwhile looking at how big media companies have characterized general-purpose platforms and their features in other copyright disputes.
YouTube is the largest video-sharing site in the world, with several hundred hours' worth of video added to the service every minute. The overwhelming majority of this video is not
infringing: it is, instead, material created by YouTube's users, uploaded by its creators and shared with their blessing.
In 2010, Viacom sued YouTube for being a party to copyright infringement. It was a colorful, complex suit full of comic profanity and skullduggery, with an eye-popping $1 billion
on the line. But one detail that is often forgotten is that Viacom's claim against YouTube turned, in part, on the fact that YouTube allowed users to make their videos private.
Viacom's argument went like this: If people can mark their videos as private, they might use that feature to hide infringing videos from our searches. You could upload a movie from
Viacom division Paramount, mark it as private and share it with your friends, and Paramount would not be able to discover the infringement and take it down. Viacom’s argument was that
every video you post should be visible to the world, just in case it infringes their copyright.
This is one example of how the copyright industry thinks about the "primary effect" of online services: Viacom said that once YouTube knew that privacy flags could be used to escape
copyright enforcement, it had a duty to eliminate private videos, and its failure to do so made YouTube a copyright infringement machine.
Eight years later, YouTube is still the epicenter of the debate over how far a platform's duty to monitor and censor its users goes. In the EU, the big entertainment companies are on
their way to enshrining "Article 13" of the new Copyright Directive into law for 28
countries comprising more than 500 million residents. Under this proposal, YouTube would have to take its current, voluntary system of pre-emptively censoring videos that match ones
claimed by a small cohort of trusted rightsholders, and replace it with a crowdsourced database of allegedly copyrighted works, to which anyone could contribute anything, and
censor anything uploaded by a user that seems to match a blocklisted item.
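At bottom, a filter like this is a matching system: every upload is compared against a database of claimed works, and anything that matches is blocked, with no check on whether the claim was legitimate. A toy sketch of the mechanism using exact hashes (real filters use fuzzier perceptual matching, which is harder and more error-prone):

```python
import hashlib

# Anyone can add claims to the database; nothing verifies ownership.
claimed_works: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashes instead."""
    return hashlib.sha256(data).hexdigest()

def claim(work: bytes) -> None:
    claimed_works.add(fingerprint(work))

def allow_upload(upload: bytes) -> bool:
    """Block any upload matching a claimed work, rightly or wrongly."""
    return fingerprint(upload) not in claimed_works

claim(b"someone else's home video")  # a bogus claim is accepted silently
print(allow_upload(b"someone else's home video"))  # False: censored
print(allow_upload(b"my own original video"))      # True
```

Note that the filter has no notion of fair dealing, licensing, or who actually holds the copyright: a match is a block.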
The rhetoric in support of these filters centers on YouTube's alleged role as a copyright infringement facilitator: whether or not YouTube's owners intend for the service to be a
black market for infringing material, big content says that's what it has become, because, among the billions of videos YouTube hosts, thousands are infringing.
This is the norm that the entertainment industry is pushing for all over the world: a service's "primary effect" is infringing if there is a significant amount of infringement taking
place on it, even if "a significant amount" is only a small percentage of the overall activity.
Search Engine Censorship: What We Don't Know We Don't Know
The new Australian copyright proposal allows rightsholders to dictate search-results to the likes of Bing, DuckDuckGo, and Google. Here, the copyright industry is tacitly admitting
that blocking orders don't actually work: as they were told in 2015, merely blocking something at the ISP level won't stop copyright infringement, because such blocks are easy to get
around (with a VPN, for example).
The copyright industry's 2015 position was that blocking worked. The 2018 position is that blocking doesn't work: you have to keep the existence and location of infringing files a secret, too.
The 2018 position has all the problems of the 2015 position, and some new ones. Users can still use VPNs to see search-results that are censored in Australia, and also use the VPNs to
bypass their ISPs' blocks.
But because search-results are blocked in Australia, ordinary Australians trying to do legitimate things will not be able to know what is blocked in their country, and will thus not
be able to push back against abusive or sloppy overblocking.
This is doubly important because the operators of the sites that are blocked are also not represented in the process when the blocking orders are drafted. The rules don't
require that anyone argue in favor of sites that are facing blocking and search delisting, they don't require that the owners of sites facing a block be notified of upcoming
proceedings, and the best the public can hope for is that an ISP might show up in court to argue for their right to see the whole web.
Combine this one-sided kangaroo (ahem) court with search-blocking and you get a deadly mix: unchecked censorship that's also invisible.
But it gets worse: the 2015 and 2018 censorship systems don't limit themselves to censoring sites that infringe Australian copyright: they also ban sites that violate any copyright in any country.
Australia was recently pressured by the USA into extending its copyright term from the life of the author plus 50 years to "life plus 70," meaning that works that are legal to share
in the EU (where it's still life plus 50) might be illegal to share in Australia.
But other countries like Mexico and Jamaica have extended their copyrights to the even more farcical "life plus 100" term (or 95, in Jamaica), meaning that US sites hosting
public domain works might still be infringing copyright in Acapulco and Montego Bay. These sites are legal in the USA and legal in Australia, but illegal in Mexico and Jamaica -- and since
Australia's copyright system bans accessing a site that violates any country's copyright (2015) or even listing it in a search engine (2018), Australia's entire information
infrastructure has become a prisoner to the world's worst, most restrictive copyright.
VPNs are next
In 2015, the entertainment industry insisted that blocking orders would solve their infringement problems. In 2018, they've claimed that search-engine censorship will do the trick.
But the same tool that defeats blocking orders also defeats search-engine censorship: Virtual Private Networks (VPNs). These are much-beloved in Australia, a country that is a net
importer of copyrighted works and a third-class citizen in the distribution plans of big Hollywood studios, meaning that Australia's versions of Netflix, iTunes and other commercial
online entertainment services get the top movies, TV shows and music later than most of the rest of the English-speaking world.
Australians have come to rely on VPNs as the primary way to legitimately purchase and watch material that no one will sell them access to in Australia. By buying VPN service and
subscriptions to overseas online services, Australians are able to correct the market failure caused by US and British companies' refusal to deal.
The entertainment companies know that a frontal assault on VPNs is a nonstarter in Australia, but they also hate this evasion of regional release windows. Three years from
now, after the same people who defeated blocking orders with VPNs have shown that they can defeat search-engine censorship with VPNs, the same companies will be back for Australians' VPNs.
They don't even need to take all the VPNs: as the Chinese government censors have shown in their dealings with Apple, a well-provisioned national firewall can be made compatible with VPNs, simply by requiring VPNs to share their keys with national
censors, allowing for surveillance of VPN users. VPNs that aren't surveillance-friendly are blocked at the national firewall.
In 2015, the entertainment companies convinced Australia to swallow a fly, and insisted that would be the end of it, no spiders required. Now they're asking the country to swallow
just a little spider to eat the fly, and assuring us there will be no bird to follow. The bird will come, and then the cat, the dog and so on -- we know how that one ends.
More Censorship, Less Oversight
The final piece of the new copyright proposal is to allow rightsholders to demand blocks for sites, services, addresses and domains that "provide access to” blocked sites, without a
new court order.
This language is potentially broad enough to ban VPNs altogether, as well as a wide range of general-purpose tools such as proxy servers, automated translation engines, content
distribution networks -- services that facilitate access to everything, including (but not only) things blocked by the copyright censorship orders.
If this power is wielded unwisely, it could be used to block access to major pieces of internet infrastructure, as recent experiences in Russia demonstrated:
the Russian government, in its attempt to block access to the private messaging app Telegram, blocked millions of IP addresses, including those used by major cloud providers,
because they were part of a general-purpose internet distribution system that Telegram was designed to hop around on.
So this is the kind of order that you'd want used sparingly, with close oversight, but the new rules make these blocks the easiest to procure: under the new proposal,
rightsholders can block anything they like, without going to court and showing proof of infringement of any kind, simply by saying that they're trying to shut down a service that
"provides access" to something already banned.
Australia has become a testbed for extreme copyright enforcement and the entertainment business in the twenty-first century. On the one hand, the Australian experience with legitimate
copyright businesses has shown an unmistakable link between offering a wide selection of copyrighted works at a
fair price and a reduction in infringement. Most Australians just want to enjoy creative works and are happy to pay for them -- provided that someone will sell them those works. If
the works aren't on sale, then the Australian experience shows us that you need a constant, upwards-ratcheting censorship and surveillance system to check this natural impulse to
participate in common culture.
The entertainment industry lobbyists calling for this system insist that they are in the grips of an existential struggle: without broad censorship powers (they say), they will go out
of business. But a look at the effect of fair offerings and the Australian willingness to pay for VPNs to buy media abroad demonstrates that what the entertainment companies want is
to control legitimate purchases, not to fight copyright infringement. The industry can eke out a few extra points of profit by delaying the release-windows in Australia and
price-gouging Australian customers, and to prevent customers from evading these tactics, they propose to censor and control the entire Australian internet.
This is a bad deal for Australians. Even if (for some reason) you trust the entertainment companies to wield these powers wisely, the Australian experience has shown that copyright trolls are quick to seize any new
offensive weapons the Australian government hands them to blackmail Australians in ways that don't make one cent for creators.
For the world, Australia is a powerful cautionary tale: a low-population country where a couple of dominant media companies have been allowed to make internet policy as though the
major use of the internet is as a video-on-demand service, rather than as the nervous system for the 21st century. The regulatory malpractice of 2015 begat even harsher measures, with
no end in sight.
This is a prototype for a global system. Australia may be a net copyright importer, but it is in imminent danger of becoming a net copyright censorship exporter, with the
Australian model being held up as proof that the entire world must subordinate its digital infrastructure to the parochial rent-seeking of a few entertainment companies.
Google Chrome’s Users Take a Back Seat to Its Bottom Line
(Fri, 02 Nov 2018)
Google Chrome is the most popular browser in the world. Chrome routinely leads the pack in features for security and usability, most recently helping to drive the adoption of HTTPS. But when it comes to privacy,
specifically protecting users from tracking, most of its rivals leave it in the dust.
Users are more aware of, and concerned about, the harms of pervasive tracking than ever before. So why is Chrome so far behind? It’s because Google still makes most of its money from tracker-driven, behaviorally-targeted ads.
The marginal benefit of each additional bit of information about your activities online is relatively small to an advertiser, especially given how much you directly give Google
through your searches and use of tools like Google Home. But Google still builds Chrome as if it needs to vacuum up everything it can about your online activities, whether you want it
to or not.
In the documents that define how the Web works, a browser is called a
user agent. It’s supposed to be the thing that acts on your behalf in cyberspace. If the massive data collection appetite of Google’s advertising- and
tracking-based business model is incentivizing Chrome to act in Google’s best interest instead of yours, that’s a big problem—one that consumers and regulators should not ignore.
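The “user agent” role is baked into HTTP itself: every request a browser sends carries a User-Agent header naming the software that is acting on the user’s behalf. A minimal sketch using Python’s standard library (the browser name below is hypothetical; real browsers send much longer strings):

```python
import urllib.request

# Build a request the way a browser would: the User-Agent header
# identifies the software acting on the user's behalf.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "ExampleBrowser/1.0"},  # hypothetical browser name
)

# urllib normalizes header names via str.capitalize(),
# so the stored key is "User-agent".
print(req.get_header("User-agent"))  # -> ExampleBrowser/1.0
```

Nothing here touches the network; the point is only that the protocol itself designates the browser as the party speaking for the user.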
Chrome is More Popular Than Ever. So is Privacy.
Since Chrome’s introduction in 2008, its market share has risen inexorably. It now accounts for 60% of the
browsers on the web. At the same time, the public has become increasingly concerned about privacy online. In 2013, Edward Snowden’s disclosures highlighted the links
between surreptitious corporate surveillance and the NSA’s spy programs. In 2016, the EU ratified the General Data Protection Regulation (GDPR), a sweeping (and
complicated) set of guidelines that reflected a new,
serious approach to data privacy. And in the U.S., this year’s Cambridge Analytica scandal sparked unprecedented backlash
against Facebook and other big tech companies, driving states like California to pass real data privacy laws for the first time (although
those laws are under threat federally by, you guessed it, Google and Facebook).
Around the world, people are waking up to the realities of surveillance
capitalism and the surveillance business model:
the business of “commodifying reality,” transforming it into behavioral data, and using that data and inferences from it to target us on an ever-more granular level. The
more users learn about this business model, the more they want out.
That’s why the use of ad and tracker blockers, like EFF’s Privacy Badger, has grown dramatically in recent years. Their popularity is a
testament to users’ frustration with the modern web: ads and trackers slow down the browsing experience, burn through data plans, and give people an uneasy feeling of being watched. Companies often justify their digital snooping by arguing that people prefer ads that are
“relevant” to them, but studies show that most users don’t
want their personal information to be used to target ads.
All of this demonstrates a clear, growing demand for consumer privacy, especially as it relates to trackers on the web.
As a result, many browser developers are taking action. In the past, tracker blockers have only been available as third-party “extensions” to popular browsers, requiring
diligent users to seek them out. But recently, developers of major browsers have started building tracking protections into their own products. Apple’s Safari has been
developing Intelligent Tracking Protection, or ITP, a system
that uses machine learning to identify and stop third-party trackers; this year, the improved ITP 2.0 became the default for tens of millions of Apple users. Firefox recently rolled
out its own tracking protection feature, which is on by default in
private browsing windows. Opera ships with the option to turn on both ad and tracker
blocking. Even the much-maligned Internet Explorer has a built-in “tracking protection” mode.
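These protections differ in their details, but a common core idea (documented for EFF’s Privacy Badger, and similar in spirit to ITP’s learning approach) is a prevalence heuristic: a third-party domain observed setting identifiers across several unrelated first-party sites gets treated as a tracker. A minimal sketch, with an illustrative threshold and made-up domain names:

```python
from collections import defaultdict

# Illustrative threshold: Privacy Badger blocks a third party once it is
# seen tracking on three distinct sites.
TRACKING_THRESHOLD = 3

# For each third-party domain, the set of first-party sites where it was
# observed setting identifiers (cookies, etc.).
seen_on = defaultdict(set)

def observe(third_party, first_party):
    """Record that `third_party` set an identifier while `first_party` loaded."""
    seen_on[third_party].add(first_party)

def is_tracker(third_party):
    """A domain tracking across enough unrelated sites is classified a tracker."""
    return len(seen_on[third_party]) >= TRACKING_THRESHOLD

observe("tracker.example", "news.example")
observe("tracker.example", "shop.example")
assert not is_tracker("tracker.example")   # only 2 sites so far
observe("tracker.example", "blog.example")
assert is_tracker("tracker.example")       # seen across 3 sites -> block
```

The appeal of this design is that it needs no pre-built blocklist: the browser learns which domains behave like trackers from the user’s own browsing.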
Yet Google Chrome, the largest browser in the world, has no built-in tracker blocker, nor has the company indicated any plans to build one. Sure, it now blocks some intrusive ads, but that feature has nothing to do with privacy. The
closest thing it offers to “private” browsing out-of-the-box is “incognito mode,” which only hides what you do from others who use your machine. That might hide embarrassing
searches from your family, but does nothing to protect you from being tracked by Google.
Conflicts of Interest
Google is the biggest browser company in the world. It’s also the biggest search engine, mobile operating system, video host, and email service. But most importantly, it’s the
biggest server of digital ads. Google controls 42% of the digital advertising market, significantly more than Facebook, its largest rival, and vastly more than anyone else. Its tracking codes
appear on three quarters of the top million sites on the web. 86% of
the revenue of Alphabet (Google’s parent company) comes from
advertising. That means all of Alphabet has a vested interest in helping track people and serve them ads, even when that puts the company at odds with its users.
[Chart: desktop browser market share, with each browser’s default tracking-protection setting (on by default, on by default in private browsing, or off by default). Chrome and Edge are the only two major browsers without a built-in tracker blocker. Source: "Desktop Browser Market Share Worldwide - StatCounter Global"]
And that may explain why Chrome lacks real tracking protections. The only other major desktop browser without a built-in tracker blocker is Edge, Microsoft’s replacement for
Internet Explorer. This is a worrisome step backwards: in moving from Explorer to Edge, Microsoft seems to be abandoning its interest in helping users protect their privacy in
order to race to the bottom against Google.
Google has come under fire in the past for using its power in one arena, like browsing or search, to drive revenue to other parts of its business. Yelp has repeatedly
claimed that Google uses its search engine to favor its own business reviews. Last year, the EU levied a record $2.7 billion fine against the company for using search to drive
users towards its comparison shopping site; then it outdid itself with a $5 billion fine for using
Android to drive web searches. And earlier this year, Mozilla complained when a YouTube update made the video service five times slower in Firefox than in Chrome.
Chrome’s lack of tracker controls represents a different kind of harm, one that hurts its users more than its competition. Often, Google’s interests align with those of Chrome’s
users. It wants people browsing, searching, and watching as much as possible, and that means providing a buttery-smooth interface with fast loading times and secure infrastructure.
But it also wants them feeding back data about all their online activity to Google. Google has an incentive not to protect Chrome users’ privacy, and
it appears that it sees no reason to balance that incentive against users’ desires.
With Great Power...
Many of the criticisms in this post apply to Microsoft Edge as well, but we’re picking on Chrome for a reason. The Chrome team’s actions affect more than just their users.
Because Chrome controls such a massive part of the browser market, decisions its developers make have an outsized impact on the rest of the ecosystem. Companies that make money by
tracking users need browser makers to permit these business models. If everyone used Tor
Browser or Brave, many data brokers’ and trackers’ current business models would cease to be viable.
Apple’s efforts with ITP have already forced some tracking companies to change their behavior. Facebook recently announced its intention to move away from using third-party
cookies to power Pixel, its third-party analytics product. And French ad-targeter Criteo announced that it was expecting an ITP-related revenue hit while
promising to invest in tracking methods that would work around Apple’s system.
Google is by far the largest player in both the desktop and mobile browser markets, and it has access to engineering resources unmatched by its competitors. As long as most
Chrome users remain susceptible to tracking, trackers will keep making money. On the other hand, if Chrome were to introduce an optional tracker-blocking mode—or better yet, to block
trackers by default—the ramifications would be enormous. Overnight, an entire industry built on surreptitious tracking would have a lot fewer people to track.
And there lies the catch. Users don’t like tracking. For Chrome to serve its users, it needs to help them block trackers. But for Alphabet to keep making as much money as possible on
ad retargeting, it needs web browsers—especially Chrome—to allow tracking.
Do The Right Thing
One way for Google to deal with this kind of criticism without truly changing its approach would be to have Chrome block certain kinds of “low-hanging fruit” tracking, like
third-party cookies, while purposefully leaving the door open for more sophisticated methods. Google could then adjust its own tracking strategies to work within the rules Chrome
would set. The company has already done something similar in the ad
space, configuring Chrome to block egregiously annoying ads (but leaving its own ads alone).
This might look like a win-win at first: Google could claim to provide better privacy for its users, at least in the short term, while simultaneously undermining its competitors
in the targeted-ad space. However, beyond being a potential abuse of its market power, this would actually hurt users in the long run. If Google can cement its dominance in both the
browser and ad industries while continuing its tracking practices, it will ensure that users are tracked on Google’s terms for years to come. Google cannot both be the umpire of what
tracking ads are allowed and the pitcher of most of the ads.
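Why third-party cookies are the “low-hanging fruit” is easiest to see in a toy model of how trivially they link browsing across sites (all domain names below are made up; a real tracker would use HTTP Set-Cookie headers rather than a Python dict):

```python
import itertools

# Toy model of cross-site tracking via a third-party cookie.
# "tracker.example" is embedded on two unrelated sites (hypothetical names).
_ids = itertools.count(1)
tracker_log = []       # what the tracker learns about the visitor
browser_cookies = {}   # the visitor's cookie jar, keyed by cookie domain

def visit(site):
    # The page on `site` loads a resource from tracker.example, so the
    # tracker sees the request, plus any cookie it set on an earlier visit.
    cookie = browser_cookies.get("tracker.example")
    if cookie is None:
        cookie = f"uid-{next(_ids)}"                 # first sight: assign an ID
        browser_cookies["tracker.example"] = cookie  # simulated Set-Cookie
    tracker_log.append((cookie, site))

visit("news.example")
visit("shop.example")

# One identifier now links activity on both sites.
print(tracker_log)  # -> [('uid-1', 'news.example'), ('uid-1', 'shop.example')]
```

Blocking this one mechanism is easy; the harder problem is the more sophisticated techniques (fingerprinting, first-party data joins) that a cookie-only block would leave untouched.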
Google could take the lead on solving this problem. Trackers are not necessary to make the Web work, and they shouldn’t be necessary for Google to make lots (and lots) of money.
As we noted above, Google has mountains of direct information about what you want to buy through its various services, from search to Maps to Google Play. Ads don’t need to be
targeted using every little bit of information about us that Google has access to via our use of its browser. A sustainable Web needs to be built on consent, not subterfuge.
The change could start by making sure that Chrome’s features and settings always look out for the user first, blocking trackers by default. Beyond that, Chrome’s developers
should have the freedom to design the best “user agent” they can, without caving to the imperatives of Google’s advertising business. If this simply isn’t possible for the company,
then the consumer harms of having these two conflicting business priorities under one roof may be cause for an intervention by antitrust or other legal authorities.
Surveillance Targets Deserve Answers About Mysterious Wiretaps
(Thu, 01 Nov 2018)
New Information Could Get to the Bottom of Riverside’s Massive Wiretap Campaign
Riverside, CA – Two individuals with no criminal record—one of whom is a retired California Highway Patrol officer—are asking a California Superior Court why their phones were tapped
in 2015. These are just two targets of hundreds of questionable wiretaps authorized by a single judge, Helios J. Hernandez, in Riverside County.
The Electronic Frontier Foundation (EFF) and Sheppard, Mullin, Richter & Hampton, LLP represent the targeted individuals, who were never charged and never received any
notification that they were the subject of a wiretap order, despite the law requiring such notice within 90 days of the wiretap’s conclusion. Instead, they only learned about the
wiretap from friends and family who did receive notification.
The wiretap in this case was issued over three years ago, a time when Riverside County was issuing a record number of wiretaps. In 2014, for example, the court approved 624 wiretaps,
triple the number approved by any other state or federal
courts. The targets were often out of state, resulting in hundreds of arrests nationwide. After a series of stories in USA TODAY questioned the
legality of the surveillance, watchdogs said that the wiretaps likely violated federal law.
“There are very real questions about the legitimacy of the warrant-approval process in Riverside County during the time when our clients were wiretapped, including questions about the
behavior of the judge and the District Attorney’s Office,” said EFF Staff Attorney Stephanie Lacambra. “The court should release information about how this wiretap was approved and
why, so both our clients and the public can understand what happened during Riverside County’s massive surveillance campaign.”
Riverside County’s wiretaps have since dropped from 640 in 2015 to 118 in 2017, nearly all related to narcotics cases. Riverside was second only to Los Angeles County, which
filed 230 wiretaps last year. Orange County and San Diego County, which are both larger than Riverside in terms of population, only approved 10 and 28 wiretaps respectively. Yet the
District Attorney’s Office has still refused to release the old and new policies on wiretapping, rejecting multiple EFF public records requests since 2015.
“There are so many questions about what went on with these wiretaps. And it’s only fair our clients get answers. They don’t know if they were targeted by accident, or if they were
suspected of something, or on what basis the order was issued at all,” said Tina Salvato of Sheppard, Mullin, Richter & Hampton, LLP. “But this is also a matter of public
interest, and we hope the court sees this too.”
Data on state-level wiretaps are disclosed in the U.S. Court’s annual Wiretap Report and in the California Attorney General’s Electronic Interceptions Report. Last year, EFF successfully pressured the Attorney General to begin making these records available in a
machine-readable format through the California Public Records Act.
Stupid Patent of the Month: How 34 Patents Worth $1 Led to Hundreds of Lawsuits
(Wed, 31 Oct 2018)
One of the nation’s most prolific patent trolls is finally dead. After more than a decade of litigation and more than 500 patent suits, Shipping & Transit LLC (formerly known as
Arrivalstar) has filed for
bankruptcy. As part of its bankruptcy filing [PDF], Shipping & Transit was required to
state how much its portfolio of 34 U.S. patents is worth. Its answer: $1.
We are recognizing Shipping & Transit’s entire U.S. portfolio as our latest stupid patent of the month. We agree that these patents are worthless. Indeed, they have always been
worthless, except as litigation weapons. In the hands of their unscrupulous owners, they caused enormous damage, costing productive companies more than $15 million in licensing fees
and untold legal expenses. That’s tens of millions of dollars that won’t be used to invest in new products, reward shareholders, or give raises to workers.
Dozens of worthless patents
All patent troll stories start with the Patent Office. You can’t be a patent troll without patents. And you can’t have patents unless the Patent Office grants them. We have
found many occasions to write about problems with patent examination.
The Patent Office spends only a few hours per application and regularly issues software patents without considering any real-world software at all. This helps explain how an
entity like Shipping & Transit could end up with dozens of valueless patents.
Shipping & Transit claims to be “one of the pioneers of determining when something is arriving and being able to message that out.” Its patent portfolio mostly relates to
tracking vehicles and packages. Of course, Shipping & Transit did not invent GPS tracking or any protocols for wireless communication. Rather, its patents claim mundane methods of
using existing technology.
Consider U.S. Patent No. 6,415,207. This patent claims a “system for monitoring and reporting
status of vehicles.” It describes using computer and software components to store status information associated with a vehicle, and communicate that information when requested. In
other words: vehicle tracking, but with a computer. It doesn’t disclose any remotely new software or computer technology. Rather, the patent claims the use of computer and software
components to perform routine database and communications operations. There is nothing inventive about it.
Given that it was aggressively filing lawsuits as recently as 2016, it is striking to see Shipping & Transit now admit that its patent portfolio is worthless. While many of its
patents have expired, that is not true of all of them. For example, U.S. Patent No. 6,415,207
does not expire until 2020. Also, the statute of limitations for patent infringement is six years. An expired patent can be asserted against past infringement so long as the
infringement occurred before the patent expired and within the last six years. Many of the patents Shipping & Transit have asserted in court expired less than six years before its
bankruptcy filing. Yet Shipping & Transit valued all of its U.S. patents at $1.
A decade of patent trolling
When it was known as Arrivalstar, Shipping & Transit sued a number of cities and public transit agencies claiming that transit apps infringed its patents. (While the exact legal relationship between Arrivalstar
S.A. and Shipping & Transit LLC is unclear, Shipping & Transit has itself said that it was “formerly known as Arrivalstar.”) Its litigation had all the hallmarks of classic
patent trolling. When transit agencies banded together to defend themselves on the merits, it quickly abandoned its claims.
Shipping & Transit’s campaign continued for years against a variety of targets. In 2016, it was the top patent litigator in the entire country, mostly targeting small
businesses. One judge described its tactics as “exploitative
litigation.” The court explained:
Plaintiff’s business model involves filing hundreds of patent infringement lawsuits, mostly against small companies, and leveraging the high cost of litigation to extract
settlements for amounts less than $50,000.
For many years, this strategy worked. Shipping & Transit/Arrivalstar is reported to have
collected more than $15 million from defendants between 2009 and 2013.
After more than a decade, Shipping & Transit’s exploitative tactics finally caught up with it. First one, then another federal judge awarded attorneys’ fees to the defendants in cases brought by
Shipping & Transit. With defendants successfully fighting back, it stopped filing new cases.
The end: Shipping & Transit files an inaccurate bankruptcy petition
Shipping & Transit filed its bankruptcy petition [PDF] on September 6, 2018. The
petition discloses that Shipping & Transit’s gross revenue in the two-year period of 2016 and 2017 was just over $1 million. Of course, this does not include the legal
costs that Shipping & Transit imposed on its many targets. It claimed to have no revenue in 2018.
Other than its 34 U.S. patents (valued at $1), and its 29 worldwide patents (also valued at $1), Shipping & Transit claims to have no assets at all. Where did the more than $1 million
it received go? The application doesn’t say.
The bankruptcy petition, submitted under the penalty of perjury and signed by Shipping & Transit’s Managing Member Peter Sirianni, contains at least one false statement. A
bankruptcy petition includes a Statement of Financial Affairs (Form 207). Part 3 of this form requires the debtor to list any “legal actions … in which the debtor was involved in any
capacity—within 1 year before filing this case.” Shipping & Transit said “none.”
But that isn’t true. On July 23, 2018, a writ of execution [PDF] was issued as to Shipping &
Transit in the amount of $119,712.20. This writ was issued in Shipping and Transit, LLC v. 1A Auto, Inc., Case No. 9:16-cv-81039, in the Southern District of Florida. The
judge in that case had issued a final judgment [PDF] on April 3, 2018, awarding fees
and costs to the defendant. Both of these orders, and many other court filings, took place within a year of Shipping & Transit’s bankruptcy petition. Yet Shipping & Transit
still affirmed that it had not been involved in litigation “in any capacity” within a year of the bankruptcy filing.
Shipping & Transit’s petition does list 1A Auto as an unsecured creditor. Even though a court has issued a writ of execution with a precise six-figure amount, Shipping &
Transit stated that the amount of 1A Auto’s claim is “unknown.”
It is not surprising that a decade of abusive patent trolling would end with an inaccurate bankruptcy petition. To be clear, our opinion that Shipping & Transit’s bankruptcy
petition includes a false statement submitted under oath is based on the following disclosed facts: its answer to Part 3 of Form 207 of its petition, and the public docket in Case No. 9:16-cv-81039 in the United States District Court for the
Southern District of Florida.
A monster story for Halloween
USPTO Director Andrei Iancu recently gave a speech where he suggested that those who complain about patent trolls are spreading “scary monster stories.” It may finally be dead, but Shipping &
Transit was a patent troll, and it was very, very real. We estimate that its lawsuits caused tens of millions of dollars of economic harm (in litigation costs and undeserved
settlements) and distracted hundreds of productive companies from their missions. Research shows that companies sued for patent infringement later
invest less in R&D.
A patent system truly focused on innovation should not be issuing the kind of worthless patents that fueled Shipping & Transit’s years of trolling. Courts should also do more to
prevent litigation abuse. It shouldn’t take an entire decade before an abusive patent troll faces consequences and has to shut down.
While it lived, Shipping & Transit/Arrivalstar sued over 500 companies and threatened many hundreds more. That might be a “monster story,” but it is true.
The EU's Link Tax Will Kill Open Access and Creative Commons News
(Sun, 28 Oct 2018)
All this month, the European Union's "trilogue" is meeting behind closed doors to hammer out the final wording of the new Copyright Directive, a once-noncontroversial regulation that became a hotly contested matter
when, at the last minute, a set of extremist copyright proposals were added and voted through.
One of these proposals is Article 11, the "link tax," which requires a negotiated, paid license for links that contain "excerpts" of news stories. The Directive is extremely vague on
what defines a “link” or a “news story” and implies that an “excerpt” consists of more than one single word from a news story (many URLs contain more than a single word from
the headline of the article they point to).
Article 11 is so badly drafted that it's hard to figure out what it bans and what it permits (that's why we've written to the trilogue negotiators to ask them to clarify key points). What can be
discerned is deeply troubling.
One of the Directive's "recitals" is Recital 32:
"(32) The organisational and financial contribution of publishers in producing press publications needs to be recognised and further encouraged to ensure the sustainability of the
publishing industry and thereby to guarantee the availability of reliable information. It is therefore necessary for Member States to provide at Union level legal protection for
press publications in the Union for digital uses. Such protection should be effectively guaranteed through the introduction, in Union law, of rights related to copyright for the
reproduction and making available to the public of press publications in respect of digital uses in order to obtain fair and proportionate remuneration for such uses. Private uses
should be excluded from this reference. In addition, the listing in a search engine should not be considered as fair and proportionate remuneration." (emphasis added)
Once you get through the eurocratese here, Recital 32 suggests that (1) anyone who wants to link to the news has to have a separate, commercial license; and (2) news companies can't
waive this right, even through Creative Commons licenses and other tools for granting blanket permission.
Many news organizations allow anyone to link to their work, including some of the world's leading newsgatherers: ProPublica ("ProPublica's mission is for our journalism to have impact, that is for it to spur
reform"), Global Voices (a leading source of global news written by reporters on the
ground all over the planet), and many others. These Creative Commons news entities often rely on public donations to do their excellent, deep, investigative work. Allowing free re-use
is a key way to persuade their donors to continue that funding. Without Creative Commons, some of these news entities may simply cease to exist.
Beyond sources of traditional news, an ever-growing section of the scholarly publishing world (like the leading public health organisation Cochrane) make some or all of their work available for free re-use in the spirit of "open
access" -- the idea that scholarship and research benefit when scholarly works are disseminated as freely as possible.
Article 11's trampling of Creative Commons and open access isn't an accident: before link taxes rose to the EU level, some EU countries tried their own national versions. When
Germany tried it, the major newspapers simply granted Google a free license to use their works, because they couldn’t afford to be boycotted by the search giant. When Spain passed its
own link tax, the government tried to prevent newspapers from following the same path by forcing all news to have its own separate, unwaivable commercial right. Spanish publishers
promptly lost 14% of their traffic and €10,000,000/year.
All of this is good reason to scrap Article 11
altogether. The idea that creators can be "protected" by banning them from sharing their works is perverse. If copyright is supposed to protect creators' interests, it should
protect all interests, including the interests of people who want their materials shared as widely as possible.
New Exemptions to DMCA Section 1201 Are Welcome, But Don’t Go Far Enough
(Fri, 26 Oct 2018)
We’re pleased to announce that the Library of Congress and the Copyright Office have expanded the exemptions to Section 1201 of the DMCA, a dangerous law that inhibits speech, harms competition, and threatens digital
security. But the exemptions are still too narrow and too complex for most technology users.
Section 1201 makes it illegal to “circumvent” digital locks that control access to copyrighted works, and to make and sell
devices that break digital locks. Every three years, the Copyright Office holds hearings and accepts evidence of how this ban harms lawful activities. This year, EFF proposed expansions of some of the existing exemptions for video
creators, repair, and jailbreaking.
With this rulemaking, there will be more circumstances where people can legally break digital access controls to do legal things with their own media and devices:
People who repair digital devices, including vehicles and home appliances, will have more protection from legal threats.
Filmmakers, students, and ebook creators will be able to use video clips more freely.
People can now jailbreak and modify voice
assistant devices like the Amazon Echo and Google Home, as they can with smartphones and tablets.
Security researchers will have more freedom to
investigate and correct flaws on a wider range of devices.
But the Copyright Office rejected proposals from many people
to simplify the exceptions so that ordinary people can use them without lawyers. EFF proposed to expand the exemption for vehicle maintenance and repair to cover all devices that
contain software, and to cover legal modification and tinkering that goes beyond repair. We cited a broad range of examples where Section 1201 interfered with people’s use of their
own digital devices. But the Office expanded the exemption only to “smartphone[s],” “home appliance[s],” and “home system[s], such as a refrigerator, thermostat, HVAC or electrical
system.” This list doesn’t come close to capturing all of the personal devices that contain software, including the ever-growing “Internet of Things,” for which people need the
ability to repair and maintain without legal threats. And the Office has again refused to expand the exemption to lawful modification and tinkering.
In a similar way, the Copyright Office rejected proposals to stop dividing video creators into narrow categories like “documentary filmmakers” and creators of “noncommercial videos”
and “multimedia e-books.” Each group of creators will still have to jump through several varying legal hoops to avoid lawsuits under Section 1201 as they do their work.
The exemptions announced yesterday will help more creators and technology users, but they don’t save the law from being an unconstitutional restraint on freedom of speech. The
Constitution doesn’t permit speech licensing regimes like this rulemaking that give government officials the power to deny people permission to express themselves using technology
where officials claim total discretion to grant or deny permission without binding legal standards or judicial oversight. EFF represents entrepreneur Andrew “bunnie” Huang and
Professor Matthew Green in a lawsuit seeking to overturn Section 1201. Having finished this year’s rulemaking, we
look forward to continuing that case.
2018 DMCA Rulemaking
EFF Wins DMCA Exemption Petitions for Tinkering With Echos and Repairing Appliances, But New Circumvention Rules Still Too Narrow To Benefit Most Technology Users
(Fri, 26 Oct 2018)
Library of Congress Denied Petition for Exemption Permitting Repair of All Software-Enabled Devices
Washington, D.C.—The Electronic Frontier Foundation won petitions submitted to the Library of Congress that will make it easier for people to legally remove or repair software in the
Amazon Echo, in cars, and in personal digital devices, but the library refused to issue the kind of broad, simple and robust exemptions to copyright rules that would benefit millions
of technology users.
The Library of Congress, through its subsidiary, the Copyright Office, yesterday granted several new exemptions to Section 1201 of the Digital
Millennium Copyright Act (DMCA). Section 1201 makes it illegal to circumvent the computer code that prevents copying or modifying in most
software-controlled products—which nowadays includes everything from refrigerators and baby monitors to tractors and smart speakers. EFF has fought a years-long battle against anti-consumer DMCA abuse, since the statute on its face gives manufacturers, tech companies,
and Hollywood studios outsized control over the products we own.
The Library and Copyright Office granted an exemption that allows jailbreaking of voice assistant devices like the Amazon Echo and Google Home. Owners of those devices will now be
able to add and remove software, adapting the devices to their own needs, even in ways the manufacturer doesn’t approve of.
The agencies also granted EFF’s proposal to expand the universe of devices that security researchers can examine for flaws. Instead of narrow categories of devices, researchers can
now access the software on any computer system or network for good-faith security research without fear of a DMCA lawsuit.
Exemptions allowing video creators to use encrypted video clips as source material were expanded and made simpler. And new exemptions were granted covering maintenance and repairs of
vehicles and some personal devices.
Yet despite these exemptions, the agencies denied EFF’s broader proposal that would end this expensive, piecemeal approach requiring individual repair requests on behalf of each new
kind of device, and instead create an exemption to permit repair of all devices that contain software. They also rejected EFF’s proposal to allow for lawful
modification and tinkering with digital devices that goes beyond repair.
“Software-enabled machines and devices surround us. Most of us own and use several every day. Section 1201 prevents people from tinkering with products they
purchase, and prevents researchers, scientists, educators, and creators from looking for new ways to improve, create, and innovate,” said EFF Senior Staff Attorney Mitch Stoltz. “This
expensive regulatory process of seeking individual permission for each kind of device is a tremendous drag on innovation. We will continue to advocate for exemptions to Section
1201 so that people—not manufacturers—control the appliances, computers, toys, vehicles, and other products they own, but this process is unreasonable.”
In addition to fighting for exemptions, EFF is challenging the constitutionality of the DMCA in court on
First Amendment grounds.
“Section 1201 is an unconstitutional restraint on speech because it blocks a wide range of legitimate, noninfringing expression,” said EFF Senior Staff Attorney Kit Walsh. “While the
new exemptions are important victories, they don’t cure the law’s fatal flaws.”
“Information Fiduciaries” Must Protect Your Data Privacy
(Thu, 25 Oct 2018)
Legislators across the country are writing new laws to
protect your data privacy. One tool in the toolbox could be “information fiduciary” rules. The basic idea is this: When you give your personal information to an online company in
order to get a service, that company should have a duty to exercise loyalty and care in how it uses that information. Sounds good, right? We agree, subject to one major caveat: any
such requirement should not replace other privacy protections.
Why We Need Information Fiduciary Rules
The law of “fiduciaries” is hundreds of years old. It arises from economic relationships based on asymmetrical
power, such as when ordinary people entrust their personal information to skilled professionals (particularly doctors, lawyers, and accountants). In exchange for this trust, such
professionals owe their customers a duty of loyalty, meaning they cannot use their customers’ information against their customers’ interests. They also owe a duty of care, meaning
they must act competently and diligently to avoid harm to their customers. These duties are enforced by government licensing boards, and by customer lawsuits against fiduciaries who breach them.
These long-established skilled professions have much in common with new kinds of online businesses that harvest and monetize their customers’ personal data. First, both have a direct
contractual relationship with their customers. Second, both collect a great deal of personal information from their customers, which can be used against these customers. Third, both
have one-sided power over their customers: online businesses can monitor their customers’ activities, but those customers don’t have reciprocal power.
Accordingly, several law professors have proposed adapting these venerable fiduciary rules
to apply to online companies that collect personal data from their customers. New laws would define such companies as “information fiduciaries.”
What Information Fiduciary Rules Would Do
EFF supports legislation to create “information fiduciary” rules. While the devil is in the details, those rules might look something like this:
If a business has a direct contractual relationship with a customer (such as an online terms-of-service agreement), the business would owe fiduciary duties to their customer as to
the use, storage, and disclosure of the customer’s personal information. Covered entities would include search engines, ISPs, email providers, cloud storage services, and social
media. Also covered would be online companies that track user activity across their own websites, and (through tracking tools) across other websites.
To avoid an undue burden on small start-ups and noncommercial free software projects that often spur innovation, information fiduciary rules would exempt (wholly or partially)
smaller entities. A company’s size would be defined by its revenue, or by its
number of customers or employees. Care should be taken to make sure that these rules (like any others) do not inadvertently cement the power of the current technology giants.
Covered entities would owe their customers a duty of loyalty, that is, to act in the best interests of their customers, without regard to the interests of the business. They would
also owe a duty of care, that is, to act in the manner expected by a reasonable customer under the circumstances. These duties would apply regardless of whether the customer pays for
the service. However, they would not bar a covered entity from earning a profit with their customers’ data.
If a business violates one of these duties, the customer would be able to bring their own lawsuit against the business.
New information fiduciary rules would help address situations that have arisen in the past:
If a company collects data for one purpose, it would not be allowed to use that data for an entirely different purpose, or transfer it to a third party that would do so. For
example, the self-description you give a company in response to a personality quiz should not be used to try to influence how you vote. Similarly, the phone number you give a company to
secure your personal information with two-factor authentication should not be used for targeted ads.
If an online business gathers and stores its customers’ personal information, it would be required to take reasonable steps to secure that information and to promptly notify you if the information leaks or is stolen.
An online business would not be allowed to secretly conduct human subject experiments
on its customers that attempt to change their moods or behaviors.
The rules can also help in potential future situations as well:
If a customer publicly criticizes an online business, the business would not be allowed to attempt to discredit the customer by publishing their personal information.
If an online business provides travel directions to a customer, it would not be allowed to secretly route a customer past another business that paid for this routing.
If a social media company encourages its customers to vote, it would not be allowed to do so selectively based on whether a customer’s personal information indicates they will vote consistently with the company’s preferences.
What Information Fiduciary Rules Would Not Do
While information fiduciary rules would be an important step forward, they are just one strand of the larger tapestry of data privacy legislation.
First, while information fiduciary rules are a good fit for “first-party” data miners that have a direct contractual relationship to their customers (such as social media companies
and online vendors), these rules may be less applicable to “third-party” data miners that have no direct relationship to the people whose data they gather (such as credit agencies).
The essence of the fiduciary relationship is the choice of a customer to entrust someone else with their personal information.
Second, while information fiduciary rules would limit how a first-party data miner may use, store, and disclose a customer’s personal information, these rules may have less to say
about when and how a business may initially collect a customer’s personal information.
Third, there is uncertainty as to how information fiduciary rules will be applied in practice. Fiduciary rules are hundreds of years old, and have typically been applied to skilled
professionals. But since the law of information fiduciaries does not yet exist, it remains unclear exactly what enforceable limits it will place on online businesses.
We should not put all of our eggs in this one basket. EFF supports information fiduciary rules. But these rules must not displace other data privacy rules that EFF also supports, such as opt-in consent to collect or share personal information, the “right to
know” what personal information has been collected from you, and data portability. Companies subject to information fiduciary rules
must follow these other data privacy rules, too.
Likewise, a federal information fiduciary statute must not preempt state laws that provide these other privacy safeguards. EFF has been sounding the alarm against federal legislation that preempts strong state data privacy laws—and
that includes any federal law on information fiduciaries.
Corporate Speech Police Are Not the Answer to Online Hate
(Thu, 25 Oct 2018)
A coalition of civil rights and public interest groups issued recommendations today on policies they believe
Internet intermediaries should adopt to try to address hate online. While there’s much of value in these recommendations, EFF does not and cannot support the full document. Because we
deeply respect these organizations, the work they do, and the work we often do together; and because we think the discussion over how to support online expression—including ensuring
that some voices aren’t drowned out by harassment or threats—is an important one, we want to explain our position.
We agree that online speech is not always pretty—sometimes it’s extremely ugly and causes real world harm. The effects of this kind of speech are often disproportionately felt by
communities for whom the Internet has also provided invaluable tools to organize, educate, and connect. Systemic discrimination does not disappear and can even be amplified online.
Given the paucity and inadequacy of tools for users themselves to push back, it’s no surprise that many would look to Internet intermediaries to do more.
We also see many good ideas in this document, beginning with a right of appeal. There seems to be near universal agreement that intermediaries that choose to take down “unlawful” or
“illegitimate” content will inevitably make mistakes. We know that both
human content moderators and machine learning algorithms are prone to
error, and that even low error rates can affect large swaths of users. As such, companies must, at a minimum, make sure there’s a process for appeal that is both rapid and
fair, and not only for “hateful” speech, but for all speech.
Another great idea: far more transparency. It’s very difficult for users and policymakers to comment on what intermediaries are doing if we don’t know the lay of the land. The
model policy offers a pretty granular set of requirements that would provide a reasonable start. But we believe that transparency of this kind should apply to all types of speech.
Another good feature of the model policy is its provisions for evaluation and training, so we can figure out the actual effects of various content moderation approaches.
So there’s a lot to like about these proposals; indeed, they reflect some of the principles EFF and others have supported for years.
But there’s much to worry about too.
Companies Shouldn’t Be The Speech Police
Our key concern with the model policy is this: It seeks to deputize a nearly unlimited range of intermediaries—from social media platforms to payment processors to domain name
registrars to chat services—to police a huge range of speech. According to these recommendations, if a company helps in any way to make online speech happen, it should monitor that
speech and shut it down if it crosses a line.
This is a profoundly dangerous idea, for several reasons.
First, enlisting such a broad array of services to start actively monitoring and intervening in any speech for which they provide infrastructure represents a dramatic
departure from the expectations of most users. For example, users will have to worry about satisfying not only their host’s terms and conditions but also those of every service in the
chain from speaker to audience—even though the actual speaker may not even be aware of all of those services or where they draw the line between hateful and non-hateful speech. Given
the potential consequences of violations, many users will simply avoid sharing controversial opinions altogether.
Second, we’ve learned from the copyright wars that many services will be hard-pressed to come up with responses that are tailored solely to objectionable content. In 2010,
for example, Microsoft sent a DMCA takedown notice to Network Solutions, Cryptome’s DNS and hosting provider, complaining about Cryptome’s (lawful) posting of a global law enforcement
guide. Network Solutions asked Cryptome to remove the guide. When Cryptome refused, Network Solutions pulled the plug on the entire Cryptome website—full of clearly legal content—because Network Solutions was not technically capable
of targeting and removing the single document. The site was not restored until wide outcry in the blogosphere forced Microsoft to retract its takedown request. When the Chamber
of Commerce sought to silence a parody website created by activist group The Yes Men, it sent a DMCA takedown notice
to the Yes Men’s hosting service’s upstream ISP, Hurricane Electric. When the hosting service May First/People Link resisted Hurricane Electric’s demands to remove the parody site, Hurricane Electric shut down
May First/People Link’s connection entirely, temporarily taking offline hundreds of “innocent bystander” websites as collateral damage.
Third, we also know that many of these service providers have only the most tangential relationship to their users; faced with a complaint, takedown will be much easier and
cheaper than a nuanced analysis of a given user’s speech. As the document itself acknowledges and as the past unfortunately demonstrates, intermediaries of all stripes are
not well-positioned to make good decisions about what constitutes “hateful”
expression. While the document acknowledges that determining hateful activities can be complicated “in a small number of cases,” the number likely won’t be small at all.
Finally, and most broadly, this document calls on companies to abandon any commitment they might have to the free and open Internet, and instead embrace a thoroughly
locked-down, highly monitored web, from which a speaker can be effectively ejected at any time, without any path to address concerns prior to takedown.
To be clear, the free and open Internet has never been fully free or open—hence the impetus for this document. But, at root, the Internet still represents and embodies an
extraordinary idea: that anyone with a computing device can connect with the world, anonymously or not, to tell their story, organize, educate and learn. Moderated forums can be
valuable to many people, but there must also be a place on the Internet for unmoderated communications, where content is controlled neither by the government nor a large corporation.
What Are “Hateful Activities”?
The document defines “hateful activities” as those which incite or engage in “violence, intimidation, harassment, threats or defamation targeting an individual or group based on their
actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation or disability.”
We may agree that speech that does any of these things is deeply offensive. But the past proves that companies are ill-equipped to make informed decisions about what falls into these
categories. Take, for example, Facebook’s decision, in
the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. Or Twitter’s decision to use harassment
provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or the decision by Twitter to mark tweets containing the word
“queer” as offensive, regardless of context. These and many other decisions show that blunt policies designed to combat “hateful” speech can have unintended consequences.
Furthermore, when divorced from a legal context, terms like “harassment” and “defamation” are open to a multitude of interpretations.
If You Build It, Governments Will Come
The policy document also proposes that Internet companies “combine technology solutions and human actors” in their efforts to combat hateful activities. The document rightly points
out that flagging can be co-opted for abuse, and offers helpful ideas for improvement, such as more clarity around flagging policies and decisions, regular audits to improve flagging
practices, and employing content moderators with relevant social, political, and cultural knowledge of the areas in which they operate.
However, the drafters are engaging in wishful thinking when they seek to disclaim or discourage governmental uses of flagging tools. We know that state and state-sponsored actors have
weaponized flagging tools to silence
dissent. Furthermore, once processes and tools to silence “hateful activities” are expanded, companies can expect a flood of demands to apply them to other speech. In the
U.S., the First Amendment and the safe harbor of CDA 230 largely prevent such requirements. But recent legislation has started to chip away at Section 230, and we expect to see more efforts along
those lines. As a result, today’s “best practices” may be tomorrow’s requirements.
Our perspective on these issues is based on decades of painful history, particularly with social media platforms. Every major social media platform sets forth rules for its users, and
violations of these rules can prompt content takedowns or account suspensions. And the rules—whether they relate to “hateful activities” or other types of expression—are often
enforced against innocent actors. Moreover, because the platforms have to date refused our calls for transparency, we can’t even quantify how often they fail at enforcing their own rules.
We’ve seen prohibitions on hate speech employed to silence individuals engaging in
anti-racist speech; rules against harassment used to suspend the account of an LGBTQ activist calling out their harasser; and a ban on nudity used to censor women who share childbirth images in private
groups. We’ve seen false copyright and trademark allegations used to take down all kinds of lawful content, including
time-sensitive political speech. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly
been used to take down a famous Vietnam War photograph.
These recommendations and model policies are trying to articulate better content moderation practices, and we appreciate that goal. But we are also deeply skeptical that even the
social media platforms can get this right, much less the broad range of other services that fall within the rubric proposed here. We have no reason to trust that they will, and
every reason to expect that their efforts to do so will cause far too much collateral damage.
Given these concerns, we have serious reservations about the approach the coalition is taking in this document. But there are important ideas in it as well, notably the opportunity
for users to appeal content moderation decisions and expanded transparency from corporate platforms, and we look forward to working together to push them forward.
DMCA Mystery: Did Epic Games Send a Takedown to Itself?
(Wed, 24 Oct 2018)
Welcome to a brand new kind of whodunnit. This one has everything: an extremely popular game, a short-lived takedown, and so very many memes. The ways of the DMCA and
YouTube are unknown and unknowable.
Trailers are a time-tested and proven way of getting attention for a new piece of media—movies, television, video games, whatever. If it’s a highly-anticipated or very
popular title, you can get a whole bunch of free press with a trailer as everyone shares and analyzes it. And so it is unusual, in that situation, for a trailer to be
officially released without every bit of it being vetted and approved. (Unusual, but not unheard of.)
And even if a company uploaded the wrong trailer to YouTube or Twitter or wherever, they could always delete it from their own account. And then, sure, use the DMCA to
keep people from uploading copies. That’s what makes what happened with Fortnite so weird.
Fortnite is a staggeringly popular game. Epic Games, which makes it, put a trailer for Battle Pass Season 6 on the official Fortnite YouTube page. So far, so normal.
Then it briefly vanished, with a screenshot posted to Reddit showing that the video was unavailable due to a copyright claim by … Epic Games, Inc. Of course, this touched
off a wave of meme responses, although “it hurt itself in confusion” gets
extra points for being a game meme being used to make fun of another game, which is also a source of many memes.
The trailer is back up, but the mystery of what happened remains.
DMCA takedowns require that the notice be sent by the person who owns the material and that they believe the use of their material to be infringing. How could Fortnite
be infringing on Fortnite? If there was material in there they did not own, they can’t file a DMCA notice asking for it to be taken down on that basis … they don’t own
the work being infringed. And even if that was the case, it’s their channel, they can just delete the video.
If Epic Games has an automated process that just sends out takedowns for everything that matches its content, then we are seeing a brand new example of why copyright
bots don’t work. Someone forgot to make sure that Epic Games’ own accounts were excluded from the search, and it DMCA’d itself. That still means a bogus takedown was sent.
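If that's what happened, the failure mode is easy to reproduce in a few lines. The sketch below is purely illustrative—the channel names, fingerprint values, and function names are hypothetical, and real matching systems fingerprint audio and video rather than strings—but it shows how a matching bot that never checks the uploader's identity will inevitably flag its owner's own uploads:

```python
# Hypothetical sketch of a naive copyright bot. It compares every upload
# against a rightsholder's catalog and files a takedown on any match --
# including, if nobody thought to exclude them, the rightsholder's own uploads.

OWN_CHANNELS = {"EpicGamesOfficial"}  # the allowlist the bot should consult


def matches_catalog(upload: dict, catalog: set) -> bool:
    """Stand-in for real content fingerprinting: a simple set lookup."""
    return upload["fingerprint"] in catalog


def takedown_targets(uploads: list, catalog: set, exclude_own: bool = True) -> list:
    """Return the uploads the bot would send DMCA notices against."""
    targets = []
    for upload in uploads:
        if not matches_catalog(upload, catalog):
            continue
        # The speculated bug: if this check is missing (exclude_own=False),
        # the bot targets the official channel's own trailer.
        if exclude_own and upload["channel"] in OWN_CHANNELS:
            continue
        targets.append(upload)
    return targets


catalog = {"fortnite-s6-trailer"}
uploads = [
    {"channel": "EpicGamesOfficial", "fingerprint": "fortnite-s6-trailer"},
    {"channel": "random-reuploader", "fingerprint": "fortnite-s6-trailer"},
]

print([u["channel"] for u in takedown_targets(uploads, catalog)])
# prints ['random-reuploader']
```

With `exclude_own=False`, the official channel lands on the target list too—a one-line omission that, at the scale of automated enforcement, produces exactly this kind of self-takedown.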
If the takedown was sent by someone pretending to be Epic Games, that’s also not supposed to be allowed under the DMCA.
There’s no situation where this was a valid takedown. It’s rare that wrongful takedown notices result in any consequences for the sender. They can in theory, although
it may require many years of litigation. In general, there are
not enough disincentives to discourage bad takedowns, including weird, mysterious takedowns like this one.
Blunt Policies and Secretive Enforcement Mechanisms: LGBTQ+ and Sexual Health on the Corporate Web
(Wed, 24 Oct 2018)
The free and open Internet has enabled disparate communities to come together across miles and borders, and empowered marginalized communities to share stories, art, and
information with one another and the broader public—but restrictive and often secretive or poorly messaged policies by corporate gatekeepers threaten to change that.
Content policies restricting certain types of expression—such as nudity, sexually explicit content, and pornography—have been in place for a long time on most social networks.
But in recent years, a number of companies have instituted changes in the way policies are enforced, including demonetizing or hiding content behind an age-based interstitial; using
machine learning technology to flag content; blocking keywords in search; or disabling thumbnail previews for video content.
While there are some benefits to more subtle enforcement mechanisms—age restrictions, for example, allow content that would otherwise be removed entirely to remain available to
some users—they can also be confusing for users. And when applied mistakenly, they are difficult—if not impossible—to appeal.
In particular, policy restrictions on “adult” content have an outsized impact on LGBTQ+ and other marginalized communities. Typically aimed at keeping sites “family friendly,”
these policies are often unevenly enforced, classifying LGBTQ+ content as “adult” when similar heterosexual content isn’t. Similarly, as we noted last year, policies are
sometimes applied more harshly to women’s content
than to similar content by men.
Watts the Safeword is a YouTube channel that seeks to provide “kink-friendly
sexual education.” In March, one of the channel’s creators, Amp, noticed that the thumbnails that the channel’s team had carefully selected to represent the videos were not showing
correctly in search. Amp reached out to YouTube and, in a back-and-forth email exchange, was repeatedly told by several employees that it was a technical error. Finally, after six
days, Amp was told that “YouTube may disable custom thumbnails for certain search results when they’re considered inappropriate for viewers.” To determine inappropriateness, the
YouTube employee wrote, the company considers “among other signals, audience retention, likes and dislikes and viewer feedback.”
When I spoke to Amp earlier this month, he told me that he has observed a number of cases in which sexual education that contains a kink angle is demonetized or otherwise
downgraded on YouTube where other sexual education content isn’t. Amp expressed frustration with companies’ lack of transparency about their own enforcement practices: “They’re
treating us like children that they don’t want to teach.”
Caitlyn Moldenhauer produces a podcast entitled “Welcome to My Vagina” that
reports on gender, sex, and reproductive rights. “The intention of our content,” she told me, “is to educate, not entertain.”
The podcast, which uses a cartoon drawing of a vulva puppet as its logo, was rejected from TuneIn, an audio streaming service that hosts hundreds of podcasts. When Caitlyn wrote
to the company to inquire about the rejection, she received a message that read: “Thank you for wanting to add your podcast to our directory unfortunately we will not be adding please
refer to our terms and conditions for more information.”
A close reading of the terms of service and acceptable use policy failed to provide clarity. Although the latter refers to “objectionable
content,” the definition doesn’t include sexuality or nudity—only “obscene,” “offensive,” or “vulgar” content, leading Caitlyn to believe that her podcast was classified as such. A
cursory search of anatomical and sexual terms, however, demonstrates that the site contains dozens of podcasts about sex, sexual health, and pornography—including one that Caitlyn
produces—raising questions as to why “Welcome to My Vagina” was targeted.
These are not the only stories like this. Over the past few months, we’ve observed a number of other cases in which LGBTQ+ or sexual health content has been wrongfully removed or restricted:
The YouTube channel of Recon—a “fetish-focused dating app”—was suspended, and later reinstated after receiving press coverage. It was the second such occurrence.
The Facebook page of Naked Boys Reading was removed after being flagged for policy violations. After the organizers accused Facebook of “queer erasure,” the page was restored.
In 2017, Twitter began collapsing “low-quality” and “abusive” tweets behind a click-through interstitial—but users have reported that tweets merely containing the words
“queer” and “vagina” are affected.
Chase Ross, a long-time YouTuber who creates educational videos about transgender issues and about his personal experience as a trans person, has reported that videos containing the word “trans” in
their title are regularly demonetized.
Many creators have suggested the uptick in enforcement is the result of the passing of FOSTA, a law that purports to target sex trafficking
but is already broadly chilling online speech and silencing marginalized voices (and to which we are mounting a legal challenge). Although it’s difficult to attribute specific policy
changes to FOSTA, a number of companies have made amendments to their sexual content policies in the wake of the bill’s passing.
Others, such as the creator of Transthetics, have suggested that the
crackdown—particularly on transgender content—is the result of public pressure. The company, which produces "innovative prosthetics for trans men et al," has had its YouTube page
suspended twice, and reinstated both times only after they demonstrated that YouTube allowed similar, heterosexual content. In a video discussing the most recent suspension of Transthetics’ YouTube page, its creator
said: “My worry is that for every Alex Jones channel that they ban, they feel that a bit of a tit for tat needs to happen and in their view, the polar opposite of that is the LGBT community.”
Both Amp and Caitlyn expressed a desire for companies to be clear about their policies and how they enforce them, and we agree. When users don’t understand why their content is
removed—or otherwise downgraded—it can take days for them to make necessary changes to comply with the rules, or to get their content reinstated. Says Amp:
“It's not just an inconvenience but in most cases requires us
to monitor our social media and scrutinize every inch of our content to find if we've been removed, restricted or deleted. YouTube specifically says it will notify you when something
is affected, but it rarely happens.”
Similarly, Caitlyn told me: "The issue isn't really that we get flagged, as much as when we reach out over and over again to try and solve the issue or defend ourselves
against their content policies, that we're met with radio silence ... I wish sometimes that I could argue with a guidelines representative and hear a valid reason why we are
denied access to their platform, but we're not even given a chance to try. It's either change or disappear."
Companies must be transparent with their users about their rules and enforcement mechanisms, and any restrictions on content must be clearly messaged to users. Furthermore, it’s
imperative that companies implement robust systems of appeal so that users whose content is wrongly removed can have it quickly reinstated.
But with so many examples related to sex and sexuality, we also think it’s time for companies to consider whether their overly restrictive or blunt policies are harming some of
their most vulnerable users.
EFF's Letter to the EU's Copyright Directive Negotiators
(Wed, 24 Oct 2018)
Today, the Electronic Frontier Foundation sent the note below to every member of the EU
bodies negotiating the final draft of the new Copyright Directive in the "trilogue" meetings.
The note details our grave misgivings about the structural inadequacies and potential for abuse in the late-added and highly controversial Articles 11 and 13, which require paid
licenses for links to news sites (Article 11) and the censoring of public communications if they match entries in a crowdsourced database of copyrighted works (Article 13).
I write today on behalf of the Electronic Frontier Foundation, to raise urgent issues related to Articles 11 and 13 of the upcoming Copyright in the Digital Single Market Directive,
currently under discussion in the Trilogues.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression,
and innovation through impact litigation, policy analysis, grassroots activism, and technology development. We work to ensure that rights and freedoms are enhanced and protected as
our use of technology grows. We are supported by over 37,000 donating members around the world, including around three thousand within the European Union.
We believe that Articles 11 and 13 are ill-considered and should not be EU law, but even stipulating that systems like the ones contemplated by Articles 11 and 13 are desirable, the
proposed text of the articles in both the Parliament and Council versions contains significant deficiencies that will subvert their stated purpose while endangering the fundamental human
rights of Europeans to free expression, due process, and privacy.
It is our hope that the detailed enumeration of these flaws, below, will cause you to reconsider Articles 11 and 13's inclusion in the Directive altogether, but even in the
unfortunate event that Articles 11 and 13 appear in the final language that is presented to the Plenary, we hope that you will take steps to mitigate these risks, which will
substantially affect the transposition of the Directive in member states, and its resilience to challenges in the European courts.
Article 13: False copyright claims proliferate in the absence of clear evidentiary standards or consequences for inaccurate claims.
Based on EFF’s decades-long experience with notice-and-takedown regimes in the United States, and private copyright filters such as YouTube's ContentID, we know that the low
evidentiary standards required for copyright complaints, coupled with the lack of consequences for false copyright claims, are a form of moral hazard that results in illegitimate acts
of censorship from both knowing and inadvertent false copyright claims.
For example, rightsholders with access to YouTube's ContentID system systematically overclaim copyrights that they do not own. The workflow of news broadcasters will
often include the automatic upload of each night's newscast to copyright filters without any human oversight, despite the fact that newscasts often include audiovisual materials whose
copyrights do not belong to the broadcaster – public domain footage, material used under a limitation or exception to copyright, or material that is licensed from third parties. This
carelessness has predictable consequences: others — including bona fide rightsholders — who are entitled to upload the materials claimed by the newscasters are blocked by YouTube and
have a copyright strike recorded against them by the system, and can face removal of all of their materials. To pick one example, NASA's own Mars lander footage was claimed by newscasters who had included NASA's livestream in their broadcasts, which were then carelessly added to the ContentID database of copyrighted works. When NASA itself subsequently tried to upload its footage, YouTube blocked the upload and recorded a strike against it.
In other instances, rightsholders neglect the limitations and exceptions to copyright when seeking to remove content. For example, Universal Music Group insisted on removing a video
uploaded by one of our clients, Stephanie Lenz, which featured incidental audio of a Prince song in the background. Even during the YouTube appeals process, UMG refused to acknowledge
that Ms. Lenz’s incidental inclusion of the music was fair use – though this analysis was eventually confirmed by a US federal judge. Lenz's case took more than ten years to
adjudicate, largely due to Universal's intransigence, and elements of the case
still linger in the courts.
Finally, the low evidentiary standards for takedown and the lack of penalties for abuse have given rise to utterly predictable abuses. False copyright claims have been used to
suppress whistleblower memos detailing flaws in election security, evidence of police brutality, and disputes over scientific publication.
Article 13 contemplates that platforms will create systems to allow for thousands of copyright claims at once, by all comers, without penalty for errors or false claims. This is a
recipe for mischief and must be addressed.
Article 13 Recommendations
To limit abuse, Article 13 must, at a minimum, require strong proof of identity from those who seek to add works to an online service provider's database of claimed copyrighted works
and make ongoing access to Article 13's liability regime contingent on maintaining a clean record regarding false copyright claims.
Rightsholders who wish to make copyright claims to online service providers should have to meet a high identification bar that establishes who they are and where they or their agent
for service can be reached. This information should be available to people whose works are removed so that they can seek legal redress if they believe they have been wronged.
In the event that rightsholders repeatedly make false copyright claims, online service providers should be permitted to strike them off of their list of trusted claimants, such that
these rightsholders must fall back to seeking court orders – with their higher evidentiary standard – to effect removal of materials.
This would require that online service providers be immunised from Article 13's liability regime for claims from struck off claimants. A rightsholder who abuses the system should not
expect to be able to invoke it later to have their rights policed. This striking-off should pierce the veil of third parties deputised to effect takedowns on behalf of rightsholders
("rights enforcement companies"), with both the third party and the rightsholder on whose behalf they act being excluded from Article 13's privileges in the event that they are found
to repeatedly abuse the system. Otherwise, bad actors ("copyright trolls") could hop from one rights enforcement company to another, using them as shields for repeated acts of abuse.
Online service providers should be able to pre-emptively strike off a rightsholder who has been found to be abusive of Article 13 by another provider.
Statistics about Article 13 takedowns should be a matter of public record: who claimed which copyrights, who was found to have falsely claimed copyright, and how many times each
copyright claim was used to remove a work.
Article 11: Links are not defined with sufficient granularity, and should contain harmonised limitations and exceptions.
The existing Article 11 language does not define when quotation amounts to a use that must be licensed, though proponents have argued that quoting more than a single word requires a license.
The final text must resolve that ambiguity by carving out a clear safe-harbor for users, and establish a consistent set of Europe-wide exceptions and limitations to news media's new pseudo-copyright, so that publishers cannot overreach with their new power.
Additionally, the text should safeguard against dominant players (Google, Facebook, the news giants) creating licensing agreements that exclude everyone else.
News sites should be permitted to opt out of requiring a license for inbound links (so that other services could confidently link to them without fear of being sued), but these
opt-outs must be all-or-nothing, applying to all services, so that the law doesn’t add to Google or Facebook's market power by allowing them to negotiate an exclusive exemption from
the link tax, while smaller competitors are saddled with license fees.
As part of the current negotiations, the text must be clarified to establish a clear definition of "noncommercial, personal linking," clarifying whether making links in a personal
capacity from a for-profit blogging or social media platform requires a license, and establishing that (for example) a personal blog with ads or affiliate links to recoup hosting
costs is "noncommercial."
In closing, we would like to reiterate that the flaws enumerated above are merely those elements of Articles 11 and 13 that are incoherent or not fit for purpose. At root, however, Articles 11 and 13 are bad ideas that have no place in the Directive. Instead of effecting some piecemeal fixes to the most glaring problems in these Articles, the Trilogue should take a simpler approach and cut them from the Directive altogether.
Special Consultant to the Electronic Frontier Foundation
EFF Sues San Bernardino County Sheriff’s Department to Obtain Records About Use of Privacy Invasive Cell-Site Simulators
(Tue, 23 Oct 2018)
EFF Investigating Compliance with CalECPA
San Bernardino, California—The Electronic Frontier Foundation (EFF) sued the San Bernardino County
Sheriff’s Department today to gain access to records about search warrants where cell-site
simulators, devices that allow police to locate and track people by tricking their cell phones into a connection, were authorized in criminal investigations.
EFF seeks the records to investigate whether California law enforcement agencies are complying with the California Electronic Communications Privacy Act (CalECPA). The law, co-sponsored by EFF and passed in 2015, protects Californians’ personal
information by requiring police to obtain a
warrant to access people’s digital records—such as emails and geographic location information stored on devices or in the cloud—and notify those whose records are
being sought. Police can only bypass the warrant requirement under CalECPA if the records’ owner consents to the search or the records are needed in a life-or-death emergency.
Cell-site simulators, also known as Stingrays, are highly invasive surveillance tools that can
scoop up the location of all cell phones in a targeted area, the vast majority of which belong to people not suspected of committing any crime. Using cell-site simulators to locate a
person’s phone and track the phone’s movements generally requires police to obtain a warrant under CalECPA. Agencies are also required to provide information to the California
Department of Justice (DOJ) about warrants that don’t identify a specific target or in cases where they want to delay notifying the target. The DOJ then makes the information
available to the public, a key transparency provision of the law.
San Bernardino County law enforcement agencies were granted the most electronic warrants to search digital records per resident in the state, according to an analysis of the DOJ data by the Desert Sun. EFF determined that
the county has used cell-site simulators 231 times in the last year and filed a request under the California Public Records Act in August to obtain search warrant information for six
specific searches that were made public by the DOJ. Each of the searches included authorization for the use of “cell-site stimulators” [sic], an apparent misspelling of the
cell-phone tracking technology in the records submitted by San Bernardino to the DOJ.
EFF’s public records request sought court case numbers associated with the search warrants, which would enable researchers to locate court records like affidavits justifying the need
for a warrant and other information vital to assessing whether police are following the law and their own policies when obtaining warrants. The request contained detailed information
about each warrant, made public by the DOJ, such as the nature of the warrants, the precise start and end dates of the warrants, and verbatim quotes about the grounds for each warrant.
Yet San Bernardino denied the EFF request, claiming it was “vague, overly broad,” and didn’t describe an “identifiable record.” The county also claimed that such records would be
investigative records exempt from disclosure. In September EFF Senior Investigative Researcher Dave Maass contacted the county explaining that the California DOJ specifically informed
him that he can obtain the search warrant court numbers from San Bernardino County, and showing that the request was narrow and contained granular detail on just six searches.
The county has not responded.
“We are seeking search warrant records to explore first whether CalECPA is working and second whether law enforcement agencies are complying with the law’s warrant and transparency
requirements,” said Maass. “The law is only as good as counties like San Bernardino’s compliance with its rules, which are intended to protect the highly personal and intensely
private information contained on Californians’ digital devices. Our lawsuit aims to shine a light on police use of cell-site simulators. CalECPA was meant to provide the public with a
check on law enforcement’s use of this highly intrusive tool.”
EFF is being represented by attorney Michael T. Risher.
For the complaint:
For more on cell-site simulators:
Senior Investigative Researcher
Italy Steps Up To Defend EU Internet Users Against Copyright Filters – Who Will Be Next?
(Tue, 23 Oct 2018)
The latest news from Brussels: Italy is not happy with Article 13 or Article 11, and wants them gone.
What is going on with Europe’s meme-filtering Article 13 (and the hyperlink-meddling Article 11)? After the proposals sneaked over the finish line in a close European Parliamentary
vote in September, the decision-making has dipped out of the spotlight into the backrooms of the EU. Behind the scenes, attitudes are still shifting against the new Internet rules. Italy’s
domestic government has now taken a strong position against the bill. If they hear from EU citizens, other governments may shift too.
The Copyright in the Digital Single Market Directive — the legal instrument that holds both articles — is now in its “trilogue” phase. That’s where the governments of the EU’s member
countries send their permanent representatives and legal experts to huddle in meeting rooms with the Parliament’s negotiators, and thrash out a text that works for the central
European Parliament and the governments of individual European countries (who have to implement and enforce it).
Under normal circumstances, the trilogue should be a fine-tuned bureaucratic debate on the subtle legal details of the Directive, with member states contributing their understanding
of their own legal systems, and the Parliament’s negotiators offering to change wordings to reflect those practicalities.
But Articles 13 and 11 have never been part of a normal, consensus-driven procedure. Parliament was divided over Articles 13 and 11, and even the member states don’t agree with one
another whether these provisions make sense.
Back on May 25, when member countries met to settle on their original “negotiating text”, the national governments barely agreed between themselves on whether the directive should go ahead at all.
When the member states vote together as the European Council, a proposal fails if a “blocking minority” opposes it – that’s either 13 member states by number, or at least four states that, combined, hold more than 35% of the EU’s population. In May –
or so the EU gossip had it, because these votes aren’t made public – Germany, Finland, the Netherlands, Slovenia, Belgium and Hungary all opposed the directive,
largely because of Articles 13 and 11. With 25% of the EU population between them, their opposition wasn’t enough to vote it down.
Then, this July, Italy publicly switched sides. After Italians rose up to warn their new government about the directive (thank you, Italian Internet users!), the country’s new Deputy
Prime Minister publicly voiced his concern about the proposals.
Since then, Italy has been the strongest proponent at the EU of getting rid of the two articles
altogether. Italy also holds 11% of the EU population – tipping the total opposition among the states to over 36%.
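The blocking-minority arithmetic above is easy to check. This sketch uses approximate 2018 population shares purely for illustration; the exact figures are not from the article itself.

```python
# Back-of-envelope check of the Council "blocking minority" arithmetic
# described above. Population shares are approximate 2018 figures,
# included purely for illustration.

POP_SHARE = {            # % of total EU population (approximate)
    "Germany": 16.1,
    "Netherlands": 3.3,
    "Belgium": 2.2,
    "Hungary": 1.9,
    "Finland": 1.1,
    "Slovenia": 0.4,
    "Italy": 11.8,
}

def is_blocking_minority(states, pop_share):
    """A proposal fails if 13 of the 28 states oppose it, or if at
    least four opposing states jointly exceed 35% of EU population."""
    share = sum(pop_share[s] for s in states)
    return len(states) >= 13 or (len(states) >= 4 and share > 35.0)

may_bloc = ["Germany", "Netherlands", "Belgium", "Hungary",
            "Finland", "Slovenia"]
print(is_blocking_minority(may_bloc, POP_SHARE))              # ~25%: not enough
print(is_blocking_minority(may_bloc + ["Italy"], POP_SHARE))  # >36%: enough
```

With Italy added, the hypothetical bloc crosses the 35% population threshold, which is why its switch matters even though the number of opposing states stays far below 13.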
So why isn’t this the end of the Article 13/11 fiasco? There may now be sufficiently large opposition to the articles to create a blocking minority if they all vote together, but the
new bloc has not settled on a united answer. Other countries are suspicious of Italy’s no-compromise approach. They want to add extra safeguards to the two articles, not kill them
entirely. Among them are some of the countries that originally opposed the directive in May, including Germany.
Axel Voss, Article 13/11’s strongest advocate in the Parliament, won his pro-Article 13/11 victory there by splitting the opposition in a similar way. Some Members of Parliament voted
for amendments to delete the articles entirely, while others voted for various separate compromises, but Voss was able to coordinate all those who supported the articles to unite and
vote in favor of just his proposals. His single set of pro-Article 13/11 amendments beat out many individual opposing amendments.
That’s where matters stand now: a growing set of countries who think copyright filters and link taxes go too far, but no agreement yet on rejecting or fixing them.
The trilogues are not a process designed to resolve such large rifts when both the EU states and the parliament are so deeply divided.
What happens now depends entirely on how the member states decide to go forward, and how hard they push for real reform of Articles 13 and 11. The balance in that discussion has
changed because Italy changed its position. Italy changed its position because Italians spoke up. If you reach out to your country’s ministry in charge of copyright and tell them
that these Articles concern you, they’ll start paying attention too. And we’ll have a chance to stop this terrible directive from becoming terrible law across Europe.
Appeals Court Tells Georgia: State Code Can’t be Copyrighted
(Tue, 23 Oct 2018)
In a democracy, people should have the right to read, and publish, the law. In theory, that should be easier than ever today. The Internet has vastly improved public access to the
“operating system” of our government—the local, state, and federal statutes and regulations we are expected to abide by.
Unfortunately, some states have actually fought against easy access to the law. In Georgia, state officials have used copyright to extract fees and reward companies with exclusive publishing deals.
On Friday, the U.S. Court of Appeals for the 11th Circuit handed down a powerful opinion [PDF] that struck down the state of Georgia’s attempt to use copyright to suppress publication of its own laws. The ruling, which gives Georgians the right to
read and publish the Official Code of Georgia Annotated, or OCGA, may also improve public access to legislative documents in other states. It’s just in time for this year’s
Open Access Week, a time to celebrate the social benefits that we all reap when information is readily available.
The case originated when Georgia’s Code Revision Commission threatened, and ultimately sued, open records activist Carl Malamud and his organization Public.Resource.Org (PRO). In an effort to make Georgia’s official laws easily accessible, Malamud had bought a hard copy of the
OCGA, paying more than $1,200 for it. (The 11th Circuit opinion reports that a copy currently costs $404, although it isn’t clear if that price applies to
non-residents.) Malamud then scanned the books, and sent each Georgia legislator a USB stick with two full copies—one of the scanned OCGA, and another encoded in XML format.
"Access to the law is a fundamental aspect of our system of democracy, an essential element of due process, equal protection, and access to justice," wrote Malamud in the letter
he included in the package.
One would think that publishing and distributing the very laws passed by Georgia lawmakers might be viewed as a common-sense public good. After all, these are the rules Georgia
residents are supposed to follow. But when PRO distributed the OCGA online and on USB drives, Georgia’s Code Revision Commission actually sued for copyright infringement. The
commission, which collects royalties from sales of electronic copies of the OCGA, claimed that only its chosen publisher, LexisNexis, had the right to distribute copies.
Friday’s decision means that Malamud and PRO can continue with the project, as no part of the OCGA is covered by copyright. The opinion throws out the specious notion that official
copies of the law can be privatized by adding annotations, when those annotations were themselves dictated by the legislature and were widely considered to be “part and parcel” of the
state’s legal code.
More importantly, it makes clear that Georgia’s annotated laws “are attributable to the constructive authorship of the People.” The opinion recognizes that when it comes to accessing
the work of lawmakers, the debate must be grounded in our notions of democratic rights. It isn’t simply an argument about dividing up a market for published items.
This is a major step forward in a larger fight to free the law from copyright. EFF represents PRO in a separate litigation, in which Malamud and PRO are fighting to publish codes and
standards that have been incorporated by reference into law. Those standards, which relate to building and product safety, energy efficiency, and educational testing, were
incorporated by reference into regulations by state and federal agencies, after heavy lobbying by the standards development organizations that created them. Yet those same groups have
fought PRO’s efforts to publish the standards. That case is headed back to district court for further proceedings, after EFF and PRO scored a win this summer when an appeals court
ordered the district court to re-consider the issue of fair use.
In the Georgia case, PRO was represented pro bono by Alston & Bird and Elizabeth Rader.
Lawyers for Georgia’s Code Revision Commission didn’t try to argue that the words in the statutes themselves were copyrighted. Rather, they argued that it was the annotations
in the “Official Code of Georgia Annotated” that placed the work under state copyright, and mandated the payment of fees.
Annotations are notes and citations that are interspersed within the statutes and help to guide lawyers and judges. The appeals court found that the annotations “represent a work,
like the statutes themselves, that is constructively authored by the People.” To reach that conclusion, the court took a close look at how the annotations came to be.
Those annotations were created by the Code Revision Commission, which “indisputably is an arm of the General Assembly [the state legislature],” according to the 11th
Circuit opinion. Of the 15 members of the Commission, nine of them are sitting members of the General Assembly, and the Lieutenant Governor also has a seat. The Commission hires
LexisNexis Group, a firm that publishes legal documents and data, to prepare and publish the annotated code.
While Lexis editors actually draft the annotations, the appeals court noted that they do so “pursuant to highly detailed instructions,” laid out in Lexis’ contract with the
Commission. The Commission must sign off and approve a final draft of the annotations. Finally, the OCGA is subject to the approval of the Georgia General Assembly itself, which votes
annually to make the OCGA the official publication of the state’s laws, including the annotations.
“In short, the Commission exercises direct, authoritative control over the creation of the OCGA annotations at every stage of their preparation,” the opinion states. “[I]n light of
how it is funded and staffed, and since its work is legislative in nature, it is abundantly clear that the Commission is a creation and an agent of the Georgia General Assembly.”
While the annotations don’t have the full force of law, the appeals court found they are “law-like” in that they are official commentary “on the meaning of Georgia statutes.” The
judges held that “the annotations cast an undeniable, official shadow over how Georgia laws are interpreted and understood.” Indeed, Georgia state courts regularly turn to OCGA
comments as “conclusive statements about statutory meaning and legislative intent.” The opinion notes 11 state court cases in which OCGA comments, rather than simply statutes, were
cited as definitive sources on legislative intent and other matters.
“The People” as Author
The Supreme Court first addressed the issue of whether government rules can be copyrighted in the 1834 case of Wheaton v. Peters, when it held that “no reporter has or can have
any copyright in the written opinions delivered by this Court.” In an 1888 case called Banks v. Manchester, the Supreme Court held that the opinions of state court judges can’t
be copyrighted, either.
In Banks, the plaintiff was a publishing firm chosen by the state of Ohio. But the high court emphasized that only “authors” can obtain a copyright in their work, and that the
firm publishing court reports had not really created new works. Nor could the judge in the case be regarded as an author with a valid copyright, having prepared the opinion in his
official judicial capacity.
In the 1909 Copyright Act, Congress made it clear that no work of the federal government could ever be under copyright. That prohibition continues today. The Copyright Office has long
noted that “judicially established” rules also prevent copyright “in the text of state laws, municipal ordinances, court decisions, and similar official documents.”
The 11th Circuit followed the Supreme Court’s directives. The court properly framed the issue as part of the democratic bargain between the government and the governed; not
as a matter of simply divvying up potential profits from publishing. Lawmakers and judges are the “draftsmen of the law… whatever they produce the People are the true authors,” the
panel wrote. “When the legislative or judicial chords are plucked it is in fact the People’s voice that is heard.”
Even in cases where there is enough legal wiggle room for states to place some government documents under copyright, in our view, it’s terrible policy. Copyright is meant to spur the
production of new works, not create profit motives around the work of public employees. Friday’s decision in the PRO case places appropriate limits on the validity of state
copyrights, and we hope states outside Georgia take the opportunity to reconsider the practice of using copyright in ways that limit public access to public information.
Photo Credit: Kirk Walter / Public.Resource.Org
EFF Urges Supreme Court to Support Fair Use in TVEyes Case
(Tue, 23 Oct 2018)
Debates about the media have become a big part of U.S. political discourse. Is the coverage on networks like Fox, CNN, or MSNBC accurate? Is it fair? Is it fake? The networks run
24/7, so analysts have to review a staggering volume of material to really answer these questions. That’s where a service like TVEyes, which creates a searchable database of broadcast
content from thousands of television and radio stations, comes in. A recent
decision from the Second Circuit found that some of TVEyes’ services are not fair use and infringe copyright. EFF has joined an amicus brief [PDF] urging the Supreme Court to review and overturn this decision.
The case began when Fox News sued TVEyes in federal court in New York. Fox argued that TVEyes’ service infringes the copyright in its broadcasts. The district court issued two
opinions. First, it wrote an encouraging
ruling that found the TVEyes search engine and displays of clips are transformative and serve the new purpose of analysis and commentary. Later, the court issued a
second decision that found some aspects of TVEyes’ service (such as allowing
clips to be archived) are not fair use. Both sides then appealed to the Second Circuit.
The Second Circuit found that TVEyes’ searchable database of video is not fair use and infringes Fox’s copyright. While we have criticized many aspects of this decision, one error stands out: its
finding on market harm. Fair use analysis considers four factors, with the final factor being potential harm to the market for the copyrighted work. This is an important issue in this
case because Fox insists that it is suffering economic harm while TVEyes insists that Fox simply does not want to allow analysis and criticism.
In our view, the undisputed facts strongly support TVEyes on the market harm factor. As the district court correctly noted, Fox requires those that license its content to agree that
they will not use clips in a way that is derogatory or critical of Fox News. This means that Fox, by its own admission, has no interest in the market for criticism and analysis that
TVEyes serves. Indeed, it wants to prohibit such uses.
The Second Circuit did not even discuss these facts. Instead, it found market harm because TVEyes’ service generates revenue. This is improper. Courts have long recognized that market
harm cannot be inferred simply because the defendant made some money. If that were the case, any commercial fair use would fail on the fourth factor. Our amicus brief argues that the
Second Circuit’s decision conflicts with binding Supreme Court authority
on this question. We hope the Supreme Court takes the case to correct this error.
We thank the students and faculty at the Juelsgaard Intellectual
Property and Innovation Clinic at Stanford Law School for their work on the brief. EFF was joined as an amicus by Brave New Films, Eric Alterman, Fairness and Accuracy in Reporting, the Internet Archive, the Organization for Transformative Works, Professor Rebecca Tushnet, and the Wikimedia Foundation.
Fox News v. TVEyes
The Heavy Focus on 5G Wireless Means We Are Ignoring 68 Million Americans Facing High-Speed Cable Monopolies
(Tue, 23 Oct 2018)
All across the country right now, major wireless Internet Service Providers (ISPs) are talking to legislators, mayors, regulators, and the press about the potential of 5G wireless
services as if they will cure all of the problems Americans face right now in the high-speed access market. But the cold hard reality is the newest advancements in wireless services
will probably do very little about the high-speed monopolies that a majority of this country faces. According to a ground-breaking study by the Institute for Local Self-Reliance, more than 68 million
Americans face a high-speed cable monopoly today.
This is why we see wild claims about how 5G will do things like solve rural America’s lack of access to broadband or that wireless broadband will be just as good as any
wireline service (it won’t be). In reality, we are already woefully behind South Korea and many countries in the EU. In essence, 5G is being aggressively marketed in policy circles
because it provides a useful distraction from the fundamental fact that the United States market is missing out on 21st century broadband access, affordable prices, and
extraordinary advancements coming from fiber to the home (FTTH) networks. Rather than aggressively wire the country for the future, major competitors to cable companies are
opting for 5G because it will cost about half as much as FTTH to deploy and allows them to avoid directly competing with cable. In effect, they are splitting the market with each other
and hope policymakers do not notice.
It Is a Real Problem That Major Wireless ISPs Are Avoiding Direct Competition With Gigabit Cable Systems
To date, major ISPs poised to compete with cable companies like Comcast and Charter have billions of dollars (including additional billions after Congress cut the corporate tax
rates) but have chosen not to widely deploy FTTH networks that vastly outperform current cable systems. The fact is they have made it clear they do not want to spend the
money to build FTTH (Verizon stopped eight years ago, whereas
AT&T's limited fiber build
is mandated as a merger condition). This is not because these networks are unaffordable. In fact, nearly half of the new FTTH networks being deployed today are done by small ISPs
(which the FCC might stifle next year on behalf of
AT&T and Verizon) and local governments, which have limited budgets.
If the corporations with the most resources are unwilling to challenge cable monopolies, it means we have a failure in competition policy and consumers will pay substantially more
than they should for high-speed Internet access. When we look at the parts of the country that have multiple options for high-speed services, we see symmetrical (i.e. the download and
upload speeds are equal) gigabit services selling for $40 to $80 a month. It is worth noting that, absent protections against “redlining” – the practice of only selling broadband
access to wealthy neighborhoods – the areas that enjoy this competition already tend to sit at the top of the economic ladder. However, in the markets that now have gigabit
cable services but no FTTH competitor, the price for broadband jumps dramatically for no reason other than a lack of choice.
So whether you live in a big city like Miami, Florida (where you pay $139.95 a month), or smaller cities like
Manchester, New Hampshire ($104.95 a month), or Imperial, Pennsylvania ($104.95 a month), the story is the same. In my own backyard, I would have to pay close to quadruple ($159.95) the competitive price for gigabit service because of my cable monopoly,
while my coworkers at EFF living in San Francisco are paying $40 for a superior service.
Fiber Services Vastly Outpace Wireless Services in Terms of Capacity and Future Potential, Including 5G
There is no real comparison between the proven Internet access speeds provided to users through a FTTH connection and what potentially may arrive through 5G wireless in some unknown
distant future. Seven years ago, a single strand of optical fiber was able to transmit 100 terabits of information
per second, which is enough to deliver three months of HD video per second. Three years ago, an ISP launched 10 gigabit speeds in the United States because it
already had a FTTH deployment to leverage (and upgrading FTTH is cheap). 5G systems are just about to enter the market, but nothing has been shown to demonstrate it competes with
gigabit cable networks.
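The "three months of HD video per second" figure is easy to sanity-check with a back-of-envelope calculation; the 10 Mbps HD bitrate below is our own illustrative assumption, not a number from the research itself.

```python
# Rough sanity check of the fiber-capacity claim above. The 10 Mbps
# figure for a single HD video stream is an illustrative assumption.

FIBER_BITS_PER_SEC = 100e12   # 100 terabits/s on a single strand
HD_BITS_PER_SEC = 10e6        # assumed bitrate of one HD stream

# Seconds of HD video that one second of fiber throughput can carry:
video_seconds = FIBER_BITS_PER_SEC / HD_BITS_PER_SEC   # 10 million

months = video_seconds / (60 * 60 * 24 * 30)
print(round(months, 1))   # roughly 3.9 months of HD video per second
```

At a higher assumed HD bitrate the result shrinks toward the article's "three months", so the claim is the right order of magnitude either way.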
Current estimates for 5G wireless services show a median user experience somewhere between 490 Mbps and 1.4 gigabits per second,
and the upward trajectory for wireless speeds is limited by many factors, including some outside the control of the technology itself. Things like interference
from other signals, physical obstructions (they are dependent on line of sight), multi-year government spectrum allocations, and the shortening range of towers for ultra high speeds
(estimated around 1000 feet per
tower) all serve as limitations on wireless potential.
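The ~1,000-foot range cited above translates into a strikingly small coverage footprint. A quick calculation, assuming an ideal unobstructed circular cell (which real deployments won't achieve):

```python
# Coverage area of one ultra-high-speed 5G tower with a ~1,000-foot range,
# assuming an ideal, unobstructed circular cell.
import math

range_ft = 1_000
sq_ft_per_sq_mi = 5_280 ** 2   # 27,878,400 square feet per square mile

coverage_sq_mi = math.pi * range_ft ** 2 / sq_ft_per_sq_mi
print(f"~{coverage_sq_mi:.2f} square miles per tower")
```

Roughly a tenth of a square mile per tower, before accounting for obstructions; covering any real territory at these speeds requires a dense grid of towers.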
Fiber optic lines, however, have a clear path forward to increasing their capacity as the electronics themselves become more advanced. Recent innovations in increasing the number of
signals that can be pushed through the glass pipe have decoupled increases in capacity from the environmental and governmental restraints that exist for wireless. For example,
time and wavelength division multiplexed passive optical network
technology means previously existing fiber optic deployments can be upgraded without replacing a single wire, yielding at least ten times the previous capacity.
5G Has Not Reinvented the Economics of ISPs and the Challenges Rural America Presents to Connectivity
The basic formula for how ISPs monetize their network is a fairly straightforward balance between revenues and expenses, as well as one-time sunk investments associated with upgrades
and laying the network (usually handled through some financing vehicle repaid over time). Revenues come from the monthly subscriptions people pay for things like telephone service,
television, and broadband. Their expenses come from the maintenance costs of the network, including employees repairing the system and customer service. The larger the network
becomes, the greater the expense to maintain it, and the more customers the ISP needs to sustain profits.
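The revenue-versus-expense balance described above can be sketched as a toy break-even formula. Every figure below is hypothetical, chosen only to show the shape of the calculation:

```python
# A toy version of the ISP revenue/expense balance described above.
# All numbers are hypothetical; the point is the shape of the formula.

def breakeven_subscribers(monthly_maintenance: float,
                          monthly_debt_service: float,
                          revenue_per_subscriber: float) -> float:
    """Subscribers needed each month to cover the network's fixed costs."""
    return (monthly_maintenance + monthly_debt_service) / revenue_per_subscriber

# Hypothetical small network: $40k/month upkeep, $24k/month repaying the
# build-out financing, $80/month per subscriber.
print(breakeven_subscribers(40_000, 24_000, 80))  # 800.0
```

The formula makes the rural problem visible: maintenance costs scale with territory while the subscriber count in the denominator stays small.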
It is these factors that make rural broadband so challenging. Rural areas by their very nature are more spread out with fewer potential customers (this is why we see a rise in
rural co-ops where people build their own ISP that isn’t solely focused on
profits). The US Department of Agriculture’s Rural Utilities Service (which provides loans for building telecom services) defines rural as areas where the local community population
is less than 20,000 and that
it is not closely adjacent to an urbanized area of 50,000 or more people. The US Census Bureau defines rural America as areas where no more than 1,000 people per square mile reside.
When you take the economics of an ISP and map it over rural America, you quickly see that they have fewer revenue sources and higher expenses than a more densely populated urban area.
You will need a lot of extra 5G towers to cover the territory while dealing with geographic barriers such as forests or mountains that will physically interfere with the wireless
transmission. In fact, it is ironic that rural 5G requires a significant fiber deployment of its own, which raises the question of why not go all the way to FTTH. When the Senate
Commerce Committee recently held its field hearing in South Dakota, a witness explained that covering Sioux Falls
will require 350 towers for 74 square miles; given that the rest of the state has an average density of 11 people per square mile, it is unrealistic to pin rural America’s hopes on 5G
wireless presenting a revolutionary solution.
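Plugging the hearing's numbers into quick arithmetic shows why the economics are so daunting. This naively assumes rural areas would need the same tower density as Sioux Falls:

```python
# Figures from the South Dakota field hearing: 350 towers to cover
# 74 square miles, versus ~11 people per square mile statewide.
towers = 350
area_sq_mi = 74
rural_density = 11  # people per square mile

area_per_tower = area_sq_mi / towers               # ~0.21 sq mi per tower
people_per_rural_tower = rural_density * area_per_tower

print(f"{area_per_tower:.2f} sq mi covered per tower")
print(f"~{people_per_rural_tower:.1f} potential customers per rural tower")
```

A couple of potential customers per tower is nowhere near enough revenue to recover the cost of building and maintaining it.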
The solution to rural America’s problem, in the end, is the same solution we adopted for electricity, water, and the roads. We start treating access to the service in the same way we
look at essential infrastructure rather than a private luxury. We adopt policies that promote fiber’s construction and return to the Telecom Act’s fundamental goal, which
was to ensure that all Americans have universal, affordable access to communications services no matter where they live.
The Sooner Policymakers Catch on That We Have a Monopoly Problem, the Sooner We Can Break the Monopoly with Competition
5G wireless is important for mobility, but it’s only one piece of the broadband future and should be treated as such. New innovations in wireless technology such as “network slicing,” along with ways to address autonomous vehicles and the Internet of Things, are valuable, but it should be recognized that these will be separate from
broadband access competition in the high-speed market.
We should be asking our elected officials and regulators what they are doing to help bring high-speed competition and free the approximately 68 million people stuck in a cable
monopoly. To date, the Federal Communications Commission (FCC) has largely ignored this problem, claiming that its complete abdication of authority over the ISP market will solve it
for us (and we are seeing how that is going right now). Or, worse yet, claiming that monopolies are acceptable because potential
competition is as good as actual competition. But with each passing month, consumers are paying too much for their service or simply have no service at all, and the United States’
Internet continues to languish behind its international competitors.
It's Repair Day: No One Should Be Punished for "Contempt of Business Model"
(Sat, 20 Oct 2018)
Repair is one of the secret keys to a better life. Repairs keep our gadgets in use longer (saving our pocketbooks) and divert e-waste from landfills or toxic recycling processes
(saving our planet). Repair is an engine of community prosperity: when you get your phone screen fixed at your corner repair shop, your money goes to a locally owned small business
(my daughter and the phone screen guy's daughter go to the same school and he always tut-tuts over the state of my chipped and dented phone at parent-teacher nights).
Fixing stuff has deep roots in the American psyche, from the motorheads who rebuilt and souped-up their cars, to the farmers whose ingenuity wrung every last bit of value out of their
heavy equipment, to the electronics tinkerers who are lionized today as some of the founders of Silicon Valley.
Repairs are amazing: they account for up to 4% of GDP, create local jobs (fix a ton of electronics and generate
200 jobs, send a ton of electronics to a dump to be dismantled and recycled and you create a measly 15 jobs, along with a mountain of toxic waste – reuse is always greener
than recycling), and they generate a stream of low-cost, refurbished devices and products that are within reach of low-income Americans.
The twenty-first century should be a golden age of repairs. A simple web-search can yield up instructions for fixing your stuff, a wealth of replacement-part options, and thriving
communities of other people in the same boat as you, ready to brainstorm solutions when you hit a wall.
But instead, digital technology has been a godsend for big corporations that want to control how you use, fix, and replace your property.
One trick is to put small, inexpensive microprocessors on each part in a complex product -- everything from tractors to phones -- that force you
to use the manufacturer's authorized parts and service technicians. Third-party parts may be functionally identical to the manufacturer's own parts (or even better!), but your device
won't recognize them unless they have the manufacturer's "security" chip and its associated cryptographic authentication systems. Even if you put an original manufacturer's part in
your device (say, one you've bought from the manufacturer or harvested from a scrapped system), some devices won't start using the original part until an authorized service technician
inputs an activation code.
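To illustrate the parts-pairing mechanism described above, here is a minimal, hypothetical sketch of challenge-response part authentication. Real manufacturers' schemes are proprietary and undocumented; the HMAC construction, key, and function names below are all invented for illustration:

```python
# How a manufacturer's "security chip" can lock out third-party parts:
# a minimal challenge-response sketch using HMAC. Every name and key
# here is invented; real schemes are proprietary.

import hmac, hashlib, os

FACTORY_KEY = b"secret-burned-into-authorized-parts"  # hypothetical shared secret

def part_respond(challenge: bytes, part_key: bytes) -> bytes:
    """The chip on a replacement part signs the device's challenge."""
    return hmac.new(part_key, challenge, hashlib.sha256).digest()

def device_accepts(part_key: bytes) -> bool:
    """The device only recognizes parts that know the factory key."""
    challenge = os.urandom(16)
    expected = hmac.new(FACTORY_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, part_respond(challenge, part_key))

print(device_accepts(FACTORY_KEY))                      # genuine part: True
print(device_accepts(b"functionally-identical-part"))   # third party: False
```

Note that nothing in this exchange measures the part's quality or safety; it only tests whether the part holds the manufacturer's secret, which is exactly why a functionally identical third-party part fails.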
As if that wasn't bad enough, corporations routinely withhold service manuals, lists of diagnostic codes, and parts.
On its own, this would be merely unconscionable and obnoxious, but thanks to some toxic technology laws, these practices become more than a hurdle for independent service technicians to overcome
– they become a legal risk.
Section 1201 of the Digital Millennium Copyright Act (DMCA 1201) contains broad prohibitions on bypassing "access controls" for "copyrighted works," with potentially stiff criminal
penalties (five years in prison and a $500,000 fine for a first offense) for "commercial trafficking" in tools to bypass an access control. Manufacturers have interpreted this law
very broadly, asserting that the software in their gadgets – cars, medical implants, HVAC systems and thermostats, phones, TVs, etc. – is a "copyrighted work" and the systems that
block independent service (checks for original parts, activation codes for new parts, access to diagnostic systems) are "access controls." If the firmware in your car is a
"copyrighted work" and the system that stops it from recognizing a new engine part is an "access control," then your auto manufacturer can threaten a competitor with civil suits
and prison time for making a gadget that allows your corner mechanic to figure out what's wrong with your car and fix it.
Manufacturers can also look to other notorious tech laws, like the Computer Fraud and Abuse Act (CFAA), as well as end-user license agreements, nondisclosure agreements, trade
secrecy, and onerous supply-chain deals. Taken together, these rules and agreements have allowed the country's increasingly concentrated industries to turn purchasing a simple device, appliance, or
vehicle into a long-term relationship with the manufacturer, like it or not.
The corporations involved make all kinds of bad faith arguments, claiming that they are protecting their customers from safety risks, like getting malware on their phone via an
unscrupulous service technician or winding up with defective replacement parts.
But the reality is that anyone can screw up a repair, including the
manufacturer's authorized technicians. Only in the bizarro universe of monopoly corporatethink do consumers get a better deal and more reliable service when companies don't have
to compete to get their business (and of course, controlling repairs means controlling product life: in California, a law requires manufacturers to supply parts for seven years, and
in California, laptops and phones and other electronics last for seven years, while manufacturers in neighboring states often declare their products to be obsolete after three or five years).
Last year, 18 states introduced Right to Repair legislation that requires manufacturers to get out of the way of independent
repair: to make parts, manuals and diagnostic codes available to third-party service depots and refrain from other practices that limit your ability to decide who can fix your things.
The corporate blowback from these bills was massive: with so much money at stake in monopolizing repair, and so many manufacturers using Big Tech’s tricks for
freezing out indie service, the lobbying money managed to stifle these proposals – for now.
But there is nothing harder to kill than an idea whose time has come. This is the golden age of repairs, a moment made for a renaissance of shade-tree mechanics, electronics
tinkerers, jackleg fixit shops, and mom-and-pop service depots. It has to be: our planet, our pocketbooks, and our neighborhoods all benefit when our property lasts longer,
works better and does more.
We’re Telling a Court (Again) That President Trump and Other Government Officials Can’t Block People on Twitter For Disagreeing With Them
(Sat, 20 Oct 2018)
President Donald Trump and his lawyers still believe he can block people on Twitter because he doesn’t like their views, so today we’ve filed
a brief telling a court, again, that doing so violates the First Amendment. We’re hopeful that the court, like
the last one that considered the
case, will side with the plaintiffs, seven individuals blocked by Trump who are represented by the Knight First Amendment Institute.
As we explain in the brief, the case has broad implications for the public as social media
use by the government becomes more and more ubiquitous.
Trump lost the first round of the
case when a judge sided with the plaintiffs, who
include a university professor, a surgeon, a comedy writer, a community organizer, an author, a legal analyst, and a police officer. The judge agreed with the Knight Institute, which
argued that the interactive spaces associated with the @realDonaldTrump account are “public forums” under the First Amendment, meaning that the government cannot exclude people from
them simply because it disagrees with their views. In a brief filed in round one, we argued that governmental use of social
media platforms to communicate to and with the public—and to allow members of the public to communicate with each other—is now the rule of democratic engagement, not the exception. As a result,
First Amendment rights of both access to those accounts and the ability to speak in them must apply in full force.
The ruling in round one was a great victory for free speech, recognizing that in the digital age, when a local, state, or federal agent officially communicates, through Twitter,
with the public about the government’s business, he or she doesn’t get to block people from receiving those messages because they’ve used the forum to express their disagreement with
the official’s policies. Trump was forced to unblock the plaintiffs.
The president’s attorneys are now trying to convince an appeals court to overturn this ruling, making the same arguments they made in the lower court that @realDonaldTrump, Trump’s
Twitter handle, is the president’s private property and he can block people if he wants.
In the brief we filed today we’ve told the appeals court that those arguments—which were wrong on the law in the first place—are still wrong. The president has chosen to use his
longtime Twitter handle to communicate his administration’s goals, announce policy decisions, and talk about government activity. Similarly, public agencies and officials, from city
mayors and county sheriff offices to U.S. Secretaries of State and members of Congress, routinely use social media to communicate official positions, services, and important public
safety and policy messages. Twitter has become a vital communications tool for government, allowing local and federal officials to transmit information when natural disasters such as
hurricanes and wildfires strike, hold online town halls, and answer citizens’ questions about programs.
When governmental officials and agencies choose a particular technique or technology to communicate with the public about governmental affairs, they have endowed the public
with First Amendment rights to receive those messages. And this right, we told the appeals court, is infringed when government denies access to these messages
because it disagrees with someone’s viewpoints.
Federal Circuit Overturns Fee Award In Crowdsourcing Patent Case
(Fri, 19 Oct 2018)
Patent trolls know that it costs a lot of money to defend a patent case. The high cost of defensive litigation means that defendants are pressured to settle even if the patent is
invalid. Fee awards can change this calculus and give defendants a chance to fight back against weak claims. A recent decision [PDF] from the Federal Circuit has overturned a fee award in a case involving
an abstract software patent on crowdsourcing. This disappointing ruling may encourage other patent trolls to file meritless cases.
Patent troll AlphaCap Ventures claimed that its patent covered various forms of
online equity financing. It filed suit against ten different crowdfunding platforms. Most of the defendants settled quickly. But one defendant, Gust, fought back. After nearly two years of litigation in both the Eastern District of Texas and the Southern District of New York, AlphaCap Ventures
dismissed its claim against Gust. The judge in the Southern District of New York ruled that AlphaCap Ventures’ attorneys had litigated unreasonably and ordered them to pay Gust’s
attorneys’ fees. Those lawyers then appealed.
EFF filed an amicus brief [PDF] to respond to one of the lawyers’ key arguments.
AlphaCap Ventures’ attorneys argued that the law of patent eligibility—particularly the law regarding when a claimed invention is an abstract idea and thus ineligible for patent
protection under the Supreme Court’s decision in Alice v. CLS
Bank—is so unsettled that a court should never award fees when a party loses on the issue. Our brief argued that such a rule could embolden lawyers to file suits with
patents they should know are invalid.
As we were drafting our brief in the AlphaCap Ventures case, the Federal Circuit issued a decision in Inventor Holdings v. Bed Bath & Beyond. The patent owner in
Inventor Holdings had asked the court to overturn a fee award against it on the ground that the law of patent eligibility was too uncertain for its arguments to have been
unreasonable. The Federal Circuit rejected this in a unanimous panel opinion. It wrote:
[W]hile we agree with [Inventor Holdings] as a general matter that it was and is sometimes difficult to analyze patent eligibility under the framework prescribed by the Supreme
Court . . . , there is no uncertainty or difficulty in applying the principles set out in Alice to reach the conclusion that the ’582 patent's claims are ineligible.
In other words, it rejected a very similar argument to the one advanced by AlphaCap Ventures’ lawyers.
In the AlphaCap Ventures decision, in contrast, the two-judge majority emphasized that “abstract idea law was unsettled” and found that the lawyers’ arguments were not so
unreasonable as to warrant fees. The majority did not distinguish or even cite Inventor Holdings. (Judge Wallach’s dissent does cite Inventor Holdings.) The appeals
involved different patents, and the fee awards were made under different statutes, but it was still surprising that the majority did not discuss the Inventor Holdings
decision at all.
We hope that the decision in AlphaCap Ventures does not encourage other patent trolls to bring suits with invalid patents. The Inventor Holdings decision remains
good law and shows that, at least sometimes, they will be held to account for bringing unreasonable cases.
Open Access Is the Law in California
(Fri, 19 Oct 2018)
Governor Jerry Brown recently signed A.B. 2192, a law requiring that all peer-reviewed, scientific research funded by the state of California be made available to the public no later
than one year after publication.
EFF applauds Governor Brown for signing A.B. 2192 and the legislature for unanimously passing it—particularly Assemblymember Mark Stone, who introduced the bill and championed it at
every step. To our knowledge, no other state has adopted an open access bill this comprehensive.
As we’ve explained before, it’s a problem when cutting-edge scientific research is available
only to people who can afford expensive journal subscriptions and academic databases. It insulates scientific research from a broader field of innovators: if the latest research is
only available to people with the most resources, then the next breakthroughs will only come from those with the most resources.
A.B. 2192 doesn’t solve that problem entirely, but it does limit it. Under the new law, researchers can still publish their papers in subscription-based journals so long as they
upload them to public open access repositories no later than one year after publication.
What Now? Future Wins for Open Access
While legislators were considering passing A.B. 2192, we urged them to consider passing a stronger law
making research available to the public on the date of publication. In the fast-moving world of science, a one-year embargo period is simply too long.
The best way to maximize the public benefit of state-funded research is to publish it in an open access journal, so that everyone can read it for free on the day it’s
published—ideally under an open license that allows anyone to adapt
and republish it.
Opponents of open access sometimes claim that open publishing hurts researchers’ reputations, but increasingly, the exact opposite is true; indeed, some of the most important discoveries of the modern era were published in open access
journals. That change in practices has come thanks in no small part to a growing list
of foundations requiring their grantees to publish in open access journals. Funders can use their influence to change norms in publishing to benefit the public. With the majority of scientific research in the United States funded by government bodies, lawmakers ought to use their
power to push for open access. Ultimately, requiring government grantees to publish in open access journals won’t hurt scientists’ reputations; it will help open access’ reputation.
While A.B. 2192’s passage is good news, Congress has still failed to pass an open access law covering science funded by the federal government. FASTR—the Fair Access to Science and
Technology Act (S. 1701, H.R. 3427)—is very similar to the California law. It would require every
federal agency that spends more than $100 million on grants for research to adopt an open access policy. The bill gives each agency flexibility to choose a policy suited to the
work it funds, as long as research is made available to the general public no later than one year after publication. Like the California law, FASTR isn’t perfect, but it’s a great
start. Unfortunately, despite strong support in both political parties, FASTR has floundered in Congressional gridlock for five years.
As we celebrate the win for open access in California, please take a moment to write your members of Congress and urge them to pass FASTR.
Take actionTell Congress: It’s time to move FASTR
From Canada to Argentina, Security Researchers Have Rights—Our New Report
(Wed, 17 Oct 2018)
EFF is introducing a new Coders' Rights project to connect the work of security research with the fundamental rights of its
practitioners throughout the Americas. The project seeks to support the right of free expression that lies at the heart of researchers' creations and use of computer code to examine
computer systems, and relay their discoveries among their peers and to the wider public.
To kick off the project, EFF published a whitepaper today, “Protecting Security Researchers' Rights in the
Americas” (PDF), to provide the legal and policy basis for
our work, outlining human rights standards that lawmakers, judges, and most particularly the Inter-American Commission on Human Rights, should use to protect the fundamental rights of security researchers.
We started this project because hackers and security researchers have never been more important to the security of the Internet. By identifying and disclosing
vulnerabilities, hackers are able to improve security for every user who depends on information systems for their daily life and work.
Computer security researchers work, often independently from large public and private institutions, to analyze, explore, and fix the vulnerabilities that are scattered across the
digital landscape. While most of this work is conducted unobtrusively by consultants and employees, sometimes it is done in the public interest—which earns researchers
headlines and plaudits, but can also attract civil or criminal suits. They can be targeted and threatened with laws intended to prevent malicious intrusion, even when their own work
is anything but malicious. The result is that security researchers work in an environment of legal uncertainty, even as their job becomes more vital to the orderly functioning of our digital society.
Drawing on rights recognized by the American Convention on Human Rights, and
examples from North and South American jurisprudence, this paper analyzes what rights security researchers have; how those rights are expressed in the Americas’ unique arrangement of
human rights instruments, and how we might best interpret the requirements of human rights law—including rights of privacy, free expression, and due process—when applied to the domain
of computer security research and its practitioners. In cooperation with technical and legal experts across the continent, we explain that:
Computer programming is expressive activity protected by the American Convention on Human Rights. We explain how free expression lies at the heart of researchers’ creation and use
of computer code to examine computer systems and to relay their discoveries among their peers and to the wider public.
Courts and the law should guarantee that the creation, possession or distribution of tools related to cybersecurity are protected by Article 13 of the American Convention on Human
Rights, as legitimate acts of free expression, and should not be criminalized or otherwise restricted. These tools are critical to the practice of defensive security and have
legitimate, socially desirable uses, such as identifying and testing practical vulnerabilities.
Lawmakers and judges should discourage the use of criminal law as a response to behavior by security researchers which, while technically in violation of a computer crime
law, is socially beneficial.
Cybercrime laws should include malicious intent and actual damage in their definitions of criminal liability.
The “Terms of service” (ToS) of private entities have created inappropriate and dangerous criminal liability among researchers by redefining “unauthorized access” in the United
States. In Latin America, under the Legality Principle, ToS provisions cannot be used to meet the vague and ambiguous standards established in criminal provisions (for example,
"without authorization"). Criminal liability cannot be based on how private companies would like their services to be used. On the contrary, criminal liability must be based on laws
which describe in a precise manner which conduct is forbidden and which is punishable.
Penalties for crimes committed with computers should, at a minimum, be no higher than penalties for analogous crimes committed without computers.
Criminal law punishment provisions should be proportionate to the crime, especially when cybercrimes have little harmful effect or are comparable to minor traditional crimes.
Proactive actions that will secure the free flow of information in the security research community are needed.
We’d like to thank EFF Senior Staff Attorney Nate Cardozo, Deputy Executive Director and General Counsel Kurt Opsahl, International Rights Director Katitza Rodríguez, Staff Attorney
Jamie Lee Williams, as well as consultant Ramiro Ugarte and Tamir Israel, Staff Attorney at Canadian Internet Policy and Public Interest Clinic at the Centre for Law, Technology and
Society at the University of Ottawa, for their assistance in researching and writing this paper.
What To Do If Your Account Was Caught in the Facebook Breach
(Wed, 17 Oct 2018)
Keeping up with Facebook privacy scandals is basically a full-time job these days. Two weeks ago, it announced a massive breach with scant
details. Then, this past Friday, Facebook released more
information, revising earlier estimates about the number of affected users and outlining exactly what types of user data were accessed. Here are the key
details you need to know, as well as recommendations about what to do if your account was affected.
30 Million Accounts Affected
The number of users whose access tokens were stolen is lower than Facebook originally estimated. When Facebook first announced this incident, it stated that attackers may have been able to steal
access tokens—digital “keys” that control your login information and keep you logged in—from 50 to 90 million accounts. Since then, further investigation has revised that number down
to 30 million accounts.
The attackers were able to access an incredibly broad array of information from those accounts. The 30 million compromised accounts fall into three main categories. For 15
million users, attackers accessed names and phone numbers, emails, or both (depending on what people had listed).
For 14 million, attackers accessed those two sets of information as well as extensive profile details, including:
Self-reported current city
Device types used to access Facebook
The last 10 places they checked into or were tagged in
People or Pages they follow
Their 15 most recent searches
For the remaining 1 million users whose access tokens were stolen, attackers did not access any information.
Facebook is in the process of sending messages to affected users. In the meantime, you can also check Facebook’s Help Center to find out if your account was among the 30 million compromised—and if it was, which of the three rough groups above it
fell into. Information about your account will be at the bottom in the box titled “Is my Facebook account impacted by this security issue?”
What Should You Do If Your Account Was Hit?
The most worrying potential outcome of this hack for most people is what someone might be able to do with this mountain of sensitive personal information. In particular,
adversaries could use this information to turbocharge their efforts to break into other accounts, particularly by using phishing messages or exploiting legitimate account recovery
flows. With that in mind, the best thing to do is stay on top of some digital security basics: look out for common signs of phishing, keep your software updated, consider using a password manager, and avoid using easy-to-guess security questions that rely on personal information.
The difference between a clumsy, obviously fake phishing email and a frighteningly convincing phishing email is
personal information. The information that attackers stole from Facebook is essentially a database connecting millions of people’s contact information to their personal information,
which amounts to a treasure trove for phishers and scammers. Details about your hometown, education, and places you recently checked in, for example, could allow scammers to craft
emails impersonating your college, your employer, or even an old friend.
In addition, the combination of email addresses and personal details could help someone break into one of your accounts on another service. All a would-be hacker needs to do is
impersonate you and pretend to be locked out of your account—usually starting with the “Forgot your password?” option you see on log-in pages. Because so many services across the web
still have insecure methods of account recovery like security questions, information like birthdate, hometown, and alternate contact methods like phone numbers could give hackers more
than enough to break into weakly protected accounts.
Facebook stated that it has not seen evidence of this kind of information being used “in the wild” for phishing attempts or account recovery break-ins. Facebook has also assured
users that no credit card information or actual passwords were stolen (which means you don’t need to change those) but for many that is cold comfort. Credit card numbers and passwords
can be changed, but the deeply private insights revealed by your 15 most recent searches or 10 most recent locations cannot be so easily reset.
What Do We Still Need To Know?
Because it’s cooperating with the FBI, Facebook cannot discuss any findings about the hackers’ identity or motivations. However, from Facebook’s more detailed description of how the attackers carried out the attack, it’s clear that
the attackers were determined and coordinated enough to find an obscure, complex vulnerability in Facebook’s code. It’s also clear that they had the resources necessary to
automatically exfiltrate data on a large scale.
We still don’t know what exactly the hackers were after: were they targeting particular individuals or groups, or did they just want to gather as much information as possible?
It’s also unclear if the attackers abused the platform in ways beyond what Facebook has reported, or used the particular vulnerability behind this attack to launch other, more subtle
attacks that Facebook has not yet found.
There is only so much individual users can do to protect themselves from this kind of attack and its aftermath. Ultimately, it is Facebook’s and other companies’ responsibility
to not only protect against these kinds of attacks, but also to avoid retaining and making vulnerable so much personal information in the first place.
Lawsuit Seeking to Unmask Contributors to ‘Shitty Media Men’ List Would Violate Anonymous Speakers’ First Amendment Rights
(Tue, 16 Oct 2018)
A lawsuit filed in New York federal court last week against the
creator of the “Shitty Media Men” list and its anonymous contributors exemplifies how individuals often misuse the court system to unmask anonymous speakers and chill their speech.
That’s why we’re watching this case closely, and we’re prepared to advocate for the First Amendment rights of the list’s anonymous contributors.
On paper, the lawsuit is a defamation case brought by the writer Stephen Elliott, who was named on the list. The Shitty Media Men list was a Google spreadsheet shared via link and
made editable by anyone, making it particularly easy for anonymous speakers to share their experiences with men identified on the list. But a review of the complaint suggests that the
lawsuit is focused more broadly on retaliating against the list’s creator, Moira Donegan, and publicly identifying those who contributed to it.
For example, after naming several anonymous defendants as Jane Does, the complaint stresses that “Plaintiff will know, through initial discovery, the names, email addresses,
pseudonyms and/or ‘Internet handles’ used by Jane Doe Defendants to create the List, enter information into the List, circulate the List, and otherwise publish information in the List
or publicize the List.”
In other words, Elliott wants to obtain identifying information about anyone and everyone who contributed to, distributed, or called attention to the list, not just those who provided
information about Elliott specifically.
The First Amendment, however, protects anonymous speakers like the contributors to the Shitty Media Men list, who were trying to raise awareness about what they see as a pervasive
problem: predatory men in media. As the Supreme Court has ruled, anonymity is a
historic and essential way of speaking on matters of public concern—it is a “shield against the tyranny of the majority.”
Anonymity is particularly critical for people who need to communicate honestly and openly without fear of retribution.
People rely on anonymity in a variety of contexts, including reporting harassment, violence, and other abusive behavior they’ve experienced or witnessed. This was the exact purpose
behind the Shitty Media Men list. Donegan, who came forward after learning she would be identified as the creator of the list, wrote that she “wanted to create a place for women to share their stories of harassment and assault without being needlessly discredited or judged. The hope was to create an alternate avenue to report this kind of behavior and warn others without fear of retaliation.”
It’s easy to understand why contributors to the list did so anonymously; they very likely would not have provided the information had they not been able to remain anonymous.
By threatening that anonymity, lawsuits like this one risk discouraging anyone in the future from creating similar tools that share information and warn people about violence, abuse, and harassment.
To be clear, our courts do allow plaintiffs to pierce anonymity if they can show a need to do so in order to pursue legitimate claims. That does not seem to be the case here, because
the claims against Donegan appear to be without merit. Given that she initially created the spreadsheet as a platform to allow others to provide information, Donegan is likely immune
from suit under Section 230, the federal law that protects creators of online forums like the “Shitty Media Men” list from
being treated as the publisher of the information added by other users, here the list’s contributors. And even if Donegan did in fact create the content about Elliott, she could still
argue that the First Amendment requires that he show that the allegations were not only false but also made with actual malice.
EFF has long fought for robust protections for anonymous online speakers, representing speakers in
court cases and also pushing courts to adopt broad protections for them. Given the potential
dangers to anonymous contributors to this list and the thin allegations in the complaint, we hope the court hearing the lawsuit quickly dismisses the case and protects the First
Amendment rights of the speakers who provided information to it. We also applaud Google, which has said that it will fight any subpoenas seeking information on its users who contributed
to the list.
EFF will continue to monitor the case and seek to advocate for the First Amendment rights of those who contributed to the list should it become necessary. If you contributed to the
list and are concerned about being identified or otherwise have questions, contact us at email@example.com. As with all
inquiries about legal assistance from EFF, the attorney/client privilege applies, even if we can’t take your case.
Federal Circuit (Finally) Makes Briefs Immediately Available to the Public
(Tue, 16 Oct 2018)
In a victory for transparency, the Federal Circuit has changed its policies to give the public immediate access to briefs. Previously, the court had marked submitted briefs as
“tendered” and withheld them from the public pending review by the
Clerk’s Office. That process sometimes took a number of days. EFF wrote a letter [PDF] asking the court to make briefs available as soon as they
are filed. The court has published new
procedures [PDF] that will allow immediate access to submitted briefs.
Regular readers might note that this is the second
time we have announced this modest victory. Unfortunately, our earlier blog post was wrong and arose out of a miscommunication with the court (the Clerk’s Office informed
us of our mistake and we corrected that post). This time, the new policy clearly provides for briefs to be immediately available to the public. The announcement states:
The revised procedure will allow for the immediate filing and public availability of all electronically-filed briefs and appendices. … As of December 1, 2018, when a party files a
brief or appendix with the court, the document will immediately appear on the public docket as filed, with a notation of pending compliance review.
In our letter to the Federal Circuit, we had explained that the public’s right of access to courts includes a right to timely access. The Federal Circuit is the federal court of appeals that hears appeals in
patent cases from all across the country, and many of its cases are of interest to the public at large. We are glad that the court
will now give the press and the public immediate access to filed briefs.
Overall, the Federal Circuit has a good record on transparency. The court has issued rulings making it clear that it will only allow material to be sealed for good reason. The court’s rules of practice require parties to file a separate motion if
they want to seal more than 15 consecutive words in a motion or a brief. The Federal Circuit’s new filing policy brings its docketing practices in line with this
record of transparency and promotes timely access to court records.
Ten Legislative Victories You Helped Us Win in California
(Tue, 16 Oct 2018)
Your strong support helped us persuade California’s lawmakers to do the right thing on many important technology bills debated on the chamber floors this year. With your help,
EFF won an unprecedented number of victories, supporting good bills and stopping those that would have hurt innovation and digital freedoms.
Here’s a list of victories you helped us get the legislature to pass and the governor to sign, through your direct participation in our advocacy campaigns and your other contributions
to support our work.
Net Neutrality for California
Our biggest win of the year, the quest to pass California’s net
neutrality law and set a gold standard for the whole country, was hard-fought. S.B. 822 not only prevents Internet service providers from blocking or interfering with
traffic, but also from prioritizing their own services in ways that discriminate.
California made a bold declaration to support the nation’s strongest protections of a free and open Internet. As the state fights for the ability to enact its law—following an
ill-conceived legal challenge from the Trump
administration—you can continue to let lawmakers know that you support its principles.
Increased Transparency into Local Law Enforcement Policies
Transparency is the foundation of trust. Thanks to the passage of S.B. 978, California police departments and sheriff’s offices will
now be required to post their policies and training materials online, starting in January 2020. The California Commission on Peace Officer Standards and Training will be required to
make its vast catalog of trainings available as well. This will encourage better and more open relationships between law enforcement agencies and the communities they serve.
Increasing public access to police materials about training and procedures benefits everyone by making it easier to understand what to expect from a police encounter. It also helps
ensure that communities have a better grasp of new police surveillance technologies, including body cameras and drones.
Public Access to Footage from Police Body Cameras
Cameras worn by police officers are increasingly common. While intended to promote police accountability, unregulated body cams can instead become high tech police snooping devices.
Some police departments have withheld recordings of high-profile police use of force against civilians, even when communities demand release. Prior to this bill’s introduction, Los
Angeles, for example, had a policy that didn’t allow for any kind of public access at all.
The public now has the right to access those recordings. A.B. 748 ensures that starting July 1, 2019, you will have the right
to access this important transparency resource.
EFF sent a letter stating its support for this law, which makes it more likely that body-worn cameras will be used as
a tool for holding officers accountable, rather than a tool of police surveillance against the public.
Privacy Protections for Cannabis Users
As the legal marijuana market develops in California, it is critical that the state protects the data privacy rights of cannabis users. A.B. 2402 is a step in the right direction,
providing modest but vital privacy measures.
A.B. 2402 stops cannabis distributors from sharing the personal
information of their customers without their consent, granting cannabis users an important data privacy right. The bill also prohibits dispensaries from discriminating
against a customer who chooses to withhold that consent.
As more vendors use technology such as apps and websites to market marijuana, the breadth of their data collection continues to grow. News reports have found that dispensaries are
scanning and retaining driver license data, as well as requiring names and phone numbers before purchases.
This new law ensures that users can deny consent to having their personal information shared with other companies, without penalty.
Better DNA Privacy for Youths
DNA information reveals a tremendous amount about a person – their medical conditions, their ancestry, and many other immutable traits – and handing over a sample to law enforcement
has long-lasting consequences. Unfortunately, at least one police agency has demanded DNA from youths in circumstances that are confusing and coercive.
A.B. 1584 makes sure that before this happens,
kids will have an adult in the room to explain the implications of handing a DNA sample over to law enforcement. Once this law takes effect in January 2019, law enforcement officials
must have the consent of a parent, guardian, or attorney, in addition to consent from the minor, to collect a DNA sample.
EFF wrote a letter supporting this bill as a vital protection for California’s youths, particularly in light of press reports about police demanding DNA from young people without a
clear reason. In one case, police approached kids coming back from a basketball game at a rec center and had them sign forms “consenting” to cheek swabs.
A.B. 1584 adds sensible privacy protections for children, to ensure that they fully understand how police may use these DNA samples. It also guarantees that, if the sample doesn’t
implicate them in a crime, it will be deleted from the system promptly.
Guaranteed Internet Access for Kids in Foster Care and Juvenile Detention
Internet access is vital to succeeding in today’s world. With your support, we persuaded lawmakers to recognize how important it is for some of California’s most vulnerable young
people—those involved in the child welfare and juvenile justice systems—to be able to access the Internet, as a way to further their education. A.B. 2448 guarantees that access.
EFF testified before a Senate committee to advocate for the 2017 version of this bill, which the governor vetoed while indicating he would sign a narrower version. The second
version, however, passed Gov. Brown’s muster. Throughout the process, EFF launched email campaigns and enlisted the help of tech companies, including Facebook, to lend their support
to the effort.
This law affirms that some of the state’s most at-risk young people have access to all the resources the Internet has to offer. And it shows the country that if California can promise
Internet access to disadvantaged youth, then other states can, too.
Better Privacy Protections for ID Scanning
Getting your ID card checked at a bar? The bouncer may be extracting digital information from your ID, and the bar may then be sharing that information with others. California law
limits bars from sharing information they collected through swiping your ID, but some companies and police departments believed they could bypass those safeguards as long as IDs were
“scanned” rather than “swiped.”
A.B. 2769 closes this loophole. It makes sure that you have the same protections against having your information shared without your consent whether the bouncer checking you out
is swiping your card or scanning it.
EFF sent a letter in support of this bill to the governor. People shouldn’t lose the right to consent to data sharing simply because the place they go chooses a different method of
checking their identification.
Thankfully, the governor signed this common-sense bill.
Open Access to Government-funded Research
A.B. 2192 was a huge victory
for open access to knowledge in the state of California. It gives everyone access to research that’s been funded by the government within a year of its publication date.
EFF went to Sacramento to testify in support of this bill. We also wrote to explain that it would have at most a negligible financial impact on the state budget to require researchers to make their reports open to
the public. This prompted lawmakers to reconsider the bill after previously setting it aside.
A.B. 2192 is a good first step. EFF would like to see other states adopt similar measures. We also want California to take further strides to make research available to other researchers looking to advance their
work, and to the general public.
No Government Committee Deciding What is “Fake News”
Fighting “fake news” has become a priority for a lot of lawmakers, but S.B. 1424, a bill EFF opposed, was not the way to do it. The bill would have set up a state advisory committee
to recommend ways to “mitigate” the spread of “fake news.” That would have created an excessive risk of new laws that restrict the First Amendment rights of Californians.
EFF sent a letter to the governor, outlining our concerns about having the government be the arbiter of what is true and what isn’t. This is an especially difficult task when censors
examine complex speech, such as parody and satire.
Gov. Brown vetoed this bill, ultimately concluding that it was not needed. “As evidenced by the numerous studies by academic and policy groups on the spread of false information, the
creation of a statutory advisory group to examine this issue is not necessary,” he wrote.
Helped Craft a Better Bot-Labeling Law
California's new bot-labeling bill, S.B. 1001, initially included overbroad language that would have swept up bots used for ordinary and protected speech activities. Early
drafts of the bill would have regulated accounts used for poetry, political speech, or satire. The original bill also created a takedown system that could have been used to censor or
discredit important voices, like civil rights leaders or activists.
EFF worked with the bill's sponsor, Senator Robert Hertzberg, to remove the dangerous language and think through the original bill's unintended negative consequences. We thank the
California legislature for hearing our concerns and amending this bill.
On to 2019!
You spoke, and California’s legislature and governor listened. In 2018, we made great progress for digital liberty. With your help, we look forward to more successes in 2019. Thank you.
New Witness Panel Tells Congress How to Protect Consumer Data Privacy
(Thu, 11 Oct 2018)
Yesterday’s Senate Commerce Committee hearing on consumer data privacy was a welcome improvement. The last time the Committee convened around this topic, all of the witnesses were
industry and corporate representatives. This time, we were happy to
see witnesses from consumer advocacy groups and the European Union, who argued for robust consumer privacy laws on this side of the Atlantic.
The Dangers of Rolling Back State Privacy Protections
Last time, the panel of industry witnesses (Amazon, Apple, AT&T, Charter, Google, and Twitter) all testified in favor of a federal law to preempt state data privacy laws, such as California’s new Consumer Privacy Act (CCPA).
Yesterday was different. Chairman Thune kicked off the hearing by reminding the Committee of the importance of hearing from independent stakeholders and experts. We were also glad to hear Chairman
Thune say that industry self-regulation is not enough to protect consumer privacy, and that new standards are needed.
The first witness forcefully argued that strong consumer privacy laws do not hurt business. Alastair Mactaggart, who helped pass the CCPA, reminded the Committee that he is a businessman with several successful companies operating in the Bay Area alongside the
tech giants. He argued that the CCPA is not anti-business. Indeed, the fact that no major tech companies have made plans to pull out of Europe after the watershed GDPR went into
effect earlier this year is proof that business can co-exist with robust privacy protections. The CCPA empowers the California Attorney General to enact—and change—regulations to
address evolving tech and other issues. Mactaggart argued that this flexibility is designed to ensure that future innovators can enter the market and compete with the existing giants,
while also ensuring that the giants cannot exploit an overlooked loophole in the law. While we have concerns about the CCPA that the California legislature must fix in 2019, we also
look forward to participating in the Attorney General’s process to help make new rules as strong as possible.
The President and CEO of the Center for Democracy & Technology, Nuala O’Connor, acknowledged that some businesses want a single federal data privacy law that preempts all state data privacy laws, to avoid the challenges of
complying with a patchwork of state laws. O’Connor cautioned the committee that the “price of pre-emption would be very, very high”—meaning any federal law that shuts down state laws
must provide gold-standard privacy protection.
A single weak federal privacy law will be worse for consumers than a patchwork of robust state laws. As explained by Laura Moy, Executive Director and Adjunct Professor of Law at the Georgetown Law Center on Privacy & Technology, a federal law should be a floor, not a ceiling.
As we’ve said before, current state laws in Vermont and Illinois, in addition to California, have already created strong protections for user privacy,
with more states to follow. If Congress enacts weaker federal data privacy legislation that blocks such stronger state laws, the result will be a massive step backward for user privacy.
Asking The Right Questions
We were heartened that several Senators understood the complexity of creating a strong, comprehensive federal consumer privacy framework, and are asking the right questions.
In his opening statement, Senator Markey stated that a new law must include, at minimum, “Knowledge,
Notice, and No”: Knowledge of what data is being collected, Notice of how that data is being used, and the ability to say “No.” This is a great starting point, and we look
forward to seeing his draft of consumer protection legislation.
Senator Duckworth asked the witnesses if it is too soon to know if existing laws and regulations are working, and wanted to know how Congress should assess the impact on consumer
privacy. These are hard questions, but the right ones.
In the hearing with company representatives two weeks ago, Senator Schatz questioned whether companies were coming to Congress simply to block state privacy laws, and raised the
prospect of creating an actual federal privacy regulator with broad authority. This time, Senator Schatz again accused some of the companies of trying to “do the minimum” for their
consumers, focusing his questions on adequate and robust enforcement.
While all the witnesses agreed that robust rulemaking from the FTC is necessary, it is not clear that the current enforcement or penalty structure is where it needs to be. O’Connor
said that only 60 employees at the FTC are tasked with enforcing consumer privacy for all of the United States, which is not nearly enough. Senator Schatz also called for stiffer
financial penalties, as under the GDPR, explaining that even a $22.5 million fine is only a few hours of revenue for Google.
Right to be Let Alone
Dr. Andrea Jelinek, Chair of the European Data Protection Board, reminded the Committee of the writings of U.S. Supreme Court Justice Louis Brandeis. Long before he was
on the Court, Brandeis wrote in the Harvard Law Review in 1890, “Recent inventions and business
methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual … the right ‘to be let alone’ … Numerous mechanical
devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”
Technology has changed and continues to change, but the right of an individual to privacy and to be let alone has not. Congress should continue to allow the states to protect their
citizens, even as it discusses how to build a stronger national framework that supports these efforts.
The Google+ Bug Is More About The Cover-Up Than The Crime
(Thu, 11 Oct 2018)
Earlier this week, Google dropped a bombshell: in
March, the company discovered a “bug” in its Google+ API that allowed third-party apps to access private data from its millions of users. The company confirmed that up to 500,000
people were “potentially affected.”
Google’s mishandling of data was bad. But its mishandling of the aftermath was worse. Google should have told the public as soon as it knew something was wrong, giving users a
chance to protect themselves and policymakers a chance to react. Instead, amidst a torrent of outrage over the Facebook-Cambridge Analytica scandal, Google decided to hide its
mistakes from the public for over half a year.
The story behind Google’s latest snafu bears a strong resemblance to the design flaw that allowed Cambridge Analytica to harvest millions of users’ private Facebook data.
According to a Google blog post, an internal review
discovered a bug in one of the ways that third-party apps could access data about a user and their friends. Quoting from the post:
Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public.
It’s important to note that Google “found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence
that any Profile data was misused.” Nevertheless, potential exposure of user data on such a large scale is more than enough to cause concern. A full list of the vulnerable data
points is available here, and you can update the privacy settings
on your own account here.
What would this bug look like in practice? Suppose Alice is friends with Bob on Google+. Alice has shared personal information with her friends, including her occupation,
relationship status, and email. Then, her friend Bob decides to connect to a third-party app. He is prompted to give that app access to his own data, plus “public information” about
his friends, and he clicks “ok.” Before March, the app would have been granted access to all the details—not marked public—that Alice had shared with
Bob. Similar to Facebook’s Cambridge Analytica scandal, a bad API made it possible for third parties to access private data about people who never had a chance to consent.
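To make the failure mode concrete, here is a minimal sketch in Python. It is purely illustrative, not Google’s actual code: the profile data, visibility labels, and function names are all invented. The flaw is the same in shape, though: the buggy handler returns every field the requesting user can personally see, when an app acting on that user’s behalf should only ever receive fields marked public.

```python
# Toy profile store: each field carries its own visibility setting.
PROFILES = {
    "alice": {
        "name":       {"value": "Alice",             "visibility": "public"},
        "occupation": {"value": "Nurse",             "visibility": "friends"},
        "email":      {"value": "alice@example.com", "visibility": "friends"},
    },
}
FRIENDS = {"bob": {"alice"}}  # Bob is friends with Alice


def app_visible_fields_buggy(requesting_user, friend):
    """The flaw: returns everything requesting_user can personally see,
    so a third-party app connected by Bob also receives Alice's
    friends-only fields."""
    profile = PROFILES[friend]
    if friend in FRIENDS.get(requesting_user, set()):
        return {k: f["value"] for k, f in profile.items()}
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] == "public"}


def app_visible_fields_fixed(requesting_user, friend):
    """The fix: apps get only fields explicitly marked public,
    regardless of the friendship between the two users."""
    profile = PROFILES[friend]
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] == "public"}
```

In this sketch, an app Bob authorizes would receive Alice’s friends-only email through the buggy path, while the fixed path hands over only her public name, which matches the distinction Google draws between fields “shared with the user” and fields “marked as public.”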
Google also announced in the same post that it would begin phasing out the consumer version of Google+, heading for a complete shutdown in August 2019. The company cited “low
usage” of the service. This bug’s discovery may have been the final nail in the social network’s coffin.
Should You Be Concerned?
We know very little about whose data was taken, if any, or by whom, so it’s hard to say. For many people, the data affected by the bug may not be very revealing. However, when
combined with other information, it could expose some people to serious risks.
Email addresses, for example, are used to log in to most services around the web. Since many of those services still have insecure methods of account recovery, information like
birthdays, location history, occupations, and other personal details could give hackers more than enough to break into weakly protected accounts. And a database of millions of email
addresses linked to personal information would be a treasure trove for phishers and scammers.
Furthermore, the combination of real names, gender identity, relationship status, and occupation with residence information could pose serious risks to certain individuals and
communities. Survivors of domestic violence or victims of targeted harassment may be comfortable sharing their residence with trusted friends, but not the public at large. A breach of
these data could also harm undocumented migrants, or LGBTQ people living in countries where their relationships are illegal.
Based on our reading of Google’s announcement, there’s no way to know how many people were affected. Since Google deletes API logs after two weeks, the company was only able to
audit API activity for the two weeks leading up to the bug’s discovery. Google has said that “up to 500,000” accounts might have been affected, but that’s apparently based on an audit
of a single two-week slice of time. The company hasn’t revealed when exactly the vulnerability was introduced.
Even worse, many of the people affected may not even know they have a Google+ account. Since the platform’s launch in 2011, Google has aggressively pushed users to
sign up for Google+, and sometimes even required a Google+ account to use other Google services like Gmail and YouTube. Contrary to all the jokes about its low
adoption, this bug shows that Google+ accounts have still represented a weak link for its unwitting users’ online security and privacy.
It’s Not The Crime, It’s The Cover-Up
Google never should have put its users at risk. But once it realized its mistake, there was only one correct choice: fix the bug and tell its users.
Instead, Google chose to keep the vulnerability secret, perhaps waiting for the backlash against Facebook to blow over.
The blog post announcing the breach is confusing, cluttered, and riddled with bizarre doublespeak. It introduces “Project Strobe,” and is subtitled “Protecting your data...” as
if screwing up an API and hiding it for months was somehow a bold step forward for consumer privacy. In a section headed “There are significant challenges in creating and maintaining
a successful Google+ product that meets consumers’ expectations,” the company describes the breach, then gives a roundabout, legalistic excuse for not telling the public
about it sooner. Finally, the post describes improvements to Google Account’s privacy permissions interface and Gmail’s and Android’s API policies, which, while nice, are unrelated to
the breach in question.
Overall, the disclosure does not give the impression of a contrite company that has learned its lesson. Users don’t need to know the ins and outs of Google’s UX process; they
need to be convinced that this won’t happen again. Google wrote a pitch when it was supposed to write an apology.
Public trust in Silicon Valley is at an all-time low, and politicians are in a fervor, throwing around dangerously irresponsible ideas that threaten free expression on the Internet. In this
climate, Google needs to be as transparent and trustworthy as possible. Instead, incidents like this hurt its users and violate their privacy and security expectations.
When Police Misuse Their Power to Control News Coverage, They Shouldn’t Be Allowed To Use Probable Cause As a Shield Against Claims of First Amendment Violations
(Thu, 11 Oct 2018)
Journalists face increasingly hostile conditions covering
public protests, presidential rallies, corruption, and police brutality in the course of their work as watchdogs over government power. A case before the U.S. Supreme Court threatens press
freedoms even further by potentially giving the government freer rein to arrest members of the media in retaliation for publishing stories or gathering news the government doesn’t like.
EFF joined the National Press Photographers Association and 30 other media and nonprofit free speech organizations in urging the court to allow lawsuits by individuals who show they
were arrested in retaliation for exercising their rights under the First Amendment—for example, in the case of the news media by newsgathering, interviewing protestors, recording
events—even if the police had probable cause for the arrests. Instead of foreclosing such lawsuits, we urged the court to adopt a procedure whereby when there’s an allegation of First
Amendment retaliation, the burden shifts to police to show not only the presence of probable cause, but that they would have made the arrests anyway, regardless of the targets’ First
Amendment activities. EFF and its partners filed a brief with the Supreme Court on October 9, 2018.
The court’s decision in this case may well have far-reaching implications for all First Amendment rights, including freedom of the press. Examples abound of journalists and news
photographers being arrested while doing their jobs, swept up by
police as they try to cover violent demonstrations and confrontations with law enforcement—where press scrutiny is most needed. Last year 34 journalists were arrested while
seeking to document or report news. Nine journalists
covering violent protests around President Trump’s inauguration were arrested. Police arrested reporters covering the Black Lives Matter
protests in Ferguson, Missouri. Ninety journalists were arrested covering Occupy Wall Street protests between 2011 and 2012.
Arrests designed simply to halt or punish speech are common: police haul journalists and photographers into wagons amid protests, or while they’re videotaping police, or for
persistently asking questions of a public servant. A tenacious reporter in West Virginia was
arrested in the state capitol building for shouting questions to the Secretary of Health and Human Services as he walked through a public hallway. The journalist was charged with
disrupting a government process, but, as is typical, the charge was dropped
after prosecutors found no crime had been committed. This “catch and release” technique is not unusual, and it disrupts news gathering and chills the media from doing its job.
The case at issue before the Supreme Court, Nieves v. Bartlett, doesn’t involve the press, but the potential impact on First Amendment rights broadly and press freedoms, in
particular, is clear. The lawsuit involves an Alaska man who sued police for false arrest and imprisonment, and retaliatory arrest, alleging he was arrested for disorderly conduct in
retaliation for his refusal to speak with a police officer. The U.S. Court of Appeals for the Ninth Circuit upheld the dismissal of all but the retaliatory arrest charge. The court
said that while there was probable cause for the arrest, that didn’t preclude the man from pursuing his claim that his arrest was in retaliation for exercising his First Amendment
right to not speak to police. This was the right decision, and we urge the Supreme Court to uphold it.
For more on EFF’s work supporting the First Amendment right to record the police:
Fields v. City of Philadelphia
EU Internet Censorship Will Censor the Whole World's Internet
(Wed, 10 Oct 2018)
As the EU advances the new Copyright Directive towards becoming law in its 28
member-states, it's important to realise that the EU's plan will end up censoring the Internet for everyone, not just Europeans.
A quick refresher: Under Article 13 of the new Copyright Directive, anyone who operates a (sufficiently large) platform where people can post works that might be copyrighted (like
text, pictures, videos, code, games, audio etc) will have to crowdsource a database of "copyrighted works" that users aren't allowed to post, and block anything that seems to match
one of the database entries.
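The claim-and-block flow described above can be sketched in a few lines. This is a hypothetical illustration, not how any real filter is built: the `UploadFilter` class, its method names, and the use of exact content hashes are all assumptions for the sketch (real systems like YouTube's ContentID use fuzzy perceptual matching, not exact hashes). What it does show accurately is the structural problem: anyone can add an entry, no proof of ownership is required, and matching uploads are blocked regardless of fair use or parody.

```python
import hashlib

class UploadFilter:
    """Hypothetical sketch of an Article 13-style crowdsourced blacklist."""

    def __init__(self):
        # Maps a content hash to whoever claimed it.
        self.blacklist = {}

    def claim(self, claimant, work_bytes):
        # Anyone may add an entry; nothing checks that the claimant
        # actually holds the copyright, or that the work is copyrighted.
        digest = hashlib.sha256(work_bytes).hexdigest()
        self.blacklist[digest] = claimant

    def allow_upload(self, work_bytes):
        # Block anything matching a blacklist entry, with no regard for
        # fair use, parody, quotation, or the legitimacy of the claim.
        digest = hashlib.sha256(work_bytes).hexdigest()
        return digest not in self.blacklist

f = UploadFilter()
f.claim("anyone-at-all", b"a protest sign photo")
print(f.allow_upload(b"a protest sign photo"))  # blocked
print(f.allow_upload(b"an original essay"))     # allowed
```

Note that the filter has no concept of context: the same bytes are blocked whether they appear in an infringing copy or a lawful parody, which is precisely the objection to filtering mandates.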
These blacklist databases will be open to all comers (after all, anyone can create a copyrighted work): that means that billions of people around the world will be able to submit
anything to the blacklists, without having to prove that they hold the copyright to their submissions (or, for that matter, that their submissions are copyrighted). The
Directive does not specify any punishment for making false claims to a copyright, and a platform that decided to block someone for making repeated fake claims would run the risk of
being liable to the abuser if a user posts a work to which the abuser does own the rights.
The major targets of this censorship plan are the social media platforms, and it's the "social" that should give us all pause.
That's because the currency of social media is social interaction between users. I post something, you reply, a third person chimes in, I reply again, and so on.
Now, let's take a hypothetical Twitter discussion between three users: Alice (an American), Bob (a Bulgarian) and Carol (a Canadian).
Alice posts a picture of a political march: thousands of protesters and counterprotesters, waving signs. As is common around the world, these signs include
copyrighted images, whose use is permitted under US "fair use" rules that permit parody. Because Twitter enables users to communicate significant amounts of user-generated content,
it will fall within the ambit of Article 13.
Bob lives in Bulgaria, an EU member-state whose copyright law does not permit parody. He might want to
reply to Alice with a quote from the Bulgarian dissident Georgi Markov, whose works were translated into
English in the late 1970s and are still in copyright.
Carol, a Canadian who met Bob and Alice through their shared love of Doctor Who, decides to post a witty meme from "The Mark of the Rani," a 1985 episode in which Colin Baker travels back to witness the Luddite protests of the 19th Century.
Alice, Bob and Carol are all expressing themselves through use of copyrighted cultural works, in ways that might not be lawful in the EU’s most speech-restrictive copyright
jurisdictions. But because (under today's system) the platform typically is only required to respond to copyright complaints when a rightsholder objects to the use, everyone can
see everyone else's posts and carry on a discussion using tools and modes that have become the norm in all our modern, digital discourse.
But once Article 13 is in effect, Twitter faces an impossible conundrum. The Article 13 filter will be tripped by Alice's lulzy protest signs, by Bob's political quotes, and by
Carol's Doctor Who meme, but suppose that Twitter is only required to block Bob from seeing these infringing materials.
Should Twitter hide Alice and Carol's messages from Bob? If Bob's quote is censored in Bulgaria, should Twitter go ahead and show it to Alice and Carol (but hide it from Bob, who
posted it?). What about when Bob travels outside of the EU and looks back on his timeline? Or when Alice goes to visit Bob in Bulgaria for a Doctor Who convention and tries to call up
the thread? Bear in mind that there's no way to be certain where a user is visiting from, either.
The dangerous but simple option is to subject all Twitter messages to European copyright censorship, a disaster for online speech.
And it’s not just Twitter, of course: any platform with EU users will have to solve this problem. Google, Facebook, LinkedIn, Instagram, TikTok, Snapchat, Flickr, Tumblr -- every
network will have to contend with this.
With Article 13, the EU would create a system where copyright complainants get a huge stick to beat the internet with, where people who abuse this power face no penalties, and where
platforms that err on the side of free speech will get that stick right in the face.
As the EU's censorship plan works its way through the next steps on the way to
becoming binding across the EU, the whole world has a stake -- but only a handful of appointed negotiators get a say.
If you are a European, the rest of the world would be very grateful indeed if you would take a moment to contact your MEP
and urge them to protect us all in the new Copyright Directive.
(Image: The World Flag, CC-BY-SA)
Chicago Should Reject a Proposal for Private-Sector Face Surveillance
(Tue, 09 Oct 2018)
A proposed amendment to the Chicago municipal code would allow businesses to use face
surveillance systems that could invade biometric and location privacy, and violate a pioneering state privacy law adopted by Illinois a decade ago. EFF joined a letter with several
allied privacy organizations explaining our concerns, which include issues with both the proposed law and the invasive technology it would irresponsibly expand.
At its core, facial recognition technology is an extraordinary menace to our digital
liberties. Unchecked, the expanding proliferation of surveillance cameras, coupled with constant improvements in facial recognition technology, can create a surveillance
infrastructure that the government and big companies can use to track everywhere we go in public places, including who we are with and what we are doing.
This system will deter law-abiding people from exercising their First Amendment rights in public places. Given continued inaccuracies in facial recognition systems, many people will be
falsely identified as dangerous or wanted on warrants, which will subject them to unwanted—and often dangerous—interactions with law enforcement. This system will disparately burden
people of color, who suffer a higher “false positive” rate due to additional flaws in these emerging systems.
In short, police should not be using facial recognition technology at all. Nor should businesses that wire their surveillance cameras into police spying networks.
Moreover, the Chicago ordinance would violate the Illinois Biometric
Information Privacy Act (BIPA). This state law, adopted by Illinois statewide in 2008, is a groundbreaking measure that set a national standard. It requires companies to
gain informed, opt-in consent from any individual before collecting biometric information from that person, or disclosing it to a third party. It also requires companies to store
biometric information securely, sets a three-year limit on retaining information before it must be deleted, and empowers individuals whose rights are violated to enforce its
provisions in court.
Having overcome several previous attempts to rescind or water down its requirements at the state level, BIPA now
faces a new threat in a recently proposed municipal amendment in Chicago. The proposal to add a section on “Face Geometry Data” to the city’s municipal code would allow businesses to
use controversial and discriminatory face surveillance systems pursuant to licensing agreements with the Chicago Police Department.
As the letter we joined makes clear, the proposal suffers from numerous defects.
For example, the proposal does not effectively limit authorized uses. While it prohibits “commercial uses” of biometric information, it authorizes “security purposes.” That
distinction is meaningless in the context of predictable commercial security efforts, like for-profit mining and deployment of face recognition data to prevent shoplifting. The
attempt to differentiate permissible from impermissible uses also rings hollow because the proposal in no way restricts how biometric data can be shared with other companies, who
might not be subject to Chicago’s municipal regulation.
Contradicting the consent required by Illinois BIPA, the Chicago ordinance would allow businesses to collect biometric information from customers and visitors without their consent,
by merely posting signs giving patrons notice about some—but not all—of their surveillance practices. In particular, the required notice would not need to address corporate use of
biometric information beyond in-store collection. It would also fail to inform customers who are visually impaired.
The Chicago proposal also invites misuse by the police department, which would face no reporting requirements. Transparency is critical, especially given Chicago’s unfortunate history of racial profiling and other police misconduct (which includes
detaining suspects without access to counsel, and torturing hundreds of African-American suspects into false confessions). Even in cities with fewer historical problems, police secrecy is incompatible with
the trend elsewhere across the country towards greater transparency
and accountability in local policing.
Also, despite the documented
susceptibility of face recognition systems to discrimination and
bias, the Chicago ordinance would not require any documentation of, for instance, how often biometric information collected from businesses may be used to inaccurately
identify a supposed criminal suspect. And it would violate BIPA’s requirements for data retention limits and secure data storage.
We oppose the proposed municipal code amendment in Chicago. We hope you will join us in encouraging the city’s policymakers to reject the proposal. It would violate existing and
well-established state law. More importantly, businesses working hand-in-glove with police surveillance centers should not be imposing facial recognition on their patrons—especially
under an ordinance as unprotective as the one proposed in Chicago.
What's Next For Europe's Internet Censorship Plan?
(Mon, 08 Oct 2018)
Last month, a key European vote brought the EU much closer to a system of universal mass
censorship and surveillance, in the name of defending copyright.
Members of the EU Parliament voted to advance the new Copyright Directive, even though it contained two extreme and unworkable clauses: Article 13 ("Censorship Machines"), which would
filter everything everyone posts to online platforms to see if it matches a crowdsourced database of "copyrighted works" that anyone could add anything to; and Article 11 ("The Link
Tax"), a ban on quoting more than one word from an article when linking to it unless you are using a platform that has paid for a linking license. The link tax provision allows, but
does not require, member states to create exceptions and limitations to protect online speech.
With the vote out of the way, the next step is the "trilogues." These closed-door meetings are held between representatives from European national governments, the European
commission, and the European Parliament. This is the last time the language of the Directive can be substantially altered without a (rare) second Parliamentary debate.
Normally the trilogues are completely opaque. But Julia Reda, the German MEP who has led the principled opposition to Articles 11 and 13, has committed to publishing all of the
negotiating documents from the trilogues as they take place (Reda is relying on a recent European Court of Justice ruling that upheld the right of the public to know what's going on in the trilogues).
This is an incredibly important moment. The trilogues are not held in secret because the negotiators are sure that you'll be delighted with the outcome and don't want to spoil the
surprise. They're meetings where well-organised, powerful corporate lobbyists' voices are heard and the public is unable to speak. By making these documents public, Reda is changing
the way European law is made, and not a moment too soon.
Articles 11 and 13 are so defective as to be unsalvageable; when they are challenged in the European Court of Justice, they may well be struck down. In the meantime, the trilogues —
if they do their job right — must struggle to clarify their terms so that some of their potential for abuse and their unnavigable ambiguity is resolved.
The trilogues have it in their power to expand on the Directive's hollow feints toward due process and proportionality and produce real, concrete protections that will minimise the
damage this terrible law wreaks while we work to have it invalidated by the courts.
Existing copyright filters (like YouTube's ContentID system) are set up to block people who attract too many copyright complaints, but what about people who make false copyright
claims? The platforms must be allowed to terminate access to the copyright filter system for those who repeatedly make false or inaccurate claims about which copyrighted works are
theirs.
A public record of which rightsholders demanded which takedowns would be vital for transparency and oversight, but could only work if implemented at a mandatory, EU-wide level.
On links, the existing Article 11 language does not define when quotation amounts to a use that must be licensed, though proponents have argued that quoting more than a single word
requires a license.
The Trilogues could resolve that ambiguity by carving out a clear safe-harbor for users, and ensure that there’s a consistent set of Europe-wide exceptions and limitations to news
media’s new pseudo-copyright that ensure they don’t overreach with their power.
The Trilogue must safeguard against dominant players (Google, Facebook, the news giants) creating licensing agreements that exclude everyone else.
News sites should be permitted to opt out of requiring a license for inbound links (so that other services could confidently link to them without fear of being sued), but these
opt-outs must be all-or-nothing, applying to all services, so that the law doesn’t add to Google's market power by allowing them to negotiate an exclusive exemption from the link tax,
while smaller competitors are saddled with license fees.
The Trilogues must establish a clear definition of "noncommercial, personal linking," clarifying whether making links in a personal capacity from a for-profit blogging or social media
platform requires a license, and establishing that (for example) a personal blog with ads or affiliate links to recoup hosting costs is "noncommercial."
These patches are the minimum steps that the Trilogues must take to make the Directive clear enough to understand and obey. They won't make the Directive fit for purpose – merely
coherent enough to understand. Implementing these patches would at least demonstrate that the negotiators understand the magnitude of the damage the directive will cause to the Internet.
From what we've gathered in whispers and hints, the leaders of the Trilogues recognise that these Articles are the most politically contentious of the Directive — but those
negotiators think these glaring, foundational flaws can be finessed in a few weeks, with a few closed door meetings.
We’re sceptical, but at least there’s a chance that we’ll see what is going on. We’ll be watching for Reda's publication of the negotiating documents and analysing them as they
appear. In the meantime, you can and should talk to your MEP about talking to your country's trilogue reps about softening
the blow that the new Copyright Directive is set to deliver to our internet.
Victory! Dangerous Elements Removed From California’s Bot-Labeling Bill
(Sat, 06 Oct 2018)
Governor Jerry Brown recently signed S.B.
1001, a new law requiring all “bots” used for purposes of influencing a commercial transaction or a vote in an election to be labeled. The bill, introduced by Senator
Robert Hertzberg, originally included a provision that would have been abused as a censorship tool, and would have threatened online anonymity and resulted in the takedown of lawful
human speech. EFF urged the
California legislature to amend the bill and worked with Senator Hertzberg's office to ensure that the bill’s dangerous elements were removed. We’re happy to report that the bill
Governor Brown signed last week was free of the problematic language.
This is a crucial victory. S.B. 1001 is the first bill of its kind, and it will likely serve as a model for other states. Here’s where we think the bill went right.
First, the original bill targeted all bots, regardless of what a bot was being used for or whether it was causing any harm to society. This would have swept up one-off bots used
for parodies or art projects—a far cry from the armies of
Russian bots that plagued social media
prior to the 2016 election or spambots deployed at scale used for fraud or commercial
gain. It’s important to remember that bots often represent the speech of real people, processed through a computer program. The human speech underlying bots is protected by the First
Amendment, and such a broadly reaching bill raised serious First Amendment concerns. An across-the-board bot-labeling mandate would also predictably lead to demands for
verification of whether individual accounts were controlled by an actual person, which would result in piercing anonymity. Luckily, S.B. 1001 was amended to target the
harmful bots that prompted the legislation—bots used surreptitiously in an attempt to influence commercial transactions or how people vote in elections.
Second, S.B. 1001’s definition of “bot”—“an automated online account where all or substantially all of the actions or posts of that account are not the result of a
person”—ensures that use of simple technological tools like vacation responders and scheduled tweets won’t be unintentionally impacted. The definition was previously
limited to online accounts automated or designed to mimic an account of a natural person, which would have applied to parody accounts that didn’t even involve automation, but
not auto-generated posts from fake organizational accounts. This was fixed.
Third, earlier versions of the bill required that platforms create a notice and takedown system for suspected bots that would have predictably caused innocent human users to
have their accounts labeled as bots or deleted altogether. The provision, inspired by the notoriously problematic DMCA takedown system, required
platforms to determine within 72 hours for any reported account whether to remove the account or label it as a bot. On its face, this may sound like a
positive step in improving public discourse, but years of attempts at content moderation by large platforms show that things inevitably go wrong in a panoply of ways. As a preliminary matter, it is not always easy to determine
whether an account is controlled by a bot, a human, or a “centaur” (i.e., a human-machine team). Platforms can try to guess based on the account’s IP
addresses, mouse pointer movement, or keystroke timing, but these techniques are imperfect. They could, for example, sweep in individuals using VPNs or Tor for privacy. And accounts
of those with special accessibility needs who use speech to text input could be mislabeled by a mouse or keyboard heuristic.
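To see why such heuristics misfire, consider a minimal sketch of a keystroke-timing test of the kind a platform might use. The function name, the threshold, and the idea of flagging unnaturally uniform inter-keystroke intervals are all assumptions for illustration, not any platform's documented method. The point is the failure mode: automated tools and assistive input methods can both produce machine-regular timing, so the same signal that catches a bot can mislabel a human.

```python
import statistics

def looks_automated(intervals_ms, min_jitter_ms=15.0):
    """Flag an account whose inter-keystroke timing is suspiciously uniform.

    intervals_ms: milliseconds between successive keystrokes.
    Humans vary their timing; simple bots often don't. But so do
    speech-to-text and other assistive tools, hence false positives.
    """
    if len(intervals_ms) < 5:
        return False  # too little signal to decide either way
    jitter = statistics.stdev(intervals_ms)
    return jitter < min_jitter_ms

print(looks_automated([100, 100, 101, 100, 99, 100]))  # machine-regular
print(looks_automated([180, 95, 240, 130, 310, 88]))   # human-like variation
```

A 72-hour deadline gives a platform every incentive to act on a crude threshold like this rather than have a human review each borderline case.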
This is not far-fetched: bots are getting increasingly good
at sneaking their way through Turing tests. And particularly given the short
turnaround time, platforms would have had little incentive to make sure to always get it right—to ensure that a human reviewed and verified every decision their systems
made to take down or label an account—when simply taking an account offline would have fulfilled any and all legal obligations.
What’s more, any such system—just like the DMCA—would be abused to censor speech. Those seeking to censor legitimate speech have become experts at figuring out precisely how to
use platforms’ policies to silence or otherwise discredit their opponents on social media platforms. The targets of this sort of abuse have been the sorts of voices the supporters of
S.B. 1001 would likely want to protect—including Muslim civil rights
leaders, pro-democracy activists in Vietnam, and Black Lives Matter activists whose
posts were censored due to efforts by white supremacists. It is naive to think that online trolls wouldn't figure out how to game S.B. 1001’s system as well.
The takedown regime would also have been hard to enforce in practice without unmasking anonymous human speakers. While merely labeling an account as a bot does not pierce
anonymity, platforms might have required identity verification in order for a human to challenge their decisions about whether to take down an account or label it as a bot.
Finally, as enacted, S.B. 1001 targets large platforms—those with 10 million or more unique monthly United States visitors. The problems this new law aims to solve are caused by
bots deployed at scale on large platforms, and limiting the law to large platforms ensures that it will not unduly burden small businesses or community-run forums.
As with any legislation—and particularly with legislation involving technology—to avoid unintended negative consequences, it is important that policy makers take the time to
think about the specific harms they seek to address and tailor legislation accordingly. We thank the California legislature for hearing our concerns and doing that with S.B. 1001.