Massachusetts Can Become a National Leader to Stop Face Surveillance
(Tue, 18 Jun 2019)
Massachusetts has a long history of standing up for liberty. Right now, it has the opportunity to become a national leader in fighting invasive government surveillance. Lawmakers need
to hear from the people of Massachusetts that they oppose government use of face surveillance.
Face surveillance poses a threat to our privacy, chills protest in public places, and gives law enforcement unregulated power to undermine due process. The city of Somerville—home of
Tufts University—has heard these concerns and is considering a ban on that city’s use of face surveillance. Meanwhile, bills before the Massachusetts General Court would pause the
government’s use of face surveillance technology on a statewide basis. This moratorium would remain in place unless the legislature passes measures to regulate these technologies,
protect civil liberties, and ensure oversight of face surveillance use.
Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies often rely on images pulled from
mugshot databases—which exacerbates historical biases born of unfair policing in Black and Latinx neighborhoods. If such systems are incorporated into street
lights or other forms of surveillance cameras, people in these communities may be unfairly targeted simply because
they appeared in another database or were subject to discriminatory policing in the past.
Last month, San Francisco became the first city in the country to ban
government use of face surveillance, showing it is possible for us to take back our privacy in public places. Oakland is now examining a similar proposal. Somerville is the first
community on the East Coast to consider a ban.
The people of Somerville, with support from Ward 3 Council Member Ben Ewen-Campen, have a chance now to
stand against government use of face surveillance and proclaim that they do not want it in their community. Speak up to protect your privacy rights, and demand that the Somerville
City Council pass Councilor Ewen-Campen’s ordinance banning government use of face surveillance in Somerville.
TAKE ACTION: Support Somerville’s ban on face surveillance
If you are in the Somerville area and would like to speak at the city’s legislative affairs council meeting, please contact firstname.lastname@example.org.
The Somerville City Council has also endorsed a pair of bills in the state legislature that would press pause on
the use of face surveillance throughout Massachusetts. Specifically, Massachusetts bills S.1385 and H.1538 would place a moratorium on government use of face surveillance.
TAKE ACTION: Tell your legislators to press the pause button on face surveillance
Polling from the ACLU of Massachusetts has found that
91 percent of likely voters in the state support government regulation of face recognition surveillance and other biometric tracking. More than three-quarters, 79 percent, support
a statewide moratorium.
Governments should immediately stop use of face surveillance in our communities, given what researchers at MIT’s Media Lab and others have said about its high error rates—particularly
for women and people of color. But even if manufacturers someday mitigate these risks, government use of face recognition technology will threaten safety and privacy, amplify
discrimination in our criminal justice system, and chill every resident’s free speech.
Support bans in your own communities and tell lawmakers it’s time to hit the pause button on face surveillance across the country.
The Lofgren-Amash Amendment Would Check Warrantless Surveillance
(Tue, 18 Jun 2019)
The NSA has used Section 702 of the FISA Amendments Act to justify collecting and storing millions of Americans’ online communications.
Now, the House of Representatives has a chance to pull the plug on funding for Section 702 unless the government agrees to limit the reach of that program.
The House of Representatives must vote yes to make this important correction. Amendment #24, offered by Representatives Lofgren (CA) and Amash (MI), would make sure that no
money in next year’s budget would fund the warrantless surveillance of people residing in the United States. Specifically, their amendment would withhold money [PDF] intended to fund Section 702 unless the government
commits not to knowingly collect the data of people communicating from within the U.S. to other U.S. residents who are not specifically communicating with a foreign surveillance target.
Section 702 allows the government to collect and store the communications of foreign intelligence targets outside of the U.S. if a significant purpose is to collect “foreign
intelligence” information. Although the law contains some protections—for example, a prohibition on knowingly collecting communications between two U.S. citizens on U.S. soil—we
have learned that the program actually does sweep up billions of communications involving people not explicitly targeted, including Americans. For example,
a 2014 report by the Washington Post that reviewed a “large cache of intercepted conversations” provided by Edward Snowden revealed that 9 out of 10 account holders “were not
the intended surveillance targets but were caught in a net the agency had cast for somebody else.”
The Lofgren-Amash amendment would require the government to acknowledge the protections in the law and to explicitly promise not to engage in “about collection,” the practice of collecting communications that merely mention a foreign
intelligence target. About collection has been one of the most controversial aspects of Section 702 surveillance, and although the government ended this practice in 2017, it has
consistently claimed the right to restart it.
With a big fight looming later this year on whether Congress
should renew another controversial national security law, Section 215 of the Patriot Act, we encourage the House of Representatives to vote Yes on the Lofgren-Amash Amendment to take
a step toward reining in Section 702.
Certbot's Website Gets a Refresh
(Tue, 18 Jun 2019)
Certbot has a brand new website! Today we’ve launched a major update that will help Certbot’s users get started even more quickly and easily.
Certbot is a free, open source software tool for enabling HTTPS on manually-administered websites, by automatically deploying Let’s Encrypt certificates. Since we introduced it
in 2016, Certbot has helped over a million users enable encryption on their sites, and we think this update will better meet the needs of the next million, and beyond.
Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with our browser add-on, HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against surveillance.
This change is the culmination of a year’s work in understanding how users interact with the Certbot tool and information around it. Last year, the Certbot team ran user studies
to identify areas of confusion—from questions users had when getting started to common mistakes that were often made. These findings led to changes in both the instructions for
interacting with the command-line tool, and in how users get the full range of information necessary to set up HTTPS.
The new site will make it clearer what the best steps are for all users, whether that’s understanding the prerequisites to running Certbot, getting clear steps to install and
run it, or figuring out how to get HTTPS in their setup without using Certbot at all.
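For readers who want a feel for those steps, here is a rough sketch of what installing and running Certbot typically looks like on a Debian-style server with Nginx. Package names, plugin flags, and the domain are illustrative assumptions; the site generates the exact commands for your specific web server and operating system.

```shell
# Install Certbot and its Nginx plugin (packaging varies by distribution;
# follow the instructions generated at certbot.eff.org for your setup).
sudo apt install certbot python3-certbot-nginx

# Request a Let's Encrypt certificate and let Certbot update the Nginx
# configuration to serve HTTPS for the named domain (example.com is a placeholder).
sudo certbot --nginx -d example.com

# Let's Encrypt certificates expire after 90 days; a dry run verifies
# that automated renewal will work before it's actually needed.
sudo certbot renew --dry-run
```
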
Over a year ago, Let’s Encrypt hit 50 million active
users—and counting. We hope this update will help us build on that milestone, and make unencrypted websites a thing of the past.
EFF's Recommendations for Consumer Data Privacy Laws
(Mon, 17 Jun 2019)
Strong privacy legislation in the United States is possible, necessary, and long overdue. EFF emphasizes the following concrete recommendations for proposed legislation regarding
consumer data privacy.
Three Top Priorities
First, we outline three of our biggest priorities: avoiding federal preemption, ensuring consumers have a private right of action, and using non-discrimination rules to prevent pay-for-privacy schemes.
No federal preemption of stronger state laws
We have long sounded the alarm against federal legislation that would wipe the slate clean of stronger state
privacy laws in exchange for a single, weaker federal one. Avoiding such preemption of state laws is our top priority when reviewing federal privacy bills.
State legislatures have long been known as “laboratories of democracy” and they are serving that role now for
data privacy protections. In addition to passing strong laws, state legislation also allows for a more dynamic dialogue as technology and social norms continue to change. Last year,
Vermont enacted a law reining in data brokers, and California enacted its Consumer Privacy Act. Nearly a decade ago, Illinois enacted its Biometric Information Privacy Act. Many other states have passed data privacy laws, and many are considering data privacy bills.
But some tech giants aren’t happy about that, and they are trying to get Congress to
pass a weak federal data privacy law that would foreclose state efforts. They are right about one thing: it
would be helpful to have one nationwide set of protections. However, consumers lose—and big tech companies win—if those federal protections are weaker than state protections.
Private right of action
It is not enough for government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary for these laws to have bite,
to ensure companies do not ignore them. The best way to do so is to empower
ordinary consumers to bring their own lawsuits against the companies that violate their privacy rights.
Often, government agencies will lack the resources necessary to enforce the laws. Other times, regulated companies will “capture” the agency, and shut down enforcement actions. For
these reasons, many privacy and other laws provide for enforcement by ordinary consumers.
Non-discrimination rules
Companies must not be able to punish consumers for exercising their privacy rights. New legislation should include non-discrimination rules, which forbid companies from denying goods,
charging different prices, or providing a different level of quality to users who choose more private options.
Absent non-discrimination rules, companies will adopt and enforce “pay-for-privacy” schemes. But corporations should not be allowed to require a consumer to pay a premium, or waive a
discount, in order to stop the corporation from vacuuming up—and profiting from—the consumer’s personal information. Privacy is a fundamental human right. Pay-for-privacy schemes undermine this fundamental right. They discourage all people from exercising their right to
privacy. They also lead to unequal classes of privacy “haves” and “have-nots,” depending upon the income of the user.
Critical Privacy Rights
In addition to the three priorities discussed above, strong data privacy legislation must also ensure certain rights: the right to opt-in consent, the right to
know, and the right to data portability. Along with those core rights, EFF would like to see data privacy legislation including information fiduciary
rules, data broker registration, and data breach protection and notification.
Right to opt-in consent
New legislation should require the operators of online services to obtain opt-in consent to collect, use, or share personal data, particularly where that collection, use, or transfer
is not necessary to provide the service.
Any request for opt-in consent should be easy to understand and clearly advise the
user what data the operator seeks to gather, how they will use it, how long they will keep it, and with whom they will share it. This opt-in consent should also be ongoing—that is,
the request should be renewed any time the operator wishes to use or share data in a new way, or gather a new kind of data. And the user should be able to withdraw consent, including
for particular purposes, at any time.
Opt-in consent is better than opt-out consent. The default should be against collecting, using, and sharing personal information. Many consumers cannot or will not alter the defaults
in the technologies they use, even if they prefer that companies do not collect their information.
Some limits are in order. For example, opt-in consent might not be required for a service to take steps that the user has requested, like collecting a user's phone number to turn on
two-factor authentication. But the service should always give the user clear
notice of the data collection and use, especially when the proposed use is not part of the transaction, like using that phone
number for targeted advertising.
There is a risk that extensive and detailed opt-in consent requests can lead to “consent fatigue.” Any new regulations should encourage entities seeking consent to explore new ways of
obtaining meaningful consent to avoid that fatigue. At the same time, research suggests companies are becoming skilled at manipulating consent and steering users to share personal data.
Finally, for consent to be real, data privacy laws must prohibit companies from discriminating against consumers who choose not to consent. As discussed above, “pay-for-privacy”
systems undermine privacy rules and must be prohibited.
Right to know
Users should have an affirmative “right to know” what personal data companies have gathered about them, where they got it, and with whom these companies have shared it (including the
government). This includes the specific items of personal information and the specific third parties who received it, not just categorical descriptions of the general kinds of
data and recipients.
Again, some limits are in order to ensure that the right to know doesn’t impinge on other important rights and privileges. For example, there needs to be an exception for news
gathering, which is protected by the First Amendment, when undertaken by professional reporters and lay members of the public alike. Thus, if a newspaper tracked visitors to its
online edition, the visitors’ right-to-know could cover that information, but not extend to a reporter’s investigative file.
There also needs to be an effective verification process to ensure that an
adversary cannot steal a consumer’s personal information by submitting a fraudulent right to know request to a business.
Right to data portability
Users should have a legal right to obtain a copy of the data they have
provided to an online service provider. Such “data portability” lets a user take their
data from a service and transfer or “port” it elsewhere.
One purpose of data portability is to empower consumers to leave a particular social media platform and take their data with them to a rival service. This may improve competition.
Other equally important purposes include analyzing your data to better understand your relationship with a service, building something new out of your data, self-publishing what you
learn, and generally achieving greater transparency.
Regardless of whether you are “porting” your data to a different service or to a personal spreadsheet, data that is “portable” should be easy to download, organized, tagged, and machine-readable.
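To make the idea of an organized, machine-readable export concrete, here is a minimal sketch in Python. The `export_user_data` helper and the record fields are hypothetical; real services would define their own schemas, but the point is the same: structured, labeled data that any other tool can parse.

```python
import json

def export_user_data(user_record):
    """Serialize a user's data into a portable, machine-readable bundle.

    `user_record` is a hypothetical in-memory record; a real service
    would assemble this from its own datastores.
    """
    bundle = {
        "format_version": 1,  # versioning lets importers handle schema changes
        "profile": user_record.get("profile", {}),
        "posts": user_record.get("posts", []),
        "contacts": user_record.get("contacts", []),
    }
    # JSON keeps the export organized and tagged, so a rival service
    # (or the user's own scripts) can parse it without guesswork.
    return json.dumps(bundle, indent=2, sort_keys=True)

record = {"profile": {"name": "Alice"}, "posts": ["hello"], "contacts": []}
print(export_user_data(record))
```

Because the bundle is plain JSON rather than a proprietary blob, the same file serves both purposes discussed above: porting to a competitor and personal analysis.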
Information fiduciary rules
One tool in the data privacy legislation toolbox is “information fiduciary”
rules. The basic idea is this: When you give your personal information to an online company in order to get a service, that company should have a duty to exercise loyalty and care in
how it uses that information.
Professions that already follow fiduciary rules—such as doctors, lawyers, and accountants—have much in common with the online businesses that collect and monetize users’ personal
data. Both have a direct relationship with customers; both collect information that could be used against those customers; and both have one-sided power over their customers.
Accordingly, several law professors have proposed adapting these venerable fiduciary rules to apply to
online companies that collect personal data from their customers. New laws would define such companies as “information fiduciaries.” However, such rules should not be a replacement
for the other fundamental privacy protections discussed in this post.
Data broker registration
Data brokers harvest and monetize our personal information without our knowledge or consent. Worse, many data brokers fail to securely store this sensitive information, predictably
leading to data breaches (like Equifax) that put millions of people at risk of identity theft, stalking, and other harms
for years to come.
Legislators should take a page from Vermont’s new data privacy law, which requires data brokers to
register annually with the government (among other significant reforms). When data broker registration and the right-to-know are put together, the whole is greater than the sum of the
parts. Consumers might want to learn what information data brokers have collected about them, but have no idea who those data brokers are or how to contact them. Consumers can use the
data broker registry to help decide where to send their right-to-know requests.
Data breach protection and notification
Given the massive amounts of personal information about millions of people collected and stored by myriad companies, the inherent risk of data theft and misuse is substantial. Data
privacy legislation must address this risk. Three tools deserve emphasis.
First, data brokers and other companies that gather large amounts of sensitive information must promptly notify consumers when their data is leaked, misused, or stolen.
Second, it must be simple, fast, and free for consumers to freeze their credit. When a consumer seeks credit from a company, that company runs a credit check with one of the major
credit agencies. When a consumer places a credit freeze with these credit agencies, an identity thief cannot use their stolen personal information to borrow money in their name.
Third, companies must have a legal duty to securely store consumers’ personal information. Also, where a company fails to meet this duty, it should be easier for people harmed by data
breaches—including those suffering non-financial harms—to take those companies to court.
Some Things To Avoid
Data privacy laws should not expand the scope or penalties of computer crime laws. Existing computer crime laws are already far too broad.
Any new regulations must be judicious and narrowly tailored, avoiding tech mandates.
Policymakers must take care that any of the above requirements don’t create an unfair burden for smaller companies, nonprofits, open source projects, and the like. To avoid
one-size-fits-all rules, they should tailor new obligations based on the size of the service in question. For example, policymakers might take account of the entity’s
revenue, or the number of people whose data the entity collects.
Too often, users gain new rights only to effectively lose them when they “agree” to terms of service and end user license agreements that they haven’t read and aren’t expected to
read. Policymakers should consider the effect such waivers have on the rights and obligations they create, and be especially wary of mandatory arbitration clauses.
There is a daily drip-drip of bad news about how big tech companies are intruding on our privacy. It is long past time to enact new laws to protect consumer data privacy. We
are pleased to see legislators across the country considering bills to do so, and we hope they will consider the principles above.
Congress Should Pass the Protecting Data at the Border Act
(Fri, 14 Jun 2019)
Under the bipartisan Protecting Data at the Border Act, border officers
would be required to get a warrant before searching a traveler’s electronic device. Last month, the bill was reintroduced in the U.S. Senate by Sen. Ron
Wyden (D-Ore.) and Sen. Rand Paul (R-Ky.). It is co-sponsored by Sen. Ed Markey (D-Mass.) and Sen. Jeff Merkley (D-Ore.), and the House companion bill is co-sponsored by Rep. Ted Lieu.
The rights guaranteed by the U.S. Constitution don’t fade away at the border. And yet the Department of Homeland Security (DHS) asserts the power to freely search the electronic
devices of travelers before allowing them entrance into, or exit from, the United States. This practice will end if Congress passes the Protecting Data at the Border Act.
Think about all of the things your cell phone or laptop computer could tell a stranger about you. Modern electronic devices could reveal your romantic and familial connections, daily
routines, and financial standing. Ordinarily, law enforcement cannot obtain this sensitive information absent a warrant signed by a judge based on probable cause. But DHS claims
it needs no suspicion at all to search and seize this information at the border.
The bill does much more to protect digital liberty at the border. It would protect free speech by preventing federal agents from requiring a person to reveal their social media
handles, usernames, or passwords. No one crossing the U.S. border should fear that a tweet critical of ICE or CBP will complicate their travel plans.
The bill also blocks agents from denying entry or exit from the United States to any U.S. person who refuses to disclose digital account information, the contents of social media
accounts, or provide access to electronic equipment. Further, the bill would prevent border agencies from holding any lawful U.S. persons for over four hours in pursuit of consensual
access to online accounts or the information on electronic equipment. It would also prevent the retention of travelers’ private information absent probable cause—a protection that is
increasingly important after CBP
admitted this week that photographs of almost 100,000 travelers’ faces and license plates were stolen from a federal subcontractor. Can we really trust this agency to securely
retain our text messages and phone camera rolls?
The bill has teeth. It forbids the use of any materials gathered in violation of the Act from being used as evidence in court, including any immigration hearings.
More than ever before, our devices hold all sorts of personal and sensitive information about us, and this bill would be an important step forward in recognizing and protecting us and
our devices. Congress should pass the Protecting Data at the Border Bill.
To learn more, check out EFF’s pages on how you can protect your privacy when you travel, on our
lawsuit challenging border searches of travelers’ devices without a warrant, and our support for the original version of this bill.
Related case: Alasaad v. Nielsen
Details of Justice Department Efforts To Break Encryption of Facebook Messenger Must Be Made Public, EFF Tells Court
(Thu, 13 Jun 2019)
Ruling Blocking DOJ Should Be Unsealed To Keep Public Informed About Anti-Encryption Tactics
San Francisco—The Electronic Frontier Foundation, ACLU and Stanford cybersecurity scholar Riana Pfefferkorn
asked a federal appeals court today to make public a ruling that reportedly forbade the Justice Department from forcing Facebook to break the encryption
of a communications service for users.
Media widely reported last fall that a federal court in Fresno, California denied
the government’s effort to compromise the security and privacy promised to users of Facebook’s Messenger application. But the court’s order and details about the legal dispute have
been kept secret, preventing people
from learning about how DOJ sought to break encryption, and why a federal judge rejected those efforts.
EFF, ACLU and Pfefferkorn told the appeals court in a filing today that the
public has First Amendment and common law rights to access judicial opinions and court records about the laws that govern us. Unsealing documents in the Facebook Messenger case is
especially important because the public deserves to know when law enforcement tries to compel a company that hosts massive amounts of private communications to circumvent its own
security features and hand over users’ private data, EFF, ACLU and Pfefferkorn said in a filing to the U.S. Court of Appeals for the Ninth Circuit. ACLU and Pfefferkorn,
Associate Director of Surveillance and Cybersecurity at Stanford University’s Center for Internet and Society, joined EFF’s request to unseal. A federal judge in Fresno denied a motion to unseal the documents,
leading to this appeal.
Media reports last year
revealed DOJ’s attempt to get Facebook to turn over customer data and unencrypted Messenger voice calls based on a wiretap order in an investigation of suspected MS-13 gang activity.
Facebook refused the government’s request, leading DOJ to try to hold the company in contempt. Because the judge’s ruling denying the government’s request is entirely under seal, the
public has no way of knowing how the government tried to justify its request or why the judge turned it down—both of which could impact users’ ability to protect their communications
from prying eyes.
“The ruling likely interprets the scope of the Wiretap Act, which impacts the privacy and security of
Americans’ communications, and it involves an application used by hundreds of millions of people around the world,” said EFF Senior Staff Attorney Andrew Crocker. “Unsealing the court
records could help us understand how this case fits into the government’s larger campaign to make sure it can access any encrypted communication.’’
In 2016 the FBI attempted to force Apple to disable
security features of its mobile operating system to allow access to a locked iPhone belonging to one of the shooters alleged to have killed 14 people in San Bernardino, California.
Apple fought the order, and EFF supported the company’s efforts. Eventually the FBI announced that it had received a third-party tip with a method to unlock the phone without
Apple’s assistance. We believed that the FBI’s intention in the litigation was to obtain legal precedent that it could compel Apple to sabotage its own security features.
“The government should not be able to rely on a secret body of law for accessing encrypted communications and surveilling Americans,” said EFF Staff Attorney Aaron Mackey. “We are
asking the court to rule that every American has a right to know about the rules governing who can access their private conversations.”
Experts Warn Congress: Proposed Changes to Patent Law Would Thwart Innovation
(Thu, 13 Jun 2019)
It should be clear now that messing around with Section 101 of the Patent Act is a bad idea. A Senate subcommittee has just finished hearing testimony about a bill that
would wreak havoc on the patent system. Dozens of witnesses have testified, including EFF Staff Attorney Alex Moss. Alex’s testimony [PDF] emphasized EFF’s success in protecting individuals and small businesses from threats of
meritless patent litigation, thanks to Section 101.
Section 101 is one of the most powerful tools patent law provides for defending against patents that never should have been issued in the first place. We’ve written many times about
small businesses that were saved because the patents being used to sue them were thrown out under Section 101, especially following the Supreme
Court’s Alice v. CLS Bank decision. Now, the Senate IP
subcommittee is considering a proposal that would eviscerate Section 101, opening the door to more stupid patents, more aggressive patent licensing demands, and more
litigation threats from patent trolls.
Three days of testimony have made it clear that we’re far from alone in seeing the problems in this bill. Patents that would fail today’s Section 101 aren’t necessary to promote
innovation. We’ve written about how the proposal, by Senators Thom Tillis and Chris Coons, would create a field day for patent trolls with abstract software patents. Here, we’ll take
a look at a few of the other potential effects of the proposal, none of them good.
Private Companies Could Patent Human Genes
The ACLU, together with 169 other civil rights, medical, and scientific groups, has sent a letter to the Senate Judiciary Committee
explaining that the draft bill would open the door to patents on human genes.
The bill sponsors have said they don’t intend to allow for patents on the human genome. But as currently written, the draft bill would do just that. The bill explicitly overrules
recent Supreme Court rulings that prevent patents on things that occur in nature, like cells in the human body. Those protections were made explicit in the 2013 Myriad
decision, which held that Section 101 bars patents on genes as they occur in the human body. A Utah company called Myriad Genetics had monopolized tests on the BRCA1 and BRCA2 genes,
which can be used to determine a person's likelihood of developing breast or ovarian cancer. Myriad said that because its scientists had identified and isolated the genes from the
rest of the human genome, it had invented something that warranted a patent. The Supreme Court disagreed, holding that DNA is a product of nature and “is not patent eligible merely because it has been isolated.”
Once Myriad couldn’t enforce its patents, competitors offering diagnostic screening for breast and ovarian cancer could, and did, enter the market immediately, charging just a
fraction of what Myriad’s test cost. Myriad’s patent did not claim to invent any of the technology actually used to perform the DNA analysis or isolation, which was available before
and apart from Myriad’s gene patents.
It’s just one example of how Section 101 protects innovation and enhances access to medicine, by prohibiting monopolies on things no person could have invented.
Alice Versus the Patent Trolls
Starting around the late 1990s, the Federal Circuit opened the door to broad patenting of software.
“The problem of patent trolls grew to epic proportions,” Stanford Law Professor Mark Lemley told the Senate subcommittee last week. “One of the things that brought it under control
was the Alice case and Section 101.”
A representative of the National Retail Federation (NRF) explained how, before Alice, small Main Street businesses were subject to constant litigation brought by “non-practicing
entities,” also known as patent trolls. Patent trolls are not a thing of the past—even after Alice, the majority of patent lawsuits continue to be filed by non-practicing entities.
“Our members are a target-rich environment for those with loose patent claims,” NRF’s Stephanie Martz told the subcommittee.
She went on to give examples of patents that were rightfully invalidated under Section 101, like a patent for posting nutrition information and picture menus online, which was
used to sue Whataburger, Dairy
Queen, and other chain restaurants—more than 60 cases in all. A patent for an online shopping cart was used to sue candy shops and 1-800-Flowers. And a patent for online maps showing
properties in a particular area was used to sue Realtors and homeowners
[PDF], leading to decades of litigation.
The Alice decision didn’t end such cases, but it did make it much easier to fight back. As Martz explained, since Alice, the cost of litigation has gone down between 40 and 45 percent.
The sponsors of the draft legislation have made it clear they intend to overturn Alice. That will take us back to a time not so long ago, when small businesses had to pay unjustified
licensing fees to patent trolls, or face the possibility of multimillion-dollar legal bills to fight off wrongly issued patents.
More Litigation, Less Research
The High Tech Inventors Alliance (HTIA), a group of large technology companies, also spoke against the current draft proposal.
The proposal “would allow patenting of business methods, fundamental scientific principles, and mathematical equations, as long as they were performed on a computer,” said David
Jones, representing HTIA. “A more stringent test is needed, and perhaps even required by the Constitution.”
Jones also cited recent research showing that the availability of business method patents actually lowered
R&D among firms that sought those patents. After Alice limited their availability, the same companies that had been seeking those patents stopped doing so, and increased their
research and development budgets.
The current legal test for patents is not arbitrary or harmful to innovation, Jones argued. On the contrary, the Alice-Mayo framework “has improved patent clarity and decreased patent litigation.”
EFF’s Alex Moss also disagreed that the current case law was “a mess” or “confusing.” Rather than throw out decades of case law, she urged Congress to look to history to consider
changes that could actually point the patent system towards promoting progress.
“In the 19th century, when patent owners wanted to get a term extension, they would come to Congress and bring their accounting papers, and say—look how much we invested,”
Moss explained. “I’d like to see that practical element, to make sure our patent system is promoting innovation—which is its job under the Constitution—and not just a proliferation of patents.”
At the conclusion of testimony, Sen. Tillis said that he and Sen. Coons will take these testimonies into account as they work towards a bill that could be
introduced as early as next month. We hope the Senators will begin to consider proposals that could improve the patent system, rather than open the door to the worst kinds of
patents. In the meantime, please tell your members of Congress that the proposed bill is not the right solution.
TELL CONGRESS WE DON'T NEED MORE BAD PATENTS
Social Media Platforms Increase Transparency About Content Removal Requests, But Many Keep Users in the Dark When Their Speech Is Censored, EFF Report Shows
(Wed, 12 Jun 2019)
Who Has Your Back Spotlights Good, and Not So Good, Content Moderation Policies
San Francisco and Tunis, Tunisia—While social media platforms are increasingly giving users the opportunity to appeal decisions to censor their posts, very few
platforms comprehensively commit to notifying users that their content has been removed in the first place, raising questions
about their accountability and transparency, the Electronic Frontier Foundation (EFF) said today in a new report.
How users are supposed to challenge content removals that they’ve never been told about is among the key issues illuminated by EFF in the second installment of its Who Has Your Back: Censorship Edition report. The paper comes amid a wave of new government regulations and actions around the
world meant to rid platforms of extremist content. But in response to calls to
remove objectionable content, social media companies and platforms have all too often censored valuable speech.
EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram. Only four companies—Facebook, Reddit,
Apple, and GitHub—commit to notifying users when any content is censored and specifying the legal request or community guideline violation that led to the removal. While Twitter
notifies users when tweets are removed, it carves out an exception for tweets related to “terrorism,” a class of content that is difficult to accurately identify and can include
counter-speech or documentation of war crimes. Notably, Facebook and GitHub were found to have more comprehensive notice policies than their peers.
“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,”
said Gennie Gebhart, EFF associate director of research, who co-authored the report. “Notifying people when their content has been removed or censored is a challenge when your users
number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”
In the report, EFF awarded stars in six categories, including transparency reporting of government takedown requests, providing meaningful notice to users when content or accounts are
removed, allowing users to appeal removal decisions, and public support of the Santa Clara Principles, a set of guidelines for speech moderation based on a human rights framework. The
report was released today at the RightsCon summit on human rights in the digital age, held in Tunis, Tunisia.
Reddit leads the pack with six stars, followed by Apple’s App Store and GitHub with five stars, and Medium, Google Play, and YouTube with four stars. Facebook, Reddit, Pinterest and
Snap each improved their scores over the past year since our inaugural censorship edition of Who Has Your Back in 2018. Nine companies meet our criteria for transparency reporting of
takedown requests from governments, and 11 have appeals policies, but only one—Reddit—discloses the number of appeals it receives. Reddit also takes the extra step of disclosing the
percentage of appeals resolved in the user’s favor or against it.
Importantly, 12 companies are publicly supporting the Santa Clara Principles, which outline a set of minimum content moderation policy
standards in three areas: transparency, notice, and appeals.
“Our goal in publishing Who Has Your Back is to inform users about how transparent social media companies are about content removal and encourage improved content moderation practices
across the industry,” said EFF Director of International Free Expression Jillian York. “People around the world rely heavily on social media platforms to communicate and share ideas,
including activists, dissidents, journalists, and struggling communities. So it’s important for tech companies to disclose the extent to
which governments censor speech, and which governments are doing it.”
EFF to U.N.: Ola Bini's Case Highlights The Dangers of Vague Cybercrime Law
(Wed, 12 Jun 2019)
For decades, journalists, activists and lawyers who work on human rights issues around the world have been harassed, and even detained, by repressive and authoritarian regimes
seeking to halt any assistance they provide to human rights defenders. Digital communication technology and privacy-protective tools like end-to-end encryption have made this work
safer, in part by making it harder for governments to target those doing the work. But that has led to the technologists who build those tools being increasingly targeted for the same
harassment and arrest, most commonly under overbroad cybercrime laws that cast suspicion on
even the most innocent online activities.
Right now, that combination of misplaced suspicion and arbitrary detention under cybersecurity regulations is playing out in Ecuador. Ola Bini, a Swedish security
researcher, is being detained in that country under unsubstantiated accusations, based on an overbroad reading of the country’s cybercrime law. This week, we submitted comments to the Office of the U.N. High
Commissioner for Human Rights (OHCHR) and the Inter-American Commission on Human Rights (IACHR) for their upcoming 2019 joint report on the situation of human rights defenders in the
Americas. Our comments focus on how Ola Bini’s detainment is a flagship case of the targeting of technologists, and of the dangers of cybercrime laws.
While the pattern of demonizing benign uses of technology is global, EFF has noted its rise in the Americas in particular. Our 2018 report, “Protecting Security Researchers' Rights in the Americas,” was created in part to push back against ill-defined,
broadly interpreted cybercrime laws. It also promotes standards that lawmakers, judges, and most particularly the Inter-American Commission on Human Rights might use to protect
the fundamental rights of security researchers, and ensure the safe and secure development of the Internet and digital technology in the Americas and across the world.
We noted that these laws fail in several ways. First, they don't meet the requirements established by the Inter-American Human Rights Standards, under which vague and ambiguous
criminal laws are an impermissible basis to restrict a person's rights.
These criminal provisions also fail to require malicious intent (mens rea) or actual damage, turning general behaviors into strict liability crimes.
That means they can affect the free expression of security researchers since they can be interpreted broadly by prosecutors seeking to target individuals.
For instance, Ola Bini is currently being charged under Article 232 of the Ecuadorian Criminal Code:
Any person who destroys, damages, erases, deteriorates, alters, suspends, blocks, causes malfunctions, unwanted behavior or deletes computer data, e-mails, information
processing systems, telematics or telecommunications from all or parts of its governing logical components shall be liable to a term of imprisonment of three to five years, [as shall anyone who]:
1. Designs, develops, programs, acquires, sends, introduces, executes, sells or distributes in any way, devices or malicious computer programs or programs destined to cause the effects indicated in the first paragraph of this article; or
2. Destroys or alters, without the authorization of its owner, the technological infrastructure necessary for the transmission, reception or processing of information in
If the offense is committed on computer goods intended for the provision of a public service or linked to public safety, the penalty shall be five to seven years'
deprivation of liberty.
Bini’s case highlights a consistent problem with cybercrime laws: the statute can be interpreted in such a way that any software that could be
misused creates criminal liability for its creator; indeed, potentially more liability than falls on those who actually conduct malicious acts. This allows misguided prosecutions against human
rights defenders to proceed on the basis that the code created by technologists might possibly be used for malicious purposes.
Additionally, we point the OHCHR-IACHR to the chain of events associated with Ola Bini’s arrest. Bini is a free software developer, who works to improve the security and privacy
of the Internet for all its users. He has contributed to several key open source projects used to maintain the infrastructure of public Internet services, including JRuby, several
Ruby libraries, as well as multiple implementations of the secure and open communication protocol OTR. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has
provided strong encryption for millions of websites around the world.
His arrest and detention were full of irregularities: his warrant was for a “Russian
hacker” (Bini is neither Russian nor a hacker); he was not read his rights, nor allowed to contact his lawyer, nor offered a translator. The arrest was preceded by a press
conference, and framed as part of a process of defending Ecuador from retaliation by associates of Wikileaks. During the press conference, Ecuador’s Interior Minister announced that
the government was about to apprehend individuals who are supposedly involved in trying to establish a “piracy center” in Ecuador, including two Russian hackers, a Wikileaks
collaborator, and a person close to Julian Assange. She stated: “We are not going to allow Ecuador to become a hacking center, and we cannot allow illegal activities to take place in
the country, either to harm Ecuadorian citizens or those from other countries or any government.”
Neither she nor any investigative authority has provided any evidence to back these claims.
As we wrote in our comments, prosecutions of technologists working in this space should be treated in the same way as the prosecution of journalists, lawyers, and other human
rights defenders — with extreme caution, and with regard to the risk of politicization and misuse of such prosecutions. Unfortunately, Bini’s arrest is typical of the treatment of
security researchers conducting human rights work.
We hope that the OHCHR and IACHR carefully consider our comments, and recognize how broad cybercrime laws, and their misuse by political actors, can directly challenge human
rights defenders. Ola Bini’s case—and the other examples we’ve given—present clear evidence for why we must treat cybercrime law as connected to human rights considerations.
How LBS Innovations Keeps Trying to Monopolize Online Maps
(Tue, 11 Jun 2019)
Stupid Patent of the Month
For years, the Eastern District of Texas (EDTX) has been a magnet for lawsuits filed by patent trolls—companies who make money with patent threats, rather than selling products or
services. Technology companies large and small were sued in EDTX every week. We’ve written about how that district’s unfair and irregular procedures made it a haven for patent trolls.
In 2017, the Supreme Court put limits on this venue abuse with its TC Heartland decision. The court ruled that
companies can only be sued in a particular venue if they are incorporated there, or have a “regular and established” place of business.
That was great for tech companies that had no connection to EDTX, but it left brick-and-mortar retailers exposed. In February, Apple, a company that has been sued hundreds of times in
EDTX, closed its only two stores in the district, located in Richardson and Plano. With no stores in EDTX, Apple will be able to ask for a transfer in any future case.
In the last few days those stores were open, Apple was sued for patent infringement four times, as patent trolls took what is likely their last chance to sue Apple in EDTX.
This month, as part of our Stupid Patent of the Month series, we’re taking a closer look at one of these last-minute lawsuits against Apple. On April 12, the last day the store was
open, Apple was sued by LBS Innovations, LLC, a patent-licensing company owned by two New
York patent lawyers, Daniel Mitry and Timothy Salmon. Since it was formed in 2011, LBS has sued more than 60 companies, all in the Eastern District of Texas. Those defendants
include some companies that make their own technology, like Yahoo, Waze, and Microsoft, but they’re mostly retailers that use software made by others. LBS has sued tire stores, pizza
shops, pet-food stores, and many others, all for using internet-based maps and “store location” features. LBS has sued retailers that use software made by Microsoft, others that use
Mapquest, some that use Google, as well as those that use the open-source provider OpenStreetMap.
Early Internet Maps
LBS’ lawsuits accuse retailers of infringing one or more claims of U.S. Patent No. 6,091,956, titled “Situation Information
System.” The most relevant claim, which is specifically cited in many lawsuits, is claim 11, which describes a method of showing “transmittable mappable hypertext items” to a user.
The claim language describes “buildings, roads, vehicles, and signs” as possible examples of those items. It also describes providing “timely situation information” on the hypertext items.
There’s a big problem with the ’956 patent, and its owners’ broad claim to have invented Internet mapping. The patent application was filed on June 12, 1997—but electronic maps, and
specifically Internet-based maps, were well-known by then. Not only that, but the maps were already adding what one would think of as “timely situation information,” such as
weather and traffic updates.
Mapquest, the first commercial internet mapping service, is one example. Mapquest launched in 1996—before this patent’s 1997 priority date—and by July of that year, it was offering
not just driving directions but personalized maps of cities that
included favorite destinations.
And Mapquest wasn’t the first. Xerox PARC’s free interactive map was online as far back as 1993. By January 1997, it was getting more than 80,000 mapping requests per day. Michigan State University was
getting 159,000 daily requests [PDF] for its weather map, which was updated regularly, in
March 1997. Some cities, such as Houston, had online traffic maps available in that time period, which also got timely updates.
In 1997, any Internet user, let alone anyone actually developing online maps, would have been aware of these very public examples.
As technology advanced, and Internet use became widespread, the information available on the electronic maps we all use became richer and more frequently updated. This was no
surprise. What’s described in the ‘956 patent added nothing to this clear and well-known path.
The Trouble With Prior Art
How has the LBS Innovations patent held up in court? Although these examples of earlier Internet maps can be found online fairly easily, that doesn’t mean it’s easy to
get rid of a patent like the ‘956 patent in court. The process of invalidating patents using prior art—patent law’s term for relevant knowledge about earlier inventions—is difficult
and expensive. It requires the hiring of high-priced experts, the filing of long reports, and months or years of litigation. And it often requires the substantial risk of a jury
trial, since it’s difficult to get an early ruling on prior art defenses.
Because of that drawn-out process, LBS has been able to extract settlements from dozens of defendants. It’s also reached settlements with companies like Microsoft and Google, which intervened after
users of their respective mapping software were sued. In one case where LBS got near trial, after having settled with several other defendants, it simply dropped its lawsuit against the final company that was willing to fight, avoiding an invalidity judgment against its patent.
LBS never should have been issued this patent in the first place. But patent examiners are given less than 20 hours, on average, to examine an application. Faced with far-reaching claims by an
ambitious applicant, but little time to scrutinize them, examiners don’t have many options—especially since applicants can return again and again. That means the only way
examiners can get applications off their desks for good is by approving them. Given that incentive, it’s no surprise judges and juries often find issued patents invalid.
For software, it can be extremely difficult to find prior art that can invalidate the patent. Software was generally not patentable until the mid-1990s, when a Federal Circuit
decision called State Street Bank opened the door. That means patents aren’t good prior art for the vast majority of 20th century advances in computer science. Also,
software is often protected by copyright or trade secret, and therefore not published or otherwise made public.
Often, published information may not precisely match the limitations of each patent claim. Did the earlier maps search “unique mappable information code sequences,” where each code
sequence represented the mapped items, “copied from the memory of said computer”? They may well have done so—but published papers on internet mapping wouldn’t bother specifying inane
steps that just recite basic computer technology.
The success of a litigation campaign like the one pushed by LBS Innovations shows why we can’t rely on the parts of the Patent Act that cover prior art to weed out bad patents.
Section 101 allows courts to find patents ineligible on their face and early in a case. That saves defendants the staggering costs of litigation or an unnecessary settlement. Since
the Alice v. CLS Bank decision, Section 101 has been used to
dispose of hundreds of abstract software patents before trial.
Right now, key U.S. Senators are crafting a bill that would weaken Section
101. That will greatly increase the leverage of patent trolls like LBS Innovations, and their claims to own widespread Internet technology.
Proponents of the Tillis-Coons patent bill argue that there’s little need to worry about bad patents slipping through Section 101, because other sections of the patent law—the
sections which allow for patents to be invalidated because of earlier inventions—will ensure that wrongly granted patents don’t win in court. But patent trolls simply aren’t afraid of
those sections of law, because their effects are so limited. For many defendants, the costs of attempting to prove a patent invalid under these sections makes them unusable. Faced
with legal bills of hundreds of thousands of dollars, if not millions, many defendants will have little choice but to settle.
We all lose when small businesses and independent programmers lose their most powerful means of fighting against bad patents. That’s why we’re asking EFF supporters to contact their
representatives in Congress, and ask them to reject the
Tillis-Coons patent proposal.
EFF’s Newest Advisory Board Member: Michael R. Nelson
(Tue, 11 Jun 2019)
EFF is proud to announce the newest member of our already star-studded advisory board: Michael R. Nelson. Michael has worked on
Internet-related global public policy issues for more than 30 years, including working on technology policy in the U.S. Senate and the Clinton White House.
Michael’s broad expertise in many different aspects of technology will be invaluable to the work we do at EFF. His experience includes launching the Washington, D.C. policy office for
Cloudflare, and working as a Principal Technology Policy Strategist in Microsoft’s Technology Policy Group, a Senior Technology and Telecommunications Analyst with Bloomberg
Government, and the Director of Internet Technology and Strategy at IBM. In addition, Michael has been affiliated with the CCT Program at Georgetown University for more than ten
years, teaching courses and doing research on the future of the Internet, cyber-policy, technology policy, innovation policy, and e-government.
In the 1990s, Michael was Director for Technology Policy at the Federal Communications Commission and Special Assistant for Information Technology at the White House Office of Science
and Technology Policy. There, he worked with Vice President Al Gore and the President's Science Advisor on issues relating to telecommunications policy, information technology,
encryption, electronic commerce, and information policy. He also served as a professional staff member for the Senate's Subcommittee on Science, Technology, and Space, chaired by
then-Senator Gore, and was the lead Senate staffer for the High-Performance Computing Act. He has a B.S. from Caltech and a Ph.D. from MIT. Welcome, Michael!
California: No Face Recognition on Body-Worn Cameras
(Tue, 11 Jun 2019)
EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other
biometric surveillance technologies to footage collected by body-worn cameras.
About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4
million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if
enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into
metadata, and stored in searchable databases.
Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting
introduced A.B. 1215 to prohibit the use of face recognition, or other
forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.
“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in
violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also
generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”
Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by
banning government use of face recognition. Meanwhile, law
enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.
The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.
EFF, along with a coalition of civil liberties organizations including the ACLU, Advancing Justice - Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our
Electronic Frontier Alliance allies, has joined forces in supporting this critical legislation.
Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face
surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice
system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.
Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU's
polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather
than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person
using their biometric information.
Last month, Reuters reported
that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.
“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”
We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must
use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.
Five California Cities Are Trying to Kill an Important Location Privacy Bill
(Tue, 11 Jun 2019)
If you rely on shared bikes or scooters, your location privacy is at risk. Cities across the United States are currently pushing companies that operate shared mobility services like
Jump, Lime, and Bird to share individual trip data for any and all trips taken within their boundaries, including where and when trips start and stop and granular details about the
specific routes taken. This data is extremely sensitive, as it can be used to reidentify riders—particularly for habitual
trips—and to track movements and patterns over time. While it is beneficial for cities to have access to aggregate data about shared mobility devices to ensure that they
are deployed safely, efficiently, and equitably, cities should not be allowed to force operators to turn over sensitive, personally identifiable information about riders.
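The distinction between the two kinds of data can be sketched in a few lines of Python. Everything here is made up for illustration: the zone names, the trip records, and the minimum cell size of 2 are assumptions, not anything the bill or any city program specifies. The point is only that an operator can report per-zone, per-hour totals without handing over any individual trip records.

```python
from collections import defaultdict

# Hypothetical raw trip records held by the operator: (start_zone, hour_of_day).
# Under an aggregate-only rule, only the totals below would leave the operator.
raw_trips = [
    ("downtown", 8), ("downtown", 8), ("downtown", 9),
    ("mission", 8), ("mission", 18), ("downtown", 18),
]

# Aggregate: count trips per (zone, hour) cell.
counts = defaultdict(int)
for zone, hour in raw_trips:
    counts[(zone, hour)] += 1

# Suppress small cells, a common disclosure-control step, since a
# count of 1 can itself point to a single identifiable rider.
K = 2  # hypothetical minimum cell size
report = {cell: n for cell, n in counts.items() if n >= K}
print(report)  # {('downtown', 8): 2}
```

A city receiving `report` can still see where and when demand concentrates, which is what deployment and congestion rules need, while the individual trip trails never leave the operator.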
As these programs become more common, the California Legislature is considering a bill, A.B. 1112, that would ensure that local authorities receive only aggregated or
non-identifiable trip data from shared mobility providers. EFF supports A.B.
1112, authored by Assemblymember Laura Friedman, which strikes the appropriate balance between protecting individual privacy and ensuring that local authorities have enough
information to regulate our public streets so that they work for all Californians. The bill makes sure that local authorities will have the ability to impose deployment requirements
in low-income areas to ensure equitable access, fleet caps to decrease congestion, and limits on device speed to ensure safety. And importantly, the bill clarifies that CalECPA—California’s landmark electronic
privacy law—applies to data generated by shared mobility devices, just as it would any other electronic devices.
Five California cities, however, are opposing this
privacy-protective legislation. At least four of these cities—Los Angeles, Santa Monica, San Francisco, and Oakland—have pilot programs underway that require shared mobility
companies to turn over sensitive individual trip data as a condition to receiving a permit. Currently, any company that does not comply cannot operate in the city. The cities want
continued access to individual trip data and argue that removing “customer identifiers” like names from this data should be enough to protect rider privacy.
The problem? Even with names stripped out, location information is notoriously easy to reidentify, particularly for
habitual trips. This is especially true when location information is aggregated over time. And the data shows that riders are, in fact, using dockless mobility vehicles for their
regular commutes. For example, as documented in Lime’s Year End Report for 2018, 40 percent of Lime
riders reported commuting to or from work or school during their most recent trip. And remember, in the case of dockless scooters and bikes, these devices may be parked directly
outside a rider’s home or work. If a rider used the same shared scooter or bike service every day to commute between their work and home, it’s not hard to imagine how easy it
might be to reidentify them—even if their name was not explicitly connected to their trip data. Time-stamped geolocation data could also reveal trips to medical specialists, specific
places of worship, and particular neighborhoods or bars. Patterns in the data could reveal social relationships, and potentially even extramarital affairs, as well as personal habits,
such as when people typically leave the house in the morning, go to the gym or run errands, how often they go out on evenings and weekends, and where they like to go.
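To see how easily habitual trips give a rider away even after names are removed, here is a minimal Python sketch. The coordinates, times, and record layout are all fabricated for illustration; no real mobility provider's data format is assumed.

```python
from collections import Counter

# Hypothetical "anonymized" trip records: no names or customer IDs,
# just (start_point, end_point, hour_of_day), with coordinates
# rounded to roughly block level.
trips = [
    (("37.7719", "-122.4312"), ("37.7897", "-122.4011"), 8),   # Mon morning
    (("37.7897", "-122.4011"), ("37.7719", "-122.4312"), 18),  # Mon evening
    (("37.7719", "-122.4312"), ("37.7897", "-122.4011"), 8),   # Tue morning
    (("37.7897", "-122.4011"), ("37.7719", "-122.4312"), 18),  # Tue evening
    (("37.8044", "-122.2712"), ("37.8080", "-122.2605"), 14),  # one-off trip
]

# Count how often each unordered start/end pair occurs. A pair that
# repeats day after day at commute hours almost certainly links one
# person's home and workplace, with no name needed.
pair_counts = Counter(frozenset((start, end)) for start, end, _ in trips)
habitual = [pair for pair, n in pair_counts.items() if n >= 3]

for pair in habitual:
    print("Likely home/work pair:", sorted(pair))
```

In this toy data, the repeated morning/evening pair stands out immediately; cross-referencing either endpoint with an address record would finish the reidentification.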
The cities claim that they will institute “technical safeguards” and “business processes” to prohibit reidentification of individual consumers, but so long as the cities have the
individual trip data, reidentification will be possible—by city transportation agencies, law enforcement, ICE, or any other third parties that receive data from cities.
The cities’ promises to keep the data confidential and make sure the records are exempt from disclosure under public records laws also fall flat. One big issue is that the cities have
not outlined and limited the specific purposes for which they plan to use the geolocation data they are demanding. They also have not delineated how they will minimize their
collection of personal information (including trip data) to data necessary to achieve those objectives. This violates both the letter and the spirit of the California Constitution’s
right to privacy, which explicitly lists privacy as an inalienable right of all people and, in the words of the California Supreme Court, “prevents government and business interests from
collecting and stockpiling unnecessary information about us” or “misusing information gathered for one purpose in order to serve other purposes[.]”
The biggest mistake local jurisdictions could make would be to collect data first and think about what to do with it later—after consumers’ privacy has been put at risk. That’s
unfortunately what cities are doing now, and A.B. 1112 will put a stop to it.
The time is ripe for thoughtful state regulation reining in local demands for individual trip data. As we’ve told the California legislature, bike- and scooter-sharing services are
proliferating in cities across the United States, and local authorities should have the right to regulate their use. But those efforts should not come at the cost of riders’ privacy.
We urge the California legislature to pass A.B. 1112 and protect the privacy of all Californians who rely on shared mobility devices for their transportation needs. And we urge cities
in California and across the United States to start respecting the privacy of riders. Cities should start working with regulators and the public to strike the right balance between
their need to obtain data for city planning purposes and the need to protect individual privacy—and they should stop working to undermine rider privacy.
EFF and Open Rights Group Defend the Right to Publish Open Source Software to the UK Government
(Mon, 10 Jun 2019)
EFF and Open Rights Group today submitted formal comments to the British Treasury, urging restraint in applying anti-money-laundering regulations to the publication of open-source software.
The UK government sought public feedback on proposals to update its financial regulations pertaining to money laundering and terrorism in alignment with a larger European directive.
The consultation asked for feedback on applying onerous customer due diligence regulations to the cryptocurrency space as well as what approach the government should take in
addressing “privacy coins” like Zcash and Monero. Most worrisome, the government also asked “whether the publication of open-source software should be subject to [customer due diligence] regulations.”
We’ve seen these kinds of attacks on the publication of open-source software before, in fights dating back to the 1990s, when the Clinton administration attempted to require that anyone merely publishing cryptography source code obtain a government-issued license as an arms
dealer. Attempting to force today’s open-source software publishers to follow financial regulations designed to go after those engaged in money laundering is equally obtuse.
In our comments, we describe the breadth of free, libre, and open source software (FLOSS) that benefits the world today across industries and government institutions. We discuss how
these regulatory proposals could have large and unpredictable consequences not only for the emerging technology of the blockchain ecosystem, but also for the FLOSS software ecosystem
at large. As we stated in our comments:
If the UK government was to determine that open source software publication should be regulated under money-laundering regulations, it would be unclear how this would be enforced,
or how the limits of those falling under the regulation would be determined. Software that could, in theory, provide the ability to enable cryptocurrency transactions, could be
modified before release to remove these features. Software that lacked this capability could be quickly adapted to provide it. The core cryptographic algorithms that underlie
various blockchain implementations, smart contract construction and execution, and secure communications are publicly known and relatively trivial to express and implement. They are
published, examined and improved by academics, enthusiasts, and professionals alike…
The level of uncertainty this would provide to FLOSS use and provision within the United Kingdom would be considerable. Such regulations would burden multiple industries to
attempt to guarantee that their software could not be considered part of the infrastructure of a cryptographic money-laundering scheme.
Moreover, source code is a form of written creative expression, and open source code is a form of public discourse. Regulating its publication under anti-money-laundering provisions
fails to honor the free expression rights of software creators in the United Kingdom, and their collaborators and users in the rest of the world.
Source code is a form of written creative expression, and open source code is a form of public discourse.
EFF is monitoring the regulatory and legislative reactions to new blockchain technologies, and we’ve recently spoken out about misguided ideas for banning cryptocurrencies and overbroad regulatory responses to decentralized exchanges.
Increasingly, the regulatory backlash against cryptocurrencies is being tied to overbroad proposals that would censor the publication of open-source software, and restrict
researchers’ ability to investigate, critique and communicate about the opportunities and risks of cryptocurrency.
This issue transcends controversies surrounding blockchain tech and could have significant implications for technological innovation, academic research, and freedom of expression.
We’ll continue to watch the proceedings with HM Treasury, but fear similar anti-FLOSS proposals could emerge—particularly as other member states of the European Union transpose the
same Anti-Money Laundering Directive into their own laws.
Read our full comments.
Thanks to Marta Belcher, who assisted with the comments.
Hearing Tuesday: EFF Will Voice Support For California Bill Reining In Law Enforcement Use of Facial Recognition
(Mon, 10 Jun 2019)
Assembly Bill 1215 Would Bar Police From Adding Facial Scanning to Body-Worn Cameras
Sacramento, California—On Tuesday, June 11, at 8:30 am, EFF Grassroots Advocacy Organizer Nathan Sheard will testify before the California Senate Public Safety Committee in support of
a measure to prohibit law enforcement from using facial recognition in body cams.
Following San Francisco’s historic ban on police use of the technology—which can invade privacy, chill free
speech and disproportionately harm already marginalized communities—California lawmakers are considering AB 1215, proposed legislation that would extend the ban across the state.
Face recognition technology has been shown to have disproportionately high error rates
for women, the elderly, and people of color. Making matters worse, law enforcement agencies often rely on images pulled from mugshot databases. This exacerbates historical biases born
of, and contributing to, over-policing in Black and Latinx neighborhoods. The San Francisco Board of Supervisors and other Bay Area communities have decided that police should be stopped from using the technology on the public.
The utilization of face recognition technology in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law
enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion, Sheard will tell lawmakers.
Hearing before the California Senate Public Safety Committee on AB 1215
EFF Grassroots Advocacy Organizer Nathan Sheard
Tuesday, June 11, 8:30 am
California State Capitol
10th and L Streets
Sacramento, CA 95814
Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today's Monopolies
(Fri, 07 Jun 2019)
Today, Apple is one of the largest, most profitable companies on Earth, but in the early 2000s, the company was fighting for its life. Microsoft's Windows operating system was ascendant, and
Microsoft leveraged its dominance to ensure that every Windows user relied on its Microsoft Office suite (Word, Excel, Powerpoint, etc). Apple users—a small minority of computer users—who wanted to exchange documents with the much larger world of Windows users were
dependent on Microsoft’s Office for Macintosh (which handled Windows Office documents inconsistently,
with unexpected behaviors like corrupting documents so they were no longer readable, or displaying parts of exchanged documents only partially or incorrectly). Alternatively, Apple users
could ask Windows users to export their Office documents to an "interoperable" file format like Rich Text Format (for text), or Comma-Separated Values (for spreadsheets). These, too,
were inconsistent and error-prone, interpreted in different ways by different programs on both Mac and Windows systems.
Apple could have begged Microsoft to improve its Macintosh offerings, or it could have begged the company to standardize its flagship products at a standards body like OASIS or ISO.
But Microsoft had little motive to do such a thing: its Office products were a tremendous competitive advantage, and despite the fact that Apple was too small to be a real threat,
Microsoft had a well-deserved reputation for going to enormous lengths to snuff out potential competitors, including both Macintosh computers and computers running the GNU/Linux operating system.
Apple did not rely on Microsoft's goodwill and generosity: instead, it relied on reverse-engineering. After its 2002
"Switch" ad campaign—which begged potential Apple customers to ignore the "myths" about how hard it was to integrate Macs into Windows workflows—it intensified work on its
iWork productivity suite, which launched in 2005, incorporating a word-processor (Pages), a spreadsheet (Numbers) and a presentation
program (Keynote). These were feature-rich applications in their own right, with many innovations that leapfrogged the incumbent Microsoft tools, but this superiority would still not
have been sufficient to ensure the adoption of iWork, because the world's greatest spreadsheets are of no use if everyone you need to work with can't open them.
What made iWork a success—and helped re-launch Apple—was the fact that Pages could open and save most Word files; Numbers could open and save most Excel files; and Keynote could open
and save most PowerPoint presentations. Apple did not attain this compatibility through Microsoft's cooperation: it attained it despite Microsoft's noncooperation. Apple
didn't just make an "interoperable" product that worked with an existing product in the market: they made an adversarially interoperable product whose compatibility was
wrested from the incumbent, through diligent reverse-engineering and reimplementation. What's more, Apple committed to maintaining that interoperability, even though Microsoft
continued to update its products in ways that temporarily undermined the ability of Apple customers to exchange documents with Microsoft customers, paying engineers to unbreak
everything that Microsoft's maneuvers broke. Apple's persistence paid off: over time, Microsoft's customers became dependent on compatibility with Apple customers, and they would
complain if Microsoft changed its Office products in ways that broke their cross-platform workflow.
Since Pages' launch, document interoperability has stabilized, with multiple parties entering the market, including Google's cloud-based Docs offerings, and the free/open alternatives
from LibreOffice. The convergence on this standard was not undertaken with the blessing of the dominant player: rather, it came about despite Microsoft’s opposition. These products are
not just interoperable, they’re adversarially interoperable: each has its own file format, but each can read Microsoft’s file format.
The document wars are just one of many key junctures in which adversarial interoperability made a dominant player vulnerable to new entrants:
Hayes modems
Usenet's alt.* hierarchy
SuperCard's compatibility with HyperCard
Search engines' web-crawlers
Servers of every kind, which routinely impersonate PCs, printers, and other devices
Scratch the surface of most Big Tech giants and you'll find an adversarial interoperability story: Facebook grew by making a tool that let its users stay in touch with MySpace users;
Google products from search to Docs and beyond depend on adversarial interoperability layers; Amazon's cloud is full of virtual machines pretending to be discrete CPUs, impersonating
real computers so well that the programs running within them have no idea that they're trapped in the Matrix.
Adversarial interoperability converts market dominance from an unassailable asset to a liability. Once Facebook could give new users the ability to stay in touch with MySpace friends,
then every message those Facebook users sent back to MySpace—with a footer advertising Facebook's superiority—became a recruiting tool for more Facebook users. MySpace served Facebook
as a reservoir of conveniently organized potential users that could be easily reached with a compelling pitch about why they should switch.
Today, Facebook is posting 30-54% year-on-year revenue growth and boasts 2.3 billion users, many of whom are deeply unhappy with the service, but who are stuck within its confines because their friends are there (and their friends are there because they are).
A company making billions and growing by double digits with 2.3 billion unhappy customers should be every investor's white whale, but instead, Facebook and its associated businesses
are known as "the kill zone" in investment circles.
Facebook's advantage is in "network effects": the idea that Facebook increases in value with every user who joins it (because more users increase the likelihood that the person you're
looking for is on Facebook). But adversarial interoperability could allow new market entrants to arrogate those network effects to themselves, by allowing their users to remain in
contact with Facebook friends even after they've left Facebook.
This kind of adversarial interoperability goes beyond the sort of thing envisioned by "data portability," which usually refers to tools that allow users to make a one-off export of
all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you're
in the process of migrating away from.
Big Tech platforms leverage both their users' behavioral data and the ability to lock their users into "walled gardens" to drive incredible growth and profits. The customers for these
systems are treated as though they have entered into a negotiated contract with the companies, trading privacy for service, or vendor lock-in for some kind of subsidy or convenience.
And when Big Tech lobbies against privacy regulations and
anti-walled-garden measures like Right to Repair legislation, they
say that their customers negotiated a deal in which they surrendered their personal information to be plundered and sold, or their freedom to buy service and parts on the open market.
But it's obvious that no such negotiation has taken place. Your browser invisibly and silently hemorrhages your personal information as you move about the web; you paid for your phone
or printer and should have the right to decide whose ink or apps go into them.
Adversarial interoperability is the consumer's bargaining chip in these coercive "negotiations." More than a quarter of Internet users have installed ad-blockers, making it the biggest consumer revolt in human history. These users are making
counteroffers: the platforms say, "We want all of your data in exchange for this service," and their users say, "How about none?" Now we have a negotiation!
Or think of the iPhone owners who patronize independent service centers instead of using
Apple's service: Apple's opening bid is "You only ever get your stuff fixed from us, at a price we set," and the owners of Apple devices say, "Hard pass." Now it's up to Apple to
make a counteroffer. We'll know it's a fair one if iPhone owners decide to patronize Apple's service centers.
This is what a competitive market looks like. In the absence of competitive offerings from rival firms, consumers make counteroffers by other means.
There is good reason to want to see a reinvigorated approach to competition in America, but it's important to remember that
competition is enabled or constrained not just by mergers and acquisitions. Companies can use a whole package of laws to attain and maintain dominance, to the detriment of the public
Today, consumers and toolsmiths confront a thicket of laws and rules that stand between them and technological self-determination. To change that, we need to reform the Computer Fraud and Abuse Act, Section 1201 of the Digital Millennium Copyright Act, patent law, and other rules and laws. Adversarial interoperability is in the history of every tech giant that rules today, and if it was good
enough for them in the past, it's good enough for the companies that will topple them in the future.
Same Problem, Different Day: Government Accountability Office Updates Its Review of FBI’s Use of Face Recognition—and It’s Still Terrible
(Fri, 07 Jun 2019)
This week the federal Government Accountability Office (GAO) issued an update to its 2016 report on the
FBI’s use of face recognition. The takeaway, which GAO also shared during a House Oversight Committee hearing: the FBI now has access to 641 million photos—including
driver’s license and ID photos—but it still refuses to assess the accuracy of its systems.
According to the latest GAO Report, FBI’s Facial Analysis, Comparison, and Evaluation
Services unit not only has access to FBI’s Next Generation Identification
(NGI) face recognition database of nearly 30 million civil and criminal mug shot photos, it also has access to the State Department’s Visa and Passport databases, the
Defense Department’s biometric database, and the driver’s license databases of at least 21 states. Totaling 641 million images—an increase of 230 million images since GAO’s 2016
report—this is an unprecedented number of photographs, most of which are of Americans and foreigners who have committed no crimes.
The FBI Still Hasn’t Properly Tested the Accuracy of Its Internal or External Searches
Although GAO criticized FBI in 2016 for failing to conduct accuracy assessments of either its internal NGI database or the searches it conducts on its state and federal
partners’ databases, the FBI has done little in the last three years to make sure that its search results are accurate, according to the new report. As of 2016, the FBI had conducted
only very limited testing to assess the accuracy of NGI's face recognition capabilities. These tests only assessed the ability of the system to detect
a match—not whether that detection was accurate, and as GAO notes, “reporting a detection rate of 86 percent without reporting the accompanying
false positive rate presents an incomplete view of the system’s accuracy.”
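A back-of-the-envelope calculation shows why a detection rate alone is misleading. The false positive rate below is purely hypothetical (the FBI reports none), but even a tiny one swamps the single true match in a gallery the size of NGI's roughly 30 million photos:

```python
# Illustrative numbers only: the detection rate is the FBI's reported 86 percent;
# the false positive rate is an assumption for the sake of the arithmetic.
gallery_size = 30_000_000
detection_rate = 0.86          # chance the one true match is returned
false_positive_rate = 0.0001   # assumed: 0.01% of non-matches wrongly returned

expected_true_hits = detection_rate * 1
expected_false_hits = false_positive_rate * (gallery_size - 1)

print(expected_true_hits)           # 0.86
print(round(expected_false_hits))   # ~3000 innocent candidates per search
```

Under these assumed numbers, each search would surface thousands of innocent people for every genuine match, which is exactly why GAO calls a detection rate by itself "an incomplete view."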
As we know from previous research, face recognition is notoriously inaccurate across the board and may also misidentify African Americans and ethnic minorities, young
people, and women at higher rates than whites, older people, and men, respectively. By failing to assess the accuracy of its internal systems, GAO writes—and we agree—that the FBI is
also failing to ensure it is “sufficiently protecting the privacy and civil liberties of U.S. citizens enrolled in the database.” This is especially concerning given that, according
to the FBI, they’ve run a massive 152,500
searches between fiscal year 2017 and April 2019—since the original report came out.
The FBI also has not taken any steps to determine whether the face recognition systems of its external partners—states and other federal agencies—are sufficiently accurate to
prevent innocent people from being identified as criminal suspects. These databases, which are accessible to the FACE services unit, are mostly made up of images taken for
identification, certification, or other non-criminal purposes. Extending their use to FBI investigations exacerbates concerns of accuracy, not least of which because, as GAO notes,
the “FBI’s accuracy requirements for criminal investigative purposes may be different than a state’s accuracy requirements for preventing driver’s license fraud.” The FBI claims
that it has no authority to set or enforce accuracy standards outside the agency. GAO disagrees: because the FBI is using these outside databases as a component of its routine
operations, it is responsible for ensuring the systems are accurate, and given the lack of testing, it is unclear “whether photos of innocent people are unnecessarily included as investigative leads.”
Many of the 641 million face images to which the FBI has access come from 21 states’ driver’s license databases. Ten more states are in negotiations to provide similar access.
As the report points out, most of the 641 million face images to which the FBI has access—like driver’s license and passport and visa photos—were never collected for criminal or
national security purposes. And yet, under agreements and “Memorandums of Understanding” we’ve never seen between the FBI and its state and federal partners, the FBI may search these
civil photos whenever it’s trying to find a suspect in a crime. As the map above shows, 10 more states are in negotiations with the FBI to provide similar access to their driver’s license databases.
Images from the states’ databases aren’t only available through external searches. The states have also been very involved in the development of the FBI’s own NGI database,
which includes nearly 30 million of the 641 million face images accessible to the Bureau (we’ve written extensively about NGI in the past). As of 2016, NGI included more than
20 million civil and criminal images received directly from at least six states, including California, Louisiana, Michigan, New York, Texas, and Virginia. And it’s not a one-way
street: it appears that five additional states—Florida, Maryland, Maine, New Mexico, and Arkansas—could send their own search requests directly to the NGI database. As of December
2015, the FBI was working with eight more states to grant them access to NGI, and an additional 24 states were also interested.
New Report, Same Criticisms
The original GAO report heavily criticized the FBI for rolling out these massive face recognition capabilities without ever explaining the privacy implications of its actions to
the public, and the current report reiterates those criticisms. Federal law and Department of Justice policies require the FBI to complete a Privacy Impact Assessment (PIA) of all
programs that collect data on Americans, both at the beginning of development and any time there’s a significant change to the program. While the FBI produced a PIA in 2008, when it
first started planning out the face recognition component of NGI, it didn’t update that PIA until late 2015—seven years later and well after it began making the changes. It also
failed to produce a PIA for the FACE Services unit until May 2015—three years after FACE began supporting FBI with face recognition searches.
Federal law and regulations also require agencies to publish a “System of Records Notice” (SORN) in the Federal Register, which announces any new federal system designed to
collect and use information on Americans. SORNs are important to inform the public of the existence of systems of records; the kinds of information maintained; the kinds of
individuals on whom information is maintained; the purposes for which they are used; and how individuals can exercise their rights under the Privacy Act. Although agencies are
required to do this before they start operating their systems, FBI failed to issue one until May 2016—five years after it started collecting personal
information on Americans. As GAO noted, the whole point of PIAs and SORNs is to give the public notice of the privacy implications of data collection programs and to ensure that
privacy protections are built into systems from the start. The FBI failed at this.
This latest GAO report couldn’t come at a more important time. There is a growing mountain of evidence that face recognition used by law enforcement is dangerously inaccurate,
from our white paper, “Face Off,” to two Georgetown studies released just last
month which show that law enforcement agencies in some cities are implementing real-time face recognition
systems and others are using the systems on flawed data.
Two years ago, EFF testified
before the House Oversight Committee on the subject, pointing out the FBI's efforts to build up and link together these massive facial recognition databases that
may be used to track innocent people as they go about their daily lives. The committee held two more hearings in the last month on
the subject, which saw bipartisan agreement over the need to rein in law enforcement’s use of this technology, and during which GAO pointed out many of the issues raised by this
report. At least one more hearing is planned. As the committee continues
to assess law enforcement use of face
recognition databases, and as more and more cities are working to
incorporate flawed and
untested face recognition technology into their police and government-maintained cameras, we need all the information we can get on how law enforcement like the FBI
are currently using face recognition and how they plan to use it in the future. Armed with that knowledge, we can push cities, states, and possibly even the federal government to pass moratoria or
bans on the use of face recognition.
30 Years Since Tiananmen Square: The State of Chinese Censorship and Digital Surveillance
(Wed, 05 Jun 2019)
Thirty years ago today, the Chinese Communist Party used military force to suppress a peaceful pro-democracy demonstration by thousands of university students. Hundreds (some
estimates go as high as thousands) of innocent
protesters were killed. Every year, people around the world come together to mourn and
commemorate the fallen; within China, however, things are oddly silent.
The Tiananmen Square protest is one of the most tightly censored
topics in China. The Chinese government’s network and social media censorship is more than just pervasive; it’s sloppy, overbroad, inaccurate, and always errs on the
side of more takedowns. Every year, the Chinese government ramps up VPN shutdowns, activist arrests, digital surveillance,
and social media censorship
in anticipation of the anniversary of the Tiananmen Square protests. This year is no different: to mark the thirtieth anniversary, the controls have never been tighter.
Keyword filtering on social media and messaging platforms
It’s a fact of life for many Chinese users that social media and messaging platforms perform silent content takedowns via regular keyword filtering and, more recently, image matching.
In June 2013, Citizen Lab documented a list of words censored from social media related
to the anniversary of the protests, which included words like “today” and “tomorrow.”
Since then, researchers at the University of Hong Kong have developed real-time censorship monitoring and transparency projects—“WeiboScope” and “WechatScope”—to document the
scope and history of censorship on Weibo and Wechat. A couple of months ago, Dr. Fu King-wa, who works on these transparency projects, released an archive of over 1200 censored Weibo image posts relating to the Tiananmen anniversary since
2012. Net Alert has released a similar archive of historically censored images.
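Mechanically, this kind of blunt filtering amounts to little more than substring matching against a secret blocklist. A minimal sketch (the blocklist entries here are illustrative, not the actual leaked lists) shows why a term like "tomorrow" sweeps up entirely innocuous posts:

```python
# Illustrative blocklist: real platforms maintain much larger, secret lists
# that Citizen Lab and others have partially reconstructed.
BLOCKLIST = {"tiananmen", "june 4", "today", "tomorrow"}

def is_censored(post: str) -> bool:
    """Silently drop any post containing a blocklisted substring."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

print(is_censored("See you tomorrow at the square"))  # True  (overbroad match)
print(is_censored("Nice weather this weekend"))       # False
```

The overbreadth is not a bug from the censor's perspective: erring toward more takedowns is the documented pattern.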
Simultaneous service disruptions for “system maintenance” across social media platforms
This year, there has been a sweep of simultaneous social media shutdowns a week prior to the anniversary, calling back to similar “Internet maintenance” shutdowns that happened during the twentieth
anniversary of the Tiananmen Square protests. Five popular video and livestreaming platforms are suspending all comments until June
6th, citing the need for “system upgrades and maintenance.” Douban, a Chinese social networking service, is locking some of its larger news
groups from any discussion until June 29th, also for “system maintenance.” And popular messaging service WeChat recently blocked users from changing their status messages, profile pictures, and nicknames for
the same reason.
Apple censors music and applications alike
Since 2017, Apple has removed VPNs from its mainland Chinese app store.
These application bans have continued and worsened over time. A censorship transparency project by GreatFire, AppleCensorship.com, allows users to look up which applications are available in the US but not in China. Apart from VPNs, the
Chinese Apple app store has also censored applications from news
organizations, including the New York Times, Radio Free Asia, Tibetan News, Voice of Tibet, and other Chinese-language human rights publications. They have also taken down other
censorship circumvention tools like Tor and Psiphon.
Leading up to this year’s 30-year Tiananmen anniversary, Apple Music has been removing songs from its Chinese streaming
service. A 1990 song by Hong Kong’s Jacky Cheung that references Tiananmen Square was removed, as were songs by pro-democracy activists from Hong Kong’s Umbrella Movement
Activist accounts caught in Twitter sweep
On May 31st, a slew of China-related Twitter accounts were suspended, including
prominent activists, human rights lawyers, journalists, and other dissidents. Activists feared this action was in preparation for further June 4th related censorship. Since then, some
of the more prominent accounts have been restored, but many remain suspended. An announcement from
Twitter claimed that these accounts weren’t reported by Chinese authorities, but were just caught up in a large anti-spam sweep.
The lack of transparency, poor timing, and huge number of false positives on Twitter’s part has led to real fear and uncertainty in Chinese-language activism circles.
Beyond Tiananmen Square: Chinese Censorship and Surveillance in 2019
Xinjiang, China’s ground zero for pervasive surveillance and social control
Thanks to work by Human Rights Watch, security researchers, and many brave investigators and journalists, a lot has come to light about China’s terrifying acceleration of social
and digital controls in Xinjiang in the past two years. And the chilling effect is real—as we approach the end of Ramadan, whose observance is discouraged generally and banned outright for Party members
and public school students—mosques remain
empty. Uighur students and other expatriates abroad fear returning home, as many of their families have already been detained without cause.
China’s extensive reliance on surveillance technology in Xinjiang is a human rights nightmare, and according to the
New York Times, “the first known example of a government intentionally using artificial intelligence for racial profiling.” Researchers have noticed that more and more computer vision papers coming out of China describe systems specifically trained to perform facial recognition on Uighur faces.
China has long been a master of security theater,
overstating and over-performing its own surveillance capabilities in order to spread a “chilling effect” over digital and social behavior. Something similar is happening here, albeit
at a much larger scale than we’ve ever seen before. Despite the government’s claims of fully automated and efficient systems, even the best automated facial
recognition systems they use are only accurate in less than 20 percent of
cases, leading to mistakes and the
need for hundreds of workers to monitor cameras and confirm the results. These smoke-and-mirrors “pseudo-AI” systems are all too common in the AI startup industry. For a lot of “automated”
technologies, we just aren’t quite there yet.
Resource or technical limitations aren’t going to stop the Chinese government. Security spending since 2017 shows that Chinese officials are serious
about building a panopticon, no matter the cost. The development of the surveillance apparatus in Xinjiang shows us just how expensive building pervasive surveillance can be; local
governments in Xinjiang have accrued hundreds of millions (in
USD) of “invisible debt” as they continue to ramp up investment in their surveillance state. A large portion of that cost is labor. “We risk understating the extent to which this
high-tech police state continues to require a lot of manpower,” says Adrian
Zenz in the New York Times.
Client-side blocking of labor movements on Github
996 is a recent labor movement in China by white-collar tech workers who demand regular 40-hour work weeks and an explicit ban on
the draconian but standard “996” schedule: 9 am to 9 pm, six days a week. The movement, like other labor-organizing movements, has been subject to keyword censorship on
social media platforms, but individuals have been able to continue organizing on Github.
Github itself has remained relatively immune to Chinese censorship efforts. Thanks to the widespread
deployment of HTTPS, Chinese network operators must block either the entire website or
nothing at all. Github was briefly
blocked in 2013, but the backlash from developers was so
great that the site was unblocked shortly thereafter. China’s tech sector, like the rest of the world’s, relies on open-source projects hosted on the website. But although Github is no
longer censored at the network level, Chinese-built browsers and WeChat’s web viewer have started blacklisting specific
URLs from being accessed, including the 996 Github repository.
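The technical distinction matters: with HTTPS, a network censor can only see which host a user is connecting to (from DNS or the TLS SNI field), never which page, while a censoring browser or in-app web viewer sees the full URL before it is encrypted. A toy sketch of the difference follows; the helper functions and blocklists are invented for illustration (only the 996icu/996.ICU repository name is real):

```python
from urllib.parse import urlsplit

def network_censor_blocks(url: str, blocked_hosts: set[str]) -> bool:
    """A network-level censor of HTTPS traffic sees only the hostname
    (from DNS or the TLS SNI field), never the page path."""
    return urlsplit(url).hostname in blocked_hosts

def client_censor_blocks(url: str, blocked_urls: set[str]) -> bool:
    """A censoring browser or in-app web viewer sees the full URL before
    the request is encrypted, so it can block individual pages."""
    parts = urlsplit(url)
    return f"{parts.hostname}{parts.path}" in blocked_urls

repo = "https://github.com/996icu/996.ICU"   # the 996 labor-movement repository
other = "https://github.com/torvalds/linux"  # any other project on the same host

# To block just the repo, a network censor must block the whole host,
# taking down every project hosted there:
network_censor_blocks(repo, {"github.com"})    # True
network_censor_blocks(other, {"github.com"})   # True -- collateral damage

# A client-side blacklist can be surgical:
client_censor_blocks(repo, {"github.com/996icu/996.ICU"})   # True
client_censor_blocks(other, {"github.com/996icu/996.ICU"})  # False
```

This is why blocking Github at the network level forced an all-or-nothing choice in 2013, and why the newer censorship moved into the browsers themselves.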
Google’s sleeping Dragonfly
Late last year, we stood in solidarity with over 70 human rights
groups led by Human Rights Watch and Amnesty International, calling on Google to end their secret internal project to architect a
censored Chinese search engine codenamed Dragonfly. Google employees wrote their own letter demanding transparency at the very least, and some resigned in protest.
In March, some Google employees found that changes were still being
committed to the Dragonfly codebase. Google has yet to publicly commit to ending the project, leading many to believe the project could simply be on the back burner for now.
How are people fighting back?
Relatively little news gets out of Xinjiang to the rest of the world, and China wants to keep it that way: journalists are denied visas,
their relatives are detained, and reporters on the ground
are arrested. Any work by groups that help
shed light on the situation is extremely valuable. Earlier this year, we wrote about the amazing work by Human Rights
Watch, Amnesty International, other human rights groups,
and other independent researchers and journalists in helping uncover the inner workings of China’s surveillance state.
Censorship transparency projects like WechatScope, WeiboScope, Tor’s OONI, and GreatFire’s
AppleCensorship, as well as ongoing censorship research by academic centers like the Citizen Lab and organizations like GreatFire, continue to shed
light on the methods and intentions of broader Chinese censorship efforts.
And of course, we have to look at the individuals and activists within and outside China who continue to fight to have their voices heard. Despite continued crackdowns on VPNs, VPN
usage among Chinese web users continues to grow: in the first quarter of 2019, 35% of web users used
VPNs, not just to access better music and TV shows, but also, commonly, to access blocked social networks and news sites.
Human rights groups, security researchers, investigators, journalists, and activists on the ground continue to make tremendous sacrifices in fighting for a more free Internet.
EFF Tells Congress: Don’t Throw Out Good Patent Laws
(Tue, 04 Jun 2019)
At a Senate hearing today, EFF Staff Attorney Alex Moss gave formal testimony [PDF]
about how to make sure our patent laws promote innovation, not abusive litigation.
Moss described how Section 101 of the U.S. patent laws serves a crucial role in protecting the public. She urged the Senate IP Subcommittee, which is considering radical changes to Section 101, to preserve the law to protect users,
developers, and small businesses.
Since the Supreme Court’s decision in Alice v. CLS Bank, courts
have been empowered to quickly dismiss lawsuits based on abstract patents. That has allowed many small businesses to fight back against meritless patent demands, which are often
brought by "patent assertion entities," also known as patent trolls.
At EFF, we often hear from businesses or individuals that are being harassed or threatened by ridiculous patents. Moss told the Senate IP Subcommittee the story of Ruth Taylor, who was sued for infringement over a patent that claimed the idea of holding a
contest with an audience voting for the winner but simply added generic computer language. The patent owner wanted Ruth to pay $50,000. Because of today’s Section 101, EFF was able to
help Ruth pro bono, and ask the court to dismiss the case under Alice. The patent owner dropped the lawsuit days before the hearing.
We hope the Senate takes our testimony to heart and reconsiders the proposal by Senators Thom Tillis and Chris Coons, which would dismantle Section 101 as we know it. This would lead
to a free-for-all for patent trolls, and huge costs and headaches for those who actually work in technology.
We need your help. Contact your representatives in Congress today, and tell them to reject the Tillis-Coons patent proposal.
TELL CONGRESS WE DON'T NEED MORE BAD PATENTS
Hearing Today: EFF Staff Attorney Alex Moss Will Testify About Proposed Changes to Patent Law That Will Benefit Trolls, Foster Bad Patents
(Tue, 04 Jun 2019)
Tillis-Coons Section 101 “Framework” Will Make Patent System Worse for Small Businesses, Consumers
Washington D.C.—EFF Staff Attorney Alex Moss will tell U.S. lawmakers today that proposed changes to Section 101 of the
U.S. Patent Act—the section that defines, and limits, what can get a patent—will upend years of case law that ensures only true inventions, not basic practices or rudimentary ideas,
should get a patent. Moss is among a panel of patent experts testifying today before the Senate Subcommittee on Intellectual Property about the state of patent eligibility in America.
The Supreme Court ruled in Alice v. CLS Bank that an abstract idea does not become eligible for a patent simply by being implemented on a generic computer. For example, a
patent on the basic practice of letting people access content in exchange for watching an online ad was upheld in court before Alice. EFF’s “Saved by Alice” project has collected stories about small businesses that were helped, or even saved, by the Supreme Court’s Alice decision.
A proposal by Senators Thom Tillis and Chris Coons, chairman and ranking member of the subcommittee, would rewrite Section 101 of the Patent Act. The proposal is aimed squarely at
killing the Alice decision. It will primarily benefit companies that aggressively license and litigate patents, as well as patent trolls—entities that produce no products, but make money by threatening developers and companies, often with
vague software patents.
Section 101, as it stands, prevents monopolies on basic research tools that nobody could have invented. That protects developers, start-ups, and makers of all kinds, especially in
software-based fields, Moss will tell senators.
Hearing before Senate Subcommittee on Intellectual Property: The State of Patent Eligibility in America, Part I
EFF Staff Attorney Alex Moss
Today at 2:30 pm
Dirksen Senate Office Building 226
50 Constitution Ave NE
Washington D.C. 20002
For more on Alice v. CLS Bank:
Contact: Alex Moss, Mark Cuban Chair to Eliminate Stupid Patents and Staff Attorney
Caught in the Net: The Impact of ‘Extremist’ Speech Regulations on Human Rights Content
(Mon, 03 Jun 2019)
New Report from EFF, Syrian Archive, and WITNESS Examines Content Moderation and the Christchurch Call to Action
San Francisco – Social media companies have long struggled with what to do about extremist content that advocates for or celebrates terrorism and violence. But the dominant current
approach, which features overbroad and vague policies and practices for removing content, is already decimating human rights content online, according to a new report from Electronic Frontier Foundation (EFF), Syrian Archive, and WITNESS. The
report confirms that the reality of faulty content moderation must be addressed in ongoing efforts to address extremist content.
The pressure on platforms like Facebook, Twitter, and YouTube to moderate extremist content only increased after the mosque shootings in Christchurch, New Zealand earlier this year.
In the wake of the Christchurch Call to Action Summit held last month, EFF teamed up with Syrian Archive and WITNESS to show how faulty moderation inadvertently captures and censors
vital content, including activism, counter-speech, satire, and even evidence of war crimes.
“It’s hard to tell criticism of extremism from extremism itself when you are moderating thousands of pieces of content a day,” said EFF Director for International Freedom of
Expression Jillian York. “Automated tools often make everything worse, since context is critical when making these decisions. Marginalized people speaking out on tricky political and
human rights issues are too often the ones who are silenced.”
The examples cited in the report include a Facebook group advocating for the independence of the Chechen Republic of Ichkeria that was mistakenly removed in its entirety for “terrorist
activity or organized criminal activity.” Groups advocating for an independent Kurdistan are also often a target of overbroad content moderation, even though only one such group is
considered a terrorist organization by governments. In another example of political content being wrongly censored, Facebook removed an image of a leader of Hezbollah with a rainbow
Pride flag overlaid on it. The image was intended as satire, yet the mere fact that it included a face of a leader of Hezbollah led to its removal.
Social media is often used as a vital lifeline to publicize on-the-ground political conflict and social unrest. In Syria, human rights defenders use this tactic as many as 50 times
a day, and there are now more hours of social media content about the Syrian conflict than there have been hours in the conflict itself. Yet, YouTube used machine-learning-powered
automated flagging to terminate thousands of Syrian YouTube channels that published videos of human rights violations, endangering the ability of those defenders to create a public
record of those violations.
“In the frenzied rush to delete so-called extremist content, YouTube is erasing the history of the conflict in Syria almost as quickly as human rights defenders can hit ‘post,’” said
Dia Kayyali, Program Manager for Tech and Accountability at WITNESS. “While ‘just taking it down’ might seem like a simple way to deal with extremist content online, we know current
practices not only hurt freedom of expression and the right to access information, they are also harmful to real efforts to fight extremism.”
For the full report:
Contact: Jillian York, Director for International Freedom of Expression
Research Shows Publishers Benefit Little From Tracking Ads
(Mon, 03 Jun 2019)
Advertising industry lobbyists have long argued that tracking users is necessary to power a publishing industry that makes its content available to users for “free”—despite a heavy
privacy cost. Right now, a majority of publishers make money by working with advertisers that collect personal information about users as they move from site to site. Ad companies
then combine that information with additional data bought from other sources, such as data brokers, to create detailed profiles they claim are necessary to tailor effective ads to an individual.
But new research, based on publisher data, has found that using
this invasive tracking technique brings publishers just 4% more in revenue—or just $0.00008 per ad—than ads based on context (for example, ads for sporting goods placed next to sports coverage).
This new report reinforces previous doubts about how much the tracking ecosystem actually benefits publishers. In 2016, Guardian staff bought ads in their own newspaper and found that
as little as 30% of the amount spent reached the paper. Together,
these studies show that while privacy-invasive behavioral advertising may enrich the adtech industry, it’s little help to publishing businesses scrambling to survive the digital
transition. Publishers should rethink their involvement in practices that yield minor gains but expose them to compliance and reputational risks that may
undermine their business.
Advertisers, Not Publishers, Make Money from Ads that Track
Researchers Alessandro Acquisti, Veronica Marotta, and Vibhanshu Abhishek got access to a dataset from a large, unidentified media company that operates multiple websites with
different scales and audiences.
Most online ads today are sold in “real time bidding” (RTB) auctions that take place in the split second between when you click on a link and when the content and an ad appear on
your screen. RTB is currently the subject of complaints and regulatory scrutiny in Europe. Over
90% of the transactions in this new study’s dataset relate to ads sold in these types of auctions, in which ads are sold as “impressions” associated with data to inform and attract
potential bidders. The dataset included the viewer’s IP address, the URL of the page on which the ad was displayed, ad format, price received, and whether cookie information was
available. As cookies remain the most popular, but not the only, method of tracking users, this cookie data allowed the researchers to separate ads sold based on context from those
that included information about user behavior.
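The auction mechanics described above can be sketched in a few lines. This is a toy model, not any exchange’s actual implementation: the advertiser names and CPM figures are invented, and it assumes the second-price format that RTB exchanges have traditionally used.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float  # offered price per thousand impressions, in USD

def run_auction(bids: list[Bid]) -> tuple[str, float]:
    """Second-price auction: the highest bidder wins the impression but
    pays the runner-up's price (the model RTB exchanges traditionally
    used)."""
    ranked = sorted(bids, key=lambda b: b.cpm, reverse=True)
    return ranked[0].advertiser, ranked[1].cpm

# The moment you click a link, the impression goes up for sale; bidders
# see data such as the page URL and, if a tracking cookie is available,
# a behavioral profile.  Illustrative bids:
bids = [Bid("sports_brand", 2.50), Bid("car_dealer", 1.80), Bid("bank", 1.20)]
winner, price = run_auction(bids)  # sports_brand wins and pays 1.80 CPM
```

The presence or absence of cookie data in the bid request is what lets researchers separate behaviorally targeted sales from contextual ones.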
Earlier experiments by the same researchers found that advertisers pay up to 500% more
for targeted ads than for contextual ads. This price increase is heralded by adtech as evidence of its worth. But if the benefit to publishers is just 4%, what happens to the remaining
surplus? It is siphoned off by the archipelago of intermediary adtech companies that, alongside Facebook and Google, operate the ad targeting infrastructure.
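A back-of-the-envelope calculation makes the gap concrete. The numbers below simply combine the two findings above, reading “up to 500% more” loosely as roughly five times the contextual price; both scales are normalized to 1.00 purely for illustration.

```python
# Normalize both the advertiser's spend and the publisher's revenue on a
# contextual ad to 1.00, purely for illustration.
contextual_spend = 1.00
targeted_spend = contextual_spend * 5.00      # advertisers pay roughly 5x for targeting
contextual_revenue = 1.00
targeted_revenue = contextual_revenue * 1.04  # publishers see only ~4% more

extra_paid = targeted_spend - contextual_spend          # 4.00 more spent per targeted ad
extra_received = targeted_revenue - contextual_revenue  # ~0.04 more reaches the publisher
# The gap between those two numbers is what the intermediary adtech
# layer is positioned to capture.
```

The point is not the exact figures, which vary by study, but the shape: almost none of the targeting premium reaches the publisher.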
Publishers Should Put Their Readers First
Publishers should consider whether any small benefits to them from intrusive tracking of their readers’ behavior are offset by direct costs, such as the cost of compliance with laws
on data protection. Publishers also should consider indirect costs. The invasion of their readers’ privacy undermines reader trust, which should especially be a concern to
publications who are trying to raise voluntary contributions from their readers. Ironically, cross-site tracking can also demonetize their audience, by enabling advertisers to
follow readers from a high-value web site and target them on low-end sites where ads
are cheap — a process sometimes referred to as "data leakage" or "audience arbitrage."
Given this new paper — and how it challenges the argument that tracking is necessary to support the publishing business — publishers need to reset their association with adtech, and
put their relationship with their readers first.
The Impact of "Extremist" Speech Regulations on Human Rights Content
(Mon, 03 Jun 2019)
Today, EFF is publishing a new white paper, "Caught in the Net: The Impact of
'Extremist' Speech Regulations on Human Rights Content." The paper is a joint effort of EFF, Syrian Archive, and Witness and was written in response to the Christchurch Call
to Action. This paper analyzes the impact of platform rules and content moderation practices related to "extremist" speech on human rights defenders.
The key audiences for this paper are governments and companies, particularly those that endorsed the Christchurch Call. As we wrote last month, the Call contains several important ideas, but also advocates for measures
that would undoubtedly have a negative impact on freedom of expression online. The paper details several concrete instances in which content moderation has resulted in the removal
of content under anti-extremism provisions, including speech advocating for Chechen and Kurdish self-determination; satirical speech about a key Hezbollah figure; and
documentation of the ongoing conflicts in Syria, Yemen, and Ukraine. We also hope that our allies will find these examples useful for their ongoing advocacy.
As more governments move to impose regulatory measures that would require the removal of extremist speech or privatize enforcement of existing laws, it's imperative that
they consider the potential for collateral damage that such censorship imposes, and consider more holistic measures to combat extremism.
AT&T Sues California to Prevent Oversight Over IP-Based 911 Calls, Using a State Law AT&T Supported and Wants Renewed
(Fri, 31 May 2019)
The California legislature in 2011 passed a law to remove state and local authority over the broadband access market to “ensure a vibrant and competitive open Internet that allows
California's technology businesses to continue to flourish and contribute to economic development throughout the state.” Sounds good, right?
But that never happened. Instead, the broadband access market in California is heading into a high-speed monopoly that, for many, is more expensive and slower than many other markets.
In fact, all the law does is protect broadband
monopolies, and the major ISPs are working hard to get it renewed through Assembly Member Lorena Gonzalez’s A.B. 1366.
TAKE ACTION: Tell your legislators not to extend broadband monopolies and to vote NO on A.B. 1366.
Renewing the law rather than letting it expire carries significant consequences for California residents. This is because prohibiting the state’s authority over broadband access
impacts everything that relies on broadband access. For example, as the Assembly signed off on renewing the law that has provided no tangible benefits to residents, AT&T was
actively using it to block state oversight over the broadband-based 911 calling system known as Next Generation 911.
If AT&T wins, a critical emergency service would be built by a private corporation with no government authority to regulate it.
In a lawsuit AT&T filed against the Office of Emergency Services, it claims a broad immunity from
state regulation by interpreting California law to mean that state and local governments have no power to oversee broadband services. In particular, the Office of Emergency Services
requires all bidders with the state to build Next Generation 911 to submit to oversight by the California Public Utilities Commission (CPUC). AT&T’s claim is that since neither the CPUC
nor any other state or local agency has regulatory authority over broadband, it can’t be forced to show things like how much it intends to charge the state, or be subject to state
audits to ensure it didn’t overcharge taxpayers for building an emergency system. Should AT&T’s argument carry the day, it would mean that a critical emergency
service—literally a matter of life and death—would be built by a private corporation with no government authority to regulate it.
AT&T Wants Taxpayer Money to Build a New 911 System with No State Oversight
The move toward Next Generation 911 has been part of a decade-long effort to transition 911 calls to a system where all broadband-connected communication devices can make
emergency calls. It will, for example, help emergency responders deal with call overload and have more accurate information about where callers are. This improves public safety by
giving first responders better knowledge and information about emergencies, a goal laid out in bipartisan federal law introduced by Congresswoman Anna Eshoo (D-CA) and Congressman John Shimkus (R-IL).
AT&T argues that the state law says no state agency can regulate their broadband products, and therefore the Office of Emergency Services cannot force companies seeking taxpayer
money to submit themselves to state oversight. That means if a broadband-enabled 911 call doesn’t work, the state and local government effectively can’t penalize, audit, investigate,
issue rules or do anything to remedy the problem, simply because it’s a broadband version of the product. That is plainly unreasonable, but also clearly the point of the law they
pushed years ago.
The irony in the litigation is that AT&T is relying on a law that is set to expire in seven months, yet it appears confident that the legislature will hand it a renewal.
Given that these companies had few qualms about throttling firefighters in the middle of a state
emergency—a situation that the state legislature is working to ban this year with Assembly Member Levine’s AB 1699—we hope that California’s legislators avoid subjecting
Next Generation 911 emergency calling to such a risky litigation space. Otherwise, companies that dislike the rules they’re asked to follow can sue to strike them down. Unlike other
services, where some failure may be acceptable, emergency systems have to be functional 100% of the time, and are fundamentally a government concern for public safety reasons.
It is possible that AT&T’s lawsuit will fail, and the company will not duck state regulation for broadband-based 911 services. In fact, the Attorney General of California’s
response is persuasive as to why AT&T’s lawsuit is without merit, and the declarations filed by the state indicate that AT&T’s effort to avoid government
oversight is truly suspect. But we will have to wait and see what the courts decide. The cleanest solution would be to not renew the law, which would terminate AT&T’s ability to
file this kind of lawsuit and put an end to any future litigation of this nature.
Hopefully, this is a wake-up call to the legislature to illustrate how the laws they pass (or are about to renew) can be used by the industry that backs them. If companies are willing
to claim oversight immunity from 911 calls, then there is nothing they wouldn’t claim immunity from—and EFF believes they would regularly block efforts to promote
competition. To stop this, Californians must contact their state Senators and ask them to vote NO on A.B. 1366.
A Terrible Patent Bill is On the Way
(Wed, 29 May 2019)
Recently, we reported on the problems with a proposal from Senators Coons and
Tillis to rewrite Section 101 of the Patent Act. Now, those senators have released a draft
bill of changes they want to make. It’s not any better.
Section 101 prevents monopolies on basic research tools that nobody could have invented. That protects developers, start-ups, and makers of all kinds, especially in software-based
fields. The proposal by Tillis and Coons will seriously weaken Section 101, leaving makers vulnerable to patent trolls and other abusers of the patent system.
The draft legislation does remove a few aspects of the earlier proposal, but it has the exact same effect: it will erase more than a century of Section 101 case law—including the
recent decision in Alice v. CLS Bank—and take away courts’ power to restore it.
The new draft bill relabels the existing law (subsection (a) below) and tacks a new subsection (b) after it:
Section 101: (a) Whoever invents or discovers any useful process, machine, manufacture, or composition of matter, or any useful improvement thereof, may obtain a patent
therefor, subject to the conditions and requirements of this title. (b) Eligibility under this section shall be determined only while considering the claimed invention as
a whole, without discounting or disregarding any claim limitation.
Requiring eligibility to be determined based on “the claimed invention as a whole” may sound redundant. After all, the claim is the part of a patent that actually defines the
“invention” that others are prevented from using, and it is already the “claim as a whole” that is considered the invention, not any
particular element by itself.
But that doesn’t mean courts can’t consider the individual elements of a patent claim. In fact, it’s often critical that they do so. For example, the patent in Alice included
a “data storage unit,” which the court considered “purely functional and generic,” and therefore rejected this element—because it didn’t supply the “inventive concept” that Section 101 requires.
But by forbidding courts from “discounting or disregarding any claim limitation,” Tillis and Coons would bar the element-by-element analysis that Alice requires. This change would abrogate Alice and make it
inapplicable in any future case.
That’s no accident. Alice has been so effective that patent trolls and other companies dependent on patent-licensing, rather than products, are pushing for Congress to undo
what the Supreme Court has done.
Let's not let that happen. Protect basic research and stop patent abusers from tilting the system in their favor. E-mail your representatives in Congress and tell them to oppose the
Tillis-Coons patent bill.
TELL CONGRESS WE DON'T NEED MORE BAD PATENTS
EFF Receives $300,000 Donation from Craig Newmark Philanthropies to Support Threat Lab
(Wed, 29 May 2019)
Great news for EFF’s Threat Lab: Craig Newmark Philanthropies has donated $300,000 to support its work to protect journalists,
sources, and others against targeted malware attacks.
EFF identifies and tracks the rise of malware attacks, which primarily affect journalists and their sources globally. We have
collaborated with groups like Citizen Lab and mobile security company Lookout to
conduct these investigations, and the results of the research have helped the
world understand this growing threat.
With the help of Craig Newmark Philanthropies, Threat Lab will continue to identify and track the complex web of groups who use malware against
reporters and activists. Threat Lab will apply this information to educate the public and put pressure on the companies that build, sell, and
license this spyware.
Threat Lab also creates and updates tools that EFF uses to educate and train journalists and others in digital security. Mobile devices, in particular,
contain a wealth of data that could endanger reporters and their sources if certain security measures are not taken. This donation will help Threat Lab keep our Surveillance Self-Defense guide and our Security Education Companion accurate and up-to-date in the ever-changing threat landscape.
We are very grateful to Craig Newmark Philanthropies for this generous and important donation to EFF. Threat Lab’s work is key to protecting free and independent
journalism, and we are proud to keep fighting for everyone’s digital security.
Fines Aren’t Enough: Here’s How the FTC Can Make Facebook Better
(Tue, 28 May 2019)
The Federal Trade Commission is likely to announce that
Facebook’s many violations of users’ privacy in recent years also violated its consent decree with the commission. In its financial filings, Facebook
has indicated that it expects to be fined between $3 billion and $5 billion by the FTC. But punitive fines alone, no matter the size, are unlikely to change the overlapping privacy and
competition harms at the center of Facebook’s business model. Whether or not it levies fines, the FTC should use its power to make Facebook better in meaningful ways. A new settlement
with the company could compel it to change its behavior. We have some suggestions.
A $3 billion fine would be, by far, the largest privacy-related fine in the FTC’s history. The biggest to date was $22.5 million, levied against Google in 2012. But
even after setting aside $3 billion to cover a potential fine, Facebook still managed to rake in $3.3 billion in profit during the first quarter of
2019. It’s rumored that Facebook will agree to
create a “privacy committee” as part of this settlement. But the company needs to change its actions, not just its org chart. That’s why the settlement the FTC is negotiating now also
needs to include limits on Facebook’s behavior.
Stop Third-Party Tracking
Facebook uses “Like” buttons, invisible Pixel conversion trackers, and ad code in mobile apps to track its users nearly any time they use the Internet—even when they’re off
Facebook products. This program allows Facebook to build nauseatingly detailed profiles of users’—and non-users’—personal activity. Facebook’s unique ability to match third-party
website activity to real-world identities also gives it a competitive advantage in both the social media and third-party ad markets. The FTC should order Facebook to stop linking data
it collects outside of Facebook with user profiles inside the social network.
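Mechanically, this works because embedding any third-party resource, such as a Like button or an invisible pixel, makes the reader’s browser send the third party a request that carries both the page being read (the Referer header) and the tracker’s identifying cookie. A minimal sketch of what the receiving server learns follows; the header names are standard HTTP, but the cookie format and all values are invented for illustration.

```python
def record_pageview(headers: dict[str, str]) -> dict[str, str]:
    """What a third-party tracker learns from a single pixel request:
    the page the user was reading, tied to a stable cookie identity."""
    return {
        "user_id": headers.get("Cookie", "").removeprefix("uid="),
        "page_visited": headers.get("Referer", ""),
    }

# The browser sends these headers automatically whenever a page embeds
# the pixel; no click or "Like" is required.  Values are made up:
event = record_pageview({
    "Cookie": "uid=abc123",  # identity set on a past visit to the tracker's own site
    "Referer": "https://example-news.com/health/some-article",
})
# event == {"user_id": "abc123",
#           "page_visited": "https://example-news.com/health/some-article"}
```

Repeating this for every page that embeds the pixel is what turns scattered browsing into the detailed cross-site profiles described above.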
Don’t Merge WhatsApp, Instagram, and Facebook Data
Facebook has announced
plans to build a unified chat platform so that users can send messages between WhatsApp, Messenger, and Instagram accounts seamlessly. Letting users of different services talk
to each other is reasonable, and Facebook’s commitment to end-to-end encryption for the unified
service is great (if it’s for real). But in order to link the services together, Facebook will likely need to merge account data from its disparate properties. This
may help Facebook enrich its user profiles for ad targeting and make it harder for users to fully extricate their data from the Facebook empire should they decide to leave.
Furthermore, there’s a risk that people with one set of expectations for a service like Instagram, which allows pseudonyms and does not require a phone number, will be blindsided when
Facebook links their accounts to real identities. This could expose sensitive information about vulnerable people to friends, family, ex-partners, or law enforcement. In short, there
are dozens of ways the great messenger union could go wrong.
Facebook promises that messaging “interoperability” will be opt-in. But corporations are fickle, and Facebook and other tech giants have repeatedly walked back privacy commitments they’ve
made in the past. The FTC should make sure Facebook stays true to its word by ordering it not to merge user data from its different properties without express opt-in consent.
Furthermore, if users do decide to opt-in to merging their Instagram or WhatsApp accounts with Facebook data, the FTC should make sure they reserve the right to opt back out.
Stop Data Broker-Powered Ad Targeting
Last March, Facebook shut
down its “Partner Categories” program, in which it purchased data from data brokers like Acxiom and Oracle in order to boost its own ad-targeting system. But over a
year later, advertisers are still using data broker-provided information to target users on Facebook, and both Facebook and data brokers are still raking in profit. That’s because
Facebook allows data brokers to upload “custom audience data files”—lists of contact information, drawn from the brokers’ vast tranches of personal data—where they can charge
advertisers to access those lists. As a result, though the interface has changed, data broker-powered targeting on Facebook is alive and well.
Data brokers are some of the
shadiest actors in the digital marketplace. They make money by buying and selling detailed information about billions of people. And most of the people they profile don’t know they
exist. The FTC should order Facebook to stop allowing data brokers to upload and share custom audiences with advertisers, and to explicitly disallow advertisers from using data
broker-provided information on Facebook. This will make Facebook a safer, less creepy place for users, and it will put a serious dent in the dirty business of buying and selling personal data.
A Good Start, But Not the End
We can’t fix all of the problems with Facebook in one fell swoop. Facebook’s content moderation policies need serious work. The platform should be more interoperable and more open. We
need to remove barriers to competition so that more privacy-respecting social networks can emerge. And users around the world deserve to have baseline privacy protections
enshrined in law. But the FTC has a
rare opportunity to tell one of the most powerful companies in the world how to make its business more privacy-protective and less exploitative for everyone. These changes would be a
serious step in the right direction.
EFF Asks San Bernardino Court to Review Cell-Site Simulator and Digital Search Warrants That Are Likely Improperly Sealed
(Tue, 28 May 2019)
Since the California legislature passed a 2015 law requiring cops to get a search
warrant before probing our devices, rifling through our online accounts, or tracking our phones, EFF has been on a quest to examine court filings to determine whether law enforcement
agencies are following the new rules. We have been especially concerned that cops and the courts have been disregarding the transparency measures baked into the California Electronic
Communications Privacy Act (CalECPA).
As it turns out, our suspicions were well warranted. A lawsuit we filed last year against the San
Bernardino County Sheriff’s Office has turned up evidence that potentially hundreds of digital search warrants have been improperly and indefinitely sealed, blocking the public’s
right to inspect court records.
EFF, represented by the Law Office of Michael T. Risher, has filed a formal request with the Presiding Judge of the San Bernardino County Superior Court to review and unseal 22
search warrants that appear to be sealed in violation of California’s Penal Code. We are also asking that the court “take whatever steps are necessary to ensure that similar files—both
in the past and in the future—are open to the public as required by law.”
Read EFF’s letter to the San Bernardino County Superior Court Judge John
P. Vander Feer.
When CalECPA was passed, it was hailed as the “Nation’s Best Digital Privacy Law” by outlets such as Wired, because it prevents the government from forcing companies to hand over
electronic communications, files, or metadata, without first obtaining a warrant. It similarly requires the government to obtain a warrant before searching our devices or tracking our
location through our devices. This includes the use of cell-site simulators, a surveillance technology that
masquerades as a fake cell phone tower to connect to a target’s phone. The law also included several accountability measures, such as requiring agencies to file public disclosures
with the California Department of Justice, which EFF uses to identify search warrants across the state that deserve greater scrutiny.
Last year, EFF picked out six suspicious warrants filed by the San Bernardino Sheriff for a deeper dive, since they all referred to the use of a “cell-site stimulator” (a
misspelling guaranteed to make privacy advocates snicker). Those were the only warrants to directly make reference to the technology, even though the sheriff had separately disclosed
to EFF that it had used a cell-site simulator 231 times in 2017 alone. The sheriff refused to turn over these warrants, and so EFF took the agency to court. We
subsequently filed requests for 18 other CalECPA warrants, including searches of devices and accounts and phone surveillance techniques known as a pen register or a trap and trace.
Again, San Bernardino County officials resisted handing over the records.
In many cases, San Bernardino County claimed the records could not be released since they had been indefinitely sealed by the court. San Bernardino only provided copies of two
search warrant applications, which include sealing requests that were rejected by a judge. Based on these documents, we advise the court that it appears “the Sheriff’s Department
requests indefinite sealing orders as part of every application for a warrant or court order under these statutes.”
The problem is that this isn’t how the system is supposed to work.
In 2016, the legislature changed state law to require that when an order for a pen register or trap and trace expires, so does any sealing order. Similarly, CalECPA requires
that after a search warrant has been executed and “returned” to the court, the records can only be held secret for 10 days. After that, the search warrants must be open to the public.
The passage of CalECPA represented a fundamental breakthrough for civil liberties in the digital age. But a law is only as good as its enforcement. If the San Bernardino courts
and sheriff keep these records secret, then not only does that violate the will of the people of California, but it blocks the ability of the public to ensure that other elements of
the law are also being obeyed.
San Bernardino may just be the tip of the iceberg. We hope courts in other jurisdictions take notice and also examine their CalECPA warrants to ensure the law is being followed.
If Regulators Won’t Stop The Sale of Cell Phone Users’ Location Data, Consumers Must
(Tue, 28 May 2019)
A Motherboard investigation revealed in
January how any cell phone user’s real-time location could be obtained for $300. The pervasiveness of the practice, coupled with the extreme invasion of people’s privacy, is alarming.
The reporting showed there is a vibrant market for location data generated by everyone’s cell phones—information that can be incredibly detailed and provide a window into people’s most sensitive and private activities. The investigation also laid bare that
cell phone carriers AT&T, Sprint, and T-Mobile, and the many third parties with access to the companies’ location data, have little interest or incentive to stop.
This market in personal information violates federal law and Federal Communications Commission (FCC) rules that
protect people’s location privacy. The market also violates FCC rules prohibiting disclosure of extremely sensitive
location information, derived in part from GPS data, that may only be disclosed when emergency responders
need to find people during an emergency.
We expected the FCC to take immediate action to shut down the unlawful location data market and to punish the bad actors.
But many months later, the FCC has not taken any public action. It’s a bad sign when minority FCC commissioners have to take to the pages of the New York Times to call for an end to the practices, or must send their own letters to carriers to get
basic information about the problem. Although some members of Congress have
investigated and demanded an end to the practice, no solution is in sight.
Earlier this year, the major cell phone providers promised that they
have ended or will end the practices. Those promises ring hollow after they promised to end the sale of the same location data in 2018.
In light of this inaction, consumers must step up to make sure that their location data is no longer so easily sold and that laws are enforced to prohibit it from happening again.
Although much of the reporting has focused on bounty hunters’ ability to obtain anyone’s location information, documents created by the companies that accessed and sold the data show
it was used for many other purposes. This includes marketing materials for
car dealerships to buy real-time location data of potential buyers, and
for landlords to find the locations of their potential tenants. Even more
troubling, stalkers and bounty hunters
appeared to be able to impersonate law enforcement officials and use the system to find people, including victims of domestic violence.
EFF wants to stop this illegal violation of the location privacy of millions of phone users. So please tell us your stories.
If you believe these companies unlawfully shared your cell phone location information, please let us know. In particular, it would be helpful if you could tell us:
Who obtained your cell phone location information?
How did they get it?
How did they use it?
When and where did this happen?
What cell phone provider were you using?
How do you know this?
Do you have any documents or other evidence that shows this?
Please write to us at firstname.lastname@example.org.
The Government’s Indictment of Julian Assange Poses a Clear and Present Danger to Journalism, the Freedom of the Press, and Freedom of Speech
(Fri, 24 May 2019)
The century-old tradition that the Espionage Act not be used against
journalistic activities has now been broken. Seventeen new charges were filed
yesterday against Wikileaks founder Julian Assange. These new charges make clear that he is being prosecuted for basic journalistic tasks, including being openly available to receive
leaked information, expressing interest in publishing information regarding certain otherwise secret operations of government, and then disseminating newsworthy information to the
public. The government has now dropped the charade that this prosecution is only about hacking or helping in hacking. Regardless of whether Assange himself is labeled a “journalist,”
the indictment targets routine journalistic practices.
But the indictment is also a challenge to fundamental principles of freedom of speech. As the Supreme Court has explained, every person has the right to disseminate truthful information pertaining to matters of public
interest, even if that information was obtained by someone else illegally. The indictment purports to evade this protection by repeatedly alleging that Assange simply “encouraged” his
sources to provide information to him. This places a fundamental free speech right on uncertain and ambiguous footing.
A Threat To The Free Press
Make no mistake, this is not just about Assange or Wikileaks—this is a threat to all journalism, and the public interest. The press stands in place of the public in holding the
government accountable, and the Assange charges threaten that critical role. The charges threaten reporters who communicate with and knowingly obtain information of public interest
from sources and whistleblowers, or publish that information, by sending a clear signal that they can be charged with spying simply for doing their jobs. And they threaten everyone
seeking to educate the public about the operation of government and expose government wrongdoing, whether or not they are professional journalists.
Assistant Attorney General John Demers, head of the Department of Justice’s National
Security Division, told reporters after the indictment that the department “takes seriously the role of journalists in our democracy and we thank you for it,” and that it’s not the
government’s policy to target them for reporting. But it’s difficult to separate the Assange indictment from President Trump’s repeated attacks on the press, including his
declarations on Twitter, at White House briefings, and in interviews that the press is “the enemy of the people,” “dishonest,” “out of control,” and “fake news.” Demers’
statement was very narrow—disavowing the “targeting” of journalists, but not the prosecution of them as part of targeting their sources. And contrary to the DOJ’s public statements,
the actual text of the Assange Indictment sets a dangerous precedent; by the same reasoning it asserts here, the administration could turn its fervent anti-press sentiments into
charges against any other media organization it disfavors for engaging in routine journalistic practices.
Most dangerously, the indictment contends that anyone who “counsels, commands, induces” (under 18 USC §2, for aiding and
abetting) a source to obtain or attempt to obtain classified information violates the Espionage Act, 18 USC § 793(b).
Under the language of the statute, this includes literally “anything connected with the national defense,” so long as there is an “intent or reason to believe that the
information is to be used to the injury of the United States, or to the advantage of any foreign nation.” The indictment relies heavily and repeatedly on allegations that Assange
“encouraged” his sources to leak documents to Wikileaks, even though he knew that the documents contained national security information.
But encouraging sources and knowingly receiving documents containing classified information are standard journalistic practices, especially among national security reporters. Neither
law nor custom has ever required a journalist to be a purely passive, unexpected, or unknowing recipient of a leaked document. And the U.S. government has regularly maintained, in
EFF’s own cases and elsewhere, that virtually any release of classified information injures the United States and advantages foreign nations.
The DOJ indictment thus raises questions about what specific acts of “encouragement” the department believes cross the bright line between First Amendment protected newsgathering and
crime. If a journalist, like then-candidate Trump, had said: "Russia, if you’re listening, I hope you’re able to find the [classified] emails that are missing. I think you will
probably be rewarded mightily by our press," would that be a chargeable crime?
The DOJ Does Not Decide What Is And Isn’t Journalism
Demers said Assange was “no journalist,” perhaps to justify the DOJ’s decision to charge Assange and show that it is not targeting the press. But it is not the DOJ’s role to determine
who is or is not a “journalist,” and courts have consistently found that what makes something journalism is the function of the work, not the character of the person. As
the Second Circuit once wrote in a case about the reporters’ privilege, the question is whether they
intended to “use material—sought, gathered, or received—to disseminate information to the public.” No government label or approval is necessary, nor is any job title or formal
affiliation. Rather than justifying the indictment, Demers’ non-sequitur appears aimed at distracting from the reality of it.
Moreover, Demers’ statement is as dangerous as it is irrelevant. None of the elements of the 18 statutory charges (Assange is also facing a charge under the Computer Fraud and Abuse Act) require a
determination that Assange is not a journalist. Instead, the charges broadly describe journalism–seeking, gathering and receiving information for dissemination to the public,
and then publishing that information–as unlawful espionage when it involves classified information.
Of course news organizations routinely publish classified information. This is not considered unusual, nor (previously) illegal. When the government went to the Supreme Court
to stop the publication of the classified Pentagon Papers, the Supreme Court refused (though it did not reach the question
of whether the Espionage Act could constitutionally be charged against the publishers). Justice Hugo Black, concurring in the judgment, explained why:
In the First Amendment, the Founding Fathers gave the free press the protection it must have to fulfill its essential role in our democracy. The press was to serve the governed,
not the governors. The Government's power to censor the press was abolished so that the press would remain forever free to censure the Government. The press was protected so that
it could bare the secrets of government and inform the people. Only a free and unrestrained press can effectively expose deception in government. And paramount among the
responsibilities of a free press is the duty to prevent any part of the government from deceiving the people and sending them off to distant lands to die of foreign fevers and
foreign shot and shell.
Despite this precedent and American tradition, three of the DOJ charges against Assange specifically focus solely on the purported crime of publication. These three charges are for
Wikileaks’ publication of the State Department cables and the Significant Activity Reports (war logs) for Iraq and Afghanistan, documents which were also published in Der Spiegel, The Guardian, The New York
Times, Al Jazeera, and Le Monde, and republished by many other news outlets.
For these charges, the government included allegations that Assange failed to properly redact, and thereby endangered sources. This may be another attempt to make a distinction
between Wikileaks and other publishers, and perhaps to tarnish Assange along the way. Yet this is not a distinction that makes a difference, as sometimes the media may need to provide
unredacted data. For example, in 2017 the New York Times published the name of a CIA official who was behind the CIA program to
use drones to kill high-ranking militants, explaining “that the American public has a right to know who is making life-or-death decisions in its name.”
While one can certainly criticize the press’ publication of sensitive data, including identities of sources or covert officials, especially if that leads to harm, this does not mean
the government must have the power to decide what can be published, or to criminalize publication that does not first get the approval of a government censor. The Supreme Court has
justly held the government to a very high standard for abridging the ability of the press to publish, limited to exceptional circumstances like “publication of the sailing dates of transports or the number and
location of troops” during wartime.
A Threat to Free Speech
In a broader context, the indictment challenges a fundamental principle of free speech: that a person has a strong First Amendment right to disseminate truthful information pertaining
to matters of public interest, including in situations in which the person’s source obtained the information illegally. In Bartnicki v. Vopper, the Supreme Court affirmed this, explaining: “it would be quite remarkable to hold
that speech by a law-abiding possessor of information can be suppressed in order to deter conduct by a non-law-abiding third party. ... [A] stranger's illegal conduct does not suffice
to remove the First Amendment shield from speech about a matter of public concern.”
While Bartnicki involved an unknown source who anonymously left an illegal recording with Bartnicki, later courts have acknowledged that the rule applies, and perhaps even more strongly, to recipients who knowingly and
willfully received material from sources, even when they know the source obtained it illegally. In one such case,
the court rejected a claim that the willing acceptance of such material could sustain a charge of conspiracy between the publisher and her source.
Regardless of what one thinks of Assange’s personal behavior, the indictment itself will inevitably have a chilling effect on critical national security journalism, and the
dissemination in the public interest of available information that the government would prefer to hide. There can be no doubt now that the Assange indictment is an attack on the
freedoms of speech and the press, and it must not stand.
Rep. Thompson Works to Secure Our Elections
(Fri, 24 May 2019)
Foreign adversaries and domestic dirty tricksters can secretly hack our nation’s electronic voting systems. That’s why information security experts agree we must go back to basics:
paper ballots. We also need “risk-limiting audits,” meaning mandatory post-election review of a sample of the paper ballots, to ensure the election-night “official” results are accurate.
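A risk-limiting audit is, at its core, a sequential statistical test: keep hand-examining randomly drawn paper ballots until there is strong evidence the reported outcome is right, or escalate to a full hand count. As a rough sketch only (not the procedure specified in any bill), a minimal BRAVO-style ballot-polling audit for a two-candidate race might look like this; the function name and parameters are illustrative:

```python
def bravo_audit(reported_winner_share, sampled_ballots, risk_limit=0.05):
    """Sequential ballot-polling audit (BRAVO-style sketch).

    reported_winner_share: reported vote share of the winner (> 0.5).
    sampled_ballots: iterable of booleans, True if the sampled paper
        ballot shows a vote for the reported winner.
    Returns True if the sample confirms the outcome at the risk limit,
    False if the sample is exhausted (escalate to a full hand count).
    """
    t = 1.0
    for vote_for_winner in sampled_ballots:
        if vote_for_winner:
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= 1 / risk_limit:
            return True   # strong evidence the reported winner really won
    return False          # inconclusive: do a full hand count
```

With a reported 60% winner, a run of consistent samples confirms the result quickly, while a sample that contradicts the reported margin never does—the audit then "limits risk" by forcing a full hand count.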
A new federal bill is a step in the right direction: H.R. 2660, the Election
Security Act. It was introduced on May 10
by Rep. Bennie Thompson, Chair of the House Homeland Security Committee; Rep. Zoe Lofgren, Chair of the House Administration Committee; and Rep. John Sarbanes, Chair of the Democracy
Reform Task Force.
This bill would help secure our democracy from digital attack in the following ways:
It requires paper ballots in all federal elections. In any post-election dispute, these paper ballots are the official expression of voter intent. Each voter may choose whether to
mark their paper ballot by hand, or by using a device that marks paper ballots in a manner that is easy for the voter to read.
It authorizes expenditure of $1 billion in the coming year to pay for the transition to paper ballot voting systems, and an additional $700 million over the next six years.
It authorizes expenditure of $20 million to support risk-limiting audits.
It authorizes expenditure of $180 million over nine years to improve the security of election infrastructure.
It authorizes $5 million for research to make voting systems more accessible for people with disabilities.
It requires the creation of cybersecurity standards for voting systems.
It creates a “bug bounty” program for voting systems.
This is a good start. The bill would be even stronger if it adopted key parts of another election security bill, introduced last week by Sen. Wyden with 14
Senate co-sponsors. As EFF recently wrote, that bill would not only require paper
ballots and help pay for paper ballots and tabulation machines, it would also require risk-limiting audits, ban the connection of voting machines to the internet, and empower ordinary
voters with a private right of action to enforce new election security rules.
Congress must act now to secure our voting systems, before the next federal elections. We thank both Rep. Thompson and Sen. Wyden for leading the way.
Digital Advertising, Consumer Privacy and More
(Thu, 23 May 2019)
Digital ads—and control of the user data that makes them so profitable—are the heart of the consumer privacy debate. Earlier this week, the U.S. Senate Judiciary Committee held a
hearing titled “Understanding the Digital Advertising
Ecosystem and the Impact of Data Privacy and Competition Policy” to grapple with these complicated and often conflicting ideas.
Judiciary Committee Chairman Senator Lindsey Graham opened the hearing by asking the panelists if Congress should pass a federal privacy law at all, and if they thought we should have
“one national standard”—a potentially dangerous idea, as it could preempt stronger existing state laws. The panelists more or less supported a single national law, though we were glad to see Dr. Johnny Ryan argue that a federal law should preempt state law only to the extent that it provides greater
protections. Dr. Fiona M. Scott Morton further clarified that even with a strong federal standard, states were still going
to want to regulate on top of the law.
Troublingly, when discussing whether or not states should be able to create their own protections for consumers, Chairman Graham said, “It’s my job to make sure we have a viable
[digital advertising] industry when this is all done.” But enacting a law that would protect the advertising industry at the expense of his constituents and their privacy is
antithetical to the Chairman’s job. Previous hearings on data privacy have featured witness panels stacked with industry advocates, and corporate interests
threatening to hollow out existing data privacy laws. Consumers are relying on Congress to resist this pressure,
not bow to it.
The rest of the hearing turned to discussions of the role competition plays in a healthy innovation landscape, how to engineer meaningful consent, and who actually owns users’ data.
In his opening statement, Dr. Johnny Ryan urged the Committee “to enact strong privacy rules so that a
healthy marketplace can develop. Give consumers freedom to choose the companies and services they want to reward.” He reiterated this point in response to many questions over the
course of the hearing, arguing that if people truly had a choice in the services they use online, it would allow them to deny the use of their data in operations which they found
unacceptable, which would, in turn, undermine those business models.
In EFF’s letter to the NTIA on consumer privacy, we advocated
for a consumer data privacy law that would allow users to delete and port their data, and
we argued that interoperability between platforms would give consumers a real choice about what services to use. As Dr. Scott Morton mentioned, the real benefit to the social media
platforms is that “all your friends” are also on the platform, but if your friends go elsewhere, you will too. Forcing interoperability among social media platforms, like the way the
FCC once mandated interoperability between instant messaging platforms,
would provide an incentive to compete for users.
The discussion of consumer choice in the marketplace also led some Senators to question how to achieve meaningful user consent for use of their data. Several Senators correctly noted
that the “choice” of either allowing data collection or not visiting a website at all is not a real choice. Brian O’Kelly emphasized this illusion of choice, saying that users in Europe are just getting trained to
click “yes” on cookie pop-ups. Further, he said, “If you gave me the choice of getting robbed, I’d say no. But why give me the choice? Just make robbery illegal.” We agree: notice and consent is not enough protection for users, and any consumer data privacy law must also limit the data that companies can collect, use, and share.
Some elected officials (including the Governor of California)
seem to think it would also be a good idea to allow users to directly profit from the sale or use of their data. We disagree. In this hearing, Senator Coons brought up the idea
of allowing users to treat their own personal data as a thing to be bartered away in exchange for benefits from companies. Fortunately, the witnesses pushed back on this troubling
idea. While it may sound appealing, putting this idea into law would treat privacy not as the fundamental human right it is, but as a luxury item to be enjoyed by the people who can afford it.
While it’s difficult to draw one or two discrete conclusions from this hearing, it’s heartening to see Senators wrestling with thorny questions and having a real conversation with
experts. We look forward to many more of these conversations in the future.
Congress Can End the Digital Divide or Replace It with a Speed Chasm with Its Broadband Infrastructure Bill
(Thu, 23 May 2019)
The House Energy and Commerce Committee held its first hearing on a major infrastructure bill called the “Leading Infrastructure for Tomorrow’s (LIFT) America Act,” which authorizes
$45 billion in broadband infrastructure money. Such a massive infusion of federal dollars would reshape the United States communications market and help put the United States on more
even footing with the EU and Asian markets.
However, there is a real danger of lowering expectations of what can and should be done with a massive federal investment as a means of rigging who receives the federal dollars. If
Congress does dedicate an enormous sum of money to build broadband infrastructure, it is important that it goes towards infrastructure that can withstand the test of time. As
currently drafted, the legislation makes some concrete steps in that direction, but EFF finds that some areas need improvement in order to really make this bill about building
infrastructure for the 21st century.
The Positives and Areas for Improvement in the LIFT Act
One of the most valuable provisions in the legislation is the creation of a $5 billion low-interest financing vehicle for broadband infrastructure projects. The United States is missing a
vibrant “open access fiber” industry like what exists in the EU and other parts of the world. Part of the
problem is that we do not have a dedicated funding source for long-term focused broadband infrastructure planning that would support the construction efforts of non-traditional
broadband market actors. Where it exists—sadly only in limited areas of the United States—this approach to broadband has taken root with incredible results. In Utah, for example,
there are eleven options for $50 gigabit symmetrical services. In fact, some telecom analysts predict that open access
fiber providers might be able to connect rural communities with
zero subsidies, should long-term low-interest financing be made available.
The legislation focuses on “unserved areas,” which are defined as areas that do not have access to 25 Mbps download and 3 Mbps upload. Under this bill, $40 billion in federal
funds would be granted through a reverse auction (meaning whoever can build it the cheapest) to an entity that can deliver speeds of at least 100 Mbps download and 20 Mbps upload.
This is a problematic plan, as instead of spending more for long-term, better infrastructure, we’d get low-cost improvements to nearly obsolete technology. This shortchanges these
communities by not giving them the best Internet access—Internet access with speeds to handle whatever future technology brings—but by merely extending the life of slow, bad service.
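To make the gap between these speed tiers concrete, here is a rough illustration (the 100 GB file size is chosen arbitrarily, and protocol overhead is ignored) of how long a large download takes at each tier:

```python
def download_minutes(file_gb: float, mbps: float) -> float:
    """Minutes to transfer file_gb gigabytes at mbps megabits per second.
    Uses 1 GB = 8,000 megabits; ignores protocol overhead."""
    return file_gb * 8_000 / mbps / 60

# Speed tiers discussed above, applied to a hypothetical 100 GB download
tiers = [("25/3 'unserved' threshold", 25),
         ("100/20 reverse-auction floor", 100),
         ("1 Gbps fiber", 1_000),
         ("10 Gbps fiber", 10_000)]

for label, mbps in tiers:
    print(f"{label}: {download_minutes(100, mbps):,.1f} minutes")
```

At 25 Mbps the transfer takes roughly nine hours; at 10 Gbps, under a minute and a half—the “speed chasm” in miniature.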
If the goal is to make this a one-time infusion of taxpayer money to end the digital divide, then Congress must invest in the future. As former FCC Commissioner Mignon Clyburn testified before the Energy and Commerce Committee, “Congress
should be investing taxpayers’ money in infrastructure that will deliver high-speed broadband of at least one Gigabit, future-proof symmetrical service.” EFF supports such an
amendment to the legislation in order to overcome the speed chasm that exists among different broadband network technologies.
The Speed Chasm
In an interview with Professor Susan Crawford, Peter Rubin of WIRED summed up the massive discrepancy of potential capacity reachable by fiber optics as compared to copper, cable, and
wireless networks as the “speed chasm.” Essentially, fiber optics have capacity potential that leaves other legacy networks like
copper and coaxial cable in the dust. While we do not know the exact difference, what we do know is fiber has a capacity that is orders of magnitude greater than legacy efforts.
Some will argue to Congress that it is better to get slower speeds out to more people on the cheap, which means subsidizing incremental improvements to legacy copper and cable
networks. The problem though is as consumption and demand for Internet products and services continue to increase, the legacy networks have no financially feasible way to keep pace
with exponential growth absent transitioning to fiber due to the capacity differences.
Fiber optics are also incredibly cost efficient once deployed. This is because fiber to the home (FTTH) has the potential to be a “future proof” network infrastructure that probably
will not have to be replaced for decades. We see evidence of this
when analyzing the financials of the world’s fastest ISP (EPB Chattanooga) when it upgraded from a one-gigabit fiber network to a 10-gigabit network in 2016. Capital expenditures barely changed while profits continued to rise year after year, and future advancements in capacity will likely be even cheaper by comparison, since they leverage advancements in the electronics attached to the fiber.
Congress Has an Enormous Opportunity to Bring Millions of Americans into the 21st Century of Broadband Access
EFF supports efforts that seek to connect all Americans to fiber infrastructure, which can
support gigabit and 10-gigabit networks today with unknown potential to expand into the future. Giving everyone access to affordable high-speed broadband will ensure that everyone
benefits from the Internet as both creators and distributors of content and culture. But under-investing and lowering expectations on what should be built will only put a temporary,
and expensive, band-aid on the problem of the digital divide.
Nominations Open for 2019 Barlows!
(Wed, 22 May 2019) Nominations are now open for the 2019 Barlows to be presented at EFF's 28th Annual Pioneer Award Ceremony. Established in 1992,
the Pioneer Award Ceremony recognizes leaders who are extending freedom and innovation in the realm of technology. In honor of Internet visionary, Grateful Dead lyricist, and EFF
co-founder John Perry Barlow, recipients are awarded a “Barlow," previously known as the Pioneer Awards. The nomination window will be open until 11:59pm PDT on June 5, 2019. You
could nominate the next Barlow winner today!
What does it take to be a Barlow winner? Nominees must have contributed substantially to the health, growth, accessibility, or freedom of computer-based communications. Their
contributions may be technical, social, legal, academic, economic or cultural. This year’s winners will join an esteemed group of past award winners that includes the visionary
activist Aaron Swartz, global human rights and
security researchers The Citizen Lab, open-source pioneer Limor "Ladyada" Fried, and whistle-blower Chelsea Manning, among many remarkable
journalists, entrepreneurs, public interest attorneys, and others.
The Pioneer Award Ceremony depends on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the Pioneer Award
Ceremony, please email email@example.com.
Remember, nominations are due no later than 11:59pm PDT on Wednesday, June 5th! After you nominate your favorite contenders, we hope you will consider joining us this fall in San
Francisco to celebrate the work of the 2019 winners. If you have any questions or if you'd like to receive updates about the event, including ticket information, please email
Nominate your favorite digital rights hero now!
Broadband Monopolies Are Acting Like Old Phone Monopolies. Good Thing Solutions to That Problem Already Exist
(Wed, 22 May 2019)
The future of competition in high-speed broadband access looks bleak. The vast majority of homes have only their cable monopoly as a choice for speeds in excess of 100 Mbps, and
small ISPs and local governments are carrying the heavy load of deploying fiber networks that surpass gigabit cable networks. Research now shows that these new monopolies have
striking similarities to the telephone monopolies of old. But we don’t have to repeat the past; we’ve already seen how laws can promote competition and break monopolies.
In the United States, high-speed fiber deployment is low and slow. EFF decided to look into this problem, and we now have a research report by the Samuelson-Glushko Technology Law
& Policy Clinic (TLPC) on behalf of EFF that details the history of our telecom competition policies, why they came into existence with the passage of the 1996 Telecommunications
Act, and the FCC’s mistakes—starting in 2005—that eroded the law and have given us a high-speed broadband access monopoly that disproportionately impacts low-income and rural Americans.
The full report is available
online, but here are some striking takeaways.
Americans Have Been Here Before
AT&T’s telephone monopoly lasted for generations because states and the federal government both allowed and tolerated it. Prior to government intervention, private industry failed
to challenge the dominance of AT&T because the incumbent monopoly regularly took extraordinary steps to cut off competitors. The tide began to shift once states, the FCC, the
courts, the president, and eventually Congress took dramatic steps to end the monopolization of telecom services.
Many of the provisions in the 1996 Telecom Act that exist now come from solutions tailored by the litigation, regulatory, and state efforts to promote competition among phone
companies. For example, it was California and New York states’ efforts to open up competition in local phone calls that inspired Congress to adopt a federal approach of “unbundled
network elements (UNEs)” requirements. And those rules—with roots in fighting anticompetitiveness in phones—have helped create several small ISPs that exist today. The requirement for
networks to “interconnect” under federal law stemmed from a Department of Justice antitrust action that mandated interconnection decades earlier. These and other federal provisions in
law are still disliked by the major incumbent ISPs: AT&T and Verizon are actively asking the FCC today to eliminate
UNEs, and Comcast is suing to block California’s net
neutrality law because of its interconnection provisions.
This is why the major ISPs have waged such a long war against net neutrality: all
of the federal provisions that involve curtailing the power of monopoly by promoting competition also empower the FCC to enforce net neutrality. Provisions that, again, have their
roots in phone monopolies. Provisions that worked. This is also why the competitors to the major ISPs regularly support efforts to restore
Title II regulation of the industry, because the 1996 Act’s point was to codify competition into law, which helped enable their entry.
The FCC Kept Getting What Would Happen Without Laws and Rules Wrong
Starting in 2005, the FCC began to classify broadband companies as Title I “information services” instead of Title II "telecommunications services" and no longer subjected them to
competition law, on the premise that broadband options would flourish and telephone companies would deploy fiber to the home (FTTH). Today, we know none of that happened, but what is striking
about the TLPC paper’s historical analysis is that every justification on which the FCC premised its abandonment of competition law has failed to pan out.
The FCC thought that wireless companies, satellite companies, and broadband over powerline companies would be
hot competition for the telephone and cable industry. But wireless technologies are not able to
substitute for wireline broadband services at higher speeds. Even 5G is not competitive with gigabit cable
systems, and most people have probably never heard of broadband over powerline today. The reality is that a duopoly persisted for years. Now that many telephone companies have abandoned FTTH,
cable has monopolized high-speed Internet service. The superior position of the incumbents is not easily surpassed, much in the same way the original AT&T monopoly’s dominance was
not simply displaced by private actors and people voting with their dollars.
The FCC also thought that cable companies would open up their own networks in the absence of a mandate, giving you several options. Instead, many of us know what it is like when
your broadband choice is Comcast and only Comcast, and you have to do things like pretend to cancel your cable service just to get
overbilled a little less. And central to the major ISPs’ argument for removing competition law was the claim that it would spur them to invest heavily in their networks all across
the country; but other than hyping 5G while avoiding direct competition with cable, they are not investing at levels that will cover the entire nation.
We Do Not Have to Repeat History
The data we currently have shows that the future for a majority of Americans, particularly low-income and
rural Americans, is a monopoly for high-speed broadband access. This is why EFF has called for policymakers and regulators to begin addressing this problem now, while it is still
early, with plans that would promote gigabit fiber for all people, including local
community broadband efforts like San Francisco’s
fiber project. This is why EFF will fight to preserve state authority over ISPs so they can promote competition, as California does today. And this is why EFF wholeheartedly supports
enactment of the recently House-passed Save the Net Act, which restores the key
federal provisions needed to promote competition (and why ISP talking points about “outdated” laws are really just arguments against its anti-monopoly provisions). A broadband
monopoly future can be stopped because we have stopped one before, and history yields all the lessons we need to get it done.
Why We Can’t Support Modifications to Texas’ Anti-SLAPP Law
(Wed, 22 May 2019)
Update: Texas Governor Greg Abbott signed H.B. 2730 on June 2, 2019.
Earlier this year, a critical free speech law in Texas came under attack. Texas bill H.B. 2730, as introduced, would have gutted the Texas Citizens Protection Act, or TCPA.
The TCPA has been one of the strongest laws in the nation protecting citizens against SLAPPs. SLAPP is a shorthand way of referring to lawsuits in which the legal claims are just a
pretext for silencing or punishing individuals who use their First Amendment rights to speak up on public matters. At EFF, we have supported so-called “anti-SLAPP” laws, like the
TCPA, which allow speakers to quickly dismiss frivolous cases against them and often obtain attorney’s fees.
The original bill, H.B. 2730, would have severely limited the average Texan's ability to use the TCPA and allowed litigious businesses to once again use courts to intimidate their
critics. But a broad coalition of groups spoke out against the bill, including journalism associations, environmental groups, and hundreds of Texas-based EFF supporters who emailed
their state representatives.
We’re grateful for that vocal opposition, which created momentum for big changes to be made to H.B. 2730. Through your activism, some of the biggest problems have been fixed. But
despite those changes, EFF still cannot support the bill, because of two issues that remain.
First, the bill prevents the TCPA from applying when companies sue over alleged trade secret violations, or sue former employees based on non-compete agreements. That leaves big
loopholes for parties to allege trade secret or non-compete violations to silence critics or whistleblowers.
Second, the bill increases the ambiguity over whether TCPA defendants can get their legal fees paid, if they use pro bono or contingent-fee counsel.
We had hoped that lawmakers would address these important concerns before sending the bill to the governor. But that didn’t happen, and we think Texans would be better off if Gov.
Greg Abbott vetoes H.B. 2730.
H.B. 2730: A solution in search of a problem
Since it was passed in 2011, the TCPA has served to protect a wide variety of Texas residents. It has stopped meritless lawsuits, including a case against a Dallas couple who were sued by a pet-sitting company over
a negative Yelp review; a lawsuit against individuals who used Facebook to complain about a cosmetic medical treatment; and two
lawyers’ attempt to unmask anonymous speakers who posted online comments about Texas’ family court system.
All of which raises the question: if the law was working to protect Texans from vexatious litigation aimed at chilling their First Amendment rights, why was H.B. 2730 needed?
It wasn’t. H.B. 2730 is a solution in search of a problem. Much of the bill was pushed by a group called Texans for Lawsuit Reform, a big-business lobby that published a report on the
TCPA in 2018.
In its original form, H.B. 2730 would have severely narrowed what counts as speech about an issue of “public concern” that can be protected by the TCPA, which would have blunted the
law’s application to a number of expressive activities. The original bill also would have allowed plaintiffs to unmask online anonymous speakers using a Texas procedure that allows
for pre-litigation discovery, by making that process no longer subject to TCPA. This, too, was fixed.
Protecting anonymous speech online has been a particular concern for EFF. Last year, we filed an amicus
brief in support of anonymous Texas speakers who wrote posts on Glassdoor, an employer review site. A business had attempted to use Texas’ pre-litigation discovery process to
learn their identities. In that case, the Texas Supreme Court protected the speakers’ identities.
Big improvements, bad exemptions
Strong opposition to H.B. 2730 caused a series of amendments. The amended bill, which passed the Texas State House of Representatives and is now in the Texas State Senate, is a huge
improvement over the original proposal.
The amended bill replaces the narrow definition of “public concern” with a much broader standard. The bill also makes clear the TCPA would protect anonymous speakers subject to the
Texas procedure for pre-litigation discovery. And it specifically protects Internet users who face lawsuits for expressing their opinions online about businesses and services.
To all those Texans who sent emails in support of TCPA: thank you for standing up for free speech. You made a terrible bill much better.
Unfortunately, problems still remain with H.B. 2730. Exemptions for cases related to trade-secrets and non-compete agreements mean that when a company sues a current or former
employee for allegedly disclosing trade secrets, or violating a non-compete agreement, the worker won’t be able to use the TCPA to dismiss the case.
But it’s a mistake to assume that things like trade secret accusations can’t be used to stifle speech. The blood-testing company Theranos, whose founder Elizabeth Holmes has now been
charged with fraud, used trade secrets threats to try to intimidate both journalists and their sources. Keith Raniere, who is currently on trial for sex-trafficking and extortion charges related to his role as founder of a purported self-help group called
NXIVM, used trade secret litigation to sue critics
who said NXIVM, a group in which some members were branded, was a cult.
We’re also concerned that H.B. 2730 will make it harder for ordinary people to get their attorneys’ fees paid. There’s already a split in Texas appeals courts over whether or not the
TCPA can be used to repay legal fees in cases where defendants don’t pay attorney’s fees up front or as the case progresses, but instead rely on pro bono or contingent-fee counsel—the
types of lawyers used by the great majority of middle-class and low-income folks who get wrapped up in legal disputes.
As Public Citizen’s Paul Levy explains in a detailed blog post, small changes in
the wording of H.B. 2730 may actually limit TCPA fee awards to only those defendants who can afford to pay their attorneys’ fees up front. This could lead to a great many speakers
caving to demands that they take down their critical but honest posts, rather than vindicating their First Amendment rights. It’s not a small mistake. One of the key components of any
anti-SLAPP law is to encourage attorneys to defend ordinary people who are targeted by SLAPPs but may not have the money to pay legal bills up front.
We’re pleased that Texas legislators listened to the public and removed the most drastic problems with H.B. 2730. But at the end of the day, the bill still serves to exempt some of
big business’ favorite types of litigation, while making life harder for everyday people who want to exercise their free speech rights. Texas Gov. Greg Abbott should veto the bill.
Reddit Commenter's Fight for Anonymity Is a Win for Free Speech and Fair Use
(Tue, 21 May 2019)
A fight over unmasking an anonymous Reddit commenter has turned into a significant win for online
speech and fair use. A federal court has affirmed the right to share copyrighted material for criticism and commentary, and shot
down arguments that Internet users from outside the United States can’t ever rely on First Amendment protections for anonymous speech.
EFF represents the Reddit commenter, who uses the name “Darkspilver.” A lifelong member of the Jehovah’s Witness community, Darkspilver shared comments and concerns about the
Jehovah’s Witness organization via one of Reddit’s online discussion groups. Darkspilver’s posts included a copy of an advertisement asking for donations that appeared on the back of
a Watch Tower magazine, as well as a chart they edited and reformatted to show the kinds of data that the Jehovah’s Witness organization collects and processes.
Earlier this year, Watch Tower subpoenaed Reddit for information on Darkspilver as part of a potential copyright lawsuit. The Watch Tower Bible and Tract Society, a group that
publishes doctrines for Jehovah’s Witnesses, claimed that Darkspilver’s posts infringed their copyright, and that they needed Darkspilver’s identity to pursue legal action. EFF filed
a motion to quash the subpoena, explaining that Watch Tower’s copyright claims were absurd, and that Darkspilver had deep concerns
that disclosure of their identity would cause them to be disfellowshipped by their community. Accordingly, Watch Tower’s subpoena could not pass the well-established “Doe” test, which
allows a party to use the courts to pierce anonymity only where they can show that their claims are valid and also that the balance of harms favors disclosure. The Doe test is
designed to balance the constitutional right to share and access information anonymously with the right to seek redress for legitimate complaints.
In a hearing earlier this month, Watch Tower argued that they met
the requirements of the Doe test, claiming that their copyright was infringed and also that the Doe test did not apply because Darkspilver is not a U.S. resident. On Friday, May 17,
Magistrate Judge Sallie Kim rejected the latter argument, holding that the First Amendment can apply even if a Doe is not in the
U.S. The court noted that because Darkspilver’s speech was on a U.S. company’s platform and has a U.S. audience, silencing them would have unavoidable domestic ripple effects. As
Judge Kim explained, “The subpoena here was issued by a court in the United States, on behalf of a United States company (Watch Tower) and was directed against another United States
company (Reddit). Moreover, the First Amendment protects the audience as well as the speaker.”
The court also rejected Watch Tower’s claim of infringement regarding the Excel spreadsheet. It held that Watch Tower had a potentially valid claim with respect to the advertisement,
but went on to conclude that Darkspilver’s use was likely lawful under the fair use doctrine. The court carefully
reviewed the fair use factors and concluded that “they tip sharply in Darkspilver’s favor.” We wholeheartedly agree.
But we disagree with the court’s final decision: to order limited disclosure so that Watch Tower might attempt to shore up its copyright claim. While the court agreed that “Watch
Tower has not demonstrated any actual harm or likelihood of future harm”—the fourth fair use factor—it gave undue credence to Watch Tower’s
claim that “the harm it suffered from people infringing on its copyrights was directing others away from its website.” Based on the theory that “[p]erhaps Watch Tower, if
provided the opportunity, could demonstrate that fewer people visited its website after Darkspilver’s posting,” the court decided to allow Watch Tower’s counsel access to
Darkspilver’s identifying information.
Based on the court’s approach, the Doe standard offers weak protections for fair users. Even a far-fetched theory regarding a particular fair use factor, like the one posited here,
might be enough to justify disclosure even if the rest of the fair use analysis clearly suggests the use was lawful. That said, the disclosure is subject to strict limits. Reddit may
disclose it only to Watch Tower’s counsel of record, and that counsel is prohibited from sharing that information with anyone else—including the client—without a separate court order.
In addition, the court explicitly “admonished that any violation of this Order will be sanctioned.”
This case touches on a lot of EFF’s most important issues, and it’s a prime example of how intellectual property, free speech, and privacy can intersect in complicated ways, making it
hard for people to speak out about controversial issues. We are considering next steps. But in the meantime, we are also celebrating a crucial win for the First Amendment and access
to anonymous speech for Internet users everywhere.
In Re DMCA Section 512(h) Subpoena to Reddit, Inc.
Hearing Wednesday: Can Criminal Defendants Review DNA Analysis Software Used to Prosecute Them?
(Mon, 20 May 2019)
California Appeals Court to Hear Arguments on Defense Review of DNA Analysis System
Fresno – On Wednesday, May 22, at 9 am, the Electronic Frontier Foundation (EFF) will argue that criminal defendants have a right to review and evaluate the source code of forensic
DNA analysis software programs used to create evidence against them. The case, California v. Johnson, is on
appeal to a California appeals court.
In Johnson, the defendant was allegedly linked to a series of crimes by a software program called TrueAllele, used to evaluate complex mixtures of DNA samples from multiple
people. As part of his defense, Johnson wants his defense team to examine the source code to see exactly how TrueAllele estimates whether a person’s DNA is likely to have contributed
to a mixture, including whether the code works in practice as it has been described. However, prosecutors and the manufacturers of TrueAllele claim that the source code is a trade
secret and that the commercial interest in secrecy should prevent a defendant from reviewing the source code—even though the defense has offered to follow regular procedure and agree
to a court order not to disclose the code beyond the defense team.
EFF is participating in Johnson as amicus, and
has pointed out that at least two other DNA matching
programs have been found to have serious source code errors that could lead to false convictions. In court Wednesday, EFF Senior Staff Attorney Kit Walsh will argue that Johnson has a
constitutionally protected right to inspect and challenge the evidence used to prosecute him—and that this right extends to the source code of the forensic software.
California v. Johnson
EFF Senior Staff Attorney Kit Walsh
Wednesday, May 22
Fifth District Court of Appeal
2424 Ventura Street
Fresno, California, 93721
TOSsed Out: Highlighting the Effects of Content Rules Online
(Mon, 20 May 2019)
Today we are launching TOSsed Out, a new iteration of EFF’s longstanding work in tracking and documenting the ways that Terms
of Service (TOS) and other speech moderating rules are unevenly and unthinkingly applied to people by online services. As a result of these practices, posts are deleted and accounts
banned, harming those for whom the Internet is an irreplaceable forum to express ideas, connect with others, and find support.
TOSsed Out continues in the vein of Onlinecensorship.org, which EFF launched in 2014 to collect reports from users in an effort
to encourage social media companies to operate with greater transparency and accountability as they regulate speech. TOSsed Out will highlight the myriad ways that all kinds of people
are negatively affected by these rules and their uneven enforcement.
Last week the White House launched a tool for people to report incidents of “censorship” on social media, following the President’s repeated allegations of a bias against
conservatives in how these companies apply their rules. In reality, commercial content moderation practices negatively affect all kinds of people, especially people who already face
marginalization. We’ve seen everything from Black women flagged for sharing their experiences of racism to sex educators whose content is deemed too risqué. TOSsed Out will show that
trying to censor social media at scale ends up removing legal, protected speech that should be allowed on platforms.
TOSsed Out’s debut today is the result of brainstorming, research, design, and writing work that began in late 2018 after we saw an uptick in takedowns resulting from increased public
and government pressure, as well as the rise in automated tools. A diverse group of entries is being published today, including a Twitter account parodying Beto O’Rourke that was deemed
“confusing” or “deceptive,” a gallery focused on creating awareness of the diversity of women’s bodies, a Black Lives Matter-themed concert, and an archive aimed at documenting human
rights abuses. These examples, and the ones added in the future, make clear the need for companies to embrace the Santa Clara
Principles. We helped create the Principles to establish a human rights framework for online speech moderation, require transparency about content removal, and
specify appeals processes to help users get their content back online. We call on companies to make that commitment now, rather than later.
People rely on Internet platforms to share experiences and build communities, and not everyone has good alternatives to speak out or stay in touch when a tech company censors or bans
them. Rules need to be clear, processes need to be transparent, and appeals need to be accessible.
TOSsed Out Entries Launched Today:
Documentation of War Crimes Disappeared by Automated Tools
Rapid Moderation Misses Key Phrase
Facebook’s Ad Policy Prevents a Unitarian Church From Promoting a Black Lives Matter Concert Before the Concert Happens
Beto O’Rourke Parody Account Suspended Until It Adds the Word ‘Fake’ to Its Name
Explicitly Art or Sexually Explicit?
What Tumblr’s Ban on 'Adult Content' Actually Did
Transthetics Gets Blocked
Who Owns a Word?
Proving Their Point: 'White Men Are So Fragile' Lands Teacher in
EFF Project Shows How People Are Unfairly “TOSsed Out” By Platforms’ Absurd Enforcement of Content Rules
(Mon, 20 May 2019)
Users Without Resources to Fight Back Are Most Affected by Unevenly-Enforced Rules
San Francisco—The Electronic Frontier Foundation (EFF) today launched TOSsed Out, a project to highlight the vast spectrum of people
silenced by social media platforms that inconsistently and erroneously apply terms of service (TOS) rules.
TOSsed Out will track and publicize the ways in which TOS and other speech moderation rules are unevenly enforced, with little to no transparency, against a range of people for whom the
Internet is an irreplaceable forum to express ideas, connect with others, and find support.
This includes people on the margins who question authority, criticize the powerful, educate, and call attention to
discrimination. The project is a continuation of work EFF began five years ago when it launched Onlinecensorship.org to collect speech
takedown reports from users.
“Last week the White House launched a tool to report takedowns, following the president’s repeated allegations that conservatives are being censored on social media,” said Jillian
York, EFF Director for International Freedom of Expression. “But in reality, commercial content moderation practices negatively affect all kinds of people with all kinds of political views. Black women get
flagged for posting hate speech when they share experiences of racism. Sex educators’ content is removed because it was deemed too risqué. TOSsed Out will show that trying to censor
social media at scale ends up removing far too much legal, protected speech that should be allowed on platforms.”
EFF conceived TOSsed Out in late 2018 after seeing more takedowns resulting from increased public and government pressure to deal with objectionable content, as well as the rise in
automated tools. While calls for censorship abound, TOSsed Out aims to demonstrate how difficult it is for platforms to get it right. Platform rules—either through automation or human
moderators—unfairly ban many people who don’t deserve it and disproportionately
impact those with insufficient resources to easily move to other mediums to speak out, express their ideas, and build a community.
EFF is launching TOSsed Out with several examples of TOS enforcement gone wrong, and invites visitors to the site to submit more. In one example, a reverend couldn’t initially promote
a Black Lives Matter-themed concert on Facebook, eventually discovering that using the words “Black Lives Matter” required additional review. Other examples include queer sex
education videos being removed and automated filters on Tumblr flagging a law professor’s black and white drawings of design patents as adult content. Political speech is also
impacted; one case highlights the removal of a parody account lampooning presidential candidate Beto O’Rourke.
“The current debates and complaints too often center on people with huge followings getting kicked off of social media because of their political ideologies. This threatens to miss
the bigger problem. TOS enforcement by corporate gatekeepers far more often hits people without the resources and networks to fight back to regain their voice online,” said EFF Policy
Analyst Katharine Trendacosta. “Platforms over-filter in response to pressure to weed out objectionable content, and a broad range of people at the margins are paying the price. With
TOSsed Out, we seek to put pressure on those platforms to take a closer look at who is actually being hurt by their speech moderation rules, instead of just responding to the headline
of the day.”
Senator Wyden Leads on Securing Elections Before 2020
(Fri, 17 May 2019)
Sen. Ron Wyden’s new proposal to
protect the integrity of U.S. elections, the Protecting American Votes and Elections (PAVE) Act of 2019, takes a much needed step forward by
requiring a return to paper ballots.
The bill forcefully addresses a grave threat to American democracy—outdated election technologies used in polling places all over the country that run the risk of recording inaccurate
votes or even allowing outside actors to maliciously interfere with the votes that individuals cast.
The simple solution: paper ballots and audits of paper ballots. EFF along with security experts have long-supported this approach—arguing that the gold standard for security of
our election infrastructure is paper ballots that are backed by risk-limiting audits (an audit that
statistically determines how many votes need to be recounted in order to confirm an election result). As Sen. Kamala Harris, one of the fourteen co-sponsors of the bill, recently said, “Russia can’t hack a piece of paper.”
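To give a sense of how a risk-limiting audit works statistically, here is a minimal, hypothetical sketch of a sequential ballot-polling test in the spirit of the BRAVO method. It covers only a two-candidate contest; real audits handle multiple contests and have formal escalation rules, and the function name and interface here are illustrative, not drawn from the bill.

```python
def bravo_audit(reported_winner_share, sampled_ballots, risk_limit=0.05):
    """Sequential ballot-polling test in the spirit of BRAVO (simplified sketch).

    reported_winner_share: the winner's reported vote share (must exceed 0.5).
    sampled_ballots: iterable of True (ballot for the winner) / False (for the
        loser), drawn uniformly at random from the paper ballots.
    Returns the number of ballots examined when the reported outcome is
    confirmed at the given risk limit, or None if the sample is exhausted
    (in practice the audit would then escalate, up to a full hand count).
    """
    p = reported_winner_share
    t = 1.0                       # likelihood ratio: reported result vs. a tie
    threshold = 1.0 / risk_limit  # confirm once t >= 1/alpha
    for n, for_winner in enumerate(sampled_ballots, start=1):
        # Each winner ballot multiplies the evidence by p/0.5,
        # each loser ballot by (1-p)/0.5.
        t *= (p / 0.5) if for_winner else ((1 - p) / 0.5)
        if t >= threshold:
            return n              # outcome confirmed after n ballots
    return None
```

With a reported 70% winner share and a 5% risk limit, a run of winner-only sampled ballots confirms the outcome after nine draws; the closer the race, the more ballots the audit must inspect before the risk limit is met.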
The last two decades have shown that the touchscreen and other machines used at polling places to cast votes are not only
susceptible to tampering, but also that the outdated software and poorly configured settings lead to countless problems
like inaccurately recording votes or large drop-offs on down-ballot races.
Sen. Wyden’s bill sets out necessary steps to make sure that state and local governments can respond to the election security concerns raised by experts:
It requires paper ballots and risk-limiting audits in federal elections.
It allocates $500 million to states to buy secure machines that can scan paper ballots.
It allocates an additional $250 million to states to buy ballot-marking devices for voters with disabilities or who face language barriers.
It bans voting machines from connecting to the Internet.
It gives the Department of Homeland Security the authority to set mandatory national minimum cybersecurity standards for voting machines, voter registration databases, electronic
poll books, and election reporting websites.
It empowers ordinary voters to enforce these critical safeguards with a private right of action.
The PAVE Act is supported by a large coalition of senators and a companion bill has been introduced in the House of Representatives. The foreign interference in the 2016 election
stands to be repeated in 2020 if Congress does not act now to address the numerous
concerns with the integrity of our voting system repeatedly identified by the information security community.
EFF Files Freedom of Information Act (FOIA) Request for Submissions to the White House’s Platform Moderation Tool
(Fri, 17 May 2019)
When social media platforms enforce their content moderation rules unfairly, it affects
everyone’s ability to speak out online. Unfair and inconsistent online censorship magnifies existing power imbalances,
giving people who already have the least power in society fewer places where they are allowed a voice online.
President Donald Trump and some members of Congress have complained on Twitter that the
people most silenced online are those who share the President’s political views. So this week the White House launched a website inviting people to report
examples of being banned online because of political bias.
The website leads users through a series of forms seeking their names, citizenship
status, email address, social media handles, and examples of the censorship they encountered. People are required to accept a user
agreement that permits the U.S. government to use the information in unspecified ways and edit the submissions, raising a natural concern that any political operation will
selectively disclose the results.
For this reason, we have sent a FOIA request to the Executive Office of the President, Office
of Science and Technology, for all submissions received. And because we are concerned that the White House's website asks for a lot of personal information without sufficient privacy
protections, we have asked that first name, last name, email address, phone number and citizenship status be redacted.
It’s troubling to see people being asked whether they are a U.S. citizen or a permanent resident in light of the administration’s hard-line immigration policies. It raises legitimate
questions about how that and other required personal information may be used and who the government really wants to hear from.
There’s no question that social media platforms are failing at speech moderation—that’s why EFF has been monitoring their practices for years and working with other civil society
organizations to establish guidelines to encourage fairness and transparency. Since 2014, EFF has been collecting stories of people censored by social media at Onlinecensorship.org. The reports we've received over the years indicate that all kinds of groups and individuals experience censorship on
platforms, and that marginalized groups around the world with fewer traditional outlets and resources to speak out are disproportionately affected by flawed speech moderation rules.
While we agree that mapping online censorship is important, we should remember that surveys like these will, at best, give an incomplete picture of social media companies' practices.
And let’s be clear: the answer to bad content moderation isn’t to empower the government to enforce moderation practices.
Rather, social media platform owners must commit to real transparency about what speech they are removing, under what rules, and at whose behest. They should adopt moderation
frameworks, like the Santa Clara Principles, that are consistent with human rights, with clear take down rules, fair and transparent
removal processes, and mechanisms for users to appeal take down decisions.
California Now Classifies Immigration Enforcement as “Misuse” of Statewide Law Enforcement Network
(Fri, 17 May 2019)
It has taken more than a year, but the California Attorney General’s Office has implemented steps to protect immigrants from U.S. Immigration and Customs Enforcement (ICE) and
other agencies that abuse the state’s public safety network, the California Law Enforcement Telecommunications System (CLETS).
Following calls for reform from EFF and immigrant rights advocacy groups, the Attorney General issued new protocols that say if an agency accesses anything other than criminal
history records on CLETS for immigration enforcement purposes, it may be treated as "system misuse," and the agency or individual could be subject to sanctions.
In October 2017, in response to threats of a mass deportation campaign by the Trump administration, the California legislature passed S.B. 54, also known as the California Values Act. More colloquially
referred to as the “sanctuary state” law, S.B. 54 restricts state and local law enforcement from using resources to assist with immigration enforcement. Among these measures, the
legislature prohibited agencies from providing personal information (such as a home or work address) for the purposes of enforcing immigration laws. The legislation also requires the
Attorney General to “publish guidance, audit criteria, and training recommendations” to ensure to the “fullest extent practicable” that state and local law enforcement databases are
not used to enforce immigration laws.
For years, EFF has been investigating abuse of CLETS, a sprawling computer network that connects law enforcement agencies across the state to a number of state databases,
including driver license and vehicle registration information maintained by the California Department of Motor Vehicles. Often these cases of misuse have involved police accessing the
system to spy on former or potential romantic partners, but police have also used the system to snoop on critical
officials or to improperly check
whether students live within a particular school district. CLETS users are not just local cops: a wide range of federal agencies, including many field offices within the U.S.
Department of Homeland Security, have access to CLETS.
Following the passage of S.B. 54, EFF told the California Attorney General’s CLETS Advisory
Committee that accessing CLETS (aside from criminal history information, which is exempt due to federal law) for immigration enforcement should be formally classified
as misuse of the system. Such misuse can result in sanctions ranging from a letter of censure, to cutting off access to CLETS, to criminal prosecution. EFF repeated this call in
a series of recommendations sent to the
Attorney General’s office, co-authored by the ACLU, the National Immigration Law Center, the Urban Peace Institute, Advancing Justice – Asian Law Caucus, and more than a dozen other organizations.
The Attorney General’s office listened. According to the latest set of “Policies,
Practices and Procedures,” which state law requires
agencies to follow if they access CLETS:
[F]ederal, state or local law enforcement agencies shall not use any non-criminal history information contained within these databases for immigration enforcement purposes.
“Immigration enforcement” includes any and all efforts to investigate, enforce, or assist in the investigation or enforcement of any federal civil immigration law, and also
includes any and all efforts to investigate, enforce, or assist in the investigation or enforcement of any federal criminal immigration law that penalizes a person’s presence in,
entry, or reentry to, or employment in, the United States.
The new policies for system misuse also clearly define accessing non-criminal data for immigration enforcement as “prohibited/unauthorized” use, which can result in either the
suspension or removal of the agency or individual’s access to CLETS. State and local law enforcement agencies are advised to update their own governance policies and alert users that
CLETS, or other databases associated with CLETS, should not be used for immigration enforcement.
While this change is a major step forward, it does not settle all our concerns. The Attorney General leaves it to the agencies to investigate their own cases of misuse, as long
as they report the results of those investigations back to the Attorney General. We are skeptical that ICE or affiliated agencies will investigate themselves for using CLETS to
support their deportation efforts. We urge the Attorney General to enact a more rigorous oversight process for federal agencies.
CLETS is one significant part of a sprawling web of state and local databases accessed by federal agencies. To further protect immigrants from ICE’s abuses, we urge the
California state legislature to pass legislation prohibiting the access of all state and local databases for immigration enforcement.
Supreme Court Extends Antitrust Protections to App Store Customers
(Fri, 17 May 2019)
On Monday, the U.S. Supreme Court ruled that
consumers who buy apps through the Apple app store are direct purchasers and may seek antitrust relief under well-settled law. On the one hand, the decision carries major implications
not only for Apple, but also for other companies that host app stores, and for antitrust law in other contexts. On the other hand, it represents not a new jurisprudential doctrine so
much as the application of well-established law to a novel
factual context presented by app stores.
Since 1977, courts have deemed suits by indirect purchasers to be outside the bounds of antitrust law. In Illinois Brick Co. v. Illinois, the Supreme Court heard claims from the state of Illinois, which alleged harms
from a price-fixing conspiracy among brick manufacturers who sold to masonry contractors with whom, in turn, the state government contracted for new construction projects. The Court
held that only the masonry contractors enjoyed a right to challenge the alleged price-fixing conspiracy, whereas an indirect purchaser of downstream products (in that case, the state
government plaintiff) did not have standing to bring a lawsuit.
The Illinois Brick rule has often been used to shut the courthouse doors to everyday consumers, preventing them from challenging anti-competitive conduct. Often, the
intermediate purchasers, like the contractors in Illinois Brick, are less motivated to bring a lawsuit, and the price-fixing or other illegal conduct goes unchallenged. Many
state courts and legislatures have ruled that the Illinois Brick doctrine doesn’t apply to their state’s own antitrust laws.
Monday’s decision affirmed the Illinois Brick framework under federal law, while announcing that consumers who purchase apps from app stores are in fact direct purchasers for
the purposes of antitrust analysis, not indirect purchasers as Apple had claimed. As a result, the suit against Apple can continue.
Software developers who wish to sell apps in the Apple App Store can set their own prices, but Apple charges a 30 percent fee on all sales. The plaintiffs argued that Apple’s fees
make apps more expensive than they would be in a competitive—rather than exclusive—marketplace. The justices didn’t rule on whether Apple’s conduct violated the antitrust laws, but
they ruled that the plaintiffs are direct purchasers, and that their suit should proceed in the district court.
The Court’s 5-4 opinion was written by new Justice Brett Kavanaugh, who joined the Court’s more liberal justices rather than the conservative colleagues with whom he usually votes.
If the courts go on to rule that Apple’s 30% commission on app sales violates the antitrust laws, it could open the door for more third-party software marketplaces for mobile
applications. It could also lead to more choices of app markets in the Android world. Many
app developers (including EFF) and users have chafed under
the seemingly arbitrary restrictions imposed by app stores, so this case could have wide-ranging effects.
We are glad to see the Supreme Court empower antitrust enforcement, especially given the general trend over the past generation towards limiting antitrust law. While the Apple case
remains unresolved and will continue, Monday’s decision ensures that iOS users will enjoy access to the courts, and that companies crafting platforms can’t abuse monopoly power to
extract higher returns while enjoying immunity from suits by consumers.
What You Need to Know About the Latest WhatsApp Vulnerability
(Thu, 16 May 2019)
If you are one of WhatsApp’s billion-plus users, you may have read that on Monday the company announced that it had found a vulnerability. This vulnerability allowed an attacker to remotely
upload malicious code onto a phone by sending packets of data that look like phone calls from a number not in your contacts list. These repeated calls then caused WhatsApp to crash.
This is a particularly scary vulnerability because the attack does not require the user to pick up the phone, click a link, enter their login credentials, or interact in any way.
Fortunately, the company fixed the vulnerability on the server side over the weekend and rolled out a patch for the client side on Monday.
What does that mean for you? First and foremost, it means today is a good day to make sure that you are running the latest version of WhatsApp. Until you update your
software, your phone may still be vulnerable to this exploit.
Are you likely to have been targeted by this exploit? Facebook (which owns WhatsApp) has not indicated that they know how many people have been targeted by this
vulnerability, but they have attributed its use to an Israeli security company, NSO Group, which has long
claimed to be able to install its software by sending a single text message. The exploit market pays top-dollar for “zero-click install” vulnerabilities in the latest
versions of popular applications. It is not so remarkable that such capabilities exist, but it is remarkable that WhatsApp’s security team found and patched the vulnerability so quickly.
NSO Group is known to sell its software to governments such as Mexico
and Saudi Arabia, where
these capabilities have been used to spy on human rights activists, scientists, and journalists, including Jamal Khashoggi, who was allegedly tracked using NSO Group’s Pegasus spyware in the weeks leading up to
his murder by agents of the Saudi government.
What can you do if you have antagonized a government known to use NSO Group’s spyware and your WhatsApp is getting strange calls and crashing? You can contact Eva
Galperin at EFF’s Threat Lab at firstname.lastname@example.org.
As for everyone else, stay calm, update your software, and keep using chat apps like WhatsApp that offer end-to-end encryption. Advanced malware and vulnerabilities like this
may grab headlines, but for most people most of the time end-to-end encryption is still one of the most effective ways to protect the contents of your messages.
California: Speak Out for the Right to Take Companies That Violate Your Privacy to Court
(Thu, 16 May 2019)
If a company disclosed information about your cable subscription without your permission, you already have the legal right
to take them to court. Why should it be any different if a company ignores your requests about how to treat some of your most private information—where you go, where you live, or who you know?
94 percent of Californians agree they should be able to take companies that
violate their privacy to court. S.B. 561, authored by Sen. Hannah-Beth Jackson and sponsored by the Attorney General, would allow individuals to
stand up to the big companies that abuse their information and invade their privacy.
California: Tell The Senate to Empower You To Stand Up For Your Privacy
This bill is the only one in the California legislature today to strengthen
enforcement of the California Consumer Privacy Act (CCPA), an important privacy law passed last year and slated to go into effect in January.
The CCPA established important rights, but lacks the bite it needs to back up its bark. Empowering consumers to be able to sue companies directly, also known as a private right of
action, is one of EFF’s highest priorities in any data privacy legislation.
A private right of action means that every person can act as their own privacy enforcer. Many privacy statutes allow people to sue companies directly, including federal laws on
wiretaps, stored electronic communications, video rentals, driver’s licenses, and, yes, cable subscriptions.
The CCPA gives Californians a limited right to do this—just in cases of data breach. But failing to protect against data breaches is not the only way that companies violate our
privacy and abuse our trust.
S.B. 561 would give consumers this powerful tool in all cases where companies violate their CCPA rights. If passed, this law would allow consumers to take companies to court if:
companies sell their data after being told not to
companies do not delete information after being asked to
companies refuse to comply with data portability requests
companies discriminate against them for exercising their privacy rights
companies sell the information of those younger than 13 without first obtaining explicit permission to do so
Private enforcement is a necessary right for consumers to have as a check on the behavior of giant companies that vacuum up our personal information and ignore our wishes.
Government agencies alone cannot sufficiently protect individual privacy. Agencies may fail to enforce privacy laws for any number of reasons, including competing priorities, regulatory capture, or, as is the case in California, a lack of resources.
Stacey Schesser, Supervising Deputy Attorney General on Consumer Protection, said in an April hearing that her office—even after an expansion—would only be able to prosecute three cases a year to protect the rights of 40 million Californians.
“The reason that the PRA [private right of action] is so important here is because it provides a critical adjunct to the work of the Attorney General. It would work in parallel in
ensuring that the law is enforced,” Schesser said. “To provide those rights and then say that you can’t enforce them if companies don’t comply, that’s about fundamental fairness as well.”
It is not enough for government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary for these laws to have bite,
to ensure companies do not ignore them.
Tell your state Senator to publicly support S.B. 561 and empower you to enforce your own privacy rights.
The Christchurch Call: The Good, the Not-So-Good, and the Ugly
(Thu, 16 May 2019)
In the wake of the mass shootings at two mosques in Christchurch, New Zealand, that killed fifty-one people and injured more than forty others, the New Zealand government has
released a plan to combat terrorist and violent content online, dubbed the Christchurch Call. The Call has been endorsed by more than a dozen countries, as
well as eight major tech companies.
The massacre, committed on
March 15 by an Australian living in New Zealand connected with white supremacist groups in various countries, was intentionally live-streamed and disseminated widely on social media.
Although most companies acted quickly to remove the video, many New Zealanders—and others around the world—saw it by accident on their feeds.
Just ahead of the Call's release, the New Zealand government hosted a civil society meeting in Paris. The meeting included not only digital rights and civil liberties
organizations (including EFF), but also those working on countering violent extremism (CVE) and against white supremacy. In the days prior to the meeting, members of civil society
from dozens of countries worked together to create a document outlining recommendations, concerns, and points for discussion for the meeting (see PDF at bottom).
As is too often the case, civil society was invited late to the conversation, which rather unfortunately took place during Ramadan. That said, New Zealand Prime Minister Jacinda
Ardern attended the meeting personally and engaged directly with civil society members for several hours to understand our concerns about the Call—a rather unprecedented move, in our experience.
The concerns raised by civil society were as diverse as the groups represented, but there was general agreement that content takedowns are not the answer to the problem at hand,
and that governments should be focusing on the root causes of extremism. PM Ardern specifically acknowledged that in times of crisis, governments want to act immediately and look to
existing levers—which, as we’ve noted many times over the years, are often censorship and surveillance.
We appreciate that recognition. Unfortunately, however, the Christchurch Call released the following day is a mixed bag that contains important ideas but also endorses those same levers.
The first point of the Christchurch Call, addressing government commitments, is a refreshing departure from the usual. It calls on governments to commit to “strengthening
the resilience and inclusiveness of our societies” through education, media literacy, and fighting inequality.
We were also happy to see a call for companies to provide greater transparency regarding their community standards or terms of service. Specifically, companies are called
upon to outline and publish the consequences of sharing terrorist and violent extremist content; describe policies for detecting and removing such content; and provide an
efficient complaints and appeals process. This ask is consistent with the Santa Clara Principles and a
vital part of protecting rights in the context of content moderation.
The Call asks governments to “consider appropriate action” to prevent the use of online services to disseminate terrorist content through loosely defined practices such as
“capacity-building activities” aimed at small online service providers, the development of “industry standards or voluntary frameworks,” and “regulatory or policy measures
consistent with a free, open and secure internet and international human rights law.” While we’re glad to see the inclusion of human rights law and concern for keeping the
internet free, open and secure, industry standards and voluntary frameworks—such as the existing hash database utilized by several major
companies—have all too often resulted in opaque measures that undermine freedom of expression.
While the government of New Zealand acknowledged to civil society that their efforts are aimed at social media platforms, we’re dismayed that the Call itself doesn’t
distinguish between such platforms and core internet infrastructure such as internet service providers (ISPs) and content delivery networks (CDNs). Given that, in the wake of
attacks, New Zealand’s ISPs acted extrajudicially to block access to sites like 8Chan, this is clearly a relevant concern.
The Call asks companies to take “transparent, specific measures” to prevent the upload of terrorist and violent extremist content and prevent its dissemination “in a manner
consistent with human rights and fundamental freedoms.” But as numerous civil society organizations pointed out in the May 14 meeting, upload filters are inherently inconsistent
with fundamental freedoms. Moreover, driving content underground may do little to prevent attacks and can even impede efforts to do so by making the perpetrators more
difficult to identify.
We also have grave concerns about how “terrorism” and “violent extremism” are defined, and by whom. Companies regularly use blunt measures to determine what
constitutes terrorism, while a variety of governments—including Call signatories Jordan and Spain—have used anti-terror measures to silence dissent.
New Zealand has expressed interest in continuing the dialogue with civil society, and has acknowledged that many rights organizations lack the resources to engage at the same
level as industry groups. So here's our call: New Zealand must take its new role as a leader in this space seriously and ensure that civil society has an early seat at the table in all
future platform censorship conversations. Online or offline, “Nothing about us without us.”
UPDATED May 16, 2019: This post was edited to correct the nationality of the shooter in the Christchurch massacre.
Send a Message to Congress: The Last Thing We Need is More Bad Patents
(Wed, 15 May 2019)
Two Senators are working on a bill that will make it much easier to get, and
threaten lawsuits over, worthless patents. That will make small businesses even more vulnerable to patent trolls, and raise prices for consumers. We need to speak up now and tell
Congress this is the wrong direction for the U.S. patent system.
Tell Congress we don't need more bad patents
There’s no published bill yet, but Senators Thom Tillis (R-N.C.) and Chris Coons (D-Del.) have published a “framework” outlining how they intend to undermine Section 101 of the U.S.
patent law. That’s the section of law that forbids patents on abstract ideas, laws of nature, and things that occur in nature.
Section 101’s longstanding requirement should be uncontroversial—applicants have to claim a “new and useful” invention to get a patent—a requirement that, remarkably, Tillis and Coons
say they will dismantle.
In recent years, important Supreme Court rulings like Alice v. CLS Bank have ensured that courts give full effect to Section 101. That’s given small businesses a fighting
chance against meritless patents, since they can be evaluated—and thrown out—at early stages of a lawsuit.
Check out the businesses we profile in our “Saved by Alice” page. Patent trolls sued a bike shop over message notifications; a photographer for running online contests; and a startup that put restaurant menus online. It’s ridiculous that patents were granted on such basic practices—and it would be even
more outrageous if those businesses had to hire experts, undergo expensive discovery, and endure a jury trial before they get a serious evaluation of such “inventions.”
Listen to our interview with Justus Decher. Decher’s health company
was threatened by a company called MyHealth over a patent on “remote patient monitoring.” MyHealth never built a product, but they demanded $25,000 from Decher—even before his
business had any profits.
Why is the Tillis-Coons proposal moving forward? Pharmaceutical and biotech companies are working together with lobbyists for patent lawyers and companies that have aggressive
licensing practices. They’re pushing a false narrative about the need to resolve “uncertainty” in the patent law. But the only uncertainty produced by a strong Section 101 is in the
profit margins of patent trolls and the lawyers filing their meritless lawsuits.
Tell Congress—don’t feed the patent trolls. Say no to the Tillis-Coons patent proposal.
TELL CONGRESS WE DON'T NEED MORE BAD PATENTS
Why Has San Francisco Allowed Comcast and AT&T to Dictate Its Broadband Future (Or Lack Thereof)?
(Wed, 15 May 2019)
American cities across the country face the same problem: major private Internet providers, facing little in the way of competition, refuse to invest in upgrading their networks to
serve all residents. But few cities have gone to the trouble of analyzing the problem and devising a solution, only to still do nothing, the way San Francisco has.
For over a year, the city has sat on a fully vetted and ready to implement
strategy to bring affordable high-speed broadband competition to all of its residents. According to the city’s own analysis, private providers will never address some of
the most serious problems with the community’s infrastructure—such as the fact that 100,000 residents lack access to broadband, and 50,000 residents only have access to dial-up
speeds. Of the city’s public school students, 15 percent do not have access to the Internet, with that number rising to 30 percent for communities of color.
No Evidence That Private Competition Will Come to the Entire City
Within the city’s 195-page report on its options, the most important fact reported by the local government was that major private providers such as Comcast and AT&T (both of which strongly opposed the city’s effort) had absolutely no
plans to compete with each other anywhere in the city. While certain parts of San Francisco’s market enjoy competition—usually driven by the $40 gigabit fiber deployed by Sonic, a smaller regional ISP—many parts of the city do not.
The remedy to this problem is within reach: connecting every home and business with open access fiber. When the city invests in this infrastructure, it enables more private companies
to compete in the market for Internet service. The model is proven to be successful internationally and even smaller, less well-financed
communities in the United States such as Layton, Utah, are deploying open access fiber that delivers
gigabit fiber services (with multiple ISPs competing) at an average of $50 a month. The city even has three qualified bidders ready to build the
infrastructure if the city gives them the green light, and most importantly, the financial investment.
The $1.8 billion Universal High-Speed Fiber Network Will Benefit Residents for Decades
While the price tag may seem high, this is an infrastructure investment that will be used for generations, and will be able to affordably scale upwards in capacity as technology
improves. In other words, the fiber won’t need replacing, only the electronics on either end. That's much less expensive than upgrades that would require digging up streets. Many
private ISPs have to answer to investors who expect a shorter turnaround in profits, which is why projects like Verizon FiOS have been stalled for years. There is no feasible way
to build a fiber infrastructure and expect a quick turnaround in profit. Rather, the right way to look at what is essentially 21st-century broadband infrastructure is to think long
term over a multi-decade window.
If you live in San Francisco and are paying a lot already for broadband service (if you can get it at all), look at how much you are paying over just one year, and then over multiple
years. When the total of paying for an uncompetitive service starts adding up to several thousands of dollars, then the city’s estimated $2000 per household for the entirety of the
project starts to not look so bad. In fact, homes have an expected three percent increase in home value
when connected to fiber, which averages out to be $5437 per household. Furthermore, it would not be any less expensive for a private provider to undertake this fiber
project. And as noted by the city's analysis, private-sector investment can't be relied on: the private sector, focused on short-term returns, has made clear it does not intend to invest that kind of money.
Therefore, we must continue to push our local leaders to be bold, forward-thinking, and to stand up to the private incumbents, who are more than happy with the status quo. Otherwise,
most residents within the city of San Francisco will have to accept the fact that they will miss out on broadband access, or be forced into subscribing to a high-speed monopoly.
San Francisco Takes a Historic Step Forward in the Fight for Privacy
(Wed, 15 May 2019)
The San Francisco Board of Supervisors voted 8-to-1 today to make San Francisco the first major city in the United States to ban government use of face surveillance technology.
This historic measure applies to all city departments. The Stop Secret
Surveillance Ordinance also takes an important step toward ensuring a more informed and democratic process before the San Francisco Police Department and other city agencies
may acquire other kinds of surveillance technologies.
Face recognition technology is a particularly pernicious form of surveillance, given its disparate propensity to misidentify women and people of color. However, even if those failures
were addressed, we are at a precipice where this technology could soon be used to track people in real-time. This would place entire communities of law-abiding residents into a
perpetual line-up, as they attend worship, spend time with romantic partners, attend protests, or simply go about their daily lives.
It is encouraging to see San Francisco take this proactive step in anticipating the surveillance problems on the horizon and heading them off in advance. This is far easier than
trying to put the proverbial genie back in the bottle after it causes harm.
Today’s 8-1 vote appears veto-proof, especially because two sponsors of the ordinance were not in attendance. However, the fight for the privacy and civil rights of the people of San
Francisco is not over. EFF will continue to work with our members, coalition partners,
lawmakers, and neighbors, to urge Mayor Breed to sign into law the Stop Secret Surveillance Ordinance. Please join us in this fight by contacting Mayor Breed and expressing your support for the Stop Secret Surveillance Ordinance.