Congress Must Protect Americans’ Location Privacy
(Sat, 18 Feb 2017)
Your smartphone, navigation system, fitness device, and more know where you are most of the time. Law enforcement should need a warrant to access the information these technologies collect.
Lawmakers have a chance to create warrant requirements for the sensitive location information collected by your devices.
Sen. Ron Wyden and Reps. Jason Chaffetz and John Conyers reintroduced the Geolocation Privacy and
Surveillance Act (H.R. 1062) earlier this week. Congress should quickly move this bill and protect
consumers’ privacy from warrantless searches.
Currently, law enforcement needs to obtain a warrant before it can use its own GPS device to track individuals—like by attaching a GPS unit to a suspect’s car—under the Supreme
Court’s 2012 ruling in U.S. v. Jones. But that kind of court oversight is missing when law enforcement goes to a third-party company to get location information or when law
enforcement uses devices that mimic cellphone towers and siphon off users’ location information.
Tell Congress to put in place basic and necessary privacy protections for the sensitive location
information collected by the devices in your pockets, in your car, and on your wrist.
Federal Circuit Sticks By Its Bad Law on Personal Jurisdiction In Patent Cases
(Fri, 17 Feb 2017)
Xilinx will get to fight patent troll in home court, but many troll targets will still be dragged to distant and inconvenient forums.
If a patent troll threatens your company, can you go to your nearest federal court and ask for a ruling that the patent is invalid or that you aren’t infringing it? According to the
Federal Circuit (the court that hears all patent appeals), the answer is usually no. The court has a special rule for patent owners: demand letters alone cannot create
jurisdiction. EFF, together with Public Knowledge, recently filed a friend-of-the-court brief asking for this rule to be overturned. In a decision this week, the Federal Circuit reached the right result for the accused infringer, but
left its bad law largely in place.
In Xilinx v. Papst Licensing, the German patent troll Papst accused Xilinx of infringing a patent relating to memory tests in electronics. Papst sent Xilinx a couple of letters
and visited the company at its offices in California to demand payment of a license fee. Xilinx then filed a lawsuit in the Northern District of California asking the court to rule
that the patent was invalid and it did not infringe. The district court dismissed the case. On appeal, the Federal Circuit was asked to determine whether the California district court
could exercise personal jurisdiction over Papst.
At EFF we’ve long complained about unfair rules in patent cases that give patent owners almost complete control over where disputes are litigated. The Federal Circuit has developed
two strands of jurisprudence that, in tandem, have led to this result. First, in a 1990 case called VE Holding, the Federal Circuit held that companies that sell products nationwide can be sued for patent infringement in any federal court in
the country. (The Supreme Court is set to decide whether this
holding should be overruled.)
Second, in a case called Red Wing Shoe, the Federal Circuit ruled that companies who receive
patent demand letters from trolls can’t sue them in their home district to get a determination that the patent is invalid or not infringed. As others have noted, the Federal Circuit has “gone to great lengths to deny
jurisdiction over patentees sending demand letters from afar.”
As a practical matter, VE Holding and Red Wing Shoe operate as a one-two punch that gives patent owners almost complete control over where patent disputes can be
litigated. This means that a productive company threatened by a troll may have no choice but to litigate in a distant and expensive forum, such as the Eastern District of Texas, where
local rules systematically favor patent owners over patent defendants.
In our amicus brief, we argued that the Federal Circuit should hear the case en banc and overrule Red Wing Shoe. But
the court did not go so far. Instead, it relied on the physical visit of Papst employees to Xilinx’s offices to justify jurisdiction in the forum. This allowed the court to
distinguish other cases where it held that demand letters are never enough to establish jurisdiction. So while this is a good result for Xilinx, it won’t help most targets of patent trolls.
But the rule in Red Wing Shoe is wrong and should be overruled. Indeed, it is part of a long pattern of Federal Circuit decisions that create rigid rules favoring patent owners. While we suspect it
would not survive review by the Supreme Court, that question will have to wait for another case.
A Step Forward in Microsoft’s Legal Battle for Transparency about Government Data Requests
(Fri, 17 Feb 2017)
Last week, a federal court in Seattle issued a ruling in Microsoft’s ongoing challenge to the law that
lets courts impose indefinite gag orders on Internet companies when they receive requests for information about their customers. Judge James Robart—he of recent Washington v. Trump fame—allowed Microsoft’s claim that the gags violate the First Amendment to proceed, denying the
government’s motion to dismiss that claim. It’s an important ruling, with implications for a range of government secrecy provisions, including national security letters (NSLs). Unfortunately, the court also dismissed Microsoft’s Fourth Amendment claim on behalf of its users.
When tech companies can’t tell users that the government is knocking
Before looking at the substance of Judge Robart’s ruling, it’s worth remembering why EFF thinks Microsoft’s lawsuit is important. In fact, we’d go so far as to say that challenging
gag orders imposed alongside government data requests is one of the key digital civil liberties issues of our time. That’s true for at least two reasons:
First, there has been a sea change in where we keep our sensitive personal information— “papers and effects” protected by the Fourth Amendment and records of First
Amendment-protected speech and associations. Just twenty or thirty years ago, most or all of this information would have been found in people’s homes. In order to get at your
information—whether by breaking down your door or serving you with a grand jury subpoena—the government usually couldn’t help tipping you off. These days, private information is more
likely to be stored in Microsoft Office 365 or with another third-party provider than a home office. In that case, you won’t know the government is interested in your information
unless you hear from the government or the third-party provider. But the government isn’t always required to notify the targets of data requests, and it routinely gags providers from
notifying their users. The long-standing default—notice that the government is after your information—has in just a short time effectively flipped to no notice.
Second, gags distort the public’s understanding of government surveillance and correspondingly place far more responsibility on providers. The statutory provision at issue in
Microsoft’s lawsuit, 18 U.S.C. § 2705, applies in criminal cases. This statute allows the government to gag service providers if a court finds that informing the user will
result in one of several enumerated harms—death or injury to a particular person, destruction of evidence, witness tampering, and so on. But as Microsoft’s complaint explains, Section 2705 gag orders accompany at least half of the data demands the company
receives, and courts often grant them without explicit findings of potential harm. In many cases, they also do so without setting a date for the gag to dissolve. The result is a de
facto permanent gag order. That’s an abuse of what is intended as a limited power, granted to the government to protect specific, sensitive investigations.
Unless a provider takes extraordinary steps—like filing a facial constitutional challenge as Microsoft did—it’s likely that the public won’t be aware of this abuse. This intensifies
the role that providers play as trustees of our data. That’s why EFF tracks both transparency reports and user notification as part of our annual Who Has Your Back report. We don’t just rely on companies to keep our data secure; we also need them to stand
up to the government on our behalf. It’s a point often missed by those who dismiss companies’ growing commitments to privacy as empty marketing. If not Microsoft, Apple, Google,
Facebook and all the others, then who?
The ruling: first-party prior restraints and third-party Fourth Amendment rights
Despite the importance of these issues, the government argued that Microsoft’s challenge should be bounced out of court at the preliminary motion to dismiss stage. On the First
Amendment claim, at least, the court disagreed. Microsoft’s basic argument will be familiar if you’ve followed EFF’s NSL cases: when the government prevents you from speaking in advance, it’s known as a prior restraint. Under
the First Amendment, prior restraints must meet “exacting scrutiny” and are rarely constitutional. Here, the court found that Microsoft had more than adequately alleged that Section
2705 does not meet this exacting scrutiny because it does not require courts to time-limit gags to situations where they are actually necessary based on the facts of the case.
This is nearly identical to one of the issues in EFF’s NSL cases—NSLs similarly allow the FBI to gag service providers indefinitely.1 However, NSLs are even more egregious in several ways: the FBI can issue
them without any involvement by a court at all, and it need not even claim that one of the specified harms will actually result without an NSL gag. We hope the Ninth Circuit will
consider our NSL clients’ arguments about their First Amendment rights as thoroughly as Judge Robart did here.
Finally, the court reached an unsatisfying conclusion about Microsoft’s attempt to raise its users’ Fourth Amendment rights. As EFF explained in our amicus brief earlier in the case, notice of a search is a core part of the Fourth Amendment’s
protections. When Microsoft is precluded from notifying users, it is the only party with knowledge of the search and therefore should be able to raise its users’ Fourth Amendment
rights. Nevertheless, the court found that Fourth Amendment rights are inherently personal and cannot be raised by a third party, leading it to dismiss Microsoft’s claim. We think that’s wrong on the law, and we hope Microsoft will consider seeking leave to
appeal. Meanwhile, we’ll watch as the case progresses on Microsoft’s First Amendment claim.
1. Judge Robart’s order wrongly states that NSLs are time-limited.
Microsoft v. Department of Justice
Event This Friday: EFF Talks Constitutional Law at the Internet Archive
(Fri, 17 Feb 2017)
This Friday, EFF lawyers and other experts from the field will lead a conversation about constitutional law at the Internet Archive. The event is open to the public, totally free,
and will stream live on Facebook for anybody who can't make it in person.
Come learn about censorship, surveillance, digital search and seizure, and more. Plus, if you can be there in person, there will be a potluck emphasizing apple pie.
Donations are welcome but not required. Details below.
When: Friday, February 17th 5:30pm-9pm (program 6-8)
Where: Internet Archive
300 Funston Ave. SF, CA 94118
Potluck-style: Please bring apple pie or other food
Reserve your free ticket here
Streamed via Facebook Live
Speakers:
Cindy Cohn – Executive Director of EFF
Corynne McSherry – Legal Director of EFF
Stephanie Lacambra – Staff Attorney at EFF
Victoria Baranetsky – First Look Media Technology Legal Fellow for the Reporters Committee for Freedom of the Press
Geoff King – Lecturer at UC Berkeley, and Non-Residential Fellow at Stanford Center for Internet and Society
Bill Fernholz – Lecturer In Residence at Berkeley Law
Civil Society Condemns Malware Attacks Against Mexican Public Health Advocates
(Thu, 16 Feb 2017)
A group of Mexican nutrition policy makers and public health workers have been the latest targets of government
malware attacks. According to the New York Times, several public
health advocates were targeted by spyware developed by NSO Group, a surveillance software company that sells its products exclusively to
governments. The targets were all vocal proponents of Mexico’s 2014 soda tax—a regulation that
the soda industry saw as a threat to its commercial interests in Mexico.
It's no secret that Mexico has a deeply rooted culture of secrecy surrounding surveillance. The Mexican digital rights NGO Red en Defensa de los Derechos Digitales has been raising awareness about the lack
of controls on communications surveillance in the country and advocating for surveillance laws that comply with human rights standards. Today, EFF joins more than 40 organizations in expressing our concern about the use of
highly intrusive software against these public health advocates and demanding that the Mexican government identify and punish those responsible for conducting illegal surveillance in Mexico.
Here is the text of the letter:
On February 11, an investigation by the Citizen Lab at the University of Toronto’s Munk School of Global Affairs and the New York Times revealed evidence that Dr. Simon Barquera,
a researcher at Mexico’s National Institute of Public Health; Alejandro Calvillo, Director of El Poder del Consumidor; and Luis Manuel Encarnación, Coordinator of the ContraPESO Coalition,
received targeted attacks with the objective of infecting their mobile devices with surveillance malware exclusively sold to governments by the company NSO Group.
According to the evidence, the attacks are related to the targets’ activities in defense of public health, particularly advocating for a soda tax and criticizing deficient food
labeling regulation. In the light of these revelations, the signatory national and international civil society organizations:
1. Condemn the illegal surveillance revealed and show our solidarity and stand with the academic institutions and civil society organizations targeted with these attacks.
2. Express our concern about the Mexican government’s use of highly intrusive software such as the Pegasus malware commercialized by the NSO Group, particularly against
researchers and civil society organizations. This type of surveillance malware, which exploits unknown (zero-day) security vulnerabilities in commercial software and products to
obtain absolute control of a device, severely compromises the right to privacy, especially when there are no legal controls or democratic oversight of state surveillance.
3. Demand that the government of Mexico stop the threats and surveillance against researchers and civil society organizations, and call for an immediate investigation to identify and
punish the officials responsible for illegal surveillance in Mexico.
4. Call on international organizations, governments around the world, and the international community as a whole to investigate the activities of the NSO Group and other companies
that sell surveillance capabilities to Mexico, a country with a record of human rights abuses.
5. Express our special concern regarding this new instance of harassment against researchers and health activists whose work affects the interests of the food and beverage industries. We
call on the industry to clarify its involvement in or knowledge of the revealed surveillance and to publicly reject any act of intimidation against human rights defenders.
Asociación Nacional de la Prensa de Bolivia (ANP)
Asociación para el Progreso de las Comunicaciones (APC)
Asociación por los Derechos Civiles (ADC)
Association of Caribbean Media Workers
Australian Privacy Foundation
Centro Nacional de Comunicación Social AC (Cencos)
Centro de Estudios Constitucionales y en Derechos Humanos de Rosario
Centro de Reportes Informativos Sobre Guatemala (CERIGUA)
Comisión Mexicana de Defensa y Promoción de los Derechos Humanos, A.C.(CMPDH)
Electronic Frontier Foundation (EFF)
Espacio Público, Venezuela
Fundación para la Libertad de Prensa (FLIP)
Fundar, Centro de Análisis e Investigación
Intercambio Internacional por la Libertad de Expresión (IFEX-ALC)
Instituto de Liderazgo Simone de Beauvoir (ILSB)
Instituto de Prensa y Libertad de Expresión (IPLEX)
Instituto Prensa y Sociedad (IPYS)
Organización Fraternal Negra Hondureña (OFRANEH)
Patient Privacy Rights
Public Knowledge
Red en Defensa de los Derechos Digitales (R3D)
Renata Aquino Ribeiro, Researcher E.I. Collective
Reporteros Sin Fronteras
SonTusDatos Artículo 12, A.C.
Sursiendo, Comunicación y Cultura Digital (Chiapas, MX)
Usuarios Digitales, Ecuador
Washington Office on Latin America (WOLA)
San Diego Police Target African American Children for Unlawful DNA Collection
(Wed, 15 Feb 2017)
Specifically targeting black children for unlawful DNA collection is a gross abuse of technology by law enforcement. But it’s exactly what the San Diego Police Department is doing,
according to a lawsuit just filed by the ACLU Foundation of San Diego & Imperial Counties on behalf of one of the families affected. SDPD’s actions, as
alleged in the complaint, illustrate the severe and very real threats to privacy, civil liberties, and civil rights presented by granting law enforcement access to our DNA. SDPD must
stop its discriminatory abuse of DNA collection technology.
According to the ACLU’s complaint, on March 30, 2016, police officers stopped
five African American minors as they were walking through a park in southeast San Diego. There was no legal basis for the stop. As an officer admitted at a hearing in June 2016, they
stopped the boys simply because they were black and wearing blue on what the officers believed to be a gang “holiday.”
Despite having no valid basis for the stop, and having determined that none of the boys had any gang affiliation or criminal record, the officers handcuffed at least some of the boys
and searched all of their pockets. They found nothing but still proceeded to search the bag of one of the boys—P.D., a plaintiff in the ACLU’s case. (It’s standard to use minors’
initials, rather than their full names, in court documents.) The officers found an unloaded revolver, which was lawfully registered to the father of one of the boys, and arrested P.D.
The officers told the other four boys that they could go free after submitting to a mouth swab. The officers had them sign a consent form, by which they “voluntarily” agreed to provide their DNA to the police for inclusion in
SDPD’s local DNA database. The officers then swabbed their cheeks and let them go.
P.D. was then told to sign the form as well. After he signed, the officers swabbed his cheek and transported him to the police department. The San Diego District Attorney filed
numerous charges against P.D., but they were all dropped as a result of the illegal stop. The court did not, however, order the police to destroy either P.D.’s DNA sample or the DNA
profile generated via his sample. The ACLU seeks destruction of the sample and profile, along with a permanent injunction "forbidding SDPD officers from obtaining DNA from minors
without a judicial order, warrant, or parental consent."
The Police Did Not Get Meaningful, Voluntary Consent For These Highly Invasive DNA Searches
There are a few huge problems with SDPD’s actions here. One is that the officers apparently didn’t explain to the boys what either signing the form or swabbing their cheeks
meant—i.e., that they were asking the boys to both waive their Fourth Amendment rights and turn over highly sensitive genetic material. The officers wanted the boys to
consent to the seizure of their DNA because consent is an exception to the Fourth Amendment’s
prohibition on unreasonable searches and seizures. But a person can’t meaningfully consent to a DNA search without fully understanding the serious privacy invasion that accompanies a
perhaps seemingly innocuous mouth swab. DNA can reveal an extraordinary amount of private information about a person, including familial relationships, medical history, predisposition for disease, and possibly even
behavioral tendencies and sexual orientation. And DNA samples collected via mouth
swabs are used to create DNA profiles, which are added—in most cases permanently—into law enforcement databases used for solving crimes.
Furthermore, for consent to be valid, it must be voluntary—and not motivated by threats, promises, pressure, or any other form of coercion. Here, the boys were in handcuffs, and the
officers made it clear that they could go freely once they signed the form and submitted to the mouth swab. This presents both an implied threat of arrest for failure to cooperate and
an implied promise of “leniency” in return for cooperation—two distinct types of coercion. California courts have
recognized that threats and promises have more of a coercive effect on children than on adults, making SDPD’s abuse of the consent exemption in this case all the more appalling.
And as the Voice of San Diego reports, this isn't the first time the ACLU
has sued SDPD over unlawful DNA collection. In 2013, SDPD paid $35,000 to settle a lawsuit involving a 2011 incident where officers improperly collected
DNA without cause from five family members of a parolee.
SDPD's Policy Flouts Protections Built Into California’s DNA Collection Law
SDPD’s policy on obtaining DNA from kids specifically provides for the use of
these so-called “consent” searches. The terms of the policy, obtained via a public record act request by the Voice of San Diego, are problematic on their own. For example, the policy
fails to require parental notification prior to seeking a child’s consent. But what’s even more problematic is that SDPD’s policy seems to intentionally sidestep the minimal
protections the California legislature built into California’s DNA collection law, Cal. Penal Code § 296.
California’s law specifies that DNA can be collected from juveniles only in very narrow—and serious—circumstances: after they’ve been convicted of or pleaded guilty to a felony, or if
they are required to register as a sex offender or in a court-mandated sex offender treatment program. And there’s a reason California law limits the
situations in which law enforcement can collect DNA from minors—DNA collection involves a serious invasion of privacy. SDPD’s actions are in direct conflict with the protections
for children built into the law.
SDPD’s policy acknowledges the limits in Section 296, but it gets around these limits by keeping the DNA profiles collected via its “consent” searches in a local database, rather than
adding them into the statewide DNA database. As the policy points out, Section 296 only governs DNA seized for inclusion in the statewide database. So, as the Voice of San Diego
puts it, "the San Diego Police
Department has found a way around state law." SDPD’s apparent efforts to flout limitations designed to protect children are deeply troubling.
Targeting Black Children For DNA Collection Is a Gross Abuse of Power
The complaint’s allegations regarding SDPD’s coercive tactics to collect DNA from these children are astounding. But what's even uglier is that, based on the ACLU’s allegations, the
collection here was racially motivated. Law enforcement believes these databases will help them solve crimes, and it seems that underlying efforts to target African American minors
for inclusion in San Diego's local DNA database is the biased assumption that these children are criminals—that they either have or will in the future commit some crime. So per the
ACLU’s allegations, SDPD is not only abusing its power, but it's doing so in a racially discriminatory way.
We applaud the ACLU Foundation of San Diego & Imperial Counties and Voice of San Diego for shedding light on SDPD’s abuse of DNA collection technology, and we’ll be following this case closely.
 California’s DNA collection law does allow pre-conviction DNA collection from adults who are charged with a felony offense—a
provision that we’ve argued violates the Fourth Amendment—but it does not permit the same for juveniles.
Publishers Still Fighting to Bury Universities, Libraries in Fees for Making Fair Use of Academic Excerpts
(Tue, 14 Feb 2017)
On behalf of three national library associations, EFF today urged a federal appeals court for the second time to protect librarians’ and students’ rights to make fair use of excerpts from
academic books and research.
Nearly a decade ago, three of the largest academic publishers in the
world—backed by the Association of American Publishers (AAP) trade group—sued Georgia State University (GSU) for copyright infringement, insisting that GSU owed licensing fees for
the use of excerpts of academic works in its electronic reserve system. Such systems are commonly used to help students save money; rather than forcing students to buy a whole book
when they only need a short excerpt from it, professors will place the excerpts “on reserve” for students to access. GSU argued that posting excerpts in the e-reserve systems was a
“fair use” of the material, thus not subject to licensing fees. GSU also changed its e-reserve policy to ensure its practices were consistent with a set of fair use best practices
that were developed pursuant to a broad consensus among libraries and other stakeholders. The practices are widely used, and were even praised by the AAP itself.
But that was not enough to satisfy the publishers. Rather than declare victory, they’ve doggedly pursued their claims. It seems the publishers will not be content until universities
and libraries agree to further decimate their budgets. As we explain in our brief, that outcome would undermine the
fundamental purposes of copyright, not to mention both the public interest, and the interests of the authors of the works in question. The excerpts are from academic works
whose authors are not looking to get rich on licensing fees. They are motivated, instead, by a desire to contribute to the greater store of knowledge, and by the benefits accrued to their professional
reputation when other scholars read, and cite, their published work. They care about recognition, not royalties.
Moreover, the fair use analysis is supposed to consider whether the practice at issue will cause material harm to an actual or potential market. But there’s no real market for digital
excerpts that the libraries’ practices could harm. Indeed, as GSU explained in their brief, “[m]any professors testified that they would not have used any excerpt if students were
required to pay a licensing fee.” And even if such a market existed, most libraries likely couldn’t afford to be part of it. In light of rising costs and shrinking resources, “academic libraries simply do not have the budget to participate in any ‘new’ licensing market” without diverting funds away
from other areas—like those used to add new works to their collections.
Copyright is supposed to help foster the creation of new works. Requiring university libraries to devote even more of their budgets to licensing fees will have the opposite
effect. We hope the court agrees.
Not Okay: Professor Smeared After Advocating for Election Integrity
(Tue, 14 Feb 2017)
Imagine if someone, after reading something you wrote online that they didn’t agree with, decided to forge racist and anti-Semitic emails under your name. This appears to be what
happened to J. Alex Halderman, a computer security researcher and professor of computer science at the University of Michigan. Halderman is one of many election
security experts—along with EFF, of course—who have advocated for auditing the results of the 2016 presidential
election. The recent attempts to smear his name in retaliation for standing up for election integrity are a threat to online free speech.
Halderman, who is a frequent collaborator and sometimes client of EFF, published a piece on Medium in November 2016 arguing that we should perform
recounts in three states—Wisconsin, Michigan, and Pennsylvania—to ensure that the election had not been “hacked.” To be clear, despite a report in New York Magazine, Halderman never stated that there was hard evidence that the election results had in
fact been electronically manipulated. He just stated that we should check to be sure:
The only way to know whether a cyberattack changed the result is to closely examine the available physical evidence — paper ballots and voting equipment in critical states like
Wisconsin, Michigan, and Pennsylvania.
Concern over a “hacked election” isn’t unfounded. In 2014, the pro-Russia hacking collective CyberBerkut attempted to sabotage Ukraine’s vote-counting infrastructure
just prior to a presidential election. This is just one example. With these threats out there, auditing should be basic election hygiene. As computer security expert Poorvi
Vora of George Washington University says, “Brush your teeth. Eat your spinach. Audit your elections.”
Halderman specifically calls in his post for risk-limiting audits, a statistical method we’ve also
advocated for that involves randomly selecting a certain number of paper ballots for manual recount. And it’s something we should be doing after every election. It’s a matter of basic election hygiene.
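The core mechanics of the sampling step can be sketched in a few lines. This is only an illustration, not the full risk-limiting procedure: the ballot count, sample size, and seed below are hypothetical, and in a real audit the sample size is derived from the reported margin and the chosen risk limit.

```python
import random

def sample_ballots(num_ballots, sample_size, seed):
    """Choose ballot positions for manual hand-counting.

    In a real risk-limiting audit, the seed comes from a public ceremony
    (e.g. dice rolls) so that anyone can verify the selection was random
    and reproducible.
    """
    rng = random.Random(seed)  # deterministic, publicly auditable selection
    return sorted(rng.sample(range(1, num_ballots + 1), sample_size))

# Hypothetical jurisdiction: 100,000 paper ballots, 500-ballot sample.
positions = sample_ballots(num_ballots=100_000, sample_size=500, seed=20161108)
print(len(positions))  # 500 distinct ballot positions to pull and inspect
```

Because the selection is seeded, any observer can rerun the same code with the published seed and confirm that officials pulled the correct ballots.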
Someone, however, does not agree. On February 7, about two and a half months after Halderman’s post, someone sent racist and anti-Semitic emails to University of Michigan engineering
students purporting to be from Halderman. According to the AP, the emails had subject lines like “African American
Student Diversity” and “Jewish Student Diversity,” and two of the emails contained the phrase “Heil Trump.”
This type of smear campaign is unsophisticated and easy to pull off. The smear artist(s) here didn’t break into Halderman’s email account. They simply created a “spoofed” email header, which made the messages appear to have originated from Halderman rather than their actual source. This is a
ploy all-too-common in phishing campaigns, as it can trick Internet users into providing sensitive information or clicking on
malicious links. Read: this could happen to anyone.
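To illustrate how such a forgery can be spotted (with entirely made-up addresses), here is a minimal sketch comparing the attacker-controlled From: header against the envelope sender, which receiving servers often preserve in Return-Path. Real mail systems rely on SPF, DKIM, and DMARC rather than this kind of ad-hoc comparison, but the underlying mismatch is the same.

```python
from email import message_from_string
from email.utils import parseaddr

# A forged message: the From: line claims to be the professor, but the
# envelope sender (recorded here in Return-Path) points somewhere else.
raw = """\
Return-Path: <smear@attacker.example>
From: "A. Professor" <professor@university.example>
Subject: Student Diversity

(forged body)
"""

msg = message_from_string(raw)
from_domain = parseaddr(msg["From"])[1].rsplit("@", 1)[-1]
envelope_domain = parseaddr(msg["Return-Path"])[1].rsplit("@", 1)[-1]

# A mismatch doesn't prove forgery, but it is a simple red flag.
print(from_domain, envelope_domain, from_domain != envelope_domain)
```

DMARC formalizes exactly this check by requiring the From: domain to align with a domain that passed SPF or DKIM authentication.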
Luckily, the spoof here was quickly revealed, and we doubt that many of the recipients—students at the university, where Halderman is well-known and liked—were deceived. But it did
still result in a 40-student protest outside the
home of the university’s president.
Halderman has called these attempts to smear his name “cowardly action,” and we agree. But we’re also concerned. The
threat of being the target of a smear campaign could chill the speech of others who want to speak out on the need for
ensuring the integrity of our election system—an increasingly critical topic. Such efforts to chill speech threaten the very nature of the Internet as we know it—a place for open,
robust, and diverse discourse.
We expect the University of Michigan to take a strong stand against this type of retaliation targeting a member of its community. And the rest of us should take a stand in support of
Halderman, not only for his efforts to move the debate on election integrity forward, but also to make sure that such ugly, dastardly, and quite frankly lame attempts to smear people
don’t become a more widely used method for chilling speech.
“Smart Cities,” Surveillance, and New Streetlights in San Jose
(Tue, 14 Feb 2017)
The San Jose City Council is considering a proposal to install over 39,000
“smart streetlights.” A pilot program is already underway.
These smart streetlights are not themselves a surveillance technology. But they have ports on top that, in the future, could accommodate surveillance technology, such as video cameras.
EFF and our allies sent a letter to the San Jose City Council urging them to
adopt an ordinance to ensure democratic control of all of that community’s surveillance technology decisions—including whether to plug spy cameras into the ports of smart streetlights.
What Are Smart Cities?
Under “smart cities” programs like
the one in San Jose, many municipalities across the country are building technology infrastructures in public places that collect data in order to save energy, reduce traffic
congestion, and advance other governmental goals. Some of these programs may improve urban life, and EFF does not oppose smart cities per se.
But we have a word for government use of technology to document how identifiable people are living their lives in public spaces: surveillance. And we strongly oppose the web of
street-level surveillance that is rapidly spreading across our urban landscapes. It invades privacy, chills free speech, and disparately burdens
communities of color and poor people.
There is an inherent risk of mission creep from smart cities programs to surveillance. For example, cameras installed for the benevolent purpose of traffic management might later be
used to track individuals as they attend a protest, visit a doctor, or go to church.
Democratic Control of Spy Tech
To prevent this mission creep, communities must adopt laws ensuring democratic decision-making and oversight of surveillance technology. All too often, police chiefs and other agency
executives unilaterally decide to install new spying tools. Instead, these decisions must be made by elected city councils after robust public debate in which all members of the
community have their voices heard. Communities will reject some proposed surveillance tools, and require strong privacy safeguards for others.
Last year, EFF supported the enactment of an ordinance in Santa Clara County that requires democratic control
of spy tech decisions. We now support similar efforts for BART, Oakland, and Palo Alto.
Our letter to the San Jose City Council urges them to adopt such an ordinance. Our allies on this letter are the ACLU (Santa Clara Valley Chapter), Asian Americans Advancing Justice,
the Coalition for Justice and Accountability, the Council on American-Islamic Relations (San Francisco Bay Area Office), the Center for Employment Training, the Japanese American
Citizens League (San Jose, Sequoia, and Silicon Valley Chapters), the Nihonmachi Outreach Committee, the Peninsula Peace and Justice Center, and The Utility Reform Network.
Privacy By Design
“Privacy by design” is an equally necessary means to ensure that smart cities do
not devolve into surveillance programs. Privacy by design means that technology manufacturers and municipal purchasers must work together at all stages of product development to build
privacy safeguards into smart cities technologies. It is not enough to bolt privacy safeguards onto completed tools at the last minute.
Privacy by design has substantive
and procedural components. Substantive protections include limits on initial collection of personal information; encryption and other security measures to control access to that
information; and strong policies restraining use and disclosure of that information.
A critical procedural measure is for cities to employ their own privacy officers. With the great power of smart cities tools comes the great responsibility to competently manage them.
A privacy officer must have expertise in the technological, legal, and policy issues presented by these powerful tools. Absent such in-house expertise, cities may inadvertently create
privacy problems, or unduly defer to the privacy judgments of vendors, which will not always have the same privacy goals as cities.
Now is the time for San Jose to ensure that its smart streetlights do not become another tool of street-level surveillance. To do so, San Jose must adopt an ordinance ensuring
democratic control of decisions about surveillance tools. It must also practice privacy by design. Otherwise, residents may find that the new "smart" technologies designed to improve
their lives have instead become tools of government spying.
FBI Throws Up Digital Roadblock to Transparency
(Fr, 10 Feb 2017)
Beginning March 1, FBI Will No Longer Accept FOIA Requests Via Email
It’s well documented that the FBI is keen on adopting new technologies that intrude on our civil liberties. The FBI’s enthusiasm for technology, however, doesn’t extend to
tools that make it easier for the public to understand what the agency is up to—despite such transparency being mandated by law.
The FBI recently announced that it’s removing the ability for the public to send Freedom of
Information Act (FOIA) requests to the agency via email. Instead, the FBI will now only accept requests sent through snail mail, fax, or a poorly designed and extremely limited website.
The FBI’s decision to abandon email—a free and ubiquitous method of communication—as a means of sending in FOIA requests will make sending requests to the
agency far more difficult. The decision will thus undoubtedly thwart both transparency and accountability, and the FBI must be well aware of this. In a world in which thermostats and
toasters are increasingly connected to the Internet, the FBI's rejection of emailed FOIA requests is a slap in the face to transparency. The FBI's decision is all the more galling
given that other agencies are currently embracing technologies that both help people making FOIA requests and
help the agencies more efficiently and effectively process them.
What's more, the FBI’s alternative solution—its new “E-FOIA” website—is no solution at all. The website
places a 3,000 character limit on requests and has technical barriers that prevent automated FOIA requests. These constraints significantly limit the amount of information people can
seek via a single request and needlessly slow down the process.
Perhaps the biggest problem is the website’s terms of service, which place limits on the types of requests that can be filed digitally. They suggest the website will not accept FOIA
requests seeking records about FBI operations, activities, or communications. Not only does this make no sense from a technical standpoint, it runs directly counter to the very
purpose of FOIA: ensuring that the public can learn about an agency’s operations and activities.
EFF is grateful to Sen. Ron Wyden (D-Or.), who sent a letter (pdf) to the FBI on Friday highlighting many of the
concerns we have about the FBI’s abandonment of email and its reliance on a problematic website. We look forward to the FBI’s response.
The FBI's recent announcement makes one thing clear: Congress should—and easily could—update FOIA to require all federal agencies, including the FBI, to
accept FOIA requests via email. In the digital world we live in, this is a no-brainer. EFF has been calling for this simple fix, along with a host of other changes, for some time, and
we remain committed to supporting legislative efforts that increase government transparency.
FBI Search Warrant That Fueled Massive Government Hacking Was Unconstitutional, EFF Tells Court
(Fr, 10 Feb 2017)
Appeals Court Should Find Warrant Violated Fourth Amendment Protections
Boston—An FBI search warrant used to hack into
thousands of computers around the world was unconstitutional, the Electronic Frontier Foundation (EFF) told a federal appeals court today in a case about a controversial criminal
investigation that resulted in the largest known government hacking campaign in domestic law enforcement history.
The Constitution requires law enforcement officers seeking a search warrant to show specific evidence of a possible crime, and tie that evidence to specific persons and places they
want to search. These fundamental rules protect people from invasions of privacy and police fishing expeditions.
But the government violated those rules while investigating “Playpen,” a child pornography website operating as a Tor hidden service. During the investigation, the FBI secretly seized servers running the website and, in a
controversial decision, continued to operate it for two weeks rather than shut it down, allowing thousands of images to be downloaded. While running the site, the bureau began to hack
its visitors, sending malware that it called a “Network Investigative Technique” (NIT) to visitors’ computers. The malware was then used to identify users of the site. Ultimately, the FBI hacked into 8,000 devices located in 120 countries around the
world. All of this hacking was done on the basis of a single warrant. The FBI charged hundreds of suspects who visited the website, several of whom are challenging the validity of the warrant.
In a filing today in one such case, U.S. v. Levin, EFF and the American Civil Liberties
Union of Massachusetts urged the U.S. Court of Appeals for the First Circuit to rule that the warrant is invalid and the searches it authorized unconstitutional because the warrant
lacked specifics about who was subject to search and what locations and specific devices should be searched. Because it was running the website, the government was already in
possession of information about visitors and their computers. Rather than taking the necessary steps to obtain narrow search warrants using that specific information, the FBI instead
sought a single, general warrant to authorize its massive hacking operation. The breadth of that warrant violated the Fourth Amendment.
“No one questions the need for the FBI to investigate serious crimes like child pornography. But even serious crimes can’t justify throwing out our basic constitutional principles.
Here, on the basis of a single warrant, the FBI searched 8,000 computers located all over the world. If the FBI tried to get a single warrant to search 8,000 houses, such a request
would unquestionably be denied. We can’t let unfamiliar technology and unsavory crimes lead to an erosion of everyone’s Fourth Amendment rights,” said EFF Senior Staff Attorney Mark
EFF filed a brief in January in a similar case in the Eighth Circuit Court of Appeals, and will be filing briefs in Playpen cases in the Third and Tenth Circuits in March. Some
trial courts have upheld the FBI’s actions in dangerous decisions that, if ultimately upheld, threaten to undermine individuals’ constitutional privacy protections over information on personal devices.
“These cases will be cited for the future expansion of law enforcement hacking in domestic criminal investigations, and the precedent is likely to impact the
digital privacy rights of all Internet users for years to come,” said Andrew Crocker, EFF Staff Attorney. “Recent changes to federal rules for issuing warrants may allow the government to
hack into thousands of devices at a time. These devices can belong not just to suspected criminals but also to victims of botnets and other hacking crimes. For that reason, courts
need to send a very clear message that vague search warrants that lack the required specifics about who and what is to be searched won’t be upheld.”
Border Security Overreach Continues: DHS Wants Social Media Login Information
(Fr, 10 Feb 2017)
Now more than ever, it is apparent that U.S. Customs and
Border Protection (CBP) and its parent agency, the Department of Homeland Security (DHS), are embarking on a broad campaign to invade the digital lives of innocent individuals.
The new DHS secretary, John Kelly, told a congressional committee this week that the department may soon demand login information (usernames and passwords) for social media accounts
from foreign visa applicants—at least those subject to the controversial executive order on terrorism and immigration—and
those who don’t comply will be denied entry into the United States. This effort to access both public and private communications and associations is the latest move by a department
that is overreaching its border security authority.
In December 2016, DHS began asking another subset of foreign visitors, those from Visa Waiver Countries, for their social media handles. DHS defended itself by stating that not only would compliance be voluntary,
the government only wanted to access publicly viewable social media posts: “If an applicant chooses to answer this question, DHS will have timely visibility of the publicly available
information on those platforms, consistent with the privacy settings the applicant has set on the platforms.”
As we wrote last fall in comments to DHS, even seeking the ability to view the public social media posts of
international travelers implicates the universal human rights of free speech and privacy, and—importantly—the comparable constitutional rights of their American associates. Our
objections are still salient given that DHS may soon mandate access to both public and private social media content and contacts of another group of foreign visitors.
Moreover, as a practical matter, such vetting is unlikely to weed out terrorists as they would surely scrub their social media accounts prior to seeking entry into the U.S.
Such border security overreach doesn’t stop there.
There have been several reports recently of CBP agents demanding access to social media information and digital devices of both American citizens and legal permanent residents. Most disturbing are the invasive
searches of Americans’ cell phones, where CBP has been accessing social media apps that may reveal private posts and relationships, as well as emails, text messages, browsing
history, contact lists, photos—whatever is accessible via the phone.
Such border searches of Americans’ digital devices and cloud content are unconstitutional absent individualized suspicion, specifically, a probable cause warrant. In light of the DHS secretary’s statements
this week, we fear that DHS may soon take the next step down this invasive path and demand the login information for American travelers’ online accounts so that the government can
peruse private, highly personal information without relying on access to a mobile device.
These policies and practices of DHS/CBP must be chronicled and opposed.
Please tell us your border search stories. You can write to us at
email@example.com. If you want to contact us securely via email, please use PGP/GPG. Or you can call us at +1-415-436-9333.
We also encourage you to contact your congressional representatives in the Senate and House of Representatives.
You may also contact the DHS Office of Civil Rights and Civil Liberties (firstname.lastname@example.org) and the DHS Inspector General (email@example.com).
Healthy Domains Initiative Isn't Healthy for the Internet
(Fr, 10 Feb 2017)
EFF had high hopes that the Domain Name Association's Healthy Domains Initiative (HDI) wouldn't be just another secretive industry deal between rightsholders and domain name
intermediaries. Toward that end, we and other civil society organizations worked in good faith on many fronts to make sure HDI protected Internet users as well.
Those efforts seem to have failed. Yesterday, the Domain Name Association (DNA), a relatively new association of domain registries and registrars, suddenly launched a proposal for "Registry/Registrar Healthy Practices" on a surprised
world, calling on domain name companies to dive headlong into a new role as private arbiters of online speech. This ill-conceived proposal is the very epitome of Shadow Regulation. There was no forewarning about the release of this proposal on the HDI mailing list; indeed, the last update
posted there was on June 9, 2016, reporting "some good progress," and promising that any HDI best practice document "will be shared broadly to this group for additional feedback."
That never happened, and neither were any updates posted to HDI's blog.
While yesterday's announcement claims that "civil society" was part of a "year-long process of consultation" leading to this document, it doesn't say which groups participated, or how
they were selected. In any purported effort to develop a set of community-based principles, a failure to proactively reach out to affected stakeholders, especially if they
have already expressed interest, exposes the effort as a sham. "Inclusion" is one of the three key
criteria that EFF developed in explaining how fair processes can lead to better outcomes, and this means making sure that all stakeholders who are affected by Internet policies
have the opportunity to be heard. The onus here lies on the organization that aims to develop those policies, and in this the DNA has clearly failed.
Copyright Censorship Through Compulsory Private Arbitration
So, what did HDI propose in its Registry/Registrar Healthy Practices [PDF]? The Practices
divide into four categories, quite different from one another: Addressing Online Security Abuse, Complaint Handling for “Rogue” Pharmacies, Enhancing Child Abuse Mitigation
Systems, and Voluntary Third Party Handling of Copyright Infringement Cases. We will focus for now on the last of these, because it is the newest and most overreaching voluntary
enforcement mechanism described in the Practices.
The HDI recommends the construction of "a voluntary framework for copyright infringement disputes, so copyright holders could use a more efficient and cost-effective system for clear
cases of copyright abuse other than going to court." This would involve forcing everyone who registers a domain name to consent to an alternative dispute resolution (ADR) process for
any copyright claim that is made against their website. This process, labelled ADRP, would be modeled after the Uniform Dispute Resolution Policy (UDRP), an ADR process for disputes
between domain name owners and trademark holders, in which the latter can claim that a domain name infringes its trademark rights and have the domain transferred to their control.
This is a terrible proposal, for a number of reasons. First and foremost, a domain name owner who contracts with a registrar is doing so only for the domain name of their
website or Internet service. The content that happens to be posted within that website or service has nothing to do with the domain name registrar, and frankly, is none of its
business. If a website is hosting unlawful content, then it is the website host, not the domain registrar, who needs to take responsibility for that, and only to the extent of
fulfilling its obligations under the DMCA or its foreign equivalents.
Second, it seems too likely that any voluntary, private dispute resolution system paid for by the complaining parties will be captured by copyright holders and become a privatized
version of the failed Internet censorship bills SOPA and PIPA. While the HDI gives lip service to
the need to "ensure due process for respondents," if the process by which the HDI Practices themselves were developed is any guide, we cannot trust that this would be the case. If any
proof is needed of this, we only need to look at the ADRP's predecessor and namesake, the UDRP, a systemically biased process that has been used to censor domains used for legitimate
purposes such as criticism, and domains that are generic English words. Extending this broken process beyond domain names themselves to cover the
contents of websites would make this censorship exponentially worse.
Donuts Are Not Healthy
Special interests who seek power to control others' speech on the Internet often cloak their desires in the rhetoric of "multistakeholder" standards development. HDI's use of terms
like "process of consultation," "best practices," and "network of industry partners" fits this mold. But buzzwords don't actually give legitimacy to a proposal, nor substitute for
meaningful input from everyone it will affect.
The HDI proposal was written by a group of domain name companies. They include Donuts Inc., a registry operator that controls over 200 of the new top-level domains, like .email,
.guru, and .movie. Donuts has taken many steps that serve the interests of major corporate trademark and copyright holders over those of other Internet users. These include a private agreement with the Motion Picture Association of America to suspend domain names
on request based on accusations of copyright infringement, and a "Domain Protected Marks List Plus" that gives brand owners the power to stop others from using common words and
phrases in domain names—a degree of control that they don’t get from either ICANN procedures or trademark law.
The "Healthy Practices" proposal continues that solicitude towards corporate rightsholders over other Internet users. This proposal begs the question: healthy for whom?
If past is prologue, we can expect to see heaps of praise for this proposal from the same special interests it was designed to serve, and from their allies in government who use Shadow Regulations like this one to
avoid democratic accountability for unpopular, anti-user policies. But no talk of "self-regulation" nor "best practices" can transform an industry's private wishlist into legitimate
governance of the Internet, or an acceptable path for other Internet companies to follow.
FCC Abandons Zero-Rating Investigation and Moves Backward on Net Neutrality
(Fr, 10 Feb 2017)
Bad news for Internet users. In his first few days in office, FCC Chairman Ajit Pai has shelved the Commission’s investigation into Internet companies’ zero-rating practices and
whether they violate the Commission's Open Internet Order.
As recently as January, the FCC was rebuking AT&T (PDF) for seemingly prioritizing its own DirecTV content over that of its
competitors. Now, Pai has made it clear that the FCC doesn’t plan to move forward with the investigation.
Simply put, zero-rating is the practice of ISPs and mobile providers choosing not to
count certain content toward users’ data limits, often in exchange for capping the speeds at which customers can access that content. Most major mobile providers in the U.S. offer
some form of zero-rated service today, like T-Mobile’s BingeOn
program for zero-rated streaming and Verizon and AT&T’s FreeBee Data program. Facebook, Wikimedia, and Google have their own zero-rated apps, too. While they are currently focused
on emerging mobile markets in developing countries, this recent development from the FCC
may open the domestic market to them in new ways.
EFF doesn’t flat-out oppose all zero-rating. But in current practice, it often has the consequence (intended or not) of giving ISPs unfair control over the content their customers
access and, ultimately, stifling competition. When the ISP has sole control over what content sources are eligible for zero-rating, it becomes a de facto Internet gatekeeper: its choices around free bandwidth can bias its
customers’ Internet usage toward certain sites and services. That can make it prohibitively difficult for new, innovative services to get off the ground. For example, entrepreneurs
trying to promote a new video streaming site will face hurdles to widespread adoption of their service if users have unmetered access to existing competitors like YouTube and Netflix.
This problem gets particularly dodgy when the mobile provider owns the zero-rated content source, as is the case with AT&T and DirecTV. In the course of its now-shuttered
zero-rating investigation, the FCC asked AT&T to prove that it treated DirecTV and other video services the same. AT&T claimed that it did, but the FCC found evidence that
AT&T’s practices were obstructing competition and harming users:
The limited information we have obtained to date… tends to support a conclusion… that AT&T offers Sponsored Data to third party content providers at terms and conditions
that are effectively less favorable than those it offers to its affiliate, DIRECTV. Such arrangements likely obstruct competition for video programming services delivered
over mobile Internet platforms and harm consumers by inhibiting unaffiliated edge providers’ ability to provide such service to AT&T’s wireless subscribers. (Emphasis added.)
According to Pai, “These free-data plans have proven to be popular among consumers, particularly low-income
Americans.” But that’s a red herring. That a service is popular doesn’t mean that rules protecting users’ freedoms shouldn’t apply to it. If anything, zero-rating’s supposed
popularity among low-income users is another reason to make sure that it doesn’t further curb users’ Internet experience and funnel vulnerable users towards certain content.
On top of that, users have different preferences and habits, and do not necessarily agree on the optimal
content and services to zero-rate. Instead of expanding carriers’ discretion over the content their customers can or cannot easily access, EFF would like to see edge providers
given a clear path to being included in zero-rating plans, one that doesn’t favor established players. And ultimately, users themselves should be empowered to decide what content gets zero-rated.
Trump’s Attorney General’s Record on Privacy
(Do, 09 Feb 2017)
President Donald Trump’s nominee to lead the country’s law enforcement has cleared the Senate.
The Senate voted 52-47 on Wednesday to confirm
Sen. Jeff Sessions, whose record on civil liberties issues—including digital rights—has drawn fire from Democratic lawmakers and public interest groups.
EFF has expressed concerns about Sessions’ record on surveillance,
encryption, and freedom of the press. Those concerns intensified during his confirmation process.
Throughout his confirmation hearing in front of the Senate Judiciary Committee and his written responses to additional
questions from lawmakers, Sessions made a number of troubling statements. He said he would support legislation to enable a privacy-invasive Rapid DNA system. He refused to
definitively commit not to put journalists in jail for doing their job. He dodged questions about Justice Department policies on Stingrays, and wouldn’t commit to publishing guidelines on how federal law enforcement uses government
hacking. He called it “critical” that law enforcement be able to “overcome” encryption.
His Senate record on surveillance is also disturbing. Sessions helped to derail reform to the Electronic Communications Privacy Act in the
Senate. He also opposed the USA FREEDOM Act, a set of moderate reforms to the NSA’s mass collection of information about Americans’ domestic phone calls. In 2015, he went so far as to
pen an alarmist op-ed against the bill, in which
he claimed that the bulk phone records collection was “subject to extraordinary oversight” and warned the bill “would make it vastly more difficult for the NSA to stop a terrorist
than it is to stop a tax cheat.”
During the hearing, USA FREEDOM sponsor Sen. Patrick Leahy pressed Sessions on whether he is committed to enforcing the surveillance reform law. Sessions responded that the
prohibition on bulk collection “appears to be” the governing statute for U.S. government surveillance. His qualified answer raises concerns. And while he pledged to follow that law,
he couldn’t confirm it prohibited bulk collection of domestic phone records in all cases. (It does.)
In a marathon, all-night debate in opposition to Sessions, Senate Democrats pointed to his track record on surveillance and privacy as a source of concern.
Montana Democrat Sen. Jon Tester pointed to Sessions’ repeated votes in favor of “the most intrusive aspects of the Patriot Act.” He asked, “Will he fight on behalf of government
officials that listen into our phone calls or scroll through our emails or preserve our Snapchats?”
Washington Democrat Sen. Maria Cantwell said she is concerned by Sessions’ support for “President [George W.] Bush’s warrantless wiretapping and surveillance programs,” and his
support for backdoor access to encrypted technologies. “I do have concerns that the president’s nominee…may not stand up to the President of the United States in making sure that the
civil liberties of Americans are protected.”
Now that he has been confirmed, EFF and other civil liberties advocates will work to hold him accountable as Attorney General and block any attempts by him or anyone else to broaden
the government surveillance powers that threaten our basic privacy rights.
A School Librarian Caught In The Middle of Student Privacy Extremes
(Mi, 08 Feb 2017)
As a school librarian at a small
K-12 district in Illinois, Angela K. is at the center of a battle of extremes in educational technology and student privacy.
On one side, her district is careful and privacy-conscious when it comes to technology, with key administrators who take extreme caution with ID numbers, logins, and any other
potentially identifying information required to use online services. On the other side, the district has enough technology “cheerleaders” driving adoption forward that now students as
young as second grade are using Google’s G Suite for Education.
In search of a middle ground that serves students, Angela is asking hard, fundamental questions. “We can use technology to do this, but should we? Is it giving us the same results as
something non-technological?” Angela asked. “We need to see the big picture. How do we take advantage of these tools while keeping information private and being aware of what we might
be giving away?”
School librarians are uniquely positioned to navigate this middle ground and advocate for privacy, both within the school library itself and in larger school- or district-wide
conversations about technology. Often, school librarians are the only staff members trained as educators, privacy specialists, and technologists, bringing not only the skills
but a professional mandate to lead
their communities in digital privacy and intellectual freedom. On top of that, librarians have trusted relationships across the student privacy stakeholder chain, from working
directly with students to training teachers to negotiating with technology vendors.
Following the money
Part of any school librarian’s job is making purchasing decisions with digital vendors for library catalogs, electronic databases, e-books, and more. That means that school librarians
like Angela are trained to work with ed tech providers and think critically about their services.
“I am always asking, ‘Where is this company making their money?’” Angela said. “That’s often the key to what’s going on with the student information they collect.”
School librarians know the questions to ask a vendor. Angela listed some of the questions she tends to ask: What student data is the vendor collecting? How and when is it anonymized,
if at all? What does the vendor do with student data? How long is it retained? Is authentication required to use a certain software or service, and, if so, how are students’ usernames
and passwords generated?
In reality, though, librarians are not always involved in contract negotiations. “More and more tech tools are being adopted either top-down through admin, who don’t always think
about privacy in a nuanced way, or directly through teachers, who approach it on a more pedagogical level,” Angela said. “We need people at the table who are trained to ask questions
about student privacy. Right now, these questions often don’t get asked until a product is implemented—and at that point, it’s too late.”
Angela wants to see more direct education around privacy concepts and expectations, and not just for students. Teachers and other staff in her district would benefit from more
thorough training, as well.
“As a librarian, I believe in the great things technology can offer,” she said, “but I think we need to do a better job educating students, teachers, and administrators on reasons for protecting privacy.”
For students, Angela’s district provides the digital literacy education mandated by Illinois’s Internet Safety
Act. However, compartmentalized curricula are not enough to transform the way students interact with technology; it has to be reinforced across subjects throughout the school year.
“We used to be able to reinforce it every time library staff worked with students throughout the year,” Angela said, “but now staff is too thin.”
Teachers also need training to understand the risks of the technology they are offering to students.
“For younger teachers, it’s hard to be simultaneously skeptical and enthusiastic about new educational technologies,” Angela said. “They are really alert to public records
considerations and FERPA laws, but they also come out of education programs so heavily trained in using data to improve educational experiences.”
In the absence of more thorough professional training, Angela sees teachers and administrators overwhelmed with the task of considering privacy in their teaching. “Sometimes educators
default to not using any technology at all because they don’t have the time or resources to teach their kids about appropriate use. Or, teachers will use it all and not think about
privacy,” she said. “When people don’t know about their options, there can be this desperate feeling that there’s nothing we can do to protect our privacy.”
Angela fears that, without better privacy education and awareness, students' intellectual freedom will suffer. “If students don’t expect privacy, if they accept that a company or a
teacher or ‘big brother’ is always watching, then they won’t be creative anymore.”
A need for caution moving forward
Coming from librarianship’s tradition of facilitating the spread of information while also safeguarding users’ privacy and intellectual freedom, Angela is committed to adopting
and applying ed tech while also preserving student privacy.
“I am cautious in a realistic way. After all, I’m a tools user. I know I need a library catalog, for example. I know I need electronic databases. Technologies are a necessary utility,
not something we can walk away from.”
As ed tech use increases, school librarians like Angela have an opportunity to show that there is no need to compromise privacy for newer or more high-tech educational resources.
“Too many people in education have no expectation of privacy, or think it’s worth it to hand over our students’ personal information for ed tech services that are free. But we don’t
have to give up privacy to get the resources we need to do good education.”
YODA, the Bill That Would Let You Own (and Sell) Your Devices, Is Re-Introduced in Congress
(Mi, 08 Feb 2017)
Reps. Blake Farenthold (R-Texas) and Jared Polis (D-Colo.) just re-introduced their You Own Devices
Act (YODA), a bill that aims to help you reclaim some of your ownership rights in the software-enabled devices you buy.
We first wrote about YODA when it was originally
introduced back in 2014. The bill would go a ways toward curbing abusive End User License Agreements (EULAs) by making sure companies can’t use restrictions on the software within
your device to keep you from selling, leasing, or giving away the device when you’re done with it. The bill would override EULAs that purport to limit your ability to transfer
ownership of the device (and its software) and would make sure that whoever ends up with your device has the same access to security and bug fixes that you would have had.
Making sure that you can sell and transfer your old devices isn’t just good for you – it’s good for everyone else as well. Resale markets for consumer products help improve access to affordable technology and provide a valuable resource for innovators.
We’re pleased to see some members of Congress tackling this issue, but there’s still a long way to go to make sure that outdated and unconstitutional copyright laws, like Section 1201, don’t keep you from controlling
your own media and devices.
The Fight Over Email Privacy Moves to the Senate
(Di, 07 Feb 2017)
The House passed the Email Privacy Act (H.R. 387) yesterday, bringing us one step closer to requiring a warrant before law
enforcement can access private communications and documents stored online with companies such as Google, Facebook, and Dropbox. But the fight is just beginning.
We’ve long called for pro-privacy reforms to the 1986 Electronic Communications Privacy Act (ECPA), the outdated law that provides little protection for “cloud” content stored by
third-party service providers. H.R. 387 would codify the Sixth Circuit’s ruling in
U.S. v. Warshak, which held that the Fourth Amendment demands that the government first obtain a warrant based on probable cause before accessing emails stored with cloud
service providers. While imperfect—the House-passed bill doesn’t require the government to notify users when it obtains their data from companies like Google—the reforms in the Email
Privacy Act are a necessary step in the right direction.
EFF and more than 60 other privacy advocates, tech companies, and industry groups wrote to
lawmakers asking them to approve the Email Privacy Act.
Now the Senate needs to take up the measure and send it to the president’s desk without weakening it. Despite the fact that the House voted 419-0 to pass the Email Privacy Act last year, it stalled in the upper chamber, where senators attempted to attach additional
provisions to the incredibly popular bill that would have harmed Internet users’ privacy.
Such “poison pill” amendments included one that would have expanded the already problematic Computer Fraud and Abuse Act, one that would have allowed the FBI to get more user
information with National Security Letters, and amendments that could have made it easier for law enforcement to abuse the exemption in the law that grants access to user data in the
case of emergencies without judicial oversight.
Senators need to be vigilant about fending off these kinds of amendments when the Email Privacy Act is considered in the Senate this time around.
The House’s unanimous vote on the Email Privacy Act last year and yesterday’s voice vote demonstrate bipartisan agreement that the emails in your inbox should have the same privacy
protections as the papers in your desk drawer. We urge the Senate to swiftly pass H.R. 387 to protect online content with a probable cause warrant.
It’s the End of the Copyright Alert System (As We Know It)
(Di, 07 Feb 2017)
The Copyright Alert System has called it quits, but questions remain about what, if anything, will replace it. Known also as the “six strikes” program, the Copyright Alert System (CAS) was a private agreement between several large Internet service
providers (ISPs) and big media and entertainment companies, with government support. The agreement allowed the media and entertainment companies to monitor those ISPs' subscribers'
peer-to-peer network traffic for potential copyright infringement, and imposed penalties on subscribers accused of infringing. Penalties ranged from “educational” notices, to throttling subscribers' connection speeds and, in some cases, temporarily restricting subscribers’ web access.
From the beginning, the Copyright Alert System presented problems for ordinary
Internet users. The agreement creating the CAS was negotiated without the opportunity for public input. As is often the result with such secretive private agreements, users’ interests
weren’t sufficiently protected when the program finally came into effect. For example, because the program treated accusations of infringement as conclusive, and because the appeals process
was costly and offered unnecessarily limited defenses, the CAS failed
to adequately protect users who were wrongfully accused of infringement. Further, the program included surveillance by copyright owners of Internet subscribers’ peer-to-peer network
activity, a level of monitoring that many found invasive of their online privacy. Even the program’s educational materials were biased. And throughout its operation, the program struggled to
provide enough transparency into how it was impacting Internet users.
But the CAS wasn’t nearly as bad as it could have been. For example, while the media companies could join swarms to track users’ activity on peer-to-peer networks, the ISPs themselves
were not required to monitor their subscribers’ activity by using deep packet inspection (DPI), a much more invasive tactic. And ISPs were not required, under the terms of the
agreement, to cut off subscribers’ Internet access after repeat allegations of infringement. Lastly, the program had an advisory board that did include consumer advocates (a measure we believed to be inadequate).
EFF had serious concerns with the program from the start, and we welcome its retirement. But we’re not celebrating just yet. The statement from the Center for Copyright Information (the organization that administered
the CAS) announcing the program’s retirement states:
While this particular program is ending, the parties remain committed to voluntary and cooperative efforts to address these issues.
As we’ve said before, a big problem with these private agreements is that they
frequently leave Internet users without a seat at the negotiating table, and with little or no recourse when the companies involved violate users’ privacy or silence users’ online
speech. And when government actors pressure companies to come to terms, these agreements can easily become the de facto law of the Internet – with none of the potential for
democratic accountability that accompanies actual laws. If companies and governments are committed to protecting Internet users in future voluntary agreements, we’ve provided a
simple set of criteria for how those agreements can be done well.
While there are as yet no details as to why the CAS closed up shop, or what could be coming next, the MPAA’s statements following the announcement are far from reassuring. The MPAA’s
general counsel stated that he believed the program didn’t do enough to punish people the media companies decided were “repeat infringers”:
[the CAS] was simply not set up to deal with the hard-core repeat infringer problem. Ultimately, these persistent infringers must be addressed by ISPs under their 'repeat
infringer' policies as provided in the Digital Millennium Copyright Act.
This statement comes on the heels of another industry attempt to turn ISPs into draconian copyright enforcers in the BMG Rights Management v. Cox Communications case. Copyright holders in that
case argued that ISPs like Cox (which was not part of the CAS) should cut off subscribers’ Internet access on the basis of copyright holders’ mere
allegations of infringement.
We hope the CAS is not being abandoned simply so big media and entertainment companies can try to impose something worse. Whatever happens, we’ll be on the lookout for threats to
Internet users from future Shadow Regulations like the CAS.
Documents About Financial Censorship Under Operation Choke Point Show Concern from Congress, Provide Few Answers
(Di, 07 Feb 2017)
EFF recently received dozens of pages of documents in response to a FOIA request we submitted about Operation Choke Point, a Department of Justice project to pressure banks and
financial institutions into cutting off service to certain businesses. Unfortunately, the response from the Department of Justice leaves many questions unanswered.
EFF has been tracking instances of financial censorship for years to identify how online speech is indirectly silenced or chilled through the shuttering of bank accounts, donation platforms,
and other financial services. The Wall Street Journal wrote about the Justice
Department’s controversial and secretive campaign against financial institutions in 2013, and one Justice Department official quoted in the article stated:
"We are changing the structures within the financial system that allow all kinds of fraudulent merchants to operate," with the intent of "choking them off from the very air they
need to survive."
While Operation Choke Point was purportedly aimed at shutting down fraudulent online payday loan companies, we became concerned that this campaign could also affect legal online businesses.
EFF filed FOIA requests with the Department of Justice (DOJ), the Consumer Financial Protection Bureau, and the Federal Trade Commission. The documents EFF received from the DOJ
are primarily correspondence between members of Congress and the Department of Justice. In that correspondence, Congress members raised concerns about Operation Choke Point,
asked questions about how it operates, and stated that this is an issue that constituents are sending letters about.
Sen. Harry Reid (D-NV) and Rep. Kenny Marchant (R-TX), for example, emailed the Department of Justice with specific questions about how the Department defines a “high risk” financial institution.
In the correspondence we received, the DOJ overwhelmingly replied with form letters that didn’t describe the criteria the Department used to decide whether a company was considered
high risk, how many companies were currently labeled ‘high risk,’ whether a company would ever know if it was considered ‘high risk,’ or any appeal process for companies to have
themselves removed from that category.
Rep. Sean Duffy (R-WI) wrote a letter to then Attorney General Eric Holder describing how two community banks in Wisconsin were bullied by regional agents of the FDIC, who told them
to stop working with prominent online lenders:
These banks were informed that if they chose to ignore the FDIC's request, they would face "the highest levels of scrutiny they could imagine," and were given no explanation,
details of complaints, or any evidence as to why these demands were being made.
Duffy called these threats "outrageous" and "intimidation tactics."
Other members of Congress wrote to the Department of Justice about how Operation Choke Point was hampering opportunities for law-abiding Native American tribes and the Hispanic community.
Rep. Brad Sherman (D-CA), who cosponsored the Dodd-Frank Wall Street Reform and Consumer Protection Act and advocates for additional financial regulation, expressed deep concern about
the Department of Justice stepping beyond the bounds of the law with Operation Choke Point. In his letter to Holder, he stated:
As much as I would like to see stronger regulation of consumer lenders, I've sworn to uphold the U.S. Constitution. Accordingly, I must oppose efforts to "legislate by
prosecution" and legislate by "criminal investigation," even if I agree partly or completely with the ultimate substantive aim.
He also said, "[y]our department should conduct criminal investigations for the purpose of enforcing laws we have—not laws you (and I) might wish we had."
Unfortunately, the responses from the Department of Justice left more questions than answers. Vital details about Operation Choke Point—including what industries beyond online loans
may be impacted, the exact criteria for labeling a business ‘high risk,’ and the tactics used to pressure banks into participation—are still unknown.
Many people believe that America’s financial institutions may need additional regulation, and some may believe that online lenders should face additional scrutiny. However, an
intimidation squadron secretly pressuring banks to cut off businesses without due process is not the right way forward. As we’ve seen with digital booksellers, whistleblower websites, online publishers, and online personal ads, payment providers often cave to
pressure—whether formal or informal—to shut down or restrict accounts of those engaged in First Amendment-protected activity. In order to foster a future where digital
expression can flourish, we need to ensure that necessary service providers like banks and payment processors don’t turn into the weak link used to cut off unpopular speech.
But that requires transparency. We need more information about how the government is pressuring financial institutions. Unfortunately, the Department of Justice’s nonresponses to
Congress don’t get us any closer to understanding this complicated issue.
Check out the most recent documents EFF got in response to its FOIA request on Operation Choke Point.
See documents EFF received earlier on this program.
EFF to Supreme Court: Patent Holders Shouldn’t Be Allowed to Cherry Pick the Courts
(Mo, 06 Feb 2017)
Supreme Court Must End Texas’ Grip on Patent Cases, Restore Fairness in Court Selection
Washington, D.C.—The Electronic Frontier Foundation (EFF) urged the Supreme Court to
overturn a court decision that tilted the scales in favor of patent trolls by making it easier
for them to venue shop and file lawsuits in certain courts.
Venue shopping, also called forum shopping, is an insidious practice whereby
parties to a lawsuit look for courts with procedures favorable to their cases. Unfortunately, some courts have engaged in an even more insidious practice known as forum selling by
actively encouraging patent lawsuits in their districts. For example, a court might adopt plaintiff-friendly procedures and policies that undermine the rights of defendants.
One such court is the Eastern District of Texas, a
rural area with almost no manufacturing, research, or technology facilities, where more than one-third of all patent cases in the country were filed last year. That proportion is no
accident: patent litigants flock to Texas because the court has put in place a host of procedures that make it difficult for
defendants to terminate meritless cases early, while also speeding up the time it takes for cases to go to trial.
Those procedures drive up litigation costs for defendants, which in turn puts more pressure on them to settle cases even if they believe they should win. Such pressure is
especially beneficial to patent trolls—companies that don’t make any products but buy up patents, many of questionable validity, in order to file often frivolous infringement lawsuits
to extract settlements.
This kind of venue shopping in patent cases was made possible by a 1990 court decision that upended decades-old rules that required patent cases be filed in locations that were fair
and convenient to the person being involuntarily brought into court—such as the location of the defendant’s primary place of business. In a filing today in the lawsuit TC Heartland v. Kraft Foods, EFF asked the Supreme Court to overturn the 1990 decision and bring back
basic fairness to patent litigation. Kraft Foods, based in Illinois, sued Indiana-based TC Heartland for patent infringement in Delaware, where the defendant has no offices.
“The Supreme Court can fix a rampant problem in patent law and make the process more fair and balanced. As it stands, many defendants can be hauled into court in any corner of
the country, regardless of whether the location has anything to do with either party,” said EFF Staff Attorney Vera Ranieri. “Forum shopping harms all defendants, but it’s especially
burdensome for small companies or individuals with limited means to travel to distant places or fight costly lawsuits.”
“Patent owners aren’t the only ones taking advantage of a bad court decision. Forum selling by courts is a black stain on the judicial system. Our courts shouldn’t be tilting
the scales so that forum, as opposed to merits, ends up deciding the outcome of a case,” said Ranieri. “Venue shopping and selling drives up the costs of innovation for inventors and
erodes trust in our courts. The Supreme Court can and should fix this problem.”
For the brief: https://www.eff.org/document/tc-heartland-v-kraft-eff-brief
(Mo, 06 Feb 2017)
Violating a website’s terms of use agreement should not be a crime—not only because people rarely read these agreements, but because the bounds of criminal law should not be defined by the preferences of website owners.
We’ve convinced the Ninth Circuit Court of Appeals that the federal Computer Fraud and Abuse Act (CFAA) shouldn’t be read to criminalize corporate computer use restrictions. Now a district court has interpreted California’s and Nevada’s state computer crime statutes to do just that.
The case, Oracle v. Rimini Street, is on appeal to the Ninth Circuit, and we just filed an amicus
brief explaining to the court why an overbroad interpretation of the state computer crime statutes would have the exact same disastrous outcome as an overbroad interpretation of
the CFAA. The Ninth Circuit should listen to its own
reasoning and avoid an interpretation of these statutes that turns innocent Internet users into criminals.
Oracle’s terms of use prohibit the use of automated tools to download files from its website. Rimini, which provides Oracle clients with third-party enterprise software support, violated that provision by using automated scripts instead of downloading each file
individually. Oracle sent Rimini a cease and desist letter demanding that it stop using automated scripts. It did not, however, rescind Rimini’s authorization to access the files
outright. Rimini continued to use automated scripts, and Oracle sued. The jury found Rimini liable under both the California and Nevada computer crime statutes, and the judge upheld
that verdict—concluding that, under both statutes, violating a website’s terms of service counts as using a computer without authorization or permission.
Rimini Street appealed, and as we told the Ninth Circuit in our brief, the district court’s reasoning turns
millions of Internet users into criminals on the basis of innocuous and routine online conduct. What’s more, by making it completely unclear what conduct is criminal at any given time
on any given website, the district court’s holding violates the long-held rule of lenity—which requires that criminal statutes be interpreted to give clear notice of what
conduct is criminal.
You might be thinking that no prosecutor would try to prosecute everyone who has ever provided false personal information on Facebook. But a prosecutor could—under either law.
This opens the possibility of arbitrary or discriminatory enforcement. For instance, assume you accidentally cut off your local district attorney in the turn lane. If terms of service
violations were criminal, that prosecutor could turn around and go after you for listing an incorrect birth year on your Facebook profile. Avoiding this very situation is one of the
reasons our Constitution requires
our laws to be clear.
We hope the Ninth Circuit recognizes the dangers of the district court’s interpretation and reverses the ruling.
Digital Rights Issues on the Horizon at the Supreme Court
(Mo, 06 Feb 2017)
The Supreme Court already has a list of digital civil liberties issues to consider in the near future, and that list is likely to grow.
If confirmed, President Donald Trump’s nominee to fill the late Justice Antonin Scalia’s seat on the Supreme Court—Judge Neil Gorsuch of the U.S. Court of Appeals for the Tenth
Circuit—will be in a position to make crucial decisions affecting our basic rights to privacy, free expression, and innovation.
The Supreme Court is being asked to consider a pair of cases dealing with law enforcement obtaining cell phone location records: the U.S. v. Graham ruling out of the Fourth
Circuit Court of Appeals and the U.S. v. Carpenter ruling out of the Sixth Circuit Court of Appeals. In both cases, the courts ruled that law enforcement did not need a warrant to
obtain long-term, historical cell phone location data pinpointing a suspect’s location and movement.
EFF filed a brief asking the Supreme Court to consider the
cases and arguing that previous rulings on the issue need to be reconsidered in light of how precise and revealing cell phone location data has become and as technology advances. “The
dramatic increase in the number of cell phones and cell sites and the amount of detailed, sensitive location data they generate, combined with the quantity and extent of law
enforcement demands for this data, show that it is time for this Court to address the Fourth Amendment privacy implications of [cell site location information],” we wrote.
Gorsuch’s rulings at the Tenth Circuit provide a possible glimpse of where he will come down on privacy issues. For instance, in a decision he wrote in U.S. v. Ackerman,
Gorsuch found that Fourth Amendment protections apply in instances where a person or organization is searching emails on behalf of the government.
The Supreme Court is set to hear arguments in Packingham v. North Carolina and consider the constitutionality of a North Carolina law that bans registered sex offenders from
using online social media platforms that minors also access. In an amicus brief filed with the court, EFF and others argued that the state law violates the First Amendment.
Innovation and Fair Use
The Supreme Court has agreed to hear arguments in TC Heartland v. Kraft, a case centering on whether TC Heartland can have the infringement case against it considered in the
company’s home state of Indiana instead of Delaware. EFF has supported TC Heartland, and a ruling in
favor of reasonable venue limits could help tamp down on abusive patent lawsuits, which are often brought in the Eastern District of Texas despite a lack of any actual ties to that location
because that court is perceived as being friendly to abusive suits.
The Supreme Court has also agreed to hear arguments in Impression Products v. Lexmark, a case about patent exhaustion, or whether a patent holder can put limits on how a
customer can use, resell, tinker with, or analyze a patented product the customer has purchased. In a brief filed to the Supreme Court, we and others argued that
allowing patent owners to control goods after sale threatens all sorts of activities—like security research, reverse engineering, and device modification.
The Court may also consider Lenz v. Universal, aka the dancing baby case, which centers around an individual whose fair
use video was removed from YouTube because it had a Prince song playing in the background. In a brief filed for the petitioner, EFF argued that copyright holders should be held
accountable if they force content to be taken down based on unreasonable charges of infringement. The Court has not yet decided whether to take the case but has asked for the
Solicitor General’s views.
The Supreme Court has also been asked to consider whether the Patent Trial and Appeal Board should use the standard of common sense and knowledge of a skilled artisan to gauge the
obviousness of a patent in the case Google v. Arendi. EFF has encouraged the
court to take up the case, arguing that the court should bolster the country’s patent system by setting a stricter standard for obviousness.
Related cases: United States v. Graham, TC Heartland v. Kraft Foods, Lenz v. Universal, Impression Products, Inc. v. Lexmark International Inc.
Invasive Digital Searches at the Border: Tell EFF Your Story
(Mo, 06 Feb 2017)
Following President Trump’s confusing executive order on terrorism and immigration, reports emerged over the weekend that border agents at airports were searching the phones of travelers arriving from the Middle East, including U.S. permanent residents (green card holders). We are concerned that this signals an expansion of U.S. Customs and Border Protection’s already invasive digital practices, which is why we are asking you to send us your stories of digital searches at the border.
For some time, CBP has made a practice of demanding social media information from Americans and foreign visitors, along with access to their digital devices, which store deeply personal information and communications or link to cloud-based apps holding equally sensitive data.
Last week, for example, we wrote about complaints from Muslim American citizens that CBP accessed their public posts by demanding their social media handles, and may have viewed private posts by demanding the passwords to their cell phones and browsing their social media apps. There are also allegations that border agents physically assaulted a man who refused to hand over his unlocked phone.
CBP has also searched, or attempted to search, the digital devices of journalists, including those of a Wall Street Journal reporter who is a U.S. citizen. Other Americans have likewise been detained and had their digital devices searched at the border, among them a dual Iranian-American citizen returning to the United States from a vacation at Niagara Falls, on whose behalf we filed a friend-of-the-court brief.
Last fall, we filed comments with CBP opposing a proposal—approved in December, before President Trump took office—asking foreign visitors from visa waiver countries to voluntarily disclose their social media accounts. CNN recently reported that the Trump administration is considering requiring all foreign visitors to “disclose all websites and social media sites they visit, and to share the contacts in their cell phones.”
Given these recent developments, we are concerned that the invasiveness and frequency of device searches and probing of travelers’ digital lives are increasing.
As part of our work fighting what we believe are unconstitutional practices at the border, and to better understand how the new Trump administration’s policies may change border practices, we want to hear your stories.
Please let us know if a U.S. border official searched your cell phone, laptop, or other digital device; asked for your device password or ordered you to unlock or decrypt your device; or asked for your social media accounts.
We would like to hear from everyone, but especially from U.S. citizens and permanent residents (green card holders) of the United States.
Please tell us:
Your legal status in the United States (citizen, permanent resident, visa holder).
Which airport or border crossing you were at.
Which devices you had with you.
What exactly border agents asked of you (including social media accounts and passwords), or what exactly they searched.
Whether border agents recorded any information.
Whether border agents stated or implied that complying with their demands was voluntary or mandatory.
Whether border agents threatened you in any way.
Whether border agents gave any reason for their demands.
You can contact EFF at firstname.lastname@example.org (use PGP/GPG for secure communication), or call us at +1-415-436-9333.
United States v. Saboonchi
Federal Court Rules Against Public.Resource.Org, Says Public Safety Laws Can Be Locked Behind Paywalls
(Mo, 06 Feb 2017)
Everyone should be able to read the law, discuss it, and share it with others, without having to pay a toll or sign a contract. Seems obvious, right? Unfortunately, a federal district
court has said otherwise, ruling that private organizations can use copyright to control access to huge portions of our state and federal laws. The court ordered Public.Resource.Org to stop providing public access to these key legal rules.
Public.Resource.org has one mission: to improve public access to government documents, including our laws. To fulfill that mission, it acquires and posts online a wide variety of
public documents including regulations that have become law through “incorporation by reference,” meaning that they are initially created through private standards
organizations and later incorporated into federal law. Those regulations are often difficult to access because they aren’t published in the federal code, but they are vitally
important. For example, they include the rules that govern the safety of buildings and consumer products, promote energy efficiency, and control the design of standardized tests for
students and employees.
The industry associations that develop these rules insist that they have copyrights in them – even after they become binding regulations. Six of those associations sued Public
Resource for copyright infringement. According to their complaint, sharing the law means breaking the law.
EFF, along with co-counsel at Fenwick & West, Durie Tangri, and attorney David Halperin stepped up to defend Public Resource in court. Unfortunately, we didn’t win this round; the
district court has granted summary judgment motions in favor of the standards organizations, ruling that they can claim copyright in the regulations, and ordered Public Resource not
to post them online. The district court’s decision runs contrary to decisions in other parts of the country, and raises serious constitutional issues. We don’t see how the decision
can be reconciled with the due process right to know the law, nor our First Amendment right to share it.
The district court’s decision suggests that laws can be
copyrighted and put behind paywalls as long as they were first written down by someone outside of government. Of course, lobbyists and trade groups write bills and draft regulations
that get passed by Congress, or federal agencies, with scarcely a word changed. The ruling against Public Resource suggests that every one of those lobbyists and other private
interests “owns” a piece of the law and can control who accesses it, and how, and at what price. Will private parties be able to make parts of the law inaccessible, in an attempt to
boost sales of other publications? Three of the plaintiffs against Public.Resource.Org have already tried to do this with the 1999 Standards for Educational and Psychological Testing,
which is a part of both state and federal regulations.
We’re disappointed by this misguided ruling, but the case is far from over. Public Resource continues to make important government documents from many nations available on its
website, and to push those governments to allow everyone to read and speak the law. And EFF will continue to stand beside Public Resource in its mission to make the law available to all.
Freeing the Law with Public.Resource.Org
>> mehr lesen
Invasive Digital Border Searches: Tell EFF Your Story
(Fr, 03 Feb 2017)
Following President Trump's confusing executive order on terrorism and immigration, reports surfaced over the weekend that border agents at airports were seizing the cell phones of passengers arriving
from the Middle East, including U.S. permanent residents. We're concerned that
this indicates an expansion of the already invasive digital practices of U.S. Customs and Border Protection (CBP),
which is why we're asking for your stories of device searches at the border.
CBP has for some time now had a practice of demanding from both Americans and foreigners social media information and access to their digital devices, which store on the devices themselves
highly personal information and communications, or link to cloud-based apps with equally sensitive data.
Last week, for example, we wrote about complaints by Muslim American citizens that CBP accessed public
social media posts by demanding their handles, and potentially accessed private messages by demanding cell phone passcodes and browsing
social media apps. Border agents also allegedly
physically abused one man who refused to hand over his unlocked phone.
CBP has also searched or attempted to search the digital devices of journalists, including a Wall Street Journal reporter who is an
American citizen. Other Americans are also subject to seizure and search of their digital devices at the border, including an Iranian-American dual citizen who was returning to the United States from a vacation to Niagara Falls and on
whose behalf we wrote an amicus brief.
Last fall, we submitted comments to CBP opposing a proposal, approved in December before President Trump
took office, asking foreign visitors from Visa Waiver Countries to voluntarily disclose their social media handles. And CNN recently reported that the Trump Administration is contemplating
requiring all foreign visitors "to disclose all websites and social media sites they visit, and to share the contacts in their cell phones."
Given these recent developments, we're worried that the invasiveness and frequency of device searches and investigations into travelers' digital lives are increasing.
As part of our work to combat what we believe to be unconstitutional practices at the border, and to better understand how the
Trump Administration's new policies may be changing border practices, we would like to hear your stories.
Please let us know if a U.S. official at the border examined your cell phone, laptop, or other digital device; asked for your device's
passcode, ordered you to unlock or decrypt it, or asked for your social media handles.
We would like to hear from everyone, but especially if you are a citizen or permanent resident (green card holder) of the United States.
Please tell us:
Your legal status in the U.S. (citizen, permanent resident, visa holder).
What airport or border crossing you were at.
What devices you had with you.
What border agents specifically demanded (including social media handles and passcodes) or what they specifically looked through.
Whether border agents recorded any information.
Whether border agents stated or suggested that compliance with their demands was voluntary or mandatory.
Whether border agents threatened you in any way.
Whether border agents stated any reasons for their demands.
You can write to us at email@example.com. If you want to contact us securely via email, please use PGP/GPG. Or you can call us at +1-415-436-9333.
United States v. Saboonchi
>> mehr lesen
Can Foreign Governments Launch Malware Attacks on Americans Without Consequences?
(Fr, 03 Feb 2017)
Can foreign governments spy on Americans in America with impunity? That was the question in front of the U.S. Court of Appeals for the District of Columbia Circuit Thursday, when EFF,
human rights lawyer Scott Gilmore, and the law firms of Jones Day and Robins Kaplan went to court in Kidane v. Ethiopia.
Jones Day partner Richard Martinez argued before a three-judge panel that an American should be allowed to continue his suit against
the Ethiopian government for infecting his computer with custom spyware and monitoring his communications for weeks on end. The judges questioned both sides for just over a half hour.
Despite the numerous issues on appeal, the argument focused on whether U.S. courts have jurisdiction to hear a case brought by an American citizen for wiretapping and invasion of his
privacy that occurred in his living room in suburban Maryland. The question is relevant because, under the Foreign Sovereign Immunities Act, foreign governments are only liable for
torts they commit within the United States.
Mr. Martinez argued that the location where the harm was inflicted upon Mr. Kidane was in Maryland, where his computer and he were the entire time he was spied upon. The question of
whether U.S. courts can provide a remedy to an American who was wiretapped shouldn't turn on where the eavesdropper was sitting, but rather where the actual wiretapping occurred,
which in this case was Silver Spring, MD.
Ethiopia's lawyer argued next, taking the position that it should be able to do anything to Americans in America, even set off a car bomb, as long as Ethiopia didn’t have a human
agent in the United States. One judge asked what would happen if Ethiopia mailed a letter bomb into the United States to assassinate an opponent, or hacked an American's self-driving
car, causing it to crash. Ethiopia didn't hesitate: their counsel said that they could not be sued for any of those.
This case began in early 2013, when, with the help of EFF and the University of Toronto's Citizen Lab, Mr. Kidane found Ethiopian government spyware on his personal computer in
Maryland. Our investigation concluded that the spyware which recorded Mr. Kidane's Skype calls was part of a systemic campaign by the Ethiopian government to spy on perceived
opponents. We filed this lawsuit in February 2014.
In Kidane v. Ethiopia, our client uses the pseudonym of Mr. Kidane in order to protect the safety and well-being of his family both in the United States and in Ethiopia. Mr.
Kidane is a supporter of members of the Ethiopian democracy movement and a critic of the government. He came to the U.S. over 20 years ago, obtaining asylum and eventually
citizenship. He currently lives with his family in Maryland.
We expect the D.C. Circuit to rule on this appeal in the next few months.
Kidane v. Ethiopia
>> mehr lesen
Another Loss for Perfect 10, Another Good Day for Copyright Law
(Fr, 03 Feb 2017)
Perfect 10 just can’t seem to help itself.
In case you missed it, the U.S. Court of Appeals for the Ninth Circuit handed (yet another) crushing
defeat to the adult website and serial copyright litigant Perfect 10, this time in its lawsuit against Usenet access provider Giganews. The Ninth Circuit soundly rejected each of
Perfect 10’s claims – clarifying that yes, direct copyright infringement still requires some volitional conduct on the part of the defendant, and no, Giganews could not be held liable
for contributory or vicarious copyright infringement either. We filed a brief arguing as much, and we’re happy the court agreed.
Back in 2011, Perfect 10 sued Giganews for copyright infringement, claiming that users of the service had uploaded copies of images in which Perfect 10 held the copyright. In the
district court, Perfect 10 argued that Giganews had not responded sufficiently to its notices of copyright infringement, and that Giganews was directly liable for its users' alleged
infringement. Fortunately, in 2014, that court sided with Giganews. The district court noted that many of Perfect 10’s notices didn’t conform to the standards required under the Digital
Millennium Copyright Act. Many of the notices, for example, failed even to identify the allegedly infringing content, providing only a set of search terms Perfect 10 suggested
Gigagnews could use to find it. The district court also noted that when Perfect 10 sent Giganews a takedown notice that
did sufficiently identify the allegedly infringing material, using a machine readable Message-ID, Giganews acted promptly to take down the targeted content.
In 2016, Perfect 10 appealed that district court decision to the United States Court of Appeals for the Ninth Circuit. As we said at the time, Perfect 10, this time backed by
the Recording Industry Association of America as an amicus, raised a potentially dangerous new argument on appeal. Seeking to overturn a key doctrine in copyright law, the
volitional conduct rule, Perfect 10 argued that Giganews could be held directly liable for copyright infringement by its users, simply for providing and operating a Usenet
access point. Usenet is a decentralized system that allows people to carry on conversations (and share files) in “newsgroups.”
The volitional conduct rule provides an important protection for service providers (or anyone really). It means that a defendant in a copyright infringement lawsuit can only be
directly liable for copyright infringement if they themselves are responsible for the decision to copy the infringed work. If a defendant merely provides the tools or services
that others use to make copies, they can still be liable for copyright infringement under indirect liability doctrines. But indirect liability is harder for copyright
holders to prove, and it allows more defenses for service providers. If the defendant is not the one who did the copying, the copyright holder has to show that the defendant knew of
the infringement and contributed to it, that the defendant intentionally induced the infringement, or that the defendant profited from the infringement while having the right and ability to control it.
These extra requirements are vital legal protections for all kinds of Internet intermediaries, from Usenet providers to social networks to hosting providers to email services. Without
the volitional conduct rule, many of these service providers could be treated as direct infringers based on their users’ actions. That would create serious liability risks that could
threaten Internet services’ ability to do business, and users’ ability to communicate through those services.
Fortunately, the Ninth Circuit made it clear that it meant what it said back in
2013—direct infringement only applies to the one who does the copying:
Contrary to Perfect 10’s contention, this requirement of causation remains an element of a direct infringement claim. In Fox Broadcasting, we explained that “[i]nfringement
of the reproduction right requires copying by the defendant, which comprises a requirement that the defendant cause the copying.” In using this language, we indicated that
causation is an element of a direct infringement claim.
While Perfect 10 attempted to argue that the Supreme Court’s 2014 decision in
Aereo repealed that rule by implication, the Ninth Circuit rejected that argument, noting that the Supreme Court never addressed the issue. The TV-streaming service that
was challenged in Aereo was quite different from Giganews, and may have engaged in volitional conduct of its own.
In rejecting Perfect 10’s argument, the Ninth Circuit preserved an important protection for innovation and free speech online: if any service provider could be held directly
liable anytime a user uploaded an infringing item, those providers would have no incentive to continue to support user-generated content.
Perfect 10 also raised two indirect liability arguments against Giganews: "contributory liability" and "vicarious liability" theories.
The Ninth Circuit shot down these arguments as well. The court found that Perfect 10 was unable to show that Giganews either contributed to users' infringement, induced users to
infringe, or that Giganews received a direct financial benefit from specific acts of infringement. Allegations that some users might use the service to infringe were not enough.
That’s an important point: plaintiffs alleging vicarious liability must show that Giganews derived a financial benefit from infringement of the plaintiff’s works, not the
opportunity for infringement generally. As the court explained, “[s]uch a rule would allow cases to be built on the rights of owners and actions not before the court,” and would
conflict with the Supreme Court’s rules about who can bring a lawsuit in federal court.
This case was the latest in a decade-long run of cases where Perfect 10 (inadvertently) created some good copyright law. The
company has filed numerous copyright infringement lawsuits against service providers, seeking harsher rules that would force those providers to actively police their users’ activities
online. Fortunately, Perfect 10 has lost most of those lawsuits, and in the process, made some good law for the rest of us. Thanks, Perfect 10! And thank you, Giganews, for
standing up and fighting for Internet users.
Let’s hope that with this latest loss, and the millions of dollars the company must pay Giganews in attorney’s fees and costs, Perfect 10 has learned its lesson and won’t be trying to shake
down service providers in the future.
>> mehr lesen
Victories in Encrypting the Web: News and Government Sites Switch to HTTPS
(Fr, 03 Feb 2017)
The last year has seen enormous progress in encrypting the web. Two categories in particular have made extraordinary strides: news
sites and US government sites. The progress in those fields is due to months of hard work from many technologists; it can also be attributed in part to advocacy and sound policy.
Freedom of the Press Foundation has been leading the call for news organizations to implement HTTPS. In December 2016, it launched Secure the
News, which tracks HTTPS deployment across the industry, grading sites on the thoroughness of their implementation.
News sites often embed third-party content, such as ads and videos from many different domains. That embedded content additionally has to be upgraded to HTTPS, which can mean upgrading many different subdomains, wrangling third-party hosting
providers, and renegotiating CDN contracts. However, in the last year, many news sites have overcome the difficulties and deployed HTTPS in one form or another:
Wired, April 2016
BuzzFeed, May 2016
TechCrunch, June 2016
The Guardian, November 2016
Quartz, January 2017
New York Times, January 2017
Ars Technica, January 2017
The Next Web, January 2017
FiveThirtyEight, January 2017
We applaud Wired in particular for documenting its process of achieving full HTTPS in a series of
posts discussing its progress and providing useful advice for other sites switching to HTTPS.
The US government has also made great progress in securing its own websites. This is due in large part to the smart HTTPS-Only Standard issued in
2015 by the Office of Management and Budget. The standard mandates secure connections for federal websites. The General Services Administration tracks adoption of the standard with
its website Pulse.
According to 18F, “HTTPS/HSTS use in the U.S. government looks to have outpaced the
broader internet.” This is based on comparing research by Mozilla’s April King showing
that about 33% of Alexa Top 1M sites support HTTPS, compared to 70% of federal government domains according to Pulse.
The common thread between the news industry’s and the federal government’s huge progress in deploying HTTPS? Metrics. Like EFF’s 2013 Encrypt the Web Report, Pulse and Secure the News provide
important insight into how much progress is being made, and an incentive for individual sites to improve. It turns out that Pulse and Secure the News share a common ancestry: they are
based on a tool called “pshtt,” released under a CC0 public domain
dedication by the Department of Homeland Security and 18F. Pshtt makes it easy to scan sites for basic HTTPS implementation best practices and assemble a dashboard.
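One of the basic checks a scanner like pshtt performs is whether a site serves a valid Strict-Transport-Security (HSTS) header and what its directives say. A minimal sketch of parsing that header in Python (a hypothetical helper written for illustration, not pshtt's actual code):

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into its
    directives: max-age (seconds), includeSubDomains, and preload."""
    result = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip()
        name, _, value = directive.partition("=")
        name = name.lower()
        if name == "max-age" and value.isdigit():
            result["max_age"] = int(value)
        elif name == "includesubdomains":
            result["include_subdomains"] = True
        elif name == "preload":
            result["preload"] = True
    return result
```

For example, `parse_hsts("max-age=31536000; includeSubDomains; preload")` reports a one-year policy that covers subdomains and carries the preload token, the combination a dashboard like Pulse would score highest.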
Under the Federal Source Code Policy, agencies are required to release at least 20 percent of
custom-developed code as open source software and are “strongly encouraged” to release as much as possible. EFF applauds open source government software; as we wrote last August, code written by government employees is, by law, in the public domain and should be available to the public. We recommend that the next revision of the Federal Source
Code Policy reflect that, by creating an “open-by-default” rule in place of the current 20 percent rule, regardless of whether the code was written by government employees or contractors.
The federal government is taking another bold step in securing its web presence: all newly-issued executive-branch domains under .gov will soon strongly enforce HTTPS. All newly registered domains will automatically be added to browsers’
HSTS preloading lists, ensuring that people visiting those domains will only connect over HTTPS. This change is particularly
valuable and important because the easiest time to make a website use HTTPS is when it is first launched.
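HSTS preloading depends on the site serving a Strict-Transport-Security header that meets the browsers' preload requirements: a long max-age, coverage of subdomains, and the preload token. A sketch of how a site might configure this in nginx (illustrative only; example.gov is a placeholder, and actual .gov deployments may differ):

```nginx
server {
    listen 443 ssl;
    server_name example.gov;

    # HSTS header meeting the preload list's requirements:
    # a max-age of at least a year, applied to all subdomains.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
}

server {
    # Redirect all plaintext HTTP traffic to HTTPS.
    listen 80;
    server_name example.gov;
    return 301 https://$host$request_uri;
}
```

Because preloaded domains are baked into the browser itself, users never make even a first insecure request, which is why enabling this at launch is so much easier than retrofitting it later.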
January also saw two other important steps in encrypting the web:
Chrome and Firefox began to mark as non-secure any page that uses HTTP and has a password field.
Crossref announced new guidelines asking academic publishers to support HTTPS.
Congratulations to all the hard-working folks that are part of the movement to encrypt the web. If your web site does not yet use HTTPS, visit our Encrypt the Web page for more information about why and how to encrypt your site.
>> mehr lesen
We Want a Copyright Office that Serves the Public
(Do, 02 Feb 2017)
The Copyright Office, and those who lead it, should serve the public as a whole, not just major media and entertainment companies. That’s what we told the leadership of the House
Judiciary Committee this week. If Congress restructures the Copyright Office, it has to put in safeguards against the agency becoming nothing more than a cheerleader for large
corporate copyright holders.
In December, after more than three years of hearings and discussions on copyright law, House Judiciary Committee Chair Bob Goodlatte, and ranking Democrat John Conyers released their
first suggestions for possible changes to copyright law in a short whitepaper [PDF]. While
light on details, the paper proposed “restructuring” the Copyright Office. EFF responded,
urging the lawmakers to make sure that any reforms to the Copyright Office help to curb that office’s bias in favor of large copyright industries.
To an extent unimagined when much of our copyright law was written, copyright now impacts the daily life of practically everybody. And with copyright’s increasingly broad impact comes
heightened stakes for the public. Copyright now shapes not only how we get and share information with the wider world, but also whether the devices we use on a daily basis are
trustworthy. That’s why it’s critical that we have a voice in how copyright rules affect our lives, and that those in a position to advise lawmakers aren’t in the pocket of big media
and entertainment companies.
The Copyright Office, like copyright generally, is supposed to serve the public. Unfortunately, the Copyright Office already ignores that mandate too often, following an approach that
puts industry interests first. Indeed, former Register of Copyrights Maria Pallante told an interviewer in 2012 that “[c]opyright is for the
author first and the nation second.” And EFF’s own investigations have revealed attempts by the Office to lobby other government agencies in favor of movie and
television industry interests, against long overdue reforms that would have benefited consumers, technology users, and the public as a whole.
Many major media and entertainment companies, and others who have long treated copyright as their private domain, want the Copyright Office removed from the Library of Congress,
making it an independent agency. They’ve said that they don’t want the Librarian of Congress, whose mission is promoting access to knowledge, overseeing the Copyright Office. We think
making the Copyright Office independent will remove important checks and balances on that agency, which will likely put it even more under the control of the traditional gatekeepers
of media and culture. That’s what Congress needs to avoid.
The Next Register of Copyrights Should Be Someone Who Listens to the Public – Not Just the Entertainment Companies
The new Librarian of Congress, Dr. Carla Hayden, also asked the public to weigh in on how the Copyright Office should be run. The Librarian oversees the Copyright Office and appoints
its director, known as the Register of Copyrights. In December, Hayden released an online survey, seeking public input on what qualifications and goals the next Register should have.
We’re pleased to see the Librarian taking this step to solicit public participation. The Librarian’s survey provided a valuable opportunity for people, including independent creators
and everyday technology users, to have their say. But the Librarian has to keep working to ensure the next Register is one who really serves the public as a whole.
In our comments, we urged the Librarian to pick a Register with a track
record for good management and an unbiased approach to copyright that’s up-to-date on how people use creative works. A good Register will be familiar with how remixers, fan
artists, tinkerers, and security researchers rely on copyright’s limits. And the next Register should be someone that seeks out and listens to everyone affected by these laws –
copyright holders, technology companies, libraries, teachers, students, and everyday technology users. And they should make sure that any proposals to change U.S. copyright law or
export it to other countries take into account the impact on users of copyrighted works, and the interests of the public as a whole.
Along with making it easier for the public to access the Office’s data about registered copyrighted works, the next Register should also focus on getting rid of the unnecessary
barriers that Section 1201 places on research, repair, modification and resale of electronic devices.
Proposals for a Copyright “Small Claims” Process Risk Feeding the Copyright Trolls
The whitepaper from Representatives Goodlatte and Conyers also briefly suggested creating a copyright “small claims” tribunal, run by the Copyright Office, that would make filing a
copyright lawsuit easier than it already is. Considering there’s already an industry built around abusive litigation strategies, we’re concerned that any attempt to set up a
specialized copyright court would also make it easier for copyright trolls to shake down small businesses, consumers, and
Internet subscribers. Because of this, we urged Chairman Goodlatte and Ranking Member
Conyers not to move forward with proposals to create a small claims process. While complex copyright lawsuits are sometimes expensive to bring, the vast majority are not, because most
cases settle quickly. It would be a mistake for Congress to look only at the cost of rare complex cases while ignoring the impact of straightforward ones, where the protections for
innocent and legally unsophisticated defendants have gotten stronger. Moving copyright cases to a brand new system would mean throwing out many of those protections.
We’re pleased to see both the Librarian of Congress and the House Judiciary Committee reaching out beyond the traditional players in copyright policymaking, to seek public input on
decisions that impact everyone. But that’s just the first step – we need to make sure they’re giving the public’s feedback adequate consideration and that their final decisions
represent the interests of everyone. We’ll be watching what they do, and speaking up to make sure that the interests of the public – including Internet and technology users,
consumers, and independent creators – are protected.
>> mehr lesen
California Bills to Safeguard Privacy from the Federal Government Advance
(Do, 02 Feb 2017)
New state bills that would create a database firewall between California and the federal government passed out of their respective Senate committees on Tuesday. Both are headed to the
Appropriations Committee and then could soon see votes by the full California Senate. If passed, these critical bills would help prevent Muslim registries and mass deportations in
California and would send a strong message to the Trump administration that Californians will resist his attacks on digital liberty.
Senate Bill 54, authored by Senate President Pro Tempore Kevin de León would prevent
law enforcement agencies in California from sharing department databases or private information with the federal government for immigration enforcement. It would also require
California state agencies to update their confidentiality policies so that they stop collecting or sharing unnecessary data about every Californian.
Senate Bill 31, authored by Sen. Ricardo Lara, would prevent local and state
government agencies from collecting data, sharing data, or using resources to participate in any program that would create a registry of people based on their religion, ethnicity, or
national origin. Police agencies would also be forbidden from creating a database of religious minorities in California.
President Trump has openly discussed requiring Muslims in America to register in a database. When asked subsequently about that proposal, he has said that he “would certainly
implement that—absolutely.” According to the New York Times:
Asked later, as he signed autographs, how such a database would be different from Jews having to register in Nazi Germany, Mr. Trump repeatedly said, “You tell me,” until he
stopped responding to the question.
ABC reports that Trump’s executive order on immigration
could impact 11 million people in the United States. While the details of that program are still unknown, one thing is clear: California shouldn’t be participating in federal plans to
round up and deport millions of people.
California is leading the way in what could prove to be one of the most effective mechanisms for thwarting Trump’s anti-liberty agenda. Strong state laws can create a legal barricade
to data sharing with the federal government, creating a database firewall between state and federal authorities. The end result? State and local government will not participate in
Trump policies that trample on digital liberties in California.
Importantly, these pro-privacy policies will outlast President Trump. Once baked into California law, they’ll safeguard the over 38 million residents of California for generations to come.
EFF is committed to working with the authors of S.B. 54 and S.B. 31 to ensure that the bills are effective and enforced. In the meantime, you can make your voice heard by sending
messages to your California legislators using the links below. And regardless of whether you are in California, you can help by sharing
this blog post on social media.
>> mehr lesen
Invasive Digital Border Searches: Tell EFF Your Story
(Mi, 01 Feb 2017)
Following President Trump’s confusing executive order on terrorism and immigration,
reports surfaced over the weekend that border agents at airports were searching the cell phones of passengers arriving from the Middle East, including U.S. permanent residents
(green card holders). We’re concerned
that this indicates an expansion of the already invasive digital practices of U.S. Customs and Border Protection (CBP), which is why we’re asking for your digital border search stories.
CBP has for some time now had a practice of demanding from both Americans and foreigners social media information and access to digital devices, which store on the devices themselves highly personal information and
communications or link to cloud-based apps with equally sensitive data.
Last week, for example, we wrote about complaints by Muslim American
citizens that CBP accessed public posts by demanding social media handles, and potentially accessed private posts by demanding cell phone passcodes and perusing social media apps.
Border agents also allegedly
physically abused one man who refused to hand over his unlocked phone.
CBP has also searched or attempted to search the digital devices of journalists,
including a Wall Street Journal reporter who is an
American citizen. Other Americans are also subject to seizure and search of their digital devices at the border, including one Iranian-American dual citizen who was returning to the U.S. from vacation to Niagara Falls and on whose behalf we wrote an amicus brief.
Last fall, we submitted comments to CBP opposing a proposal, which was approved in December before President Trump took office, to
ask foreign visitors from Visa Waiver Countries voluntarily to disclose their social media handles. And CNN reported recently that the Trump Administration is contemplating
requiring all foreign visitors “to disclose all websites and social media sites they visit, and to share the contacts in their cell phones.”
Given these recent developments, we’re worried that the invasiveness and frequency of device searches and investigations into the digital lives of travelers are increasing.
As part of our work to combat what we believe to be unconstitutional practices at the border, and to better understand how the Trump
Administration’s new policies may be changing border practices, we would like to hear your stories.
Please let us know if a U.S. official at the border examined your cell phone, laptop, or other digital device; asked for your device’s passcode or ordered you to unlock or decrypt
it; or asked for your social media handles.
We would like to hear from everyone, but especially if you are a citizen or permanent resident (green card holder) of the United States.
Please tell us:
Your legal status in the U.S. (citizen, permanent resident, visa holder).
What airport or border crossing you were at.
What devices you had with you.
What border agents specifically demanded (including social media handles and passcodes) or what they specifically looked through.
Whether border agents recorded any information.
Whether border agents stated or suggested that compliance with their demands was voluntary or mandatory.
Whether border agents threatened you in any way.
Whether border agents stated any reasons for their demands.
You can write to us at firstname.lastname@example.org. If you want to contact us securely via email, please use PGP/GPG. Or you can call us at +1-415-436-9333.
United States v. Saboonchi
>> mehr lesen
Indefensible: the W3C says companies should get to decide when and how security researchers reveal defects in browsers
(Mi, 01 Feb 2017)
The World Wide Web Consortium has just signaled its intention to deliberately create legal jeopardy for security researchers who reveal defects in its members' products, unless the
security researchers get the approval of its members prior to revealing the embarrassing mistakes those members have made in creating their products. It's a move that will put
literally billions of people at risk as researchers are chilled from investigating and publishing on browsers that follow W3C standards.
It is indefensible.
When the W3C embarked on its plan to create a standardized DRM system for video on the World Wide Web, EFF told them it was a bad idea, pointing out that such a system could be covered under Section 1201 of
the DMCA, which provides for criminal and civil penalties for people who tamper with DRM, even for legitimate purposes, including security disclosures, accessibility adaptation for
people with disabilities, and making innovative, competitive products and services (almost every other country has its own version of this law).
The W3C told us that they were only concerned with the technological dimensions of the work, not the legal ones -- if the problem was the DMCA, we should do something about the DMCA.
But the W3C has a tried-and-true method for resolving conflicts between open standards and technology law. In the W3C's earliest days, it wrestled with the question of software
patents, and whether to allow its members to assert patents over the standards they were creating. In the end, the W3C became an open standardization trailblazer: it formulated a
patent policy that required its members to surrender the right to invoke their patents in lawsuits as a condition of participating
in the W3C process. It was a brilliant move, and it made the W3C the premier standards body for the web.
We proposed that the W3C should extend this existing policy to cover the world's DRM laws. We suggested
that W3C members should have to surrender their DMCA 1201 rights, making legally binding promises not to use DRM law to attack security researchers, technologists adapting
browsers for disabled people, and innovative new entrants to the market.
This proposal has picked up steam. Hundreds of security
researchers have endorsed it, as have dozens of W3C
members, from leading research institutions like Eindhoven, Oxford and Lawrence Berkeley Labs to leading nonprofits that work for disabled people, like the UK's Royal National
Institute for Blind People, Vision Australia, Braillenet in France, and Benetech in the USA; and browser vendors like Brave and cryptocurrency companies like Ethereum. This measure
has also been integrated into the leading definition of an "open standard."
But last weekend, the W3C signaled that it would ignore all of these concerns, and instead embrace and extend the legal
encumbrances created by its DRM work, creating a parallel working group that would develop "voluntary guidelines" for its members to employ when deciding whether to use the legal
rights the W3C has created for them with EME to silence security researchers.
Companies can and should develop bug bounty programs and other ways to work with the security community, but there's a difference between companies being able to say, "We think you
should disclose our bugs in this way," and "Do it our way or we'll sue."
Under almost every circumstance in almost every country, true facts about defects in products are always lawful to disclose. No one -- especially not the companies involved -- gets to
dictate to security researchers, product reviewers and users when and how they can discuss mistakes that manufacturers have made. Security facts, like most facts, should be legal to
talk about, even if they make companies unhappy.
By its own admission, the W3C did not set out to create a legal weapon that would give companies the unheard-of power to force silence upon security researchers who have discovered
critical flaws in products we all entrust with our financial details, personal conversations, legal and medical information, and control over our cameras and microphones.
Considered separately from DRM standardization, this new project would be most welcome. The W3C is just the sort of place where we'd like to see best practices guidelines for offering
incentives to use managed disclosure processes.
But in creating a DRM standard, the W3C has opted to codify and reinforce the legal weaponization of DRM law, rather than dismantling it. It is bad enough that the W3C has summarily
dismissed the concerns of new entrants into the browser market and of organizations that provide access to disabled people. But on security, it has gone even further,
departing from the technological standards business to become a legal arms dealer.
We at EFF call on the W3C to reconvene its earlier negotiations to defang the legal consequences of its DRM work, and in so doing to transform its security disclosure work from a
weapon to a normative guideline. It's one thing to help companies figure out how to make an attractive offer to the researchers who investigate browser security, but it's another
thing altogether to standardize browser technology that empowers companies to sue the researchers who decline to take them up on the offer.
>> mehr lesen
Texas’ Overbroad Cyberbullying Bill Could Silence Unpopular Speech
(Mi, 01 Feb 2017)
Online harassment is a serious problem. But censorship is not the solution. Thus, EFF has long
opposed anti-harassment rules that would chill and punish lawful online speech. And courts have long
struck down such laws for violating the First Amendment.
EFF now opposes a new Texas bill that would target online harassment of youths. Students most in need of protection—including
those expressing unpopular opinions or documenting sexual assault—could find themselves facing disciplinary action or even expulsion. While we sympathize with the intention of the
authors, trampling on fundamental free speech rights isn’t the solution to harassment.
This bill has many flaws, but we emphasize four in our letter to the Texas legislature.
The Texas bill would expand the power of school officials to discipline youths for “cyberbullying.” The bill’s vague and overbroad definition of that term would include a single email
from one student to another that “infringes on the rights of the victim at school.” Those “rights” are not defined.
School officials might use this new power to silence unpopular speech by the very students that some legislators may wish to protect. Suppose that in a current events class, one
student said they oppose gay marriage or Black Lives Matter protesters. Suppose further that in response, the leader of that school’s Gay-Straight Alliance or NAACP chapter sent the
first student a critical email that concluded, “I wish you would keep your opinion to yourself.” School officials might determine that the second student’s email infringed on the
first student’s right to speak in class, and thus impose discipline for sending the email.
The bill authorizes expulsion from school of a student who engages in bullying that encourages another student to commit suicide. This rule would not take into account the expelled
student’s intentions, the consequences of their actions, or how a reasonable student would have interpreted the expelled student’s words.
Suppose in the hypothetical above that the second student’s email had said, “Why don’t you jump off a bridge so we don’t have to listen to your opinions?” The student who sent the
email could be expelled, though they meant “jump off a bridge” rhetorically, the recipient did not attempt any self-harm, and the rest of the student body, familiar with the students
involved, would have known the suggestion was not serious.
The bill also authorizes expulsion of a student who releases intimate visual material of another student. The expelled student might have had no previous relationship with the
depicted student; for example, they may have forwarded along an image they received from someone else. The expelled student may have intended no harm, caused no harm, or had consent
from the depicted person. The released images might be newsworthy; for example, the victim of a sexual assault might release images of their assailant’s crime. The bill authorizes
expulsion without regard to any of these considerations.
It bears emphasis that school expulsion is highly disruptive to the educational and other needs of expelled children. And all too often, expulsion and other school discipline
disproportionately impact minority and LGBT youths.
Unmasking Anonymous Bloggers
The bill authorizes subpoenas to investigate potential legal claims arising from any undefined “injury” to a minor before a lawsuit is ever filed. This new process would threaten the
First Amendment right to communicate anonymously on the Internet. This right is especially important for people who belong to unpopular groups or who express unpopular messages, who might
otherwise stay silent rather than risk retaliation.
In the hypothetical above, suppose the second student anonymously blogged about the classroom comments of the first student, and concluded, “only a jerk would say this in class.” The
first student might try to use the bill’s pre-suit subpoena process to unmask the anonymous blogger, based on the pretext of a highly dubious defamation claim. The risk of unmasking
would silence many anonymous speakers.
Defending Against a Damages Lawsuit
The bill would authorize civil lawsuits against a student who sent an email to another student that encouraged them to commit suicide. Again, this is far too broad, because it does
not take into account the speaker’s intentions, the message’s consequences, and the response of a reasonable person to the message.
Moreover, the bill would impose damages liability upon the parents of a minor who sent a prohibited email, whether or not the parents had anything to do with the email. Most parents
do not require their adolescent children to obtain parental permission before sending emails, text messages, and posts to social media. Nor should they.
Any new laws against online harassment must be carefully and narrowly drawn to ensure they do not inadvertently harm the people they are intended to protect, and do not chill or
punish lawful online speech. The Texas cyberbullying bill fails both of these tests.
Read EFF’s full letter to the Texas legislature.
>> mehr lesen
Leaked TISA Safe Harbor Proposal: the Right Idea in the Wrong Place
(Mi, 01 Feb 2017)
A new leak of the Electronic Commerce chapter [PDF] of the Trade in Services Agreement from the November 2016 negotiating
round has exposed a brand new U.S. government proposal on Internet intermediary safe harbors. The proposal, which the European Union is shown as opposing, is a rough analog to 47
U.S.C. § 230, enacted as part of the Communications Decency Act (known simply as "Section 230", or sometimes as CDA 230).
Section 230 is one of the most important provisions of U.S. law for online platforms that host users' speech. It provides a shield protecting online intermediaries against a range of
laws that would otherwise hold them responsible for what their users say or do online. Although there are exceptions to this law—for example, the immunity does
not protect platforms' hosting of user-generated material that infringes copyright (which is governed by the weaker DMCA safe
harbor)—Section 230 remains an invaluable catalyst to innovation and free expression online, and a major reason for the success of U.S. Internet platforms around the world.
The existence of a U.S. proposal for TISA based on Section 230 had been rumored for some months, and when asked directly about it last October the USTR did confirm its existence to EFF. However, we had not
seen a copy of the text until now. Like Section 230, the provision excludes intellectual property rights and criminal law enforcement, but otherwise provides:
[N]o Party may adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability for
harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part,
created, or developed the information.
No Party shall impose liability on a supplier or user of an interactive computer service on account of:
any action voluntarily taken in good faith by the supplier or user to restrict access to or availability of material that is accessible or available through its supply
or use of the interactive computer services and that the supplier or user consideres [sic] to be harmful or objectionable; or
any action taken to enable or make available the technical means that enable an information content provider or other persons to restrict access to material that it
considers to be harmful or objectionable.
Although we usually talk about Section 230 in the context of the protection that it provides platforms for hosting or republishing the speech of users (paragraph 2 above), it also
does the reverse—protecting them from liability for removing users' speech from their platforms, provided that they do so in good faith (paragraph 3 above). This
so-called "Good Samaritan" provision affirms that online platforms are entitled to choose what user content they do or don't wish to host, and allows technology providers to provide
tools for platform owners to use in exercising that choice. Without this legal clarity, Internet intermediaries could face legal consequences for choosing not to host or provide
access to content that they find objectionable on their platforms or networks.
EFF is a supporter of the Section 230 safe harbor, and we would also support its extension to the other TISA countries that presently lack similar protections for Internet
intermediaries in their law. Just to give two examples from countries that are amongst TISA's negotiating parties, Turkey frequently threatens Internet platforms such as Facebook and Twitter with liability for the
speech of their users, and in Estonia an online news publication was held liable in defamation for anonymous comments submitted by
users. Such claims against Internet platforms would fall flat in the United States, thanks to the Section 230 safe harbor.
But it's for this reason, probably, that Europe is opposing the TISA proposal. Like the United States, Europe goes into trade negotiations with the express objective of maintaining
its existing laws, and Europe's equivalent to CDA 230, its E-Commerce Directive, simply doesn't measure up to this U.S. proposal. Although Europe is also considering adopting a Good Samaritan provision to clarify that providers will not become liable for
user content by reason of steps they take to filter out and eradicate illegal content on their platforms, there is no similar proposal to expand safe harbor protection for user
content that intermediaries leave online. Indeed, if anything, Europe is planning to lump intermediaries with additional responsibility for user content.
It's likely, then, that this proposal is either dead in the water, or that it will be considerably watered down before TISA is finalized, if ever. And there in a nutshell lies
the reason why EFF, despite our support for Section 230, can't support the inclusion of this provision in a closed, secret trade agreement such as TISA. It is by pure good fortune
that we have been able to read this first draft of the USTR's proposal thanks to the document being leaked. But unless and until it is leaked again, we will remain blind to any
changes that may be wrought in the back and forth of TISA's closed-door negotiations, which might well end up twisting the proposal beyond recognition.
EFF commends the USTR for the intent of its proposal. We too have promoted the extension of Section 230-style safe harbor protection around the world, through our Manila Principles on Intermediary Liability. But until trade agreements can be made more open and inclusive, they are the wrong tool to promote such an
important policy for the global Internet.
>> mehr lesen
Stupid Patent of the Month: A Lyrics Website With User Interaction
(Mi, 01 Feb 2017)
Song lyrics are some of the most searched-for topics on the Internet.
This has led to fierce competition
among lyrics sites. If you scroll to the bottom of one of these websites, you’ll see the claim: “Song discussions is
protected by U.S. Patent No. 9,401,941.” We are honoring this “song discussions” patent as January’s Stupid Patent of the Month.
The patent (we’ll call it the ’941 Patent) is owned by CBS Interactive and discloses a “computer-implemented system” for “processing interactions with song lyrics.” It explains that
other websites display lyrics in a “static form” and suggests there is a “lack of mechanisms for increasing the engagement of users with song lyrics.” The patent suggests allowing
users to interact with lyrics by allowing them to “select a segment,” displaying a “menu of options,” and allowing the user to enter an “interpretation of the selected line.”
The patent dates back to an application filed in February 2011. Although it is 23 columns long, in our view the patent does not describe any software or Internet technology that was
remotely new or innovative at that time. Rather, it describes common and mundane features, such as a “menu of options,” “user-inputted text” and a “user interaction database,”
and applies these features to a lyrics website. That should not be enough to get a patent.
In fairness, the ’941 Patent’s claims were significantly narrowed during prosecution.
While the Patent Office often does a poor job searching for prior art, the
examiner in this case did at least find the Rap Genius website (now known as Genius). As a result, CBS narrowed its claims to require that the website suggest possible comments to users based on what others have commented in the past. This means most
lyrics websites likely won’t infringe the patent.
But the ’941 Patent should not have been granted even in its narrowed form. Online annotations were certainly not new. And there was nothing revolutionary in 2011 about suggesting entries to
a user based on previous user data. (To take just one example, autocomplete for Google search debuted in 2004.) Simply applying these techniques to a
song lyric website should not have been patentable. Indeed, the patent itself notes that its methods could be applied to any form of text.
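To underscore how routine the claimed feature is, here is a minimal sketch, written by us purely for illustration (it is not CBS's code, and the class and method names are our own invention), of storing user interactions and suggesting comments based on what others have entered before:

```python
from collections import Counter

class InteractionStore:
    """Toy stand-in for the patent's 'user interaction database'."""

    def __init__(self):
        # Maps a selected lyric segment to the comments users have left on it.
        self.comments = {}

    def add_comment(self, segment, text):
        # Record "user-inputted text" against a selected segment.
        self.comments.setdefault(segment, []).append(text)

    def suggest(self, segment, limit=3):
        # Suggest the most common past comments for this segment --
        # i.e., "suggesting entries based on previous user data".
        counts = Counter(self.comments.get(segment, []))
        return [text for text, _ in counts.most_common(limit)]

store = InteractionStore()
store.add_comment("hello darkness", "a song about isolation")
store.add_comment("hello darkness", "a song about isolation")
store.add_comment("hello darkness", "refers to the narrator's dream")
print(store.suggest("hello darkness"))
```

A few lines of dictionary bookkeeping and a frequency count reproduce the core of the claim, which is exactly the point: this is ordinary web development, not invention.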
Ultimately, patents like this one reflect the near total failure by the Patent Office to police obviousness in software and Internet patents. Any website will involve multiple design
decisions. It requires choices regarding user accounts, passwords, encryption, cookies, software languages, advertising, user interface/user experience, database structure, APIs,
server architecture, etc. Given the number of choices, major websites usually reflect a unique combination of those decisions.
But giving someone a patent merely for having a unique combination of features is absurd. The patent system is supposed to reward innovation that we wouldn’t have without the
incentive of a patent. It should not reward routine web development with a 20-year monopoly.
This month’s patent is similar to the patent on filming a yoga
class or Amazon’s infamous patent on white-background
photography. In both of those cases, the examiner found some prior art but the applicant persisted by adding mundane features to the claims until the examiner could not find
documentary evidence of those exact features. An applicant can effectively game the system by adding elements so obvious no one would ever write them down in a reference. Together
with Public Knowledge, EFF recently filed an amicus
brief [PDF] asking the Supreme Court to consider the obviousness standard in patent law and to reaffirm that examiners can reject common sense combinations of known elements.
Leaving aside obviousness, the ’941 Patent should also have been rejected under Alice v. CLS Bank. Routine web development decisions should be
considered “generic” computer processes that are insufficient to elevate an abstract idea to patent eligibility. A patent application like this one, with rote recitations of basic
computer functions and a bunch of boxes and lines in a flow
chart, should at least draw a searching analysis under Alice. Yet the Patent Office never even raised Alice and subject matter eligibility during prosecution. We
have submitted multiple rounds of comments (1, 2, 3, and 4) to the Patent Office asking it to be more diligent in applying Alice.
Fortunately, the ’941 Patent has never been asserted in litigation. But software
patents like it are the raw materials behind the rise of patent trolling. Ultimately, we need broad patent
reform so such patents are not issued in the first place.
>> mehr lesen
Hearing Thursday: American Fights to Continue Case Against Ethiopian Spyware
(Di, 31 Jan 2017)
Foreign Governments Must Be Held Accountable for Wiretapping Americans in the U.S.
Washington, D.C. – On Thursday, February 2, at 9:30 am, the Electronic Frontier Foundation (EFF) and the law firms of Jones Day and Robins Kaplan will urge an appeals court to let an
American continue his suit against the Ethiopian government for infecting his computer with custom spyware and monitoring his communications for weeks on end.
With the help of EFF and the Citizen Lab, the plaintiff in this case found Ethiopian government spyware on his personal
computer in Maryland several years ago. Our investigation concluded that it was part of a systemic campaign by the Ethiopian government to spy on perceived opponents.
The plaintiff uses the pseudonym of Mr. Kidane in order to protect the safety and wellbeing of his family both in the United States and in Ethiopia. Kidane is a critic of the
Ethiopian government, and came to the U.S. over 20 years ago, obtaining asylum and eventually citizenship. He currently lives with his family in Maryland.
Kidane first brought suit against Ethiopia in 2014, but the federal court held that no foreign government could be held accountable for wiretapping an American citizen in his own
home, so Kidane appealed to the U.S. Court of Appeals for the District of Columbia Circuit. Jones Day partner Richard Martinez will argue Thursday that foreign governments should not
be allowed to spy on Americans in America with impunity.
Kidane v. Ethiopia
Thursday, February 2
E. Barrett Prettyman U.S. Courthouse
333 Constitution Ave., NW
Washington, D.C. 20001
D.C. Circuit Courtroom 31
For more on Kidane v. Ethiopia:
>> mehr lesen
For Data Privacy Day, Play Privacy As A Team Sport
(Sa, 28 Jan 2017)
Protecting digital privacy is a job no one can do alone. While there are
many steps you can take to protect your own privacy, the real protection comes when we recognize that privacy is a team sport. So as we
celebrate Data Privacy Day on January 28, don’t just change your tools and behavior to protect your own privacy—encourage
your friends, family, and colleagues to take action, too.
Don’t just install an end-to-end encrypted messaging app like Signal or WhatsApp. Encourage others to join you, too, so that you can all communicate securely. Beyond
protecting just your communications, you’re building up a user base that can protect others who use encrypted, secure services and give them the shield of plausible deniability. Use of a small secure messaging app made for
activists, for example, may be seen as a signal that someone is engaged in sensitive communications that require end-to-end encryption. But as a service's user base gets larger and
more diverse, it's less likely that simply downloading and using it will indicate anything about a particular user's activities.
On WhatsApp in particular, don’t just change your back-up settings to prevent unencrypted cloud storage of your messages. Talk to your contacts
about changing their settings, too. If any one participant in a conversation has cloud back-ups turned on, then copies of your conversations with them will be stored in the cloud
unencrypted at rest.
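The failure mode can be pictured with a toy model, written by us for illustration only (it is not WhatsApp's actual code, and every name in it is hypothetical): even when messages are end-to-end encrypted in transit, any one participant with cloud backups enabled deposits the plaintext with a provider neither of you controls.

```python
cloud_storage = []  # what the backup provider ends up holding

class Participant:
    def __init__(self, name, cloud_backup_enabled):
        self.name = name
        self.cloud_backup_enabled = cloud_backup_enabled

    def receive(self, plaintext):
        # End-to-end encryption protects the message in transit,
        # but each endpoint still holds the decrypted plaintext ...
        if self.cloud_backup_enabled:
            # ... and an enabled backup uploads it, unencrypted at rest.
            cloud_storage.append((self.name, plaintext))

alice = Participant("alice", cloud_backup_enabled=False)
bob = Participant("bob", cloud_backup_enabled=True)

# Alice turned her own backups off, but her message to Bob
# still reaches the cloud through Bob's settings.
bob.receive("meet at the courthouse at noon")

# Bob's reply to Alice is not backed up, because her backups are off.
alice.receive("ok, see you there")
print(cloud_storage)
```

The takeaway matches the advice above: your own settings only cover your endpoint, so everyone in the conversation has to opt out of unencrypted backups for the conversation to stay private.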
The same applies to email service providers. If keeping your email communications away from large tech companies like Google or Yahoo is a concern, don’t just move your email
to a different email provider or your own server. Encourage your contacts to do the same, too, otherwise your communications with contacts who use Gmail or Yahoo Mail will be
exposed to the companies you may have been trying to avoid.
Don’t just encrypt your own device. Suggest full-disk encryption to your contacts and coworkers,
too, so your files are safe after you share them.
Don’t just install Privacy Badger. Encourage your friends and family to install it, too, so we can send a louder
message together to advertisers demanding responsible ads that do not track users without consent.
Don’t just change your own social media settings and behavior. Talk with your friends
about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or even if you untag yourself from posts, friends can still
unintentionally identify you, report your location, and make their connections to you public. If you use Facebook for organizing, work with others to keep your Facebook groups private and secure.
Working together for privacy applies in offline situations, too. Don’t just prepare
yourself and your own devices for a protest. Whether in the U.S. or internationally, share precautions with organizers and fellow protesters, too, and discuss ahead of time how you
can safely document your event with powerful photos, videos, and other media.
Of course, there is no such thing as one-size-fits-all privacy advice, and each individual should consider their own threat model when taking the steps listed above. But the more we share information and best practices, the more we can
each fine-tune the ways we protect ourselves and each other. It can take a community of privacy-conscious users to protect the privacy of any one individual. Join us in celebrating
Data Privacy Day and rallying your community around the privacy stakes we all share.
>> mehr lesen
California Databases Must Not Be Used to Target Immigrants and Muslims
(Fr, 27 Jan 2017)
The California State Legislature is now considering two bills that would build a database
firewall to block the flow of personal information from state and local government to federal efforts to deport immigrants and register people based on their religion, ethnicity, or
national origin. EFF supports both bills because they would prevent abuse of law enforcement and other government databases to
target vulnerable communities.
The strongest way to protect civil liberties is to fight for privacy protections for all Californians, regardless of their national origin or immigration status. Please support S.B.
54 and S.B. 31 today.
Senate Bill 54, authored by Senate President Pro Tempore Kevin de León, would prevent
law enforcement agencies in California from sharing department databases or private information with the federal government for immigration enforcement. It would also reduce the
amount of personal information that state agencies collect, use, and share about all Californians. Senate Bill 31, authored by Sen. Ricardo Lara, would prevent local and state government
agencies from collecting data, sharing data, or using resources to participate in any program that would create a list or registry of people based on their religion, ethnicity, or
national origin—a direct response to Pres. Trump’s call for a Muslim registry. S.B. 31 would also strictly limit law enforcement from collecting information on a person’s religion.
Each bill goes before a legislative committee on January 31.
Organizational supporters of these bills include the ACLU of California, Asian Americans Advancing Justice, the California Immigrant Policy Center, the California Immigrant Youth
Justice Alliance, the Immigrant Legal Resource Center, the Mexican American Legal Defense and Education Fund, and the National Day Laborer Organizing Network.
The Perils of Database Abuse of Immigrants and Muslims
Governments gather all manner of personal information from members of the public, often for seemingly benevolent purposes, and store it in databases. All too often, governments
proceed to reuse that information in a manner that hurts these same people. Vulnerable subpopulations suffer most frequently from such database abuse.
The original sin at the dawn of our nation’s database era is the shameful use of stored personal information to round up and intern Japanese Americans during World War II.
Specifically, the U.S. Census Bureau shared its supposedly confidential data about the names and addresses of
Japanese Americans with the military officials in charge of internment. While the government initially gathered this information for a legitimate purpose, the government wrongfully
diverted it to an illegitimate purpose.
Today, many immigrants and their allies fear that the Trump administration will abuse government databases to implement its plan to rapidly deport three
million people. A ripe target is the federal database for the Deferred Action for Childhood Arrivals (DACA) program. Under DACA, some 750,000 undocumented immigrants who entered the
United States as minors (often called “Dreamers”) gave their personal information to the federal government in exchange for deferred action from deportation. Now they fear the federal government will reuse the DACA database to find and deport them. Scores of civil rights
organizations oppose such database abuse.
Pres. Trump also called for a Muslim registry. Such a
database would be illegitimate and illegal at its inception. The
Trump administration reportedly is considering the reinstatement of the infamous Bush-era NSEERS
database of Muslim immigrants. Civil rights advocates oppose this registry, too.
State and local governments in California possess myriad databases that the Trump administration might try to use to locate and deport immigrants and to register Muslims. Many
government agencies (including police, human services, and universities) gather and store a host of personal information (including names, addresses, and social security numbers) from
vast numbers of people. Federal data miners could abuse these state and local databases to pursue immigrants and Muslims.
How S.B. 54 and S.B. 31 Would Build a Database Firewall
The time is now to batten down the hatches to prepare for the coming storm.
S.B. 54 would prohibit California law enforcement agencies (including state, local, and school police) from making their databases available to any entity for purposes of immigration
enforcement. This ban includes databases maintained for agencies by private vendors. Any entity that obtains database access would be required to certify in writing that they will not
use the database for immigration enforcement.
S.B. 54 would also limit how California police agencies gather and share personal information. Specifically, agencies would be barred from collecting information about people’s
immigration status. They also would be barred from providing nonpublic personal information (such as home or work address) for immigration enforcement purposes. These rules would
advance a data privacy best practice: government agencies should not collect or share personal information except to the extent strictly necessary to do their jobs.
Of equal importance, S.B. 54 would require every state agency in California (not just police) to overhaul its confidentiality policies and identify the changes needed to ensure
it does not collect any more personal information than it needs to perform its duties, or use or disclose that information for any other purpose. Agencies would have six months to
complete this review, and the California Attorney General would have three months to draw up model policies for contracts with private vendors. This bill would have an immediate
effect in protecting immigrants and Muslims, and would also protect the privacy of all Californians.
S.B. 31, in turn, would prohibit all of California’s state and local agencies from providing personal information from their databases to the federal government for purposes of
creating or enforcing a list, registry, or database based on religion, ethnicity, or national origin. Law enforcement also would be barred from collecting information on an
individual’s religious beliefs or practices if there isn’t a clear nexus with a criminal investigation. Law enforcement agencies would still be allowed to collect religious
information to provide special accommodations, such as religiously appropriate meals in a corrections facility.
In sum, these two bills would block federal efforts to commandeer state and local databases for purposes of deporting immigrants and registering Muslims. While EFF suggested ways to build the California database firewall even higher, we fully endorse
the current bills as written.
For many years, EFF has fought government use of cutting-edge technology to target immigrants. Among other things, we oppose biometric surveillance of immigrant communities, rapid DNA analyzers as a tool of immigration enforcement, and social media monitoring of citizenship applicants and foreign visitors. Likewise, we resist street-level surveillance, such as the broken CalGang database, which all too often has an unfair disparate impact
on immigrant communities, as well as racial, ethnic, and religious minorities.
In the wake of the Trump inauguration, EFF has redoubled its opposition to high-tech government attacks on our
immigrant and Muslim friends and neighbors. The first step is to block database abuse by passing both of these bills.
Please join the coalition to protect our data and tell your lawmakers to support S.B. 54
and S.B. 31 today.
>> read more
Fear Materialized: Border Agents Demand Social Media Data from Americans
(Thu, 26 Jan 2017)
The Council on American-Islamic Relations (CAIR) recently
filed complaints against U.S. Customs and Border Protection (CBP) for, in part, demanding social media information from Muslim
American citizens returning home from traveling abroad. According to CAIR, CBP accessed public posts by demanding social media handles, and potentially accessed private posts by
demanding cell phone passcodes and perusing social media apps. And border agents allegedly physically abused one man who refused to hand over his unlocked phone.
CBP recently began asking foreign visitors to the
U.S. from Visa Waiver Countries for their social media identifiers. Last fall we filed our own comments
opposing the policy, and joined two sets of coalition comments, one by the
Center for Democracy & Technology and the other by the Brennan Center for Justice. Notably, CBP explained that it was only seeking publicly available social media
data, “consistent with the privacy settings the applicant has set on the platforms.”
We raised concerns that the policy would be extended to cover Americans and private data. It appears our fears have come true far faster than we expected. Specifically, we wrote:
It would be a series of small steps for CBP to require all those seeking to enter the U.S.—both foreign visitors and U.S. citizens and residents returning home—to disclose
their social media handles to investigate whether they might have become a threat to homeland security while abroad. Or CBP could subject both foreign visitors and U.S. persons to
invasive device searches at ports of entry with the intent of easily accessing any and all cloud data; CBP could then access both public and private online data—not
just social media content and contacts that may or may not be public (e.g., by perusing a smartphone’s Facebook app), but also other private communications and sensitive
information such as health or financial status.
We believe that the CBP practices against U.S. citizens alleged by CAIR violate the Constitution. Searching through Americans’ social media data and personal devices intrudes upon
both First and Fourth Amendment rights.
CBP’s 2009 policy on border searches of electronic devices is woefully out of date. It
does not contemplate how accessing social media posts and other communications—whether public or private—creates chilling effects on freedom of speech, including the First Amendment
right to speak anonymously, and the freedom of association.
Nor does the policy recognize the significant privacy invasions of accessing private social media data and other cloud content that is not publicly viewable. In claiming that its program of screening the social media accounts of
Visa Waiver Program visitors does not bypass privacy settings, CBP is paying more heed to the rights of foreigners than to those of American citizens.
Finally, the CBP policy does not address recent court decisions that limit the border search exception, which permits border agents to conduct “routine” searches without a warrant or
individualized suspicion (contrary to the general Fourth Amendment rule requiring a warrant based on probable cause for government searches and seizures). These new legal rulings
place greater Fourth Amendment restrictions on border searches of digital devices that contain highly personal information.
As we recently explained:
The U.S. Court of Appeals for the Ninth Circuit in U.S. v. Cotterman (2013) held that border agents
needed to have reasonable suspicion—somewhere between no suspicion and probable cause—before they could conduct a “forensic” search, aided by sophisticated software, of a digital device seized at the border.
The Supreme Court held in Riley v. California (2014) that the police may not invoke another
exception to the warrant requirement, the search-incident-to-arrest exception, to search a cell phone possessed by an arrestee—instead, the government needs a probable cause
warrant. The Court stated, “Our holding, of course, is not that the information on a cell phone is immune from search; it is instead that a warrant is generally required before
such a search, even when a cell phone is seized incident to arrest.”
Although Riley was not a border search case, the Riley rule should apply at the border, too. Thus, CBP
agents should be required to obtain a probable cause warrant before searching a cell phone or similar digital device.
Both Riley and Cotterman recognized that the weighty privacy interests in digital devices are even weightier when law enforcement officials use these devices to search
cloud content. A digital device is not an ordinary “effect” akin to a piece of luggage or wallet, but rather is
a portal into an individual’s entire life, much of which is online.
The Ninth Circuit wrote:
With the ubiquity of cloud computing, the government’s reach into private data becomes even more problematic. In the “cloud,” a user’s data, including the same kind of highly
sensitive data one would have in “papers” at home, is held on remote servers rather than on the device itself. The digital device is a conduit to retrieving information from the
cloud, akin to the key to a safe deposit box. Notably, although the virtual “safe deposit box” does not itself cross the border, it may appear as a seamless part of the digital
device when presented at the border.
And the Supreme Court wrote:
To further complicate the scope of the privacy interests at stake, the data a user views on many modern cell phones may not in fact be stored on the device itself. Treating a cell
phone as a container whose contents may be searched incident to an arrest is a bit strained as an initial matter…. But the analogy crumbles entirely when a cell phone is used to
access data located elsewhere, at the tap of a screen. That is what cell phones, with increasing frequency, are designed to do by taking advantage of “cloud computing.” Cloud
computing is the capacity of Internet-connected devices to display data stored on remote servers rather than on the device itself. Cell phone users often may not know whether
particular information is stored on the device or in the cloud, and it generally makes little difference.
The Riley Court went on to state:
The United States concedes that the search incident to arrest exception may not be stretched to cover a search of files accessed remotely—that is, a search of files stored in the
cloud…. Such a search would be like finding a key in a suspect’s pocket and arguing that it allowed law enforcement to unlock and search a house.
Thus, the border search exception also should not be “stretched to cover” social media or other cloud data, particularly that which is protected by privacy settings and thus not
publicly viewable. In other words, a border search of a traveler’s cloud content is not “routine” and thus should not be allowed in the absence of individualized suspicion. Indeed,
border agents should heed the final words of the unanimous Riley decision: “get a warrant.”
We hope CBP will fully and fairly investigate CAIR’s grave allegations and provide a public explanation. We also urge the agency to change its outdated policy on border searches of
electronic devices to comport with recent developments in case law. Americans should not fear having their entire digital lives unreasonably exposed to the scrutiny of the federal
government simply because they travel abroad.
>> read more
Vote for EFF on CREDO's January Ballot
(Thu, 26 Jan 2017) EFF is one of the three non-profits featured in CREDO's giving pool this month. If you're a CREDO customer or member of its action
network, vote for EFF before the end of the month to help direct as much as $150,000 to support the defense of digital civil liberties.
Since CREDO's founding, its members have raised more than $81 million for different charities. Each month, CREDO selects
three groups to receive a portion of donations that the selected nonprofits then use to drive positive change. CREDO customers generate funds as they use paid
services—like making phone calls or using credit cards—and members can vote on how to distribute donations among the selected charities. The more votes a group
receives, the higher its share of that month's donations.
EFF is proud to stand alongside organizations that defend users' rights. Last fall, CREDO revealed that EFF had been representing the company in a long legal battle
over the constitutionality of national security letters (NSLs). The FBI has issued unknown numbers of NSL demands for companies' customer information without a
warrant or court supervision; NSLs are typically accompanied by a gag order, making it difficult for the recipients to complain or resist. Until recently, such a
gag prevented CREDO from disclosing it had received two NSLs in 2013. However, in March, a district court found that the FBI had failed to demonstrate the need for
this particular gag, allowing CREDO to explain why the legal challenge is important to the company and its customers.
We are honored to be one of January's charities, and we hope you will vote for us. You can also support our work by spreading the word on
Twitter and Facebook
or just becoming an EFF member!
>> read more
Shadow Regulation Around the World
(Wed, 25 Jan 2017)
A Look at Copyright Enforcement Agreements
For close to 20 years, online copyright enforcement has taken place under a predictable set of legal rules, based around taking down allegedly infringing material in response to
complaints from rights holders. In the United States, these rules are in Section 512 of the Digital Millennium Copyright Act (DMCA), and
in Europe they are part of the E-Commerce Directive. In a nutshell, both sets of rules
protect web platforms from liability for copyright infringement for material that they host, until they receive a formal notice about the claimed infringement from the copyright
holder. This system is imperfect, and has resulted in many mistaken or bad faith takedowns. But as imperfect as the rules are, the fact
that they are established by law at least means that they are pretty clear and well understood. That may be about to change.
Around the world, big media lobbyists are pushing for changes to the way copyright is enforced online, and they're focusing on new codes of conduct or industry agreements, rather than
new laws. In particular, we have written in depth about Europe's plans to force platforms to enter into private agreements
with copyright holders to filter files that users upload to the web, something that copyright holders would also like to see done in the United States. They're pushing this new upload filtering mandate through
private agreements to avoid the long and divisive process of developing such requirements through laws debated in parliaments, regulations made on the public record, or a balanced multi-stakeholder process.
The problem with this approach is that the more that we rely on private agreements to create a regime of content regulation, the less transparent and accountable that regime becomes.
That's why EFF is highly skeptical of this backdoor approach to regulating, which we call Shadow Regulation. Copyright
enforcement measures through Shadow Regulation are taking shape around the world. Here are a few examples:
Tracking peer-to-peer downloads
The United Kingdom is about to launch a new industry program that requires participating ISPs to deliver educational emails to
users who are accused of using their connection to share copyright infringing files. This program is the UK's equivalent of the United States' Copyright Alert System, and just like that system, it subjects users to intrusive tracking of their
online behavior by the private agents of copyright holders. Unlike the U.S. system, the educational emails to users will not result in any action to slow or suspend the accounts
of accused users.
Website blocking
Since 2015, Portugal has had a code of voluntary enforcement for copyright
infringement that requires ISPs to institute DNS-level blocking of allegedly copyright-infringing websites. No court order is required to verify the websites put forward for
blocking, which are identified by copyright associations and rubber-stamped by Portugal's General Inspection of Cultural Activities (IGAC). If this sounds a little like SOPA, you'd be right—and it's even worse because it wasn't passed by elected Portuguese lawmakers,
but by a shadowy private agreement.
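To make the mechanism concrete, here is a minimal sketch of how DNS-level blocking works: the resolver, not the website, is the enforcement point. The domain names and addresses below are hypothetical, and a real blocking order operates inside an ISP's production resolver, not a Python dictionary.

```python
# Toy model of an ISP resolver operating under a blocking order.
BLOCKLIST = {"blocked-site.example"}          # domains named in a blocking order
SINKHOLE = "0.0.0.0"                          # non-routable answer for blocked names

REAL_RECORDS = {
    "blocked-site.example": "203.0.113.7",
    "ordinary-site.example": "198.51.100.42",
}

def resolve(domain: str) -> str:
    """Return the address a filtering resolver would hand back."""
    if domain in BLOCKLIST:
        return SINKHOLE      # the site's true record still exists elsewhere
    return REAL_RECORDS[domain]

print(resolve("blocked-site.example"))   # sinkhole answer
print(resolve("ordinary-site.example"))  # real answer
```

Because the real DNS records are untouched, simply switching to an unfiltered third-party resolver circumvents this kind of block, which is one reason DNS blocking is both overbroad and ineffective.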
Privatized notice and takedown
Numerous other countries including Belgium [PDF, French], Malaysia, and South Africa, have industry codes of conduct detailing procedures for the removal of
allegedly unlawful content by Internet content hosts. In some cases this includes copyrighted material, and in other cases it's reserved only for other types of unlawful content
(for example, Europol’s Internet Referral Unit focuses on the
voluntary removal of terrorist material). Because these removals are negotiated under a private and notionally "voluntary" agreement, they are not subject to judicial review as
removals ordered by a court would be.
These agreements, and others like them, have established a bad precedent, giving a veneer of respectability to the movement in Europe to establish an upload filtering system through
similar "voluntary" agreements. Indeed, the more we rely on such private agreements to construct our copyright enforcement system, the more difficult it becomes to push back against
further such agreements and to demand that copyright enforcement take place within a predictable, balanced, and accountable legal framework.
Copyright enforcement online is already plenty tough, and the level of infringement that remains poses no real threat to the record profits of the movie and music industries. Therefore, there's no need
for new copyright enforcement measures at all—indeed, dealing with the problems of the enforcement measures that we already have is keeping EFF busy enough.
But the reality is that proposals for more copyright enforcement measures are already on the table in Europe, and looming in the United States. If we have to face new copyright
enforcement proposals, we would much rather do so in a forum that is inclusive, balanced, and
accountable than by having these proposals emerge fully-formed from an impenetrable black box, negotiated by industry insiders and lobbyists. Shadow Regulation is never an
appropriate mechanism for crafting new copyright enforcement rules. If new rules ever become necessary, their only legitimacy can come from the inclusion of user representatives and
other affected stakeholders at every step of the process.
>> read more
6 Questions with EFF's New Staff Technologist Erica Portnoy
(Wed, 25 Jan 2017)
EFF is happy to welcome our newest
Staff Technologist Erica Portnoy. Erica is joining EFF's technology projects team, a group of technologists and computer
scientists engineering responses to the problems of third-party tracking, inconsistent encryption, and other threats to users' privacy and security online. Erica earned her BSE in
computer science at Princeton, and comes to EFF with experience in messaging privacy, searchable encryption, and tech policy and civil rights.
I asked Erica a few questions about her background and what she'll be working on at EFF.
What are you most excited about working on this year?
I'm excited to be working on Certbot, EFF's Let's Encrypt client. We're gradually working towards stability and the long tail of use cases. I'm hoping to get it to the point where it just
works for as many people as possible, so they can get and install their certificates 100% painlessly.
What drew you to EFF?
EFF's tech projects team is doing the uncommon work of making direct, concrete, technical contributions to improving people's safety online. Plus, everyone who works here is the
nicest person you'll ever meet, which I promise is not logically inconsistent.
What kind of research did you do before coming to EFF?
My previous work involved experimenting with cryptographically-enforced privacy for cloud services. So I've worked with ORAM and encrypted search and SGX, to drop some jargon.
What advice would you have for users trying to secure their communications?
If you are only going to do one thing, use a password manager and diceware. I use the one built into Chrome, with a sync passphrase set up. No one's going to bother exploiting a million-dollar bug if your password is the same as the one you used for
a service that was recently breached.
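The diceware idea mentioned above can be sketched in a few lines: draw words uniformly at random from a large wordlist using a cryptographically secure source. The eight-word list below is a stand-in for illustration only; a real diceware list has 7,776 words, giving roughly 12.9 bits of entropy per word.

```python
# Illustrative diceware-style passphrase generator. Uses Python's `secrets`
# module (a CSPRNG), never the predictable `random` module.
import secrets

WORDLIST = ["correct", "horse", "battery", "staple",
            "orbit", "lantern", "pickle", "canyon"]  # stand-in wordlist

def diceware(n_words: int = 6) -> str:
    """Join n_words picked uniformly at random from the wordlist."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

phrase = diceware()
print(phrase)
```

With a full 7,776-word list, a six-word passphrase has about 77 bits of entropy, which is far beyond what an attacker reusing breached passwords can guess.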
But more broadly, this is a hard issue, and the best thing to do is different for every individual. Definitely look at our Surveillance Self-Defense
guide for more in-depth recommendations.
On another side of that, what should tech companies be doing to protect their users? How can users hold them accountable?
Especially now, companies can't absolve themselves of the responsibility for their users by claiming, "Well, high-risk users shouldn't be using our product." If a company makes a
product that is used by people in high-risk situations, it is their duty to protect their users by offering the ability to turn on security features.
But that's the bare minimum. A system should neither compute nor retain
information that could harm its users, and organizations that might have this data must also fight to protect people on a legal front.
As for users, making your voice heard will inform design decisions. Leave a one-star review on an application distribution platform, like the Play Store or App Store, and include
specific details of how the design decision in the product is harmful to your safety or the safety of those you care about. Do the same thing on Twitter. It's hard to prioritize
features without knowing what people want to see.
How much are you loving EFF's dog-friendly offices?
90% of why I'm not a TGIF person is because Neko doesn't come in on Fridays. The other 10% is because Neko won't be there on the weekend, either.
>> read more
EFF To Patent Office: Supreme Court Limits On Abstract Patents Are a Good Thing
(Wed, 25 Jan 2017)
EFF has submitted comments to the Patent Office urging it not
to support efforts to undermine the Supreme Court’s recent decision in Alice v. CLS Bank. The Patent Office had called for public submissions regarding whether “legislative changes are desirable” in response to recent court
decisions, including Alice. We explain that, far from harming the software industry, Alice has helped it thrive.
When the Supreme Court issued its ruling in Alice, it was a shock to a patent system that had been churning out software patents by the tens of thousands every year. Back in
the 1990s, the Federal Circuit had opened the software patent floodgate with its
rulings in State Street and In re
Alappat. Those decisions held that any general purpose computer could be eligible for a patent so long as it was programmed to perform a particular function. In Alice, the
Supreme Court substantially moderated that holding by ruling that a generic computer is not eligible for a patent simply because it is programmed to implement an abstract idea.
Courts have applied Alice to throw out many of the worst software patents. Alice is particularly valuable because, in some cases,
courts have applied it early in litigation, thereby preventing patent trolls from using the high expense of litigation to pressure defendants into settlements. While we think that the
Federal Circuit could do more to diligently apply Alice, it has
at least been a step forward.
As the Alice case made its way to the Supreme Court, defenders of software patents predicted disaster would befall the software industry if the courts invalidated the patent.
For example, Judge Moore of the Federal Circuit suggested that a ruling for the defendant
“would decimate the electronics and software industries.” This prediction turned out to be entirely inaccurate.
In our comments, we explain that the software industry has thrived in the wake of Alice. For example, while R&D spending on software and Internet development went up an
impressive 16.5% in the 12 months prior to the Alice decision, it increased by an even more dramatic 27% in the year following Alice. Similarly, employment growth for software developers remains very strong, as anyone who has tried to
rent an apartment in the Bay Area can attest.
We also express concern that the Patent Office’s guidance puts a thumb on the scale in favor of patent eligibility. For example, the Patent Office’s call for comments asked how it
can make certain decisions better known to examiners. But it focused only on decisions finding patent claims eligible. During the same period, even more decisions were issued by the
Federal Circuit finding software-related claims ineligible, but those decisions were left off the list.
Some commentators have suggested that the Patent Office takes an “intentionally narrow” view of
Alice. But it is not the Patent Office’s job to narrow Supreme Court holdings; its job is to apply them. Ultimately, the patent system does not exist to create jobs for patent
prosecutors, examiners, or litigators. It exists for the constitutional purpose of “promot[ing] the Progress of Science and useful Arts.” With no evidence that Alice is harming
software development, the Patent Office should not focus on pushing more patenting on the industry.
Many other non-profits and companies submitted comments in favor of the changes brought by the Alice decision. These include comments from Public Knowledge, Engine, and Mozilla. We hope the Patent Office listens to this feedback from outside the patent world before
making any legislative recommendations.
Public comment periods are an important check on concentrated interests pushing regulations that hurt the public interest. EFF regularly submits comments to the Patent Office where
rules are proposed that would harm the public. For example, EFF and Public Knowledge recently submitted comments to the Patent Office regarding applicants' duties of disclosure. This is the duty to tell the Patent
Office about material (such as existing inventions) relevant to whether the application is patentable. The Patent Office has proposed a new rule that would require patent applicants
to submit material only if the material would actually lead to a rejection of a pending claim. That is, the Patent Office proposed adopting the standard set out in a case called
Therasense, which was a decision from the Court of Appeals for the Federal Circuit regarding the
standards for finding a patent invalid for inequitable conduct. The Patent Office justified its proposed change as being simpler for applicants and as lessening the incentive to
submit only marginally relevant material.
In our comments, we urged the Patent Office to maintain its current standards. We explained that the change would do little to reduce charges of inequitable conduct. In addition,
we suggested that a better way to reduce the amount of marginally relevant material submitted would be for the Patent Office to more frequently enforce its procedures requiring
patent applicants to explain the relevance of materials submitted to the office.
>> read more
Final Hearing in Diego Gomez's Case on Wednesday
(Tue, 24 Jan 2017)
After a postponed hearing in October, final arguments in Diego Gomez’s case are scheduled
for Wednesday, January 25. This marks the potential conclusion of a court case that has gone on for more than two and a half years. Regardless of the verdict, Diego’s case is an
urgent, global reminder to advocates of open research: open access must become the default in academic publishing, and we need global reforms to get there.
When Diego Gomez, a biology master’s student at the University of Quindio in Colombia, shared a colleague’s thesis with other scientists over the Internet,
he was doing what any other science grad student would do: sharing research he found useful so others could benefit from it and build on it. But the author of the paper filed a
lawsuit over the “violation of [his] economic and related rights,” putting this master’s graduate in his late 20s at risk of being sentenced to four to eight years in prison with crippling monetary
fines. (Colombian digital rights organization Fundación Karisma, in addition to providing Diego with legal assistance, has documented Diego’s
story in detail here.)
Diego’s case would not exist if open access were the default in academic publishing. “Open access” refers to the free, immediate, online availability of scholarly research, in
contrast to the current status quo of expensive subscription journals and paywalled databases. Open access policies are critical to education, innovation, and global progress.
We need major reform of our laws, both internationally and domestically, to make open access the norm and ensure that sharing, promoting scientific progress, and exercising creative
expression are not crimes.
As we await the final verdict in Diego’s trial, it is more important than ever to join EFF and open access allies all over the world in standing with Diego. Sign this petition before
Diego’s trial and make your voice heard in support of open access worldwide.
>> read more
EFF Asks Massachusetts High Court to Require Clear Limits Before Allowing Searches of Digital Devices and Information
(Tue, 24 Jan 2017)
Along with several other advocacy groups, EFF signed on to an amicus brief this week in the case of the Commonwealth of Massachusetts v. James Keown, in support of requiring
courts to set pre-search limits on the method of digital searches by law enforcement pursuant to judicially authorized warrants.
Keown was charged with murdering his wife after she died of an apparent poisoning. The evidence against him included a forensic search of his laptop, which revealed web searches for
homemade poison. Although the police got a warrant to do this forensic examination, it allowed them to conduct a nearly unfettered search of the computer.
Searches of digital devices—in this case, a laptop—are different from searches of physical spaces, both in the scale of information at issue and the way in which that information is
stored. The unique features of digital devices and the enormous amount of information stored on them make Fourth Amendment protections all the more important to uphold, especially
with respect to the “particularity” requirement. In order to avoid general searches, the Fourth Amendment requires that in addition to demonstrating probable cause, a warrant must
“particularly describ[e] the places to be searched and the persons or things to be seized.” In the brief, EFF asks the Court to set explicit limits on the scope of digital searches by
outlining concrete categories of relevant information prior to the warrant’s execution – a series of ex-ante search protocols – to ensure that the government does not exceed its
authority when executing a search warrant on digital devices and information.
Ex-ante search protocols—such as limits based on date, time, recipient or sender identities, types and sizes of files, etc.—tailored to the law enforcement inquiry for which probable
cause has been established, can assure magistrate judges that a search will be limited as much as possible to only the relevant information sought and justified in the warrant.
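To make the idea tangible, an ex-ante search protocol can be thought of as a filter fixed before the search begins: the warrant sets a date window and file categories, and the examination tool surfaces only matching items. The file index, dates, and category labels below are purely hypothetical illustrations, not a real forensic tool's interface.

```python
# Hypothetical sketch of a date- and category-limited search protocol.
from datetime import datetime

# (path, modified, kind) records standing in for a forensic file index
FILE_INDEX = [
    ("notes.txt",  datetime(2016, 3, 1), "document"),
    ("budget.xls", datetime(2014, 1, 5), "spreadsheet"),
    ("cat.jpg",    datetime(2016, 4, 2), "image"),
]

def scoped_search(index, start, end, kinds):
    """Return only items inside the warrant's date window and categories."""
    return [path for path, modified, kind in index
            if start <= modified <= end and kind in kinds]

hits = scoped_search(FILE_INDEX,
                     start=datetime(2016, 1, 1),
                     end=datetime(2016, 12, 31),
                     kinds={"document"})
print(hits)  # only "notes.txt" falls within the protocol's limits
```

The point of the sketch is that the limits are declared up front and applied mechanically, so the examiner never rummages through files the warrant does not cover.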
Massachusetts should join the courts that have begun to move toward ex-ante protocols to bolster Fourth Amendment protections. The issuance of a search warrant for a specific file or
piece of evidence should not give law enforcement carte blanche to generally search all of the digital information stored on your device. Because such ex-ante search protocols
were needed in Keown’s case but were not used, we argue the evidence seized from his laptop should be suppressed.
You can read our amicus brief in full below.
>> read more
New CIA Director Mike Pompeo Sparks Privacy Concerns
(Tue, 24 Jan 2017)
The U.S. Senate confirmed Kansas Republican Rep. Mike Pompeo as Director of the CIA late on Monday over the objections of several congressional Democrats, who warned that putting
Pompeo at the head of the intelligence agency would threaten civil liberties.
In an impassioned floor speech, Sen. Bernie Sanders called it “vital to have a head of the CIA who will stand up for our constitution, stand up for privacy rights.” He continued,
“Unfortunately, in my view, Mr. Pompeo is not that individual.”
As we said late last year, we have concerns that many of President
Donald Trump’s nominees, including Pompeo, will undermine digital rights and civil liberties, and those concerns persist.
Specifically, Pompeo sponsored legislation that would have reinstated the National Security Agency’s bulk collection of
Americans’ telephone metadata—an invasive program that civil liberties and privacy advocates fought to curtail by enacting the USA FREEDOM Act.
We also noted troubling op-eds written by Pompeo. In one piece in late 2015, Pompeo
criticized Republican presidential candidates who were supposedly “weak” on national security and intelligence collection. “Less intelligence capacity equals less safety,” he wrote.
In another op-ed a few weeks later, Pompeo criticized lawmakers for “blunting [the intelligence
community’s] surveillance powers” and called for “a fundamental upgrade to America’s surveillance capabilities.”
Critics on the Senate floor—including Sens. Ron Wyden, Patrick Leahy and Bernie Sanders—homed in on the latter op-ed, which also recommended restarting the metadata collection that
was curtailed under USA FREEDOM Act and “combining it with publicly available financial and lifestyle information into a comprehensive, searchable database.” Pompeo continued, “Legal
and bureaucratic impediments to surveillance should be removed.”
While Pompeo’s defenders argued that an effective intelligence agency should make use of publicly available information posted to social media, Wyden—who fought for a delay to give the
Senate more time to consider Pompeo’s nomination—drew a sharp distinction between seeking out social media information related to a known intelligence target and creating the database
Pompeo envisioned.
“It is something else entirely to create a giant government database of everyone’s social media postings and to match that up with everyone’s phone records,” Wyden said, calling the
idea “a vast database on innocent Americans.”
Wyden also criticized Pompeo for skirting questions from lawmakers about what
kinds of information would end up in the database, including whether the database would include information held by data brokers, the third-party companies that build profiles of
internet users. He criticized Pompeo for being unwilling to “articulate the boundaries of what is a very extreme proposal.”
EFF thanks all 32 Senators who voted against
Pompeo and his expansive vision of government surveillance. We were especially pleased by the “no” vote from
our new home-state Sen. Kamala Harris of California.
EFF and other civil liberties advocates will work hard to hold Pompeo accountable as CIA Director and block any attempts by him or anyone else to broaden the intrusive government
surveillance powers that threaten our basic privacy rights.
EFF to Santa Clara County: Improve Police Body Camera Rules
(Tue, 24 Jan 2017)
EFF sent a letter to the Santa Clara County Board
suggesting ways to improve the county Sheriff’s proposed
policy for the use of body-worn cameras (BWCs). We did so with our allies the ACLU of California and the Bay Area chapter of the Council on American-Islamic Relations.
BWCs may help protect civil liberties, but only if they are adopted with robust community input and are subject to strong policies that ensure they promote police transparency and
accountability. Without appropriate policies, BWCs may instead become another police tool of street-level surveillance.
Our letter addresses, among other issues, limits on when deputies may record at protests; discipline for deputies who fail to record their law enforcement activities, such as arrests
or use of force; when deputies may review their BWC footage; when the Sheriff’s Office must release BWC footage to the public; and when BWC footage should be deleted.
We made our BWC suggestions pursuant to Santa Clara County’s Surveillance Technology Ordinance. This salutary law, enacted in June 2016, ensures community control
of whether county government will adopt spying tools, and if so, what privacy safeguards are needed. Specifically, only the Santa Clara County Board of Supervisors can approve new
surveillance technologies, and it can only do so after giving the public notice and an opportunity to be heard. EFF supported this Santa Clara ordinance, and we support adoption of similar laws
for BART, Oakland, and Palo Alto. In October 2016, EFF used this Santa Clara ordinance to seek changes to the Sheriff’s proposed policy for an Integrated Helicopter Mapping
EFF has opposed BWC rules that fail to protect privacy and advance police accountability. For example, EFF in September 2015 opposed the LAPD’s policies for BWCs. And last year, EFF opposed several California bills regarding BWCs.
Proposed BWC guidelines have been published by the ACLU, the Constitution Project, and the Leadership Conference on Civil and Human Rights.
Attorney General Nominee Sessions Backs Crypto Backdoors
(Tue, 24 Jan 2017)
As the presidential campaign was in full swing early last year, now-President Trump made his feelings on encryption clear. Commenting on the Apple-FBI fight in San Bernardino, Trump
threatened to boycott Apple if they didn’t cooperate: “to think that Apple won't allow us to get into [the] cell phone,” Trump said in an interview. “Who do they think they are? No, we have to open it up.”
For that reason, we were curious what Trump’s nominee for Attorney General, Sen. Jeff Sessions (R-AL), would say about the role of encryption.
At his confirmation hearing, Sessions was largely non-committal. But in his written responses to questions posed by Sen. Patrick Leahy, he took a much clearer position:
Question: Do you agree with NSA Director Rogers, Secretary of Defense Carter, and other national security experts that strong encryption helps protect this country from
cyberattack and is beneficial to the American people’s digital security?
Response: Encryption serves many valuable and important purposes. It is also critical, however, that national security and criminal investigators be able to overcome
encryption, under lawful authority, when necessary to the furtherance of national-security and criminal investigations.
Despite Sessions’ “on the one hand, on the other” phrasing, this answer is a clear endorsement of backdooring the security we all rely on. It’s simply not feasible for encryption to serve what Sessions concedes are its “many valuable and important purposes” and still be
“overcome” when the government wants access to plaintext. As we saw last year with Sens. Burr and Feinstein’s draft Compliance with Court Orders Act, the only way to give the
government this kind of access is to break the Internet and outlaw industry best
practices, and even then it would only reach the minority of encryption products made in the United States.
As we’ve done for more than two decades, we will strongly oppose
any legislative or regulatory proposal to force companies or other providers to give Sessions what he’s demanding: the ability to “overcome encryption.” Code is speech, and no law that mandates backdoors can be both effective and pass
constitutional scrutiny. If Sessions follows through on his endorsement of “overcoming” encryption, we’ll see him in court.
Supreme Court Should Block Printer Company’s Ploy to Undermine Consumer Rights
(Mon, 23 Jan 2017)
EFF Urges Justices to Protect Important ‘Patent Exhaustion’ Doctrine
San Francisco - When you buy a printer cartridge, is it yours? Or can the company control what you do with it, even after you pay your bill and take it home? The Electronic Frontier
Foundation (EFF) urged the U.S. Supreme Court today to protect consumers’ property rights in a court case centering on the important “patent exhaustion” doctrine.
In Impression Products, Inc. v. Lexmark International Inc., printer company Lexmark sold printer cartridges with restrictions on refilling and resale. Impression Products
acquired used Lexmark ink cartridges and then refilled and resold them, sparking a lawsuit from Lexmark claiming infringement. The Federal Circuit decided in Lexmark’s favor, ruling
that a customer’s use of a product can be “restricted” by the patent owner with something as simple as a notice on disposable packaging.
In the amicus brief filed today, EFF—joined by Public Knowledge, AARP and the AARP Foundation, Mozilla, and R
Street—argued that “conditional sales” like the ones attempted by Lexmark cannot impose arbitrary conditions on a customer’s use of a product. The Federal Circuit’s incorrect ruling
to the contrary goes against the doctrine of “patent exhaustion,” which says that once a patent owner sells a product, it cannot later claim the product’s use or sale is infringing.
“If allowed to stand, the lower court’s decision could block your right to reuse, resell, and tinker with the devices you own,” said EFF Staff Attorney Daniel Nazer, who is also the
Mark Cuban Chair to Eliminate Stupid Patents. “Under this theory, consumers could be held liable for infringement for using products that they purchased legally and for which the
patent owner has already been paid.”
Patent exhaustion is rooted in centuries of law upholding the right of individuals to use and resell their possessions. If patent owners can control goods after sale, then all
sorts of activities—like security research, reverse engineering, and device modification—would be threatened.
“This trick is straight out of some companies’ wishlists for restricting user rights,” said EFF Staff Attorney Kit Walsh. “They have tried a variety of legal tactics to restrict your
ability to repair or resell the things you buy, and to prevent experts from investigating how they work. That includes experts who want to figure out if your devices are secure and
respecting your privacy, or who want to build products that can plug in to your devices and make them do new and useful things. We urge the Supreme Court to reaffirm the patent
exhaustion doctrine, and protect people’s rights to own and understand the products they’ve purchased.”
For the full amicus brief: