Responsibility Deflected, the CLOUD Act Passes
(Fri, 23 Mar 2018)
UPDATE, March 23, 2018: President Donald Trump signed the $1.3 trillion government spending bill—which includes the CLOUD Act—into law Friday morning.
“People deserve the right to a better process.”
Those are the words of Jim McGovern, representative for Massachusetts and member of the House of Representatives Committee on Rules, when, after 8:00 PM EST on Wednesday, he and his
colleagues were handed a 2,232-page bill to review and approve for a floor vote by the next morning.
In the final pages of the bill—meant only to appropriate future government spending—lawmakers snuck in a separate piece of legislation that made no mention of funds, salaries, or
budget cuts. Instead, this final, tacked-on piece of legislation will erode privacy protections around the globe.
This bill is the CLOUD Act. It was never reviewed or marked up by
any committee in either the House or the Senate. It never received a hearing. It was robbed of a stand-alone floor vote because Congressional leadership decided, behind closed doors,
to attach this un-vetted, unrelated data bill to the $1.3 trillion government spending bill. Members of Congress have a professional responsibility to listen to the American people's concerns, to
represent their constituents, and to debate the merits and flaws of this proposal amongst themselves, and this week, they failed.
On Thursday, the House approved the omnibus government spending bill, with the CLOUD Act attached, in a 256-167 vote. The Senate followed up late that night with a 65-32 vote in
favor. All the bill requires now is the president’s signature.
Make no mistake—you spoke up. You emailed your representatives. You told them to protect privacy and to reject the CLOUD Act, including any efforts to attach it to must-pass spending
bills. You did your part. It is Congressional leadership—negotiating behind closed doors—who failed.
Because of this failure, U.S. and foreign police will have new mechanisms to seize data across the globe. Because of this failure, your private emails, your online chats, your
Facebook, Google, Flickr photos, your Snapchat videos, your private lives online, your moments shared digitally between only those you trust, will be open to foreign law enforcement
without a warrant and with few restrictions on using and sharing your information. Because of this failure, U.S. laws will be bypassed on U.S. soil.
As we wrote before, the CLOUD Act is a far-reaching, privacy-upending piece of legislation that will:
Enable foreign police to collect and wiretap people's communications from U.S. companies, without obtaining a U.S. warrant.
Allow foreign nations to demand personal data stored in the United States, without prior review by a judge.
Allow the U.S. president to enter "executive agreements" that empower police in foreign nations that have weaker privacy laws than the United States to seize data in the United
States while ignoring U.S. privacy laws.
Allow foreign police to collect someone's data without notifying them about it.
Empower U.S. police to grab any data, regardless of whether it belongs to a U.S. person or not, no matter where it is stored.
And, as we wrote before, this is how the CLOUD Act could work in practice:
London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect
those messages. The London police would not necessarily need prior judicial review for this request. The London police would not be required to notify U.S. law enforcement about this
request. The London police would not need a probable cause warrant for this collection.
Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could
be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages, if shared with U.S. law enforcement, could be used to criminally charge the U.S.
person in a U.S. court, even though a warrant was never issued.
This bill has large privacy implications both in the U.S. and abroad. It was never given the attention it deserved in Congress.
As Rep. McGovern said, the people deserve the right to a better process.
Stupid Patent of the Month: A Token of Troll Appreciation
(Thu, 22 Mar 2018)
In 2014, the U.S. Supreme Court decided a case that,
for the most part, banned the kind of “do it on a computer” style patents that have plagued the U.S. patent system for decades. Ever since then, IP maximalists have been doing
whatever they can to roll back or reverse the landmark Alice v. CLS Bank case. Small businesses, meanwhile, rely on it to avoid the expense of litigating plainly abstract patents.
Today, we take a look at a patent that shows why the Supreme Court’s Alice decision is so vital. U.S. Patent No.
7,177,838 claims “electronic tokens,” and the primary claim simply describes using such tokens in Internet commerce. The patent dates back to an application first filed in 2000, a
time when many “inventions” were able to get the stamp of approval from the Patent Office simply because they mentioned the World Wide Web.
The ’838 patent claims nothing more than a kind of “e-money,” and as most people should be able to see, that’s not really much of an invention, much less a patentable one. Using tokens
and symbols to represent monetary value is as old as currency itself. It’s a classic example of something that has existed for a long time before the Internet and before
computers—clearly abstract and barred under Alice.
It wasn’t unusual for companies to acquire silly patents like this one in the go-go days of the first dot-com boom. If the patent had just moldered on a shelf, perhaps it wouldn’t be
a big deal. But that’s not what GTX Corp., the owner of the ’838 patent, decided to do. The owners of GTX have engaged in a years-long campaign to squeeze tens of thousands of dollars
out of businesses, and their newest targets are small gaming studios. Among them is a company called Playsaurus, which makes a game called “Clicker Heroes 2.” A lawyer from GTX
wrote to them and said they’d be sued unless they agreed to pay the “bargain” $35,000 licensing fee to avoid “costly litigation.”
Playsaurus decided to fight back rather than pay that fee. The company’s attorney wrote a letter to GTX correctly noting that the ’838 patent is “a perfect example of a
patent that is subject matter ineligible under 35 U.S.C. § 101.” Issuing tokens for purchases just isn’t an invention, and shouldn’t be considered one, whether it’s done on a
computer, on the Internet, or offline. GTX’s stunning suggestion that its electronic token idea is a patentable concept, deserving of a $35,000 payout, didn’t escape the notice of the
programming community and the technology press.
Putting aside Alice, the patent has other serious problems. GTX and inventor Martin Ling were not the first to use tokens as cash even in the context of a videogame.
Playsaurus cites just one example of this, the 2000 game Neopets, in which “users could buy and use Neocash within the game.” Electronic payments systems also had been theorized,
analyzed, and discussed in academic literature well before 2000.
The ’838 patent is also currently featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any more prior art for the ’838 patent, you can submit it (before May 4) to Unified Patents for a possible $2,000 prize.
Alice Under Threat
Changes to the patent system, including the Alice v.
CLS Bank decision, have helped make it easier for a company like Playsaurus to resist a bogus patent demand. But
Alice threatens the trolling business model so much that it keeps coming under attack, in both the courts and Congress.
A recent decision called Berkheimer v. HP may make it much harder to defend a case using Alice. In Berkheimer, the district court invalidated a patent that
described a system for “archiving and outputting documents.” But the district court judge was overturned on appeal, when the U.S. Court of Appeals for the Federal Circuit held
[PDF] that whether or not a technology was “well-understood, routine, and conventional” is a
factual determination that requires more proceedings.
This ruling could undermine a key benefit of Alice: namely, that patent defendants shouldn’t have to engage in expensive discovery and trial proceedings to negate patents
that are abstract on their face. The defendant in that case, HP, has asked the full Federal Circuit to reconsider that decision, and we agree that it warrants the court’s attention.
Businesses like Playsaurus shouldn’t have to go through extensive discovery and motion practice to prove what real innovators in their industry know: e-tokens are an abstract idea and
not eligible for a patent. Neither are ideas like tracking a package, or remotely diagnosing an illness, or holding a photo contest. The Alice decision is indispensable to small businesses that are being held up by such patents, and it’s worth protecting.
The New Frontier of E-Carceration: Trading Physical for Virtual Prisons
(Thu, 22 Mar 2018)
Criminal justice advocates have been working hard to abolish cash
bail schemes and dismantle the
prison industrial complex. And one of the many tools touted as an alternative to incarceration is electronic monitoring or “EM”: a form of digital incarceration, often
using a wrist bracelet or ankle “shackle” that can monitor a subject’s location, blood alcohol level, or breath. But even as the use of this new incarceration technology expands,
regulation and oversight over it—and the unprecedented amount of information it gathers—still lags behind.
There are many different kinds of electronic monitoring schemes:
Active GPS tracking, where the transmitter monitors a person using satellites and reports location information in real time at set intervals.
Passive GPS tracking, where the transmitter tracks a person's activity and stores location information for download the next day.
Radio Frequency ("RF") tracking, primarily used for “curfew monitoring,” where a home monitoring unit is set to detect a bracelet within a specified range and then sends confirmation to a monitoring center.
Secure Continuous Remote Alcohol Monitoring ("SCRAM"), which analyzes a person's perspiration once every hour to extrapolate blood alcohol content.
Breathalyzer monitoring reviews and tests a subject’s breath at random to estimate BAC and typically has a camera.
Monitors are commonly a condition of pre-trial release or of post-conviction supervision like probation or parole. They are sometimes a strategy to reduce jail and prison populations.
Recently, EM’s applications have widened to include juveniles,
the elderly, individuals accused or convicted of DUIs or domestic violence, immigrants awaiting legal proceedings, and adults in drug programs.
This increasingly wide use of EM by law enforcement remains relatively unchecked. That’s why EFF, along with over 50 other organizations, has endorsed a set of Guidelines for Respecting the Rights of Individuals on Electronic
Monitoring. The guidelines are a multi-stakeholder effort led by the Center for Media Justice's Challenging E-carceration project to outline the legal and policy considerations
that law enforcement’s use of EM raises for monitored individuals’ digital rights and civil liberties.
For example, a paramount concern is the risk of racial discrimination. People of color tend to be placed on EM far more often
than their white counterparts: Black people in Cook County, IL make up 24% of the county’s population, yet represent 70% of people on EM. This ratio mirrors the similarly skewed racial disparity in physical incarceration.
Another concern is cost shifting. People on EM often pay user
fees ranging from $3-$35/day along with $100-$200 in setup charges, shifting the costs of electronic incarceration from the government to the monitored and their families.
Usually, this disproportionately affects poor communities of color who are already over-policed and over-represented within the criminal justice and immigration systems.
Then there are the consequences to individual privacy that threaten the rights not just of the monitored, but also of those who interact with them. When children, friends, or family
members rely on individuals on EM for transportation or housing, they often suffer privacy intrusions from the same mechanisms that monitor their loved ones.
Few jurisdictions have regulations limiting access to location tracking data and its attendant metadata, or specifying how long such information should be kept and for what purpose.
Private companies that contract to provide EM to law enforcement typically store location data on monitored individuals and may share or sell clients’ information for a profit. This
jeopardizes the safety and civil rights not just of the monitored, but also of their families, friends, and roommates who live, work, or socialize with them.
Just one example of how location information stored over time can provide an intimate portrait of someone’s life, and even be mined by machine learning systems to detect
deviations in regular travel habits, is featured in this bi-analytics marketing video.
So, what do we do about EM? We must demand strict constitutional safeguards against its misuse, especially because “GPS monitoring generates [such] a precise, comprehensive
record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations” as the U.S. Supreme
Court recognized in U.S. v. Jones. A 2014 Pew Research Center study found that 82% of Americans consider the details of their physical location over time to be sensitive information, including 50% who consider it to
be “very sensitive.” Thus, law enforcement should be required to get a warrant or other court order before using EM to track an individual’s location information.
For criminal defense attorneys looking for more resources on fighting EM, review our one-pager
explainer and practical advice. And if you seek amicus support in
your case, email email@example.com with the following information:
Case name & jurisdiction
Case timeline/pending deadlines
Defense Attorney contact information
Brief description of your EM issue
How Congress Censored the Internet
(Wed, 21 Mar 2018)
In Passing SESTA/FOSTA, Lawmakers Failed to Separate Their Good Intentions from Bad Law
Today was a dark day for the Internet.
The U.S. Senate just voted 97-2 to pass the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), a bill that silences online speech by forcing Internet platforms to censor their users. As lobbyists and
members of Congress applaud themselves for enacting a law tackling the problem of trafficking, let’s be clear: Congress just made trafficking victims less safe, not more.
The version of FOSTA that just passed the Senate combined an earlier version of FOSTA (what we call FOSTA 2.0) with the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). The history of SESTA/FOSTA—a bad bill that turned into a worse bill and then was rushed through votes in
both houses of Congress—is a story about Congress’ failure to see that its good intentions can result in bad law. It’s a story of Congress’ failure to listen to the constituents who’d
be most affected by the laws it passed. It’s also the story of some players in the tech sector choosing to settle for compromises and half-wins that will put ordinary people in danger.
Silencing Internet Users Doesn’t Make Us Safer
SESTA/FOSTA undermines Section 230, the most important law protecting free speech online. Section 230 protects online platforms from
liability for some types of speech by their users. Without Section 230, the Internet would look very different. It’s likely that many of
today’s online platforms would never have formed or received the investment they needed to grow and scale—the risk of litigation would have simply been too high. Similarly, in
absence of Section 230 protections, noncommercial platforms like Wikipedia and the Internet Archive
likely wouldn’t have been founded given the high level of legal risk involved with hosting third-party content.
Importantly, Section 230 does not shield platforms from liability under federal criminal
law. Section 230 also doesn’t shield platforms across-the-board from liability under civil law: courts have allowed civil claims against online platforms when a
platform directly contributed to unlawful speech. Section 230 strikes a careful balance between enabling the pursuit of justice and promoting free speech and innovation online:
platforms can be held responsible for their own actions, and can still host user-generated content without fear of broad legal liability.
SESTA/FOSTA upends that balance, opening platforms to new criminal and civil liability at the state and federal levels for their users’ sex trafficking activities. The platform liability created by new
Section 230 carve-outs applies retroactively—meaning the increased liability applies to trafficking that took place before the law passed. The Department of Justice has raised concerns [.pdf] that this violates the Constitution’s Ex Post Facto Clause, at least as to the criminal provisions.
The bill also expands existing federal criminal law to target online platforms where sex trafficking content appears. The bill is worded so broadly that it could even be used against
platform owners that don’t know that their sites are being
used for trafficking.
Finally, SESTA/FOSTA expands federal prostitution law to cover those who use the Internet to “promote or facilitate prostitution.”
It’s easy to see the impact that this ramp-up in liability will have on online speech: facing the risk of ruinous litigation, online platforms will have little choice but to become
much more restrictive in what sorts of discussion—and what sorts of users—they allow, censoring innocent people in the process.
What forms that erasure takes will vary from platform to platform. For some, it will mean increasingly restrictive terms of service—banning sexual content, for example, or
advertisements for legal escort services. For others, it will mean over-reliance on automated filters to delete borderline posts. No matter what
methods platforms use to mitigate their risk, one thing is certain: when platforms choose to err on the side of censorship, marginalized voices are censored disproportionately. The Internet will become a less inclusive
place, something that hurts all of us.
Big Tech Companies Don’t Speak for Users
SESTA/FOSTA supporters boast that their bill has the support of the technology community, but it’s worth considering what they mean by “technology.” IBM and Oracle—companies whose business models don’t heavily rely on Section
230—were quick to jump onboard. Next came the Internet Association, a trade association representing the world’s largest Internet companies, companies that will certainly be able to survive SESTA while their smaller competitors struggle to comply.
Those tech companies simply don’t speak for the Internet users who will be silenced under the law. And tragically, the people likely to be censored the most are trafficking victims themselves.
SESTA/FOSTA Will Put Trafficking Victims in More Danger
Throughout the SESTA/FOSTA debate, the bills’ proponents provided little to no evidence that increased platform liability would do anything to reduce trafficking. On the other hand,
the bills’ opponents have presented a great deal of evidence that shutting down platforms where sexual
services are advertised exposes trafficking victims to more danger.
Freedom Network USA—the largest national network of organizations working to reduce trafficking in their communities—spoke out early to express grave concerns [.pdf] that removing sexual ads from the Internet would also
remove the best chance trafficking victims had of being found and helped by organizations like theirs as well as law enforcement agencies.
Reforming [Section 230] to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking.
It is important to note that responsible website administration can make trafficking more visible—which can lead to increased identification. There are many cases of victims being
identified online—and little doubt that without this platform, they would have not been identified. Internet sites provide a digital footprint that law enforcement can use to
investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are
forced into even more dangerous circumstances.
Freedom Network was far from alone. Since SESTA was introduced, many
experts have chimed in to point out the danger that SESTA would put all sex workers in, including those who are being trafficked. Sex workers themselves have spoken out too,
explaining how online platforms have literally saved their lives. Why didn’t Congress bring those experts to its deliberations on SESTA/FOSTA over the past year?
While we can’t speculate on the agendas of the groups behind SESTA, we can study those same groups’ past advocacy work. Given that history, one could be forgiven for thinking that
some of these groups see SESTA as a mere stepping
stone to banning pornography from the Internet or blurring the legal distinctions between
sex work and trafficking.
In all of Congress’ deliberations on SESTA, no one spoke to the experiences of the sex
workers that the bill will push off of the Internet and onto the dangerous streets. It wasn’t surprising, then, when the House of Representatives presented its “alternative” bill,
one that targeted those communities more directly.
“Compromise” Bill Raises New Civil Liberties Concerns
In December, the House Judiciary Committee unveiled its new revision of FOSTA. FOSTA 2.0 had the same inherent flaw that its predecessor had—attaching more
liability to platforms for their users’ speech does nothing to fight the underlying criminal behavior of traffickers.
In a way, FOSTA 2.0 was an improvement: the bill was targeted only at platforms that intentionally facilitated prostitution, and so would affect a narrower swath of the Internet. But
the damage it would do was much more blunt: it would expand federal prostitution law such that online platforms would have to take down any posts that could potentially be in support
of any sex work, regardless of whether there’s an indication of force or coercion, or whether minors were involved.
FOSTA 2.0 didn’t stop there. It criminalized using the Internet to “promote or facilitate” prostitution. Activists who work to reduce harm in the sex work community—by providing
health information, for example, or sharing lists of dangerous clients—were rightly worried that prosecutors would attempt to use this law to put their work in jeopardy.
Regardless, a few holdouts in the tech world believed that their best hope of stopping SESTA was to endorse a censorship bill that would do slightly less damage to the tech industry.
They should have known it was a trap.
SESTA/FOSTA: The Worst of Both Worlds
That brings us to last month, when a new bill combining SESTA and FOSTA was rushed
through congressional procedure and overwhelmingly passed the House.
Thousands of you picked up your phone and called your senators, urging them to oppose
the new Frankenstein bill. And you weren’t alone: EFF, the American Civil Liberties Union, the Center for Democracy and Technology, and many other experts pleaded
with Congress to recognize the dangers to free speech and online communities that the bill presented.
Even the Department of Justice wrote a letter urging Congress not to go forward with the hybrid bill [.pdf]. The DOJ said that the
expansion of federal criminal law in SESTA/FOSTA was simply unnecessary, and could possibly
undermine criminal investigations. When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.
Assuming that the president signs it into law, SESTA/FOSTA is the most significant rollback to date of the protections for online speech in Section 230. We hope that it’s the last,
but it may not be. Over the past year, we’ve seen more calls than ever to create new exceptions to Section 230.
In any case, we will continue to fight back against proposals that undermine our right to speak and gather online. We hope you’ll stand with us.
EFF Helps SEACC Stand Up To Mining Company, Protects Fair Use Rights
(Wed, 21 Mar 2018)
When a mining company sent a cease and desist letter aimed at a critical documentary, the Southeast Alaska Conservation Council (SEACC) worked with the Electronic Frontier Foundation
to help them respond. Hecla Mining Company claimed [PDF] that SEACC had infringed Hecla’s
copyright by using short clips from a Hecla promotional video. We worked with SEACC to draft and send a letter [PDF] explaining that this was a classic fair use of Hecla’s material. In response, Hecla withdrew its
demand. While this case resolved the right way, it shows that even elementary fair use sometimes requires the counsel of a lawyer.
“Irreparable Harm” is a short film sponsored by SEACC. The movie is about Alaska’s Admiralty Island, a National Monument which has
been inhabited by the Tlingit people for thousands of years. In addition to several hundred people living in the Tlingit village of Angoon, the huge island near Juneau is also home to
an estimated 2,500 bald eagles, more than 1,000 bears, and one silver mine—Hecla’s Greens Creek Mine.
The documentary explores the mine’s relationship with its Tlingit neighbors, highlighting pollution levels in traditional Tlingit food sources. SEACC says contamination has increased
since Greens Creek, the only mine operating within a U.S. National Monument, began production in 1989.
This year, “Irreparable Harm” is screening in cities around the country, including at several environmental-themed film festivals such as the touring Wild & Scenic Film
Festival—which apparently didn’t sit too well with Hecla Mining Company. Instead of offering a substantive response to the film, Hecla hired
big-city lawyers in an attempt to shut down the movie with a spurious copyright claim against the nine-person grassroots environmental organization from Juneau.
In a letter sent last month, Hecla claimed that SEACC’s use of footage from a company promotional video about Greens Creek
violated the Copyright Act. Ignoring SEACC’s fair use rights, the letter goes on to demand that SEACC “cease any and all reproduction of Hecla’s copyrighted works, including but not
limited to, any showings of the Irreparable Harm film.”
EFF responded to Hecla’s demands on behalf of SEACC. We pointed out what should have been obvious—that the use of short clips in a critical documentary is “a paradigmatic case of fair
use.” SEACC used just 28 seconds of footage from Hecla’s promotional video, combining it with voice-over commentary on Hecla’s mining practices.
Hecla has since backed off, stating [PDF] that it has “decided not to take further action” at this time. We’re
glad that we were able to help SEACC in this case. But filmmakers shouldn’t have to hire a lawyer to protect their fundamental right to free expression. Copyright is meant to spur the
production of new works, but unfortunately, it’s all too easy to use it as a tool of censorship (in this case, we might call it a “Hecla’s veto”).
Don’t let the potential of a copyright threat squelch your speech. For those seeking guidance on future projects, the Association of Independent Video and Filmmakers has a “best
practices” guide to fair use, and is a veritable “silver mine” of information.
To schedule a viewing of SEACC’s film or find one near you, contact the organization directly at firstname.lastname@example.org.
Yet Another Lesson from the Cambridge Analytica Fiasco: Remove the Barriers to User Privacy Control
(Tue, 20 Mar 2018)
Last weekend’s Cambridge Analytica news—that the company was able to access tens of millions of users’ data by paying low-wage workers on Amazon’s Mechanical Turk to take
a Facebook survey, which gave Cambridge Analytica access to Facebook’s dossier on each of those turkers’ Facebook friends—has hammered home two problems: first, that
Facebook’s default privacy settings are woefully inadequate to the task of really protecting user privacy; and second, that ticking the right boxes to make Facebook less creepy is far
too complicated. Unfortunately for Facebook, regulators in the U.S.
and around the world are looking for solutions, and fast.
But there’s a third problem, one that platforms and regulators themselves helped create: the plethora of legal and technical barriers that make it hard for third
parties—companies, individual programmers, free software collectives—to give users tools that would help them take control of the technologies they use.
Think of an ad-blocker: you view the web through your browser, and so you get to tell your web-browser which parts of a website you want to see and which parts you want to ignore. You
can install plugins to do trivial things, like replace the word “millennials” with “snake
people”—and profound things, like making the web readable by
people with visual impairments.
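The “snake people” plugin mentioned above amounts to little more than a few lines of text replacement. A minimal sketch in the style of a browser extension content script (the function names and word list here are illustrative, not the actual plugin’s code):

```javascript
// Replace whole-word occurrences of `from` with `to`, case-insensitively.
// This is the core of a "snake people"-style rewriting plugin.
function replaceWord(text, from, to) {
  // Word boundaries ensure "millennials" matches but "premillennialism" does not.
  const pattern = new RegExp("\\b" + from + "\\b", "gi");
  return text.replace(pattern, to);
}

// In a real extension, a content script would walk the page's text nodes
// and rewrite each one in place (browser-only; shown for illustration).
function rewritePage(root, from, to) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = replaceWord(node.nodeValue, from, to);
  }
}
```

The point is not the specific words swapped, but that the user, not the site, gets the final say over what their browser renders.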
Ad-blockers are nearly as old as the web. In the early days of the web, they broke the deadlock over pop-up ads,
allowing users to directly shape their online experience, leading to the death of pop-ups as advertisers realized that serving a pop-up was a guarantee that virtually no one
would see your ad. We—the users—decided what our computers would show us, and businesses had to respond.
Web pioneer Doc Searls calls the current generation of ad-blockers “the largest consumer revolt in history.” The users of technology have availed themselves of the tools to give them
the web they want, not the web that corporations wanted us to have. The corporations that survive this revolt will be the ones who can deliver services that users are willing to use
without add-ons that challenge their business-models.
In his 1999 classic Code and Other Laws of Cyberspace, Lawrence Lessig argued that our world is regulated by four forces:
Law: what's legal
Markets: what's profitable
Norms: what's morally acceptable
Code: what's technologically possible
Under ideal conditions, companies that do bad things with technology are shamed and embarrassed by bad press (norms); they face lawsuits and regulatory action (law); they lose
customers and their share-price dips (markets); and then toolsmiths make add-ons for their product that allow us all to use them safely, without giving up our personal information, or
being locked into their software store, or having to get repairs or consumables from the manufacturer at any price (code).
But an increasing slice of the web is off-limits to the “code” response to bad behavior. When a programmer at Facebook makes a tool that allows the company to harvest the personal
information of everyone who visits a page with a “Like” button on it, another programmer can write a browser plugin that blocks this
button on the pages you visit.
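The core decision such a plugin makes, whether a given resource comes from a tracking domain, can be sketched in a few lines. The host list below is a hand-picked illustration; real blockers such as Privacy Badger build their lists from observed tracking behavior rather than hardcoding them:

```javascript
// Illustrative list of hosts to block; a real blocker maintains a much
// larger, often behaviorally learned, set of domains.
const TRACKER_HOSTS = ["facebook.com", "facebook.net"];

// Return true if the URL's hostname is a tracker host or a subdomain of one.
function isTrackingUrl(url) {
  let host;
  try {
    host = new URL(url).hostname;
  } catch (e) {
    return false; // not a parseable URL, nothing to block
  }
  return TRACKER_HOSTS.some(
    (t) => host === t || host.endsWith("." + t)
  );
}
```

A content script or request-blocking rule would then consult this check before letting an embedded “Like” button load.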
This week, we made you a tutorial explaining the tortuous
process by which you can change your Facebook preferences to keep the company’s “partners” from seeing all your friends’ data. But what many folks would
really like to do is give you a tool that does it for you: go through the tedious work of figuring out Facebook’s inscrutable privacy dashboard, and roll that
expertise up in a self-executing recipe—a piece of computer code that autopiloted your browser to login to Facebook on your behalf and ticked all the right boxes for you,
with no need for you to do the fiddly work.
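In spirit, such a recipe is just "apply an expert's known-good settings for me." Here is a minimal, purely hypothetical sketch: a real tool would have to drive an actual browser session, and the toggle names below are invented stand-ins, not Facebook's real settings.

```python
# Hypothetical sketch of a "self-executing recipe" for privacy settings.
# A real tool would drive the browser itself; here the inscrutable
# dashboard is modeled as a plain dictionary of on/off toggles, and
# every setting name is invented for the example.

PRIVACY_SAFE_DEFAULTS = {
    "apps_others_use.birthday": False,
    "apps_others_use.hometown": False,
    "apps_others_use.activities": False,
    "platform.third_party_apps": False,
}

def apply_recipe(current_settings):
    """Return a copy of the settings with every known
    data-sharing toggle switched off."""
    fixed = dict(current_settings)
    for key, safe_value in PRIVACY_SAFE_DEFAULTS.items():
        fixed[key] = safe_value
    return fixed

# The user runs the expert's recipe instead of hunting for each box:
before = {"apps_others_use.birthday": True, "platform.third_party_apps": True}
after = apply_recipe(before)
print(all(value is False for value in after.values()))  # True
```

The point is the shape of the tool: expertise captured once, then applied automatically for everyone who runs it.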
But they can’t. Not without risking serious legal consequences, at least. A series of court decisions—often stemming from the online
gaming world, sometimes about Facebook itself—has made fielding code that fights for the user into a
legal risk that all too few programmers are willing to take.
That's a serious problem. Programmers can swiftly make tools that allow us to express our moral preferences, allowing us to push back against bad behavior long before any
government official can be convinced to take an interest—and if your government never takes an interest, or if you are worried about the government's use of technology to
interfere in your life, you can still push back, with the right code.
Today, we are living through a “techlash” in which the world has woken up to realize that a single programmer can make choices that affect
millions—billions—of people’s lives. America’s top computer science degree programs are making ethics an integral part of their curriculum. The ethical epiphanies of
geeks have profoundly shaped the way we understand our technology (if only all technologists were so concerned with the ethics of their jobs).
We need technologists to thoughtfully communicate
technical nuance to lawmakers; to run businesses that help people master their technology; to passionately
make the case for better technology design.
But we also need our technologists to retain the power to affect millions of lives for the better. Skilled toolsmiths can automate the process of suing Equifax, filing for housing aid after
you’re evicted, fighting a parking ticket, or forcing an airline to give you a refund
if your ticket’s price drops after you buy it (and that’s all just one programmer, and he hasn’t even graduated yet!).
When we talk about “walled gardens,” we focus on the obvious harms: an App Store makes one company the judge, jury, and executioner of whose programs you can run on
your computer; apps can’t be linked into, and they disappear from our references; platforms get to spy on you when you use them; opaque algorithms decide what you hear (and
thus who gets to be heard).
But more profoundly, the past decade’s march to walled gardens has limited what we can do about all these things. We still have ad-blockers (but not for “premium video” anymore, because writing an ad-blocker that bypasses DRM is a potential
felony), but we can’t avail ourselves of tools to auto-configure our privacy dashboards, or snoop on our media players to see if they’re snooping on
us, or any of a thousand other useful and cunning improvements to our technologically mediated lives.
Because in the end, the real risk of a walled garden isn’t how badly it can treat us: it’s how helpless we are to fight back against it with our own, better
code. If you want to rein in Big Tech, it would help immensely to have lots of little tech in use showing how things might be if the giants behaved themselves. If you want your
friends to stop selling their private information for a mess of pottage, it would help if you could show them how to have an online social life without surrendering their privacy. If
you want the people who bet big on the surveillance business-model to go broke, there is no better way to punish them in the marketplace than by turning off the data-spigot with tools
that undo every nasty default they set in the hopes that we'll give up and use products their way, not ours.
Facebook v. Power Ventures
Blizzard v. BNETD
How To Change Your Facebook Settings To Opt Out of Platform API Sharing
(Mon, 19 Mar 2018)
You shouldn't have to do this. You shouldn't have to wade through complicated privacy settings in order to ensure that the companies with which you've entrusted your personal
information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and
regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.
Over the weekend, it became clear that Cambridge Analytica, a data analytics company, got access to more
than 50 million Facebook users' data in 2014. The data was overwhelmingly collected, shared, and stored without user consent. The scale of this violation of user privacy reflects
how Facebook's terms of service and API were structured at the time. Make no mistake: this was not a data breach. This was exactly how Facebook's infrastructure was designed to work.
In addition to raising questions about Facebook's role in the 2016 presidential election, this news is a reminder of the inevitable privacy risks that users face when their personal
information is captured, analyzed, indefinitely stored, and shared by a constellation of data brokers, marketers, and social media companies.
Tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful
transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Finally, users need to
be able to leave when a platform isn’t serving them — and take their data with them when they do.
Of course, you could choose to leave Facebook entirely, but for many that is not a viable solution. For now, if you'd like to keep your data from going through Facebook's API, you can
take control of your privacy settings. Keep in mind that this disables ALL platform apps (like Farmville, Twitter, or Instagram) and you will not be able to log into sites using your
Facebook login.
Log into Facebook and visit the App Settings page (or go there manually via
the Settings Menu > Apps).
From there, click the "Edit" button under "Apps, Websites and Plugins." Click "Disable Platform."
If disabling platform entirely is too much, there is another setting that can help: limiting the personal information accessible by apps that others use. By default, other people who
can see your info can bring it with them when they use apps, and your info becomes available to those apps. You can limit this as follows.
From the same page, click "Edit" under "Apps Others Use." Then uncheck the types of information that you don't want others' apps to be able to
access. For most people reading this post, that will mean unchecking every category.
Catalog of Missing Devices: Physics Barbie
(Mon, 19 Mar 2018)
Savvy parents know that every cloud-connected electronic gadget they buy for their kids is a potential hole in their network, a sneaky listening device that hangs around some of the
most sensitive and personal moments of their kids' lives and the lives of the whole family. But tomorrow's smart parents know that those toys are a potential platform for innovation,
places where parents, programmers and businesses can work to create new operating systems that never talk to the cloud, and that replace the canned messages of a distant corporate
design department with material of your own choosing.
Advocating for Change: How Lucy Parsons Labs Defends Transparency in Chicago
(Mon, 19 Mar 2018)
Here at the Electronic Frontier Alliance, we’re lucky to have incredible member organizations engaging in advocacy on
our issues across the U.S. One of those groups in Chicago, Lucy Parsons Labs (LPL), has done incredible work taking on a range of
civil liberties issues. They’re a dedicated group of advocates volunteering to make their world (and the Windy City) a better, more equitable place.
We sat down with one of the founders of LPL, Freddy Martinez, to gain a better understanding of the Lab and how they use their collective powers for good.
How would you describe Lucy Parsons Labs? How did the organization get started, and what need were you trying to fill?
The lab got started four years back when a few people doing digital security training in Chicago saw there was a need for a more technical group that could bridge the gap between
advocacy and technology. We each had areas of interest and expertise that we were doing activism around, and it grew pretty organically from there. For example, lawmakers would try to
pass a bill without understanding the full implications that the piece of legislation would have, technologically or otherwise. We began to work together on these projects to
educate lawmakers and inform the public on these issues as a friend group, and the organization grew out of that as we added or expanded projects. We do a lot of public records
requests and work on police transparency, but our group has broad, varied interests. The common thread that runs through the work is that we have a lot of expertise in a lot of
different advocacy areas, and we leverage that expertise to make the world better. It lets us sail in many different waters.
LPL participates in the Electronic Frontier Alliance (EFA), a network of grassroots digital rights groups around the country. Your work in Chicago runs the gamut from
advocating for transparency in the criminal justice system to investigating civil asset forfeiture, from operating a SecureDrop system for whistleblowers to investigating the
use of cell-site simulators by the Chicago Police Department. Given that, how does the EFA play into your work?
I feel that the more the organization grows, the more having groups around the country who are building capacity is key to making sure that these projects get done. There’s such a
huge amount of work to be done, and having other partners who are interested in various subsections of our work and can help us achieve our goals is really valuable. EFA provides us
access to a diverse array of experts, from academics and lawyers to grassroots activists. It gives us a lot of leverage, and lets us share our subject matter expertise in ways we
wouldn’t be able to if we were going it alone.
Let’s talk surveillance. LPL has done incredible work via the open records process to expose the use of cell-site simulators (sometimes referred to as “Stingrays” or IMSI
Catchers) by the Chicago Police Department. Can you tell us about how you started investigating, and why these kinds of surveillance need to be brought into the public eye?
I actually heard of this equipment through news reporting—you would see major cities buying these devices, and then troubling patterns began to emerge. Prosecutors would begin
dropping cases because they didn’t want to tell defense attorneys where they got the information or how. There were cases of parallel construction. After noticing this trend, I sent
my first public records request to get info on whether the Chicago Police Department had bought any. Instead of following the law, they decided to ignore the request until a judge
ordered them to release the records. They were ostensibly used for the war on drugs, but usually they are used overseas in the war on terror. They test these technologies on black and
brown populations in war zones, then bring them back to surveil their citizens. It’s an abuse of power and an invasion of privacy. We need to be talking about this. We think that
there’s a reason that this stuff is acquired in secret, because people would not be okay with their government doing this if they knew.
LPL has done tons of community work in the anti-surveillance realm as well. Why do you believe educating people about how they can protect themselves from surveillance is important?
I think that you need to give people the breathing room to participate in society safely. Surveillance is usually thought of as an eye in the sky watching over your every move, but
it’s so much more pervasive than that. We think about these things in abstract ways, with very little understanding of how they can affect our daily lives. A way to frame the
importance of, say, encryption, is to use the example of medical correspondence. If you’re talking to your doctor, you don’t want your messages to be seen by anyone else. It’s
critical to have these discussions and decisions made in public so that people can make informed decisions about their lives and privacy. This is a broader responsibility we have as a
society, and to each other.
Do you have any advice for other community-based advocacy groups based on your experience?
I have found that being organized is extremely important. We’re a small team of volunteers, so we have to keep things really well documented, especially when dealing with something
like public records requests. You also have to, and I can’t stress this enough, enjoy the work and make sure you don’t burn out. It’s a labor of love—you need to be invested in these
projects and taking care of yourself in order to do effective activism. Otherwise the work will suffer.
LPL has partnered with other organizations and community groups in the past. What are some ways that you’ve found success in coalition building? What advice would you give to
other groups that would like to work more collaboratively with their peer groups?
LPL is also part of a larger group called the Chicago Data Collaborative, where we are working on sharing and analyzing data on the
criminal justice system. One of the most important pieces of information to know before embarking on a multi-organization enterprise is that you will have to do a lot of capacity
building in order to work together effectively. You’ll need to set aside a lot of time and effort to context build for those not in the know. You must be “in the room” (whether that’s
digital or physical) for dedicated, direct collaboration. This is what makes or breaks a good partnership.
Anything else you’d like to add?
I have a bit of advice for people who’d like to get involved in grassroots activism and advocacy, but aren’t sure where to start: you never know when you’re going to come across
these projects. Being curious and following your gut will take you down weird rabbit holes. Get started somewhere, and you’ll be surprised how far that will take you.
If you’re advocating for digital rights within your community, please explore the Electronic Frontier Alliance and consider joining.
This interview has been lightly edited for length and readability.
Senator Wyden Asks NSA Director Nominee the Right Questions
(Fri, 16 Mar 2018)
Lt. Gen. Paul Nakasone, the new nominee to direct the NSA, faced questions Thursday from the Senate Select Committee on
Intelligence about how he would lead the spy agency. One committee member, Senator Ron Wyden (D-OR), asked the nominee if he and his agency could avoid the mistakes of the past, and
refuse to participate in any new, proposed spying programs that would skirt the law and violate Americans’ constitutional rights.
“In 2001, then-President Bush directed the NSA to conduct an illegal, warrantless wiretapping program. Neither the public nor the full intelligence committee learned about this
program until it was revealed in the press,” Wyden said. Wyden, who was a member of the committee in 2001, said he personally learned about the NSA surveillance program—which bypassed
judicial review required from the Foreign Intelligence Surveillance Court—by reading about it in the newspaper. Sen. Wyden continued:
“If there was a form of surveillance that currently requires approval by the [Foreign Intelligence Surveillance Court] and you were asked to avoid the court, based on some kind of
secret legal analysis, what would you do?”
Lt. Gen. Nakasone deferred, assuring Sen. Wyden that he would receive a “tremendous amount of legal advice” in his new job, if confirmed.
Sen. Wyden interrupted: “Let me just stop it right there, so I can learn something that didn’t take place before. You would, if asked, tell the entire committee that you had been
asked to [review such a program]?”
“Senator,” Lt. Gen. Nakasone responded, “I would say that I would consult with the committee—”
“When you say consult,” Wyden interrupted again, “you would inform us that you had been asked to do this?”
Lt. Gen. Nakasone repeated himself: he would consult with the committee, and keep senators involved in such discussions. Lt. Gen. Nakasone added, though, that “at the end of the day,
Senator, I would say that there are two things I would do. I would follow the law, and I would ensure, if confirmed, that the agency follows the law.”
Sen. Wyden took it as a win.
“First of all, that’s encouraging,” Wyden said, “because that was not the case back in 2001.”
“In 2001, the President said we’re going to operate a program that clearly was illegal. Illegal! You’ve told us now, you’re not going to do anything illegal. That’s a plus. And you
told us that you would consult with us if you were ever asked to do something like that. So, I appreciate your answer.”
Sen. Wyden also asked Lt. Gen. Nakasone about encryption. Sen. Wyden asked Lt. Gen. Nakasone if he agreed with encryption experts’ opinion that, if tech companies were required to
“permit law enforcement access to Americans’ private communications and data,” then such access could be exploited by “sophisticated, foreign government hackers,” too.
Again, Lt. Gen. Nakasone avoided a direct yes or no answer, and again, Sen. Wyden interrupted.
“My time is up, general. Just a yes-or-no answer to the question, with respect to what experts are saying,” Wyden said. “Experts are saying that the tech companies can’t modify their
encryption to permit law enforcement access to Americans’ private communications without the bad guys getting in, too. Do you disagree with the experts, that’s just a yes or no.”
“I would offer Senator,” Lt. Gen. Nakasone said, “that it’s a conditional yes.”
Wyden, a staunch encryption advocate in the Senate, interpreted Lt. Gen. Nakasone’s answer positively. “That’s encouraging as well,” Wyden said. “I look forward to working with you in
the days ahead.”
Senate Intelligence Committee Chairman Richard Burr (R-NC), at the close of the hearing, said he would like to swiftly move Lt. Gen. Nakasone’s nomination forward. If other Senators
have the opportunity to question Lt. Gen. Nakasone about his potential leadership of the NSA, we hope they ask pointed, necessary questions about the agency’s still-ongoing
surveillance program under Section 702, and how the nominee plans to reconcile the agency’s widespread, invasive spying program with Americans’ constitutional right to privacy.
How FOSTA Could Give Hollywood the Filters It's Long Wanted
(Fri, 16 Mar 2018)
Some of the biggest names in the U.S. entertainment industry have expressed a recent interest in a topic that’s seemingly far away from their core business: shutting down online
prostitution. Disney, for instance, recently wrote to key U.S.
senators expressing their support for SESTA, a bill that was originally aimed at sex
traffickers. For its part, 20th Century Fox told the same senators that anyone
doing business online “has a civic responsibility to help stem illicit and illegal activity.”
Late last year, the bill the entertainment companies supported morphed from SESTA into FOSTA,
and then into a kind of Frankenstein bill that combines the worst aspects of both. The
bill still does nothing to catch or punish traffickers, or provide help to victims of sex trafficking.
As noted by Freedom Network USA, the largest coalition of organizations working to fight human
trafficking, law enforcement already has the ability to go after sex traffickers and anyone who helps them. Responsible web operators can help in that task. The civil liabilities
imposed by FOSTA could actually harm the hunt for perpetrators.
Freedom Network suggests the better approach would be to provide services and support to victims, but that’s not what FOSTA does. What it does do is offer a powerful incentive for
online platforms to police the speech of users and advertisers.
A perceived violation of a state’s anti-trafficking laws could lead to authorities seeking civil or criminal penalties, or a barrage of lawsuits.
So, why are movie studios involved at all in this debate? Hollywood is lobbying for laws that will force online intermediaries to shut down user speech. That’s what they’ve been
seeking since practically the beginning of the Internet.
A Brief History of Safe Harbors
The Internet as we know it is underpinned by two critical laws that have allowed user speech to blossom: Section 230 of the Communications Decency Act, and 17 U.S. Code § 512, which
outlines the “safe harbor” provisions of the Digital Millennium Copyright Act, or DMCA.
Section 230 prevents online platforms from being held liable, in many cases, for their users’ speech. Platforms are free to moderate speech in a way that works for them—removing spam
or trolling comments, for instance—without being compelled to read each comment, or view each video, a task that’s simply impossible on sites with thousands or millions of users.
Similarly, the DMCA safe harbor shields the same service providers from copyright damages based on user infringement, as long as they follow certain guidelines. The two laws work
together to send a clear message: in the online world, users are responsible for their own actions and speech, and online platforms can mediate that speech—or not—as fits the needs of their users.
For two decades now, Section 230 and the DMCA have complemented each other, allowing for an explosion of online creativity. Without the DMCA safe harbor, small businesses could face
bankruptcy over the copyright infringement of a few users. And without Section 230, the same businesses could be sued for a vast array of user misbehavior that they didn’t even know
about. Lawsuits for libel or invasion of privacy, for instance, could be aimed at the platform, rather than the person who actually committed those acts.
Without these key legal protections, many sites would make the safe choice and simply choose to not host free and unfettered discussions. Others might begin to police user content
overzealously, removing or blocking lots of lawful speech for fear of letting something illegal slip through. The safe harbors keep the focus for any online wrongdoing on the actual
wrongdoer, whether it’s a civil violation like copyright infringement, or criminal acts.
It’s hardly a free-for-all for the companies protected by the safe harbors, which have significant limits. Online platforms that edit or direct user speech that violates the law, for
instance, can’t avail themselves of Section 230 protections. It’s fine to run online advertisements, but sites that help users post ads for illegal or discriminatory content can
be, and have been, held accountable.
Section 230 doesn’t offer any shield against federal criminal law, and one doesn’t have to look far to find website operators that have been punished under those laws. The operator of
the online marketplace Silk Road, for instance, was convicted of federal drug trafficking offenses.
Nor does protection accrue to websites that make contributions, even small ones, to illegal content. An online housing website, Roommates.com, lost Section 230 protection simply
because it required users to answer questions that could be used in housing discrimination. While EFF has long expressed concerns about the free speech implications of the 2008
Fair Housing Council v. Roommates.com decision, it remains the law and demonstrates that Section 230 is far from a free pass.
Likewise, the DMCA safe harbors only apply if an online platform complies with numerous requirements, including implementing a repeat-infringer policy and responding to notices of
infringement by taking down content.
Towards a Filtered Net?
For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without
safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.
The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and
virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies
shouldn’t “blindly run platforms with no control of the content.”
That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An
Internet that’s policed by “copyright bots” is what major film studios and record labels have
advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a
push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.
Software filters can provide a useful first take in moderating content, but they need proper supervision from humans. Bots still can’t determine when use of copyrighted material is
fair use, for instance, which is why a best practice is to always let human creators dispute the
determination of an automated filter.
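The division of labor described above can be sketched in a few lines. This is an illustrative toy, not any real platform's pipeline; the filter, the review callback, and the post labels are all invented for the example:

```python
# Illustrative sketch: an automated filter makes the first pass, but
# anything it flags goes to a human reviewer, and the human decision
# always overrides the bot's.

def automated_filter(post):
    # Crude stand-in for a copyright bot: it flags any post that
    # mentions a protected clip, with no notion of fair use.
    return "protected_clip" in post

def moderate(posts, human_confirms_removal):
    """The bot only nominates posts for removal; a human makes
    the final call on each flagged post."""
    removed = []
    for post in posts:
        if automated_filter(post) and human_confirms_removal(post):
            removed.append(post)
    return removed

posts = [
    "my vacation video",
    "protected_clip with my commentary (fair use)",
    "protected_clip uploaded verbatim",
]

# A human reviewer lets the fair-use commentary stand:
decisions = {posts[1]: False, posts[2]: True}
print(moderate(posts, lambda p: decisions[p]))  # ['protected_clip uploaded verbatim']
```

Without the human step, the commentary post would have been removed too, which is exactly the failure mode mandatory filters invite.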
Similarly, it’s unlikely that an automated filter will be able to determine the nuanced difference between actual online sex-trafficking and a discussion about
sex-trafficking. Knocking down safe harbors will lead to an over-reliance on flawed filters, which can easily silence the wrong people.
Those filters would create a huge barrier to entry for startups, non-profits, and hobbyists. And at the end of the day, they’d hurt free speech. Saying that new technology can
produce a successful filter is a fallacy—bots simply can’t do fair use.
So when Hollywood and entrenched tech interests suddenly take a new interest in the problem of sex trafficking, it’s fair to wonder why. After all, an Internet subject to corporate
filters will make it harder, not easier, to hunt down and prosecute sex traffickers.
Punching a hole in safe harbors to reshape the Internet has been the project, in many different forms, for more than a decade now. The FOSTA bill, if it passes the Senate, will be the
first major success in dismantling a safe harbor. But don’t count on it to be the last.
Stop SESTA and FOSTA
Catalog of Missing Devices: Panfluent
(Fri, 16 Mar 2018)
Visit The Catalog of Missing Devices, a collection of tools, services, and products that could have been, but never were, because of DRM.
For the most part, rightsholders don't object to user-created subtitling, which is key to making videos available to non-native speakers of the media's original language, and
accessible to people with hearing disabilities. Fansubbing and similar practices predate internet videos by decades, but creating a crowdsourced subtitling tool becomes a potential
felony once DRM gets in the picture, if the DRM has to be bypassed to get the subtitles in.
Unanimous Support in Berkeley for Community Control of Spy Tech
(Fri, 16 Mar 2018)
Berkeley’s City Council voted unanimously this week to pass the Surveillance
Technology and Community Safety Ordinance into law. (This is an earlier draft of the ordinance. We’ll update this link when the approved version is published.) Berkeley joins
Santa Clara County (which adopted a similar law in June of 2016) in
showing the way for the rest of California. In addition to considerable and unopposed spoken support during the public comment portion of the hearing, Mayor Jesse Arreguín
reported that he and the City Council had received almost 200 letters and emails asking for the law to be adopted.
EFF has long supported this ordinance. During this week’s
public comment, Jason Kelley spoke not only as EFF’s digital strategist but as a local resident and community member. He shared that “my friends and I—many of whom live here—are
concerned that surveillance tech might be purchased and used without proper oversight.”
The ordinance, part of a nationwide effort to require community control of police surveillance, will address the concerns Kelley and so many in the community share. The new law will
require that before acquiring surveillance technology, city departments submit use policies and acquisition reports detailing what will be acquired and how it works. These reports
must also outline potential impacts on civil liberties and civil rights as well as steps to ensure adequate security measures safeguarding the data collected or generated.
These requirements are particularly important in light of recent reports that Automated License Plate Reader
(ALPR) data collected by police is being shared with ICE. In response to these reports, the City of Alameda recently voted against acquiring new ALPRs. During this
week’s Berkeley city council meeting, the police chief stated that the Berkeley police department was not sharing any information acquired through their own ALPRs with third parties.
The new ordinance will assure that equipment acquired in the future will be approved only after such policies have been made public and reviewed.
While the meeting lasted into the late hours of the night, the path to this important legislation has been ongoing for over a year. EFF worked alongside dozens of local partners,
including Oakland Privacy (a member of the Electronic Frontier Alliance), the ACLU, the Council of American Islamic Relations, the Center for Media Justice, and Restore the Fourth.
With Santa Clara County and Berkeley now working diligently to protect the civil liberties of their residents, requiring public comment and city council approval on whether or not to
acquire surveillance equipment, hope is high that similar ordinances will soon be passed in the cities of Davis and Oakland and by the Bay Area Rapid Transit system.
Technology has the power to improve our lives. It can make our government more accountable and efficient, and expose us to new information. But it also can intrude on our privacy and
chill our free speech. Now more than ever, public safety requires trust between law enforcement and the community served. That trust is by necessity built on transparency and clear
processes that balance public safety with the maintenance of the most essential of civil liberties. The Community Control of Police Surveillance ordinance model assures all residents
are afforded a voice in that process. Groups like Oakland Privacy in the Bay Area, and Privacy Watch in St. Louis, are working hard to assure similar ordinances are adopted in their communities. Visit the Electronic Frontier Alliance homepage to find or start an allied organization in your area.
Blind Users Celebrate as Marrakesh Treaty Implementation Bill Drops
(Thu, 15 Mar 2018)
Today the Marrakesh Treaty Implementation Bill was introduced into Congress by Senators Chuck Grassley (R-IA), Bob Corker (R-TN), Dianne Feinstein (D-CA), Bob Menendez (D-NJ),
Kamala Harris (D-CA), Orrin Hatch (R-UT), and Patrick Leahy (D-VT). The bill implements the Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind,
Visually Impaired or Otherwise Print Disabled, a landmark treaty that was adopted by the World Intellectual Property Organization (WIPO) in June 2013,
and has since been ratified by 37 other countries. The treaty is notable in that it is the first WIPO treaty passed primarily for a disadvantaged class of users, rather than
for the benefit of copyright holders.
When passed, the bill will allow those who are blind, visually impaired, or otherwise reading disabled (for example, being unable to pick up and turn the pages of a book) to make free
use of written works in accessible formats such as braille, large print, or audiobook. Although similar provisions were already part of U.S. law, the amendments made by this
bill slightly broaden the class of beneficiaries who are eligible for access to such works.
Even more significantly, the implementation bill will ensure that it is legal for accessible works to be sent between the U.S. and other countries that are signatories
to the Marrakesh Treaty. There are many blind, visually impaired, and print disabled users in countries that do not have the capacity to produce their own accessible works,
reflected in the fact that such users in poor countries have access to only 1% of published books in accessible formats, compared with 7% in rich countries. Allowing eligible users
throughout the world access to works that have been created in any other Marrakesh signatory countries is a compassionate and sensible solution to this "book famine."
The implementation bill tracks the Marrakesh Treaty closely, and it is not, as we had once feared, tied to the implementation of the much more problematic Beijing Treaty on Audiovisual Performances, which would require
more significant changes to U.S. law. The National Federation of the Blind, libraries, publishers, the Copyright Office and the U.S. Patent and Trademark Office (USPTO) all
support the Marrakesh Treaty Implementation Bill, and so does EFF. We wish the bill's sponsors success in seeing its speedy passage through Congress.
>> mehr lesen
A Smattering of Stars in Argentina's First "Who Has Your Back?" ISP Report
(Mi, 14 Mär 2018)
It’s Argentina's turn to take a closer look at the practices of its local Internet Service Providers, and how they treat their customers’ personal data when the government comes knocking.
Argentina's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of Asociación por los Derechos Civiles and the Electronic Frontier
Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to turn a spotlight on how the policies of Internet Service Providers either advance
or hinder the privacy rights of users.
The report is based on EFF's annual Who Has Your
Back? report, but adapted to local laws and realities. Last year Brazil’s Internet Lab, Colombia’s Karisma
Foundation, Paraguay's TEDIC, and Chile’s Derechos
Digitales published their own 2017 reports, and the ETICAS Foundation released a similar study earlier
this year, as part of a series across Latin America and Spain.
The report set out to examine which Argentine ISPs best defend their customers. Which are transparent about their policies regarding requests for data? Do any challenge
disproportionate demands for their users’ data? Which require a judicial order before handing over personal data? Do any of the companies notify their users when complying with
judicial requests? ADC examined publicly posted information, including the privacy policies and codes of practice, from six of the biggest Argentine telecommunications access
providers: Cablevisión (Fibertel), Telefónica (Speedy), Telecom (Arnet), Telecentro, IPLAN, and DirecTV (AT&T). Between them, these providers cover 90% of the fixed and mobile broadband market.
Each company was given the opportunity to answer a questionnaire, to take part in a private interview and to send any additional information if they felt appropriate, all of
which was incorporated into the final report. ADC’s rankings for Argentine ISPs are below; the full report, which includes details about each company, is available here.
Evaluation Criteria for ¿Quién Defiende tus Datos?
Privacy Policy: whether they notify users if they change their privacy policies, whether they publish a note regarding the right of access to personal data, and whether they explain how the right of access to a person's data may be exercised.
Transparency: whether they publish transparency reports that are accessible to the public, including how many requests have been received, complied with and rejected, with details about the type of requests, the government agencies that made the requests and the reasons provided by the authority.
Notification: whether they provide any kind of notification to customers of government data demands, with bonus points if the notification is given promptly.
Judicial Order: whether they require the government to obtain a court order before handing over data, and whether they judicially resist data requests that are excessive or do not comply with legal requirements.
Law Enforcement Guidelines: whether they publish their guidelines for law enforcement requests.
Companies in Argentina are off to a good start but still have a way to go to fully protect their customers’ personal data and be transparent about who has access to it. ADC and
EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Argentines will have access to information about how
their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope next year’s report will shine with more stars.
>> mehr lesen
Fifth Circuit Appellate Court Issues Encouraging Border Search Opinion
(Mi, 14 Mär 2018)
The U.S. Court of Appeals for the Fifth Circuit in U.S. v.
Molina-Isidoro recently issued an encouraging opinion related to the digital privacy of travelers
crossing the U.S. border.
EFF filed an amicus brief last year in the case, arguing that the Supreme Court’s decision in Riley v. California (2014) supports the conclusion that border agents need a probable cause warrant before
searching electronic devices because of the unprecedented and significant privacy interests travelers have in their digital data. In Riley, the Supreme Court followed similar
reasoning and held that police must obtain a warrant to search the cell phone of an arrestee.
In U.S. v. Molina-Isidoro, although the Fifth Circuit declined to decide whether the Fourth Amendment requires border agents to get a warrant before searching travelers’
electronic devices, one judge invoked prior case law that could help us establish this privacy protection.
Ms. Molina-Isidoro attempted to enter the country at the port of entry at El Paso, TX. An x-ray of her suitcase led border agents to find methamphetamine. They then manually searched
her cell phone and looked at her Uber and WhatsApp applications. The government sought to use her correspondence in WhatsApp in her prosecution, so she moved to suppress this
evidence, arguing that it was obtained in violation of the Constitution because the border agents didn’t have a warrant.
Unfortunately for Molina-Isidoro, the Fifth Circuit ruled that the WhatsApp messages may be used in her prosecution. But the court avoided the main constitutional question: whether
the Fourth Amendment requires a warrant to search an electronic device at the border. Instead, the court held that the border agents acted in “good faith”—an independent basis to deny
Molina-Isidoro’s motion to suppress, even if the agents had violated the Fourth Amendment.
The Fifth Circuit presented two bases for its finding of “good faith”—factual and legal. The factual basis of the agents’ “good faith” was that there was probable cause to support a
search of Molina-Isidoro’s phone. The finding of drugs in her luggage, according to the Fifth Circuit, “created a fair probability that the phone contained communications with the
brother she supposedly visited (or whoever was the actual source of the drugs) and other information about her travel to refute the nonsensical story she had provided.” The legal
basis of the agents’ “good faith” was pre-Riley case law that generally permits warrantless and suspicionless “routine” searches of items travelers carry across the border.
While the court did not rule on whether Riley requires a warrant for border device searches, the court did emphasize that a leading Fourth Amendment legal treatise recognizes
that “Riley may prompt a reassessment” of the question.
Additionally, Fifth Circuit Judge Gregg Costa issued an instructive concurring opinion. While he agreed with the decision to let the WhatsApp evidence stand, based on the border
agents’ “good faith,” he made two key points we have made in our own briefs.
First, Judge Costa considered whether the traditional primary purpose of the Fourth Amendment’s border search exception—customs enforcement—justifies conducting warrantless,
suspicionless searches of electronic devices. As we have argued, the link between these ends and means is very weak. Judge Costa agreed: “Detection of … contraband is the
strongest historic rationale for the border search exception.” Yet, “Most contraband, the drugs in this case being an example, cannot be stored within the data of a cell phone.” He
concluded, “this detection-of-contraband justification would not seem to apply to an electronic search of a cellphone or computer.” We made the same argument in our amicus brief: “Just as the Riley Court stated that ‘data on the phone can endanger no one,’ physical items
cannot be hidden in digital data.”
Second, Judge Costa considered whether an “evidence-gathering justification” could support warrantless, suspicionless border searches of electronic devices. He questioned this, citing
an 1886 Supreme Court customs case, Boyd v. U.S., which we also cited in our amicus brief. The Boyd Court held:
The search for and seizure of stolen or forfeited goods, or goods liable to duties and concealed to avoid the payment thereof, are totally different things from a search for and
seizure of a man's private books and papers for the purpose of obtaining information therein contained, or of using them as evidence against him.
In other words, while border agents have an interest in preventing the importation of physical contraband, they have at most a much lesser interest in searching papers to find
evidence of crime. Judge Costa seemed persuaded by this holding in Boyd, especially given the unprecedented privacy interests modern travelers have in their digital data:
[Boyd’s] emphatic distinction between the sovereign’s historic interest in seizing imported contraband and its lesser interest in seizing records revealing unlawful
importation has potential ramifications for the application of the border-search authority to electronic data that cannot conceal contraband and that, to a much greater degree
than the papers in Boyd, contains information that is “like an extension of the individual’s mind”…
While we would have liked the Fifth Circuit to affirmatively hold that the Fourth Amendment bars a border search of a cell phone without a probable cause warrant, we’re optimistic
that we can win such a ruling in our civil case against the U.S. Department of Homeland Security, Alasaad v. Nielsen,
challenging warrantless border searches of electronic devices.
>> mehr lesen
A New Backdoor Around the Fourth Amendment: The CLOUD Act
(Di, 13 Mär 2018)
There’s a new, proposed backdoor to our data, which would bypass our Fourth Amendment protections to communications privacy. It is built into a dangerous bill called the CLOUD Act,
which would allow police at home and abroad to seize cross-border data without following the privacy rules where the data is stored.
This backdoor is an insidious method for accessing our emails, our chat logs, our online videos and photos, and our private moments shared online with one another. This backdoor
would deny us meaningful judicial review and the privacy protections embedded in our Constitution.
This new backdoor for cross-border data mirrors another backdoor under Section 702 of the FISA Amendments Act, an invasive NSA
surveillance authority for foreign intelligence gathering. That law, recently reauthorized and expanded by Congress for another six years, gives U.S.
intelligence agencies, including the NSA, FBI, and CIA, the ability to search, read, and share our private electronic messages without first obtaining a warrant.
The new backdoor in the CLOUD Act operates much in the same way. U.S. police could obtain Americans’ data, and use it against them, without complying with the Fourth Amendment.
For this reason, and many more, EFF strongly opposes the CLOUD Act.
The CLOUD Act (S. 2383 and H.R. 4943) has two major components. First, it empowers U.S. law enforcement to grab data stored anywhere in
the world, without following foreign data privacy rules. Second, it empowers the president to unilaterally enter executive agreements with any nation on earth, even known human rights
abusers. Under such executive agreements, foreign law enforcement officials could grab data stored in the United States, directly from U.S. companies, without following U.S. privacy
rules like the Fourth Amendment, so long as the foreign police are not targeting a U.S. person or a person in the United States.
That latter component is where the CLOUD Act’s backdoor lives.
When foreign police use their power under CLOUD Act executive agreements to collect a foreign target’s data from a U.S. company, they might also collect data belonging to a non-target
U.S. person who happens to be communicating with the foreign target. Within the numerous, combined foreign investigations allowed under the CLOUD Act, it is highly likely that related
seizures will include American communications, including email, online chat, video calls, and internet voice calls.
Under the CLOUD Act’s rules for these data demands from foreign police to U.S. service providers, this collection of Americans’ data can happen without any prior, individualized
review by a foreign or American judge. Also, it can happen without the foreign police needing to prove the high level of suspicion required by the U.S. Fourth Amendment: probable cause.
Once the foreign police have collected Americans’ data, they often will be able to hand it over to U.S. law enforcement, which can use it to investigate Americans, and ultimately to
bring criminal charges against them in the United States.
According to the bill, foreign police can share the content of a U.S. person’s communications with U.S. authorities so long as it “relates to significant harm, or the threat
thereof, to the United States or United States persons.” This standard is vague and overbroad. Also, the bill’s hypotheticals indicate far-ranging data sharing by
foreign police with U.S. authorities. From national security to violent crime, from organized crime to financial fraud, the CLOUD Act permits it all to be shared, and likely far more.
Moreover, the CLOUD Act allows the foreign police who collect Americans’ communications to freely use that content against Americans, and to freely share it with additional nations.
To review: The CLOUD Act allows the president to enter an executive agreement with a foreign nation known for human rights abuses. Using its CLOUD Act powers, police from that nation
inevitably will collect Americans’ communications. They can share the content of those communications with the U.S. government under the flawed “significant harm” test. The U.S.
government can use that content against these Americans. A judge need not approve the data collection before it is carried out. At no point need probable cause be shown. At no point
need a search warrant be obtained.
This is wrong. Much like the infamous backdoor search loophole connected to broad, unconstitutional NSA surveillance under Section 702, the backdoor proposed in the CLOUD Act violates
our Fourth Amendment right to privacy by granting unconstitutional access to our private lives online.
Also, when foreign police using their CLOUD Act powers inevitably capture metadata about Americans, they can freely share it with the U.S. government, without even showing
“significant harm.” Communications “content” is the words in an email or online chat, the recordings of an internet voice call, or the moving images and coordinating audio of a video
call online. Communications “metadata” is the pieces of information that relate to a message, including when it was sent, who sent it, who received it, its duration, and where the
sender was located when sending it. Metadata is enormously powerful information and should be treated with the same
protection as content.
To be clear: the CLOUD Act fails to provide any limits on foreign police sharing Americans’ metadata with U.S. police.
The CLOUD Act would be a dangerous overreach into our data. It seeks to streamline cross-border police investigations, but it tears away critical privacy protections to attain that
goal. This is not a fair trade. It is a new backdoor search loophole around the Fourth Amendment.
Tell your representative today to reject the CLOUD Act.
Stop the CLOUD Act
>> mehr lesen
Dear Leader McConnell: Don't pass FOSTA
(Di, 13 Mär 2018)
We have heard that the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) may be on the U.S. Senate floor this week for a
final vote. We are concerned that the U.S. Senate appears to be rushing to pass a seriously flawed bill without considering the impact it will have on Internet users and free speech.
We wrote Majority Leader Mitch McConnell and Democratic Leader Charles E. Schumer to share our concerns:
Websites and apps we all use every day - from WhatsApp and Instagram to Yelp and Wikipedia, even blogs and news websites with comment sections - rely on Section 230 (47 U.S.C. §
230). Under Section 230, users are generally liable for the content they post, not the platforms. This bill would change that by expanding a platform's liability beyond its own
actions - if this bill passes, online platforms would be responsible for their users' speech and behavior in addition to their own.
Current law, including Section 230, does not prevent federal prosecutors from going after online platforms that knowingly advertise sex trafficking. Additionally, courts have
allowed civil claims against online platforms when a platform was shown to have a direct hand in creating the illegal content. New authorities are simply not needed to bring bad
platforms or the pimps and "johns" who directly harmed victims to justice.
Section 230 can be credited with creating today's Internet. Congress made the deliberate choice to protect online free speech and innovation, while providing discrete tools to go
after culpable platforms. Section 230 provided the legal buffer entrepreneurs needed to experiment with new ways to connect people online and is just as critical for today's
startups as it was for today's popular platforms when they launched.
FOSTA would destroy the careful policy balance struck in Section 230. By opening platforms to increased criminal and civil liability at both the federal and state levels for
user-generated content, the bill would incentivize those platforms to over-censor their users. Since it would be difficult if not impossible for platforms, both large and small,
to review every post individually for sex trafficking content (or to definitively know whether a piece of online content reflects a sex trafficking situation in the offline
world), platforms would have little choice but to adopt overly restrictive content moderation practices, silencing legitimate voices in the process. Trafficking victims themselves
would likely be the first to be censored under FOSTA.
In addition to opening platforms to increased liability under civil law and state criminal law, FOSTA would also create new federal crimes designed to target online platforms. The
expanded federal sex trafficking crimes would not require a platform owner to have knowledge that people are using the platform for sex trafficking, but merely "reckless
disregard" of this fact. The Department of Justice already has a powerful legal tool to prosecute culpable online platforms: the SAVE Act of 2015 made it a crime under 18 U.S.C. §
1591 to advertise sexual services with knowledge that trafficking is taking place.
You can read the rest of the letter here.
>> mehr lesen
We Still Need More HTTPS: Government Middleboxes Caught Injecting Spyware, Ads, and Cryptocurrency Miners
(Di, 13 Mär 2018)
Last week, researchers at Citizen Lab discovered that Sandvine's PacketLogic devices were being used to hijack users' unencrypted internet
connections, making yet another case for encrypting the web with HTTPS. In Turkey and
Syria, users who were trying to download legitimate applications were instead served malicious software intended to spy on them. In Egypt, these devices injected money-making content
into users' web traffic, including advertisements and cryptocurrency mining scripts.
These are all standard machine-in-the-middle attacks, where a computer on the path
between your browser and a legitimate web server is able to intercept and modify your traffic data. This can happen if your web connections use HTTP, since data sent over HTTP is
unencrypted and can be modified or read by anyone on the network.
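The core problem can be shown with a toy sketch (hypothetical byte strings, not Sandvine's actual implementation): because plain HTTP carries no integrity protection, any machine on the path can rewrite response bytes before they reach the browser, turning a successful download into a redirect to an attacker-controlled server.

```python
# Toy illustration: a plain HTTP response is just unauthenticated bytes,
# so an on-path middlebox can rewrite it in transit.
# (Hypothetical example; not Sandvine's actual code.)

def on_path_inject(response: bytes, attacker_url: str) -> bytes:
    """Replace a successful HTTP response with a redirect to an
    attacker-controlled URL, as an injecting middlebox might."""
    if response.startswith(b"HTTP/1.1 200"):
        return (
            b"HTTP/1.1 307 Temporary Redirect\r\n"
            b"Location: " + attacker_url.encode() + b"\r\n"
            b"Content-Length: 0\r\n\r\n"
        )
    return response

original = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
tampered = on_path_inject(original, "http://malicious.example/setup.exe")
print(tampered.split(b"\r\n")[0])  # the browser now sees a redirect
```

With HTTPS, the same tampering fails: the TLS layer authenticates every byte of the response, so a modified record is rejected before the browser ever parses it.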
The Sandvine middleboxes were doing exactly this. On Türk Telekom’s network, it was reported that when a user attempted to
download legitimate applications over HTTP, these devices injected fake "redirect" messages which caused the user’s browser to fetch the file from a different, malicious, site. Users
downloading common applications like Avast Antivirus, 7-Zip, Opera, CCleaner, and programs from download.cnet.com
had their downloads silently redirected. Telecom Egypt’s Sandvine devices, Citizen Lab noted, were using similar methods to inject advertisements and cryptocurrency mining scripts into users’ traffic.
Site operators can mitigate these attacks by using HTTPS instead of HTTP. And as a user, it's easy to see when a web page has been loaded over HTTPS—check for “https” at the
beginning of the URL or, on most common browsers, a green lock icon displayed next to the address bar. However, it can still be hard to tell when you're downloading files insecurely.
For instance, Avast's website was hosted over HTTPS, but their downloads were served over unencrypted HTTP.
Today, Let’s Encrypt and Certbot make it easier
than ever to deploy HTTPS websites and to serve content securely. And later this year, Chrome is planning on marking all HTTP sites as “not secure”. Thanks to these collective efforts and many more,
almost 80% of web traffic in the U.S. is now encrypted with HTTPS. If you want to be sure you’re browsing securely, EFF’s HTTPS Everywhere browser extension can force your browser to use HTTPS wherever possible.
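For downloads in particular, two practical checks help: make sure every hop in a redirect chain stays on HTTPS, and compare the received file against a checksum published by the vendor. A minimal sketch (hypothetical URLs and digest, assuming the vendor publishes a SHA-256 checksum):

```python
# Minimal sketch: verify that a download stayed on HTTPS end-to-end and
# that the received bytes match a vendor-published SHA-256 digest.
# (Hypothetical URLs and digest, for illustration only.)
import hashlib
from urllib.parse import urlparse

def chain_is_https(redirect_chain: list) -> bool:
    """True only if every hop in the redirect chain used HTTPS."""
    return all(urlparse(url).scheme == "https" for url in redirect_chain)

def digest_matches(data: bytes, expected_sha256_hex: str) -> bool:
    """True if the downloaded bytes hash to the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex

# An injected "redirect" attack downgrades one hop to plain HTTP:
tampered_chain = [
    "https://downloads.example/app.exe",
    "http://malicious.example/app.exe",   # middlebox-injected hop
]
print(chain_is_https(tampered_chain))  # False: this download is not safe

payload = b"example installer bytes"
expected = hashlib.sha256(payload).hexdigest()
print(digest_matches(payload, expected))         # True
print(digest_matches(payload + b"!", expected))  # False: tampered file
```

The checksum comparison works even when the transport was insecure, because the digest itself is fetched from (or verified against) a trusted HTTPS page, which is why many projects publish SHA-256 sums alongside their installers.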
We've come a long way with HTTPS adoption since 2010, when EFF first started pushing tech companies to support it. Evidently, we still have a long way to go.
>> mehr lesen
EFF and 23 Groups Tell Congress to Oppose the CLOUD Act
(Mo, 12 Mär 2018)
EFF and 23 other civil liberties organizations sent a letter to Congress urging Members and Senators to oppose the CLOUD Act and any efforts to attach it to other legislation.
The CLOUD Act (S. 2383 and H.R. 4943) is a dangerous bill that would tear away global privacy protections by allowing police in the United
States and abroad to grab cross-border data without following the privacy rules of where the data is stored. Currently, law enforcement requests for cross-border data often use a
legal system called the Mutual Legal Assistance Treaties, or MLATs. This system ensures that, for example, should a foreign government wish to seize communications stored in the
United States, that data is properly secured by the Fourth Amendment requirement for a search warrant.
The other groups signing the new coalition letter against the CLOUD Act are Access Now, Advocacy for Principled Action in Government, American Civil Liberties Union, Amnesty
International USA, Asian American Legal Defense and Education Fund (AALDEF), Campaign for Liberty, Center for Democracy & Technology, CenterLink: The Community of LGBT Centers,
Constitutional Alliance, Defending Rights & Dissent, Demand Progress Action, Equality California, Free Press Action Fund, Government Accountability Project, Government Information
Watch, Human Rights Watch, Liberty Coalition, National Association of Criminal Defense Lawyers, National Black Justice Coalition, New America's Open Technology Institute, OpenMedia,
People For the American Way, and Restore The Fourth.
The CLOUD Act allows police to bypass the MLAT system, removing vital U.S. and foreign country privacy protections. As we explained in our earlier letter to Congress, the CLOUD Act would:
Allow foreign governments to wiretap on U.S. soil under standards that do not comply with U.S. law;
Give the executive branch the power to enter into foreign agreements without Congressional approval or judicial review, including with foreign nations that have a well-known record of human rights abuses;
Possibly facilitate foreign government access to information that is used to commit human rights abuses, like torture; and
Allow foreign governments to obtain information that could pertain to individuals in the U.S. without meeting constitutional standards.
You can read more about EFF’s opposition to the CLOUD Act here.
The CLOUD Act creates a new channel for foreign governments seeking data about non-U.S. persons who are outside the United States. This new data channel is not governed by the laws of
where the data is stored. Instead, the foreign police may demand the data directly from the company that handles it. Under the CLOUD Act, should a foreign government request data from
a U.S. company, the U.S. Department of Justice would not need to be involved at any stage. Also, such requests for data would not need to receive individualized, prior judicial review
before the data request is made.
The CLOUD Act’s new data delivery method lacks not just meaningful judicial oversight, but also meaningful Congressional oversight. Should the U.S. executive branch enter a data
exchange agreement—known as an “executive agreement”—with foreign countries, Congress would have little time and power to stop them. As we wrote in our letter:
“[T]he CLOUD Act would allow the executive branch to enter into agreements with foreign governments—without congressional approval. The bill stipulates that any agreement
negotiated would go into effect 90 days after Congress was notified of the certification, unless Congress enacts a joint resolution of disapproval, which would require
presidential approval or sufficient votes to overcome a presidential veto.”
And under the bill, the president could agree to enter executive agreements with countries that are known human rights abusers.
Troublingly, the bill also fails to protect U.S. persons from the predictable, non-targeted collection of their data. When foreign governments request data from U.S. companies about
specific “targets” who are non-U.S. persons not living in the United States, these governments will also inevitably collect data belonging to U.S. persons who communicate with the
targeted individuals. Much of that data can then be shared with U.S. authorities, who can then use the information to charge U.S. persons with crimes. That data sharing, and potential
criminal prosecution, requires no probable cause warrant as required by the Fourth Amendment, violating our constitutional rights.
The CLOUD Act is a bad bill. We urge Congress to stop it, and any attempts to attach it to must-pass spending legislation.
Read the full coalition letter here.
Stop the CLOUD Act
>> mehr lesen
The Foilies 2018
(So, 11 Mär 2018)
Recognizing the Year’s Worst in Government Transparency
Government transparency laws like the Freedom of Information Act exist to enforce the public’s right to inspect records so we can all figure out what the heck is being done in our
name and with our tax dollars.
But when a public agency ignores, breaks or twists the law, your recourse varies by jurisdiction. In some states, when an official improperly responds to your public records request,
you can appeal to a higher bureaucratic authority or seek help from an ombudsperson. In most states, you can take the dispute to court.
Public shaming and sarcasm, however, are tactics that can be applied anywhere.
The California-based news organization Reveal tweets photos of chickpeas or coffee beans to represent each day a FOIA response is overdue, and asks followers to guess how many there
are. The alt weekly DigBoston has sent multiple birthday cakes and edible arrangements to local agencies on the one-year anniversary of delayed public records requests. And
here, at the Electronic Frontier Foundation, we give out The Foilies during Sunshine Week, an annual celebration of open-government advocacy.
In its fourth year, The Foilies recognizes the worst responses to records requests, outrageous efforts to stymie transparency and the most absurd redactions. These tongue-in-cheek
pseudo-awards are hand-chosen by EFF’s team based on nominations from fellow transparency advocates, participants in #FOIAFriday on Twitter, and, in some cases, our own personal experience.
If you haven’t heard of us before, EFF is a nonprofit based in San Francisco that works on the local, national and global level to defend and advance civil liberties as technology
develops. As part of this work, we file scores of public records requests and take agencies like the U.S. Department of Justice, the Department of Homeland Security, and the Los
Angeles Police Department to court to liberate information that belongs to the public.
Because shining a spotlight is sometimes the best litigation strategy, we are pleased to announce the 2018 winners of The Foilies.
Quick links to the winners:
The Mulligan Award - Pres. Donald J. Trump
FOIA Fee of the Year - Texas Department of Criminal Justice
Best Set Design in a Transparency Theater Production - Atlanta Mayor Kasim Reed
Special Achievement for Analog Conversion - Former Seattle Mayor Ed Murray
The Winger Award for FOIA Feet Dragging - FBI
The Prime Example Award - Midcoast Regional Redevelopment Authority (Maine)
El Premio del Desayuno Más Redactado - CIA
The Courthouse Bully Award - Every Agency Suing a Requester
The Lawless Agency Award - U.S. Customs and Border Protection
The Franz Kafka Award for Most Secrets About Secretive Secrecy - CIA
Special Recognition for Congressional Overreach - U.S. House of Representatives
The Data Disappearance Award - Trump Administration
The Danger in the Dark Award - The Army Corps of Engineers
The Business Protection Agency Award - The Food and Drug Administration
The Exhausted Mailman Award - Bureau of Indian Affairs
Crime & Punishment Award - Martin County Commissioners (Florida)
The Square Footage Award - Jacksonville Sheriff’s Office (Florida)
These Aren’t the Records You’re Looking For Award - San Diego City Councilmember Chris Cate
The Mulligan Award - Pres. Donald J. Trump
Since assuming the presidency, Donald Trump has skipped town more than 55 days to visit his Mar-a-Lago resort in Florida, according to sites like trumpgolfcount.com and NBC. He calls it his “Winter White House,” where he wines and dines
and openly strategizes how to respond to North Korean ballistic missile tests with the Japanese prime minister for all his paid guests to see and post on Facebook. The fact that
Trump’s properties have become secondary offices and remain a source of income for his family raises significant questions about transparency, particularly if club membership comes
with special access to the president. To hold the administration accountable, Citizens for Responsibility and Ethics in Washington filed a FOIA request for the visitor logs, but
received little in response. CREW sued and, after taking another look, the Secret Service provided details about the Japanese leader’s entourage. As Politico and others reported, the Secret Service ultimately
admitted they’re not actually keeping track. The same can’t be said about Trump’s golf score.
FOIA Fee of the Year - Texas Department of Criminal Justice
Sexual assault in prison is notoriously difficult to measure due to stigma, intimidation, and apathetic bureaucracy. Nevertheless, MuckRock reporter Nathanael King made a valiant effort to find out whatever he could about these
investigations in Texas, a state once described by the Dallas Voice as the
“Prison Rape Capital of the U.S.” However, the numbers that the Texas Department of Criminal Justice came back with weren’t quite what he was expecting. TDCJ demanded he fork over a
whopping $1,132,024.30 before the agency would release 260,000 pages of records that it said would take 61,000 hours of staff time to process. That in itself may be an indicator of
the scope of the problem. However, to the agency’s credit, they pointed the reporter in the direction of other statistical records compiled to comply with the federal Prison Rape
Elimination Act, which TDCJ provided for free.
Best Set Design in a Transparency Theater Production - Atlanta Mayor Kasim Reed
“Transparency theater” is the term we use to describe an empty gesture meant to look like an agency is embracing open government, when really it’s meant to obfuscate. For example, an
agency may dump an overwhelming number of documents and put them on display for cameras. But because there are so many records, the practice actually subverts transparency by making
it extremely difficult to find the most relevant records in the haystack.
Such was the case with Atlanta Mayor Kasim Reed, who released 1.476 million documents about a corruption probe to show his office was supporting public accountability.
“The documents filled hundreds of white cardboard boxes, many stacked up waist high against walls and spread out over rows of tables in the cavernous old City Council chamber,”
reporter Leon Stafford wrote. “Reed used some of the boxes as the backdrop for his remarks, creating a six-foot wall behind him.”
FOIA papercuts Credit: J. Scott Trubey/AJC
Journalists began to dig through the documents and quickly discovered that many were blank pages or fully redacted, and in some cases the type was too small for anyone to read. AJC
reporter J. Scott Trubey’s hands became covered in papercut gore. Ultimately, the whole spectacle was a waste of trees: The records already existed in a digital format. It’s just that
a couple of hard drives on a desk don’t make for a great photo op.
Special Achievement for Analog Conversion - Former Seattle Mayor Ed Murray
Credit: Phil Mocek
In the increasingly digital age, more and more routine office communication is occurring over mobile devices. With that in mind, transparency activist Phil Mocek filed a request for text messages (and other app communications) sent or received by
now-former Seattle Mayor Ed Murray and many of his aides. The good news is the city at least partially complied. The weird news is that rather than seek the help of an IT professional
to export the text messages, some staff simply plopped a cell phone onto a photocopier. Mocek tells EFF he’s frustrated that the mayor’s office refused to search their personal
devices for relevant text messages. They argued that city policy forbids using personal phones for city business—and of course, no one would violate those rules. However, we’ll
concede that thwarting transparency is probably the least of the allegations against Murray, who resigned in September 2017 amid a child sex-abuse scandal.
The Winger Award for FOIA Feet Dragging - FBI
Thirty years ago, the hair-rock band Winger released “Seventeen”—a song about young love that really hasn’t withstood the test of time. Similarly, the FBI’s claim that it
would take 17 years to produce a series of records about civil rights-era surveillance also didn’t withstand the judicial test of time.
As Politico reported, George Washington University
professor and documentary filmmaker Nina Seavey asked for records about how the FBI spied on antiwar and civil rights activists in the 1960s and 1970s. The FBI claimed they would only
process 500 pages a month, which would mean the full set of 110,000 pages wouldn’t be complete until 2034.
Just as Winger’s girlfriend’s dad disapproved in the song, so did a federal judge, writing in her opinion: “The agency's desire for administrative convenience is simply not a valid
justification for telling Professor Seavey that she must wait decades for the documents she needs to complete her work.”
The Prime Example Award – Midcoast Regional Redevelopment Authority (Maine)
When Amazon announced last year it was seeking a home for its second headquarters, municipalities around the country rushed to put together proposals to lure the tech giant to their
region. Knowing that in Seattle Amazon left a substantial footprint on a community (particularly around housing), transparency organizations like MuckRock and the Lucy Parsons Labs
followed up with records requests for these cities’ sales pitches.
More than 20 cities, such as Chula Vista, California, and Toledo, Ohio, produced the records—but other agencies, including Albuquerque, New Mexico, and Jacksonville, Florida, refused
to turn over the documents. The excuses varied, but perhaps the worst response came from Maine’s Midcoast
Regional Redevelopment Authority. The agency did provide the records, but claimed that by opening an email containing 37 pages of documents, MuckRock had automatically
agreed to pay an exorbitant $750 in “administrative and legal fees.” Remind us to disable one-click ordering.
El Premio del Desayuno Más Redactado - CIA
Buzzfeed reporter Jason Leopold has filed thousands of records requests over his career, but one
redaction has become his all-time favorite. Leopold was curious whether CIA staff are assailed by the same stream of office announcements as every other workplace. So, he filed a FOIA
request—and holy Hillenkoetter, do they. Deep in the document set was an announcement that “the breakfast burritos are back by popular demand,” with a gigantic redaction covering half
the page citing a personal privacy exemption. What are they hiding? Is Anthony Bourdain secretly a covert agent? Did David Petraeus demand extra guac? This could be the CIA’s greatest
Latin American mystery since Nicaraguan Contra drug-trafficking.
The Courthouse Bully Award - Every Agency Suing a Requester
As director of the privacy advocacy group We See You Watching Lexington, Michael Maharrey filed a public records request to find out how his city was spending money on surveillance
cameras. After the Lexington Police Department denied the request, he appealed to the Kentucky Attorney General’s office—and won.
Rather than listen to the state’s top law enforcement official, Lexington Police hauled Maharrey into court.
As the Associated Press reported last year, lawsuits like
these are reaching epidemic proportions. The Louisiana Department of Education sued a retired educator who was seeking school enrollment data for his blog. Portland Public Schools in
Oregon sued a parent who was curious about employees paid while on leave for alleged misconduct. Michigan State University sued ESPN after it requested police reports on football
players allegedly involved in a sexual assault. Meanwhile, the University of Kentucky and Western Kentucky University have each sued their own student newspapers whose reporters were
investigating sexual misconduct by school staff.
These lawsuits are despicable. At their most charitable, they expose huge gaps in public records laws that put requesters on the hook for defending lawsuits they never anticipated. At
their worst, they are part of a systematic effort to discourage reporters and concerned citizens from even thinking of filing a public records request in the first place.
The Lawless Agency Award - U.S. Customs and Border Protection
In the chaos of President Trump’s immigration ban in early 2017, the actions of U.S. Customs and Border Protection agents and higher-ups verged on unlawful. And if CBP officials already had their minds set on violating all sorts of
laws and the Constitution, flouting FOIA seems like small potatoes.
Yet that’s precisely what CBP did when the ACLU filed a series of FOIA requests
to understand local CBP agents’ actions as they implemented Trump’s immigration order. ACLU affiliates throughout the country filed 18 separate FOIA requests with CBP, each of which
targeted records documenting how specific field offices, often located at airports or at physical border crossings, were managing and implementing the ban. The requests made clear
that they were not seeking agency-wide documents but rather wanted information about each specific location’s activities.
CBP ignored the requests and, when several ACLU affiliates filed 13
different lawsuits, CBP sought to further delay responding by asking a federal court panel to consolidate all the cases into a single lawsuit. To use this procedure—which is
usually reserved for class actions or other complex national cases—CBP essentially misled courts about each of the FOIA requests and claimed each was seeking the exact same set of records.
The court panel saw through CBP’s shenanigans and refused to consolidate the cases. But CBP basically ignored the panel’s decision, acting as though it had won. First, it behaved as
though all the requests came from a single lawsuit by processing and batching all the documents from the various requests into a single production given to the ACLU. Second, it
selectively released records to particular ACLU attorneys, even when those records weren’t related to their lawsuits about activities at local CBP offices.
Laughably, CBP blames the ACLU for its self-created mess, calling their requests and lawsuits “haphazard” and arguing that the ACLU and other FOIA requesters have strained the
agency’s resources in seeking records about the immigration ban. None of that would be a problem if CBP had responded to the FOIA requests in the first place. Of course, the whole
mess could also have been avoided if CBP never implemented an unconstitutional immigration order.
The Franz Kafka Award for Most Secrets About Secretive Secrecy - CIA
The CIA’s aversion to FOIA is legendary, but this year the agency doubled down on its mission of thwarting transparency. As Emma Best
detailed for MuckRock, the intelligence agency had compiled a 20-page report that laid out at least
126 reasons why it could deny FOIA requests that officials believed would disclose the agency’s “sources and methods.”
But that report? Yeah, it’s totally classified. So not only do you not get to know what the CIA’s up to, but its reasons for rejecting your FOIA request are also a state secret.
Special Recognition for Congressional Overreach - U.S. House of Representatives
Because Congress wrote the Freedom of Information Act, it had the awesome and not-at-all-a-conflict-of-interest power to determine which parts of the federal government must obey it.
That’s why it may not shock you that since passing FOIA more than 50 years ago, Congress has never made itself subject to the law.
So far, requesters have been able to fill in the gaps by requesting records from federal agencies that correspond with Congress. For example, maybe a lawmaker writes to the U.S.
Department of Puppies asking for statistics on labradoodles. That adorable email chain wouldn’t be available through Congress, but you could get it from the Puppies Department’s FOIA
office. (Just to be clear: This isn’t a real federal agency. We just wish it was.)
In 2017, it became increasingly clear that some members of Congress believe that FOIA can never reach anything they do, even when they or their staffs share documents or correspond
with federal agencies. The House Committee on Financial Services sent a threatening letter to the Treasury Department
telling them to not comply with FOIA. After the Department of Health and Human Services and the Office of Management and Budget released records that came from the House Ways and
Means Committee, the House intervened in litigation to argue that their
records cannot be obtained under FOIA.
In many cases, congressional correspondence with agencies is automatically covered by FOIA, and the fact that a document originated with Congress isn’t by itself enough to shield it
from disclosure. The Constitution says Congress gets to write laws; it’s just too bad it doesn’t require Congress to actually read them.
The Data Disappearance Award - Trump Administration
Last year, we gave the “Make America Opaque Again Award” to newly inaugurated President Trump for failing
to follow tradition and release his tax returns during the campaign. His talent for refusing to make information available to the public has snowballed into an administration that
deletes public records from government websites. From the National Park Service’s climate action plans for national parks, to the U.S.D.A. animal welfare datasets, to nonpartisan research on the corporate income tax, the Trump Administration has
decided to make facts that don’t support its positions disappear. The best example of this vanishing game is the Environmental Protection Agency’s removal of the climate change website in April 2017,
which only went back online after being scrubbed of climate change references, studies and information
to educate the public.
The Danger in the Dark Award - The Army Corps of Engineers
When reporters researching the Dakota Access Pipeline on contested tribal lands asked for the U.S.
Army Corps of Engineers’ environmental impact statement, they were told nope, you can’t have it. Officials cited public safety concerns as reason to deny the request: “The referenced
document contains information related to sensitive infrastructure that if misused could endanger peoples’ lives and property.”
Funny thing is, the Army Corps had already published the same document on its website a year earlier. What changed in that year? Politics. The Standing Rock Sioux, other tribal
leaders and “Water Protector” allies had since staged a multi-month peaceful protest and sit-in to halt construction of the pipeline.
The need for public scrutiny of the document became clear in June when a U.S. federal judge found that the
environmental impact statement omitted key considerations, such as the impact of an oil spill on the Standing Rock Sioux’s hunting and fishing rights.
The Business Protection Agency Award - The Food and Drug Administration
The FDA’s mission is to protect the public from harmful pharmaceuticals, but the agency has recently fallen into the habit of protecting powerful drug companies rather than informing people
about potential drug risks.
This past year, Charles Seife at Scientific American requested documents about the
drug approval process for a controversial drug to treat Duchenne muscular dystrophy (DMD). The agency cited business exemptions and obscured listed side effects as well as testing
methodology for the drug, despite claims that the drug company manipulated results during product trials and pressured the FDA to push an ineffective drug onto the market. The agency
even redacted portions of a Bloomberg Businessweek article about the drug because the story provided names and pictures of teenagers living with DMD.
The Exhausted Mailman Award - Bureau of Indian Affairs
Credit: Russ Kick
Requesting information that has already been made public should be quick and fairly simple—but not when you’re dealing with the Bureau of Indian Affairs. A nomination sent in to EFF
requested all logs of previously released FOIA information by the BIA. The requester even stated that he’d prefer links to
the information, which agencies typically provide for records they have already put on their website. Instead, BIA printed 1,390 pages of those logs, stuffed them into 10 separate
envelopes, and sent them via registered mail for a grand total cost to taxpayers of $179.
Crime & Punishment Award - Martin County Commissioners, Florida
Generally The Foilies skew cynical, because in many states, open records laws are toothless and treated as recommendations rather than mandates. One major exception to the rule is
Florida, where violations of its “Sunshine Law” can result in criminal prosecution.
That brings us to Martin County Commissioners Ed Fielding and Sarah Heard and former Commissioner Anne Scott, each of whom was booked into jail in November on multiple charges
related to violations of the state’s public records law. As Jose Lambiet of GossipExtra and the Miami Herald reported, the case emerges from a dispute between the county and a mining
company that already resulted in taxpayers footing a $500,000 settlement in a public records lawsuit. Among the allegations, the officials were accused of destroying, delaying and
withholding public records. The cases are set to go to trial in December 2018, Lambiet told EFF. Of course, people are innocent until proven guilty, but that doesn’t make public officials immune to The Foilies.
The Square Footage Award - Jacksonville Sheriff’s Office (Florida)
When a government mistake results in a death, it’s important for the community to get all the facts. In the case of 63-year-old Blane Land, who was fatally hit by a Jacksonville
Sheriff patrol car, those facts include dozens of internal investigations against the officer behind the wheel. The officer, Tim James, has since been arrested on allegations that he
beat a handcuffed youth, raising the
question of why he was still on duty after the vehicular fatality.
Land’s family hired an attorney, and the attorney filed a request for records. Rather than provide a complete airing of the officer’s alleged misdeeds, the sheriff came back with a demand for $314,687.91
to produce the records, almost all of which was for processing and searching by the internal affairs division. Amid public outcry over the prohibitive fee, the sheriff took to social
media to complain about how much work it would take to go through all the records in the 1,600-cubic-foot storage room filled with old-school filing cabinets.
The family is not responsible for the sheriff’s filing system or feng shui, nor is it the family’s fault that the sheriff kept an officer on the force as the complaints—and the
accompanying disciplinary records—stacked up.
These Aren’t the Records You’re Looking For Award - San Diego City Councilmember Chris Cate
Shortly after last year’s San Diego Comic-Con and shortly before the release of Star Wars: The Last Jedi, the city of San Diego held a ceremony to name a street after former
resident and actor Mark Hamill. A private citizen (whose day job involves writing The Foilies) wanted to know: How does a Hollywood star get his own roadway?
The city produced hundreds of pages related to his request that showed how an effort to change the name of Chargers Boulevard after the football team abandoned the city led to the
creation of Mark Hamill Drive. The document set even included Twitter direct messages between City
Councilmember Chris Cate and the actor. However, Cate used an ineffective black marker to redact, accidentally releasing Hamill’s cell phone number and other personal contact information.
As tempting as it was to put Luke Skywalker (and the voice of the Joker) on speed dial, the requester did not want to be responsible for doxxing one of the world’s most beloved
actors. He alerted Cate’s office to the error, and the office then re-uploaded properly redacted documents.
Landis + Gyr Agrees to Leave Documents Up, Then Sends Notice to Take Them Down
(Sat, 10 Mar 2018)
A Georgia energy company has made two separate attempts to take down public documents that let Seattle residents know how the “smart meters” on their homes work.
Back in 2016, a local activist obtained two documents from the City of Seattle related to the smart meter technology. But some companies involved in making and maintaining that
technology went to court and won a quick order forcing the documents offline by arguing that information about the city’s meters constituted “trade secrets.”
EFF fought back, defending MuckRock’s First Amendment right to publish public documents obtained from a public records request. After our intervention, a Washington state court
reversed the takedown order. In mid-2016, a settlement was reached with Landis
+ Gyr and Sensus, two of the companies that had attempted to remove the documents. Lawyers for the two companies explicitly agreed that the documents could remain public and published on MuckRock’s website.
But in February 2018, Landis + Gyr sent a DMCA notice demanding a takedown of the exact same
documents that, two years earlier, they explicitly agreed could remain online. A copy of the smart meter documents was placed on DocumentCloud, by Techdirt, a technology blog that
had reported on the initial 2016 proceedings.
Techdirt noted the futility of trying to remove documents that were already online elsewhere, and suggested that all Landis + Gyr is doing is “reminding everyone that (1) these
documents exist online and (2) apparently the company would prefer you not look at these public records about its own systems.”
While the bogus DMCA takedown notice does appear to have succeeded in removing the documents from DocumentCloud, you can still find them here and here.
Senators Introduce New Bill to Protect Digital Privacy at the Border
(Sat, 10 Mar 2018)
Senators Patrick Leahy (D-VT) and Steve Daines (R-MT) introduced a new bill (S. 2462) that would better
protect the privacy of travelers whose electronic devices—like cell phones and laptops—are searched and seized by border agents. While the new bill doesn’t require a probable cause warrant across the board like the Protecting Data at the Border Act (S. 823, H.R. 1899), it does have many positive provisions and would be a
significant improvement over the status quo.
The Leahy-Daines bill, which currently has the long title of “A bill to place restrictions on searches and seizures of electronic devices at the border,” applies to U.S. persons,
meaning U.S. citizens or lawful permanent residents. The bill places separate restrictions based on the type of search conducted: manual or forensic.
For “manual” searches of electronic devices, the bill requires that border agents—whether from U.S. Customs and Border Protection (CBP) or U.S. Immigration and Customs Enforcement
(ICE)—have reasonable suspicion that the traveler violated an immigration or customs law and that the electronic device contains evidence relevant to the violation. The bill defines
a manual search as an examination of an electronic device without the use of forensic software or the entry of a password. (Imagine a hands-on review of photos on a digital camera
with no password on it, or a look through a phone not locked by a fingerprint scanner or passcode.) The definition also appears to include any type of search that lasts less than four
hours or doesn’t include the copying or documentation of data on the device. By contrast, the bill requires border agents to obtain a probable cause warrant before conducting a
“forensic” search of an electronic device.
These rules would be an improvement over CBP’s current
policy, which does not require any level of suspicion for manual searches, and requires reasonable suspicion for forensic searches—unless the forensic search is prompted by a
“national security concern” (which we believe is a huge loophole). ICE’s policy continues to permit suspicionless border searches of electronic devices.
The Fourth Amendment, however, requires border agents to obtain a probable cause
warrant before searching electronic devices given the unprecedented and significant privacy interests travelers have in their digital data. And the Constitution’s protections
don’t turn on an arbitrary distinction between manual and forensic searches. Recent updates to CBP’s policy don’t cure the constitutional problems with how either agency conducts
border searches (and seizures) of electronic devices.
The Leahy-Daines bill also requires that border agents have probable cause to seize an electronic device. They would then have to obtain a warrant from a judge within 48 hours. If a
warrant is not obtained within 48 hours, the device must be “immediately” returned to the traveler. We support this probable cause requirement for device seizures, and it’s what we
argue in our civil case against CBP and ICE, Alasaad v. Nielsen.
Importantly, the Leahy-Daines bill includes a suppression remedy if the government violates the law. This means that any information illegally obtained from a traveler’s electronic
device during a border search may not be relied upon in any legal, administrative, or legislative proceeding, including an immigration hearing or a criminal trial.
The bill also includes important reporting requirements. These include statistics on the “age, sex, country of origin, citizenship or immigration status, ethnicity, and race” of
travelers who were subject to device searches and seizures, which would shed light on whether border agents are acting in a discriminatory manner. The statistics also include the
number of travelers whose devices were searched or seized and who were later charged with a crime, which would shed light on how effective device searches and seizures at the border
are in rooting out criminals.
The border is not a Constitution-free zone. CBP searched over 30,000 devices last year and the number is rapidly
increasing. We are glad to see some members of Congress turning their attention to the rampant problem of unconstitutional border searches and seizures of electronic devices—and the
massive privacy invasions by the government that result.
Video Game Developer Says He Won't Send a Takedown of a Bad Review, Does So Anyway
(Fri, 09 Mar 2018)
Oh what a tangled web we weave when first we get into a Twitter fight with someone who gave our video game a bad review on YouTube. And when we say that we would never send a DMCA
takedown for it. And when one mysteriously turns up anyway.
This is one of the most confusing series of events ever to surround a takedown. First, Richard La Ruina, a man who claims to be a top pickup artist, created a somewhat controversial dating game called Super Seducer. Then, YouTuber IAmPattyJack (also known as Chris Hodgkinson) covered the game in his “_____ Is the Worst Game Ever” series.
La Ruina reacted poorly to the bad review Hodgkinson gave Super Seducer and showed up in the video’s comments when it only had about 100 views. Hodgkinson and La Ruina then got
into it on Twitter, which eventually
resolved with La Ruina acknowledging that giving a review copy to someone who does a “Worst Game Ever” series was perhaps not the smartest move.
That’s when it got weird. Someone else on Twitter applauded La Ruina for admitting he was wrong instead of sending a DMCA takedown.
La Ruina responded “ah yeah we have our DMCA subscription,” which is not a thing. (As others have pointed out, he may have meant a service that makes filing DMCA takedowns easier.)
Hodgkinson showed back up to say that this was not something La Ruina wanted to do, and
La Ruina said he “decided not to, I believe in freedom and democracy and all that american [sic] stuff. We only DMCA when people rip our products.” It got weirder when,
contrary to what La Ruina had stated on Twitter, a DMCA notice resulted in the review
getting taken down. Hodgkinson then got an apology letter from La Ruina’s PR people, saying the notice had been retracted, and offering to pay for any income Hodgkinson lost
as a result of the video vanishing. La Ruina sent Hodgkinson $50, which Hodgkinson said he did not want. It took a while, but the video is finally back on YouTube.
La Ruina’s apparent first instinct—that he should not send a DMCA takedown aimed at a review—was the correct one. It’s not infringement and therefore not what takedown notices are
for. But La Ruina also wrongly framed it as a choice stemming from benevolence on his part, rather than a necessary aspect of the takedown process. And that is where we constantly
run into problems. DMCA takedowns are supposed to target infringement, not silence criticism. But the perception that they are a censorship tool is so pervasive that merely
following the rules makes you look like the good guy.
Even with all of those factors, the video was still down for days. It seems that the DMCA ends up being a censorship tool even when people say they will do the right thing.
This is an entry in the Takedown Hall of Shame, highlighting the worst of bogus copyright and trademark complaints.
Senators Pressure Platforms for Private Censorship of Drug Information
(Fri, 09 Mar 2018)
Last month Senators Chuck Grassley (R-Iowa), Dianne Feinstein (D-Calif.), Amy Klobuchar (D-Minn.), John Kennedy (R-La.) and Sheldon Whitehouse (D-R.I.) separately wrote to Google, Microsoft, Yahoo and Pinterest accusing them of facilitating trade in illegal narcotics and
prescription drugs. The near-identical letters demand that each of the recipients:
consider removing from its platform content that advertises the use of or enables the sale of illicit narcotics, including the sale of prescription drugs without a valid
prescription. We further request that [it] consider action to ensure that future, similar content is banned.
The letter specifies that the platforms concerned should censor search results for illicit drugs, and ensure that when users search for prescription medicines they be
"automatically directed" to approved U.S.-based suppliers. Attachments to the letters include printouts of organic search listings, with a few results on each page circled,
apparently containing information about suppliers who will sell drugs without prescription. (The same printouts reveal some stern anti-drug warnings in the top few results, both
organic and paid.)
The letters were announced in a mailing to members of the Alliance for Safe Online Pharmacies (ASOP), a pharma industry lobby group, on the same day that the letters were
sent. (Beyond that, we don't know whether there was any coordination between the Senators and ASOP in drafting the letter; and because Congress is exempt from FOIA requests,
it would be difficult for us to find out.)
ASOP is also one of the principal contributors to United States Trade Representative (USTR) reports such as the Special 301 Report and the Notorious Markets List, and it makes similar censorship demands in its submissions
to those reports. For example in its submission [PDF] to the 2017 Notorious Markets
report, ASOP recommends that domain name registrars should "voluntarily lock and suspend illegitimate websites" rather than requiring a court order.
By "illegitimate", ASOP doesn't mean that the website is selling fake drugs; its complaint extends to branded drugs that are merely "transported without the requisite
quality controls" (i.e., sent through the mail). Neither is it targeting only recreational drugs; ASOP's submission acknowledges that most overseas drug sales are for "chronic
illness and/or maintenance drugs for diseases such as HIV/AIDS, hypertension, [and] hypercholesterolemia." Rather, an "illegitimate" online pharmacy in ASOP lingo is one
that doesn't comply with U.S. law that prohibits online medicine sales from overseas—even though, because they are overseas, they are not actually subject to U.S. law in the first place.
There might well be a case to be made for tighter regulation of sales of prescription and non-prescription drugs online. But to progress from that proposition to the proposal that
information about such drugs should be censored from search engines and online marketplaces, and without a court order at that, is quite a leap. It's concerning that ASOP's
recommendations are often incorporated holus-bolus into the USTR's reports without independent verification, and that the responsibility for fact-checking its claims is placed
on rebuttal submissions from third parties.
We are even more concerned about the approach taken by the Senators who wrote the letter to major platforms. For U.S. Senators, with the imprimatur of official authority that their
offices represent, to prevail on platforms to privately censor content is a blatant form of Shadow Regulation, intended to
intimidate them into compliance.
If the Senators are serious in their desire for these Internet platforms to censor organic search results, they could introduce a bill aimed at achieving
that objective, and have it debated in both houses of Congress. Instead, knowing that such a law would likely be unconstitutional, they are seeking to achieve the same
result without a transparent and accountable lawmaking process. The Senators should know better, and we encourage platforms receiving such letters to
resist these extra-legal demands.
Stop SESTA/FOSTA: Don’t Let Congress Censor the Internet
(Thu, 08 Mar 2018)
The U.S. Senate is about to vote on a bill that would be disastrous for online speech and communities.
The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) might sound appealing,
but it would do nothing to fight sex traffickers. What it would do is silence a lot
of legitimate speech online, shutting some voices out of online spaces.
This dangerous bill has already passed the House of Representatives, and it’s expected to come up
for a Senate vote in the next few days. If you care about preserving the Internet as a place where everyone can gather, learn, and share ideas—even controversial ones—it’s time to
call your senators.
Take Action: Stop SESTA/FOSTA
The version of FOSTA that’s passed the House is actually a Frankenstein combination of two different bills, an earlier version of FOSTA and a bill called the Stop Enabling Sex Traffickers Act (SESTA).
How would one bill do so much damage to communities online? Simple: it would scare online platforms into censoring their users.
Online platforms are enabled by a law referred to as Section 230. Section 230 protects online platforms from liability for some types of speech by their users. Without Section 230,
social media would not exist in its current form, and neither would the plethora of nonprofit and community-based online groups that serve as crucial outlets for free expression.
If Congress undermined these important protections by passing SESTA/FOSTA, many online platforms would be forced to place strong restrictions on their users’ speech, censoring a lot
of people in the process. And as we’ve discussed before, when platforms clamp down on their users’ speech, marginalized voices are disproportionately silenced.
Censorship is not the solution to sex trafficking. This is our last chance: call your senators now and urge them to oppose SESTA/FOSTA.
Take Action: Stop SESTA/FOSTA
Fair Use and Platform Safe Harbors in NAFTA
(Thu, 08 Mar 2018)
Negotiators from Mexico, Canada and the United States were in Mexico City this week for a tense seventh round of negotiations over a modernized
version of NAFTA, the North American Free Trade Agreement. With President Trump's announcement of tough
new unilateral tariffs on imports of steel and
aluminum, and the commencement of the Mexican election season later this month, pressure to conclude the deal—or for the United States to withdraw from it—is
mounting. In all of this, there is a risk that the issues that are of concern to Internet users are being sidelined.
Protesters at the 7th round of NAFTA
One of these issues is the need for balance in the intellectual property chapter of the agreement, in particular by requiring the countries to have
copyright limitations and exceptions such as fair use. This is particularly important if, as we have reason to fear, the rest
of the chapter contains provisions that exceed the international copyright norms established in the TRIPS Agreement.
According to the latest unofficial information that we have, the United States Trade Representative (USTR) is not negotiating for a fair use provision in NAFTA. Without such
a provision, the new NAFTA will be worse than even the original version of the TPP, which did have a copyright balance provision, albeit an optional and weak one.
The new NAFTA should also include platform safe harbors, to ensure that Internet
intermediaries, such as ISPs, social networking websites, open WiFi hotspots or caching providers, are not held liable for the speech of their users. EFF addressed this issue in its
remarks at ¿Modernización o retroceso? Amenazas al medio ambiente e internet en la renegociación del
TLCAN, a forum held at the Mexican Senate on Friday March 2.
We emphasized in our presentation that we aren't arguing for platform safe harbors for the benefit of the large platforms themselves. The
platforms are far from perfect, and the decisions that they make to restrict users' content are frequently wrong. But that's exactly why safe harbors are
important. Without safe harbor rules, the Internet platforms that most Internet users depend upon to communicate and share online
are likely to censor more of their users' speech, in an effort to reduce their own possible legal exposure.
A Tale of Two Safe Harbors
Two separate safe harbor provisions are planned for NAFTA, and both are in trouble. The first is the copyright safe harbor, which in the U.S. is based on
section 512 of the DMCA or Digital Millennium Copyright Act. In a nutshell, this safe harbor would protect Internet platforms from
liability when their users infringe copyright, so long as the platforms take the allegedly infringing material down after they get a complaint. Canada also has a copyright safe harbor
system, which is a little different (and better) because it doesn't require the platform to take the content down, only to notify the person who uploaded it about the complaint.
The copyright safe harbor in NAFTA is under pressure from rightsholders who want to impose secondary liability on platforms that don't do enough to limit copyright infringement by
users. Due to the secrecy surrounding the agreement we haven't seen exactly what the more limited provision might look like, but we can guess from industry stakeholder lobbying that
it will include a requirement to adopt effective online enforcement regimes [PDF],
possibly similar to the SOPA-like censorship system currently under consideration in Canada.
The second safe harbor under consideration in NAFTA would apply to almost everything that isn't covered by copyright, such as defamation and hateful speech. In the US,
that safe harbor is found in Section 230 (also called CDA 230). Unlike the DMCA, it doesn't require the platform in question
to automatically take anything down. For example, under U.S. law a search engine isn't required to censor its search results if one of the results that comes up is alleged to be
defamatory. And a good thing too, or we would see even more private censorship.
Mexico and Canada don't have an equivalent to Section 230, and the U.S. is proposing that they should—not so much because it promotes freedom of expression online, but because it
would make it easier for American online platforms to operate safely and legally throughout the region. From what we have heard in the corridors of the closed
negotiations, Canada and Mexico are pushing back hard on a Section 230-like provision in NAFTA, but for now the USTR is continuing to maintain it as a negotiating position.
It would be great if EFF and other groups representing users could speak directly with negotiators on issues such as fair use, the need to avoid placing restrictive
conditions on copyright safe harbor rules, and the benefits that a Section 230-style safe harbor could bring to the online freedom of expression of Internet users throughout North
America. But unfortunately the NAFTA negotiations are so closed
and opaque that it's difficult for us to do that. We'll keep doing what we can to let the negotiators know our concerns, but ultimately what's needed is a
much more open and inclusive process, to ensure that trade
agreements such as NAFTA reflect the needs of all rather than just those of well-connected corporate lobbies.
Ten Hours of Static Gets Five Copyright Notices
(Wed, 07 Mar 2018)
Sebastian Tomczak blogs about technology and sound, and has a YouTube channel. In 2015, Tomczak uploaded a
ten-hour video of white noise. Colloquially, white noise is persistent background noise that can be soothing or that you don’t even notice after a while. More technically, white noise
is many frequencies played at equal intensity. In Tomczak’s video, that amounted to ten hours of, basically, static.
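That "equal intensity at every frequency" definition is easy to demonstrate in code. The sketch below is purely illustrative (it is not how Tomczak produced his recording; the function name and parameters are invented here): white noise is nothing more than a stream of independent random samples, which is exactly what gives it a flat expected power spectrum.

```python
import random
import statistics

def white_noise(num_samples, seed=0):
    """Generate white noise: independent uniform samples in [-1.0, 1.0).

    Because successive samples are statistically uncorrelated, the
    expected power spectrum is flat -- every frequency at equal
    intensity, which a listener hears as featureless static.
    """
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(num_samples)]

samples = white_noise(44100)  # one second of static at CD sample rate
print(len(samples))           # 44100
# The mean hovers near 0.0: no bias toward any particular level.
print(abs(statistics.mean(samples)) < 0.05)
```

Ten hours of Tomczak's video is just this, repeated: roughly 44,100 x 60 x 60 x 10 such samples, with nothing in them that any one person could plausibly claim to own.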
In the beginning of 2018, as a result of YouTube’s Content ID system, a series of copyright claims
were made against Tomczak’s video. Five different claims were filed on sound that Tomczak created himself.
Although the claimants didn't force Tomczak's video to be taken down, they all opted to monetize it instead. In other words, ads on the ten-hour video would now generate revenue for
those claiming copyright on the static.
Normally, getting out of this arrangement would have required Tomczak to go through the lengthy counter-notification process, but Google decided to drop the claims. Tomczak believes it’s because of the
publicity his story got. But hoping your takedown goes viral or using the intimidating counter-notification system is
not a workable way to get around a takedown notice.
YouTube’s Content ID system works by having people upload their content into a database maintained by YouTube. New uploads are compared to what’s in the database and when the
algorithm detects a match, copyright holders are informed. They can then make a claim, forcing the video to be taken down, or they can simply opt to make money from ads placed on the video.
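Content ID's matching algorithm is proprietary, but the general fingerprint-and-compare idea can be illustrated with a toy sketch. Everything below is invented for illustration (the function names, the window size, the use of exact hashing); real systems compute robust acoustic fingerprints that survive noise, re-encoding, and time shifts, which is precisely why coarse features can make unrelated static "match."

```python
import hashlib

def fingerprint(samples, window=4):
    """Hash each fixed-size window of byte-valued samples (toy model)."""
    prints = set()
    for i in range(0, len(samples) - window + 1, window):
        chunk = bytes(samples[i:i + window])
        prints.add(hashlib.sha256(chunk).hexdigest())
    return prints

def match_score(upload, reference):
    """Fraction of the upload's windows also found in the reference."""
    up, ref = fingerprint(upload), fingerprint(reference)
    return len(up & ref) / max(len(up), 1)

reference = [1, 2, 3, 4, 5, 6, 7, 8]
upload    = [1, 2, 3, 4, 9, 9, 9, 9]   # shares only the first window
print(match_score(upload, reference))   # 0.5
```

A score above some threshold would trigger a claim against the upload. The design tension is visible even in the toy: exact hashing misses any altered copy, while the fuzzier perceptual features a real matcher uses make more unrelated audio look the same.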
And so it is that an automated filter matched part of ten hours of white noise to, in one case, two different other white noise videos owned by the same company and resulted
in Tomczak getting copyright notices.
Copyright bots like Content ID are tools and, like any tool, can be easily abused. First of all, they can match content but can't tell the difference between infringement and fair
use. And, as happened in this case, they can match similar-sounding general noise. These mistakes don't make the bots great at protecting free speech.
Some lobbyists have advocated for these kinds of bots to be required for platforms
hosting third-party content. Beyond the threat to speech, this would be a huge and expensive hurdle for new platforms trying to get off the ground. And, as we can see from this
example, it doesn’t work properly without a lot of oversight.
This article is part of the Takedown Hall of Shame, which collects the worst of the worst of bogus copyright and trademark complaints that have threatened all kinds of creative
expression on the Internet.
Offline/Online Project Highlights How the Oppression Marginalized Communities Face in the Real World Follows Them Online
(Tue, 06 Mar 2018)
People in marginalized communities who are targets of persecution and violence—from the Rohingya in Burma to Native Americans in North Dakota—are using
social media to tell their stories, but finding that their voices are being silenced online.
This is the tragic and unjust consequence of the content moderation policies of companies like Facebook, which decide on a daily basis what can and can't be said and shown online.
Platform censorship has ratcheted up in these times of
political strife, ostensibly to combat hate speech and online harassment. Takedowns and closures of neo-Nazi and white supremacist sites have been a matter of intense debate. Less visible is the
effect content moderation is having on vulnerable communities.
Flawed rules against hate speech have shut down online conversations about racism and
harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against
depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of
aboriginal women in Australia, Dakota pipeline protestors and many others are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are
being flagged for takedown by Facebook. The powerless struggle to be heard in the first place; online censorship further marginalizes vulnerable communities. This is not OK.
In response, EFF and Visualizing Impact launched an awareness project today that highlights the online censorship of communities across the globe that are struggling or in crisis.
Offline/Online is a series of visuals demonstrating that the inequities and oppression these communities face in
the physical world are being replicated online. The visuals can be downloaded and shared on Twitter, Facebook, and Snapchat, or printed out for distribution.
In one, the displacement of nearly 700,000 Rohingya Muslims from Myanmar because of state violence is represented in a photo showing Rohingya children trying to board a small boat.
Rohingya refugees, many of whom are women and children, are arriving in Bangladesh with wounds from gunshot and fire, according to the United Nations.
And online? Facebook is an essential means of communication in Myanmar. Activists
there and in the West have documented the violence against the Rohingya online, only to have their Facebook posts removed and accounts suspended.
Inequity offline, censorship online.
The EFF/Visualizing Impact project exposes this pattern among Palestinians, aboriginal women in Australia, Native Americans, Dakota pipeline protestors, and black Americans. We
believe this is just the tip of the iceberg. We are already far down the slippery slope from judicious moderation of online content to outright censorship. With two billion Facebook
users worldwide, there are likely more vulnerable communities being subject to online censorship.
Our hope is that activists, concerned citizens, and online communities will post and share Inequity Offline/Censorship Online visuals (found here) many times, raising awareness about the impact of censorship on marginalized communities—a story that is
underreported. Sharing the visuals is a step all of us can take to combat online censorship. It may help restore the speech and voices being erased online.
Geek Squad's Relationship with FBI Is Cozier Than We Thought
(Tue, 06 Mar 2018)
Update: A Best Buy spokesperson confirmed to reporters that at
least four Geek Squad employees received payments from the FBI.
After the prosecution of a California doctor revealed the FBI’s ties to a
Best Buy Geek Squad computer repair facility in Kentucky, new documents released to EFF show that the relationship goes back years. The records also confirm that the FBI has paid
Geek Squad employees as informants.
EFF filed a Freedom of Information Act (FOIA) lawsuit last year to learn more about how the FBI uses Geek
Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights.
The documents released to EFF show that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber
Working Group” at the company’s Kentucky repair facility.
The memo and a related email show that Geek Squad employees also gave FBI officials a tour of the facility before their meeting and makes clear that the law enforcement agency’s
Louisville Division “has maintained close liaison with the Geek Squad’s management in an effort to glean case initiations and to support the division’s Computer Intrusion and Cyber Crime programs.”
Another document records a $500 payment from the FBI to a confidential Geek Squad informant.
This appears to be one of the same payments at issue in the prosecution of Mark Rettenmaier, the California doctor who was charged
with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility.
Other documents show that over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the
Geek Squad for repairs. The documents detail a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they
believed was child pornography.
The FBI agent would show up, review the images or video, and determine whether they believed the content was illegal. After that, they would seize the hard drive or computer and send
it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device.
Some of these reports indicate that the FBI treated Geek Squad employees as informants, identifying them as “CHS,” which is shorthand for confidential human sources. In other cases,
the FBI identifies the initial calls as coming from Best Buy employees, raising questions as to whether certain employees had different relationships with the FBI.
In the case of the investigation into Rettenmaier’s computers, the documents released to EFF do not appear to have been made public in that prosecution. They raise additional
questions about the level of cooperation between the company and law enforcement.
For example, documents reflect that Geek Squad employees only alert the FBI when
they happen to find illegal materials during a manual search of images on a device and that the FBI does not direct those employees to actively find illegal content.
But some evidence in the case appears to show Geek Squad employees did make an affirmative effort to identify illegal material. For example, the image found on Rettenmaier’s hard
drive was in an
unallocated space, which typically requires forensic software to find. Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography.
Such a bounty would likely encourage Geek Squad employees to actively sweep for suspicious content.
Although these documents provide new details about the FBI’s connection to Geek Squad and its Kentucky repair facility, the FBI has withheld a number of other documents in response to
our FOIA suit. Worse, the FBI has refused to confirm or deny to EFF whether it has similar
relationships with other computer repair facilities or businesses, despite our FOIA specifically requesting those records. The FBI has also failed to produce documents that would show
whether the agency has any internal procedures or training materials that govern when agents seek to cultivate informants at computer repair facilities.
We plan to challenge the FBI’s stonewalling in court later this spring. In the meantime, you can read the documents produced so far here and here.
FBI Geek Squad Informants FOIA Suit
Namecheap Relaunches Move Your Domain Day to Support Internet Freedom
(Tue, 06 Mar 2018)
Domain name registrar Namecheap has relaunched Move Your Domain Day, encouraging customers to raise money for
online freedom with every domain move. Namecheap will donate up to $1.50 per domain transfer to the Electronic Frontier Foundation when customers switch to their service on March 6.
With this year’s promotion, Namecheap hopes to draw attention and much-needed funding to EFF’s work fighting for Internet freedom. That work is especially urgent after the Federal
Communications Commission’s disappointing move to abandon landmark net
neutrality and broadband privacy protections. Despite this setback, EFF is committed to defending the open web we love. If you’re in the U.S., visit our action center and tell your representatives to restore net neutrality. Not sure where your lawmakers stand on the issue? You can use EFF’s
handy tool to check your reps.
The original Move Your Domain Day came into being in 2011 when popular domain name
registrar GoDaddy spoke out in support of the hugely unpopular Internet blacklist bills SOPA and
PIPA. The ensuing backlash from Internet users led to a call for customers to leave GoDaddy in favor of companies better-aligned with their online freedom goals. As a result, the
first Move Your Domain Day raised over $64,000 for EFF’s work on this and other issues. The response reflected the overwhelming public sentiment that eventually toppled SOPA/PIPA and
proved Internet users are powerful when they work together.
We are grateful to Namecheap for including us in this year’s campaign and for standing on EFF’s side in numerous online rights battles over the years. We’re also grateful to EFF’s 44,000 members
around the world for ensuring that Internet users have an advocate.
More information on Move Your Domain Day: https://www.namecheap.com/promotions/move-your-domain-day
Blunt Measures on Speech Serve No One: The Story of the San Diego City Beat
(Mon, 05 Mar 2018)
It’s no secret: Social media has changed the way that we access news. According to the Pew Research Center, two-thirds of Americans report getting at least some of
their news on social media. Another study suggests that globally, for those under 45, online
news is now as important as television news. But thanks to platforms’ ever-changing algorithms,
content policies, and moderation practices, news outlets
face significant barriers to reaching online readers.
San Diego CityBeat's recent experience offers a sad case in point. CityBeat is an alt-weekly focusing on news, music, and
culture. Founded in 2002, the publication has a print circulation of 44,000 and is best known for its independence and no-holds-barred treatment of public officials and demo tapes.
The site is also known for its quirky—and, it turns out, controversial—headlines.
It was one of those headlines that caused CityBeat to run afoul of Facebook’s censors. In late November, the platform removed links posted by CityBeat on their own page to a
piece by popular columnist Alex Zaragoza. Her piece, entitled “Dear dudes,
you’re all trash,” critiqued men for their complacency and surprise in light of several high-profile sexual assault and harassment scandals. Zaragoza's similar
post on her own timeline was also removed.
Ryan Bradford, the web editor of CityBeat, said that Facebook notified him about the post on a weekend. “It didn’t really occur to me how serious it was” at first, he says.
“We’d been flagged for content before, [such as] artistic images that contain nudity.”
He had posted the link to CityBeat’s Facebook page a few days prior, even including the article’s sub-hed—“Even the ‘good ones’ are safe in their obliviousness and complacency.”
The message he received from Facebook pointed him to the Community Standards, but—as was the case
with Egyptian journalist Wael Abbas—did not explicitly state which rule the content had violated.
Users frequently complain that Facebook provides scant explanation for its removals.
Bradford thought of appealing but, he told us, “Sending a complaint seemed futile. It feels like you’re sending it out into the ocean.” And in this case, appealing wouldn't have
been an option, as Facebook only allows users to appeal account deactivations, not removals of individual items.
By not notifying users about how their content has violated the rules, the company is setting up users for failure. Users must receive clear information
about the rules they've violated and how they can appeal content decisions.
As we’ve said
previously, private censorship isn’t the best way to fight hate or defend democracy. Corporations are often in a tough position when it comes to dealing with hate
speech and other content, but blunt measures that classify a nuanced article in a reputable publication about sexual assault as verboten due to harsh language serve
no one. Although corporations have the right to make their own decisions about what types of content users can post, they should seek to maximize freedom of
expression. CEO Mark Zuckerberg claims that the company
stands for freedom of speech, but the decision to ban Zaragoza's piece says otherwise.
Or, as Bradford puts it: “To start censoring innocuous stuff that ultimately sends a positive message is a detriment to the online community.”
You can read more about our position on private censorship here, and learn more about the
issue at Onlinecensorship.org.
Work with EFF This Summer! Apply to be a Google Public Policy Fellow
(Sat, 03 Mar 2018)
If you’re a student who is passionate about emerging Internet and technology policy issues, come work with EFF this summer as a Google Public Policy Fellow! This is a paid opportunity for
students currently enrolled in higher education institutions to work alongside EFF’s international team on projects advancing debate on key public policy issues.
EFF is looking for someone who shares our passion for the free and open Internet. You'll have the opportunity to work on a variety of issues, including censorship and global surveillance. Applicants must have strong research and writing skills, the ability to produce thoughtful original policy
analysis, a talent for communicating with many different types of audiences, and be independently driven. More specific information can be found here.
The program runs from June 5, 2018 to August 11, 2018, with regular programming throughout the summer. If selected, you can work with EFF to adjust start and completion dates.
The application period opens Friday, March 2, 2018, and all applications must be received by midnight ET on Tuesday, March 20, 2018.
The accepted applicant will receive a stipend of $7,500 for the 10-week fellowship.
To apply with the Electronic Frontier Foundation, follow this link.
Note: This internship is associated with EFF's international team and is separate from EFF's summer legal internship.
Fair Use Protects So Much More Than Many Realize
(Fri, 02 Mar 2018)
With copyright being abused to shut down innovation and speech, and copyright terms lasting for generations, fair use is more important than ever. Without fair use, we’d see
less creativity. We’d see less news reporting and commentary. And we’d see far less innovation.
Fair use allows people to use copyrighted materials for certain purposes without payment or permission. If something is fair use, it is not infringing on a copyright.
A video remix or a story that critiques culture by incorporating famous characters and giving them new meaning or context is an example of fair use in action. Culture grows because
creators are constantly reworking what’s in it. If Superman is portrayed as someone other than a white man, that is clearly a commentary on
the symbol of “truth, justice, and the American way.”
Commentary also relies on fair use. Criticism is made stronger when the material being interrogated can be included in the critique. It is difficult to show why someone was wrong or
add context to someone else’s report without including at least part of it. We recently wrote about the Second Circuit’s decision that part of the service offered
by TVEyes, a subscription company that provides searchable transcripts and video archives of television and radio, was not fair use. In particular, the court seemed to say that what
made TVEyes so objectionable was that it made material available without Fox News’ permission. One of the reasons fair use is so important to the First Amendment is that
it doesn’t require permission. Who would let researchers, academics, and journalists get access to their material for the purpose of saying if and how they’re wrong?
The ways fair use improves our creative culture and our commentary are apparent every time we see fan art on the Internet or watch news commentary. The ways fair use protects
innovation can be more subtle.
Copyright also covers software, which is working its way into every part of our lives. We’re entering a world where your lights, toothbrush, coffeemaker, and television are all
connected to the Internet, transmitting all sorts of information all the time. But if you want to ask an
expert how to change that, you’re probably going to need fair use.
Much of the problem lies with Section 1201 of the Digital Millennium Copyright Act, which bans breaking restrictions on copyrighted
works. That means, for example, that if someone wants to develop an app that better secures your phone but doing so means breaking the digital lock the manufacturer put there, then
that inventor faces trouble. Or, say you want to pay a mechanic to fix your car, but that requires them to break the encryption on the computer in it, then Section 1201 would prevent
you from getting that help.
Section 1201 can prevent access to things that fair use allows people to use. For example, you may want to make fair use of a clip from a DVD but be banned from breaking a lock to rip
the clip. And because of the impact that could have on fair use, there is a process for securing an exemption to it. The
exemption process occurs every three years, and we’ll get a new set of exemptions in 2018.
Because fair use is important for creativity, commentary, and innovation, and because the ban on circumvention makes that so much harder, convincing the Copyright Office to issue
common-sense exemptions is necessary. In 2018, EFF is asking for exemptions for:
Repair, diagnosis, and tinkering with any software-enabled device, including “Internet of Things”
devices, appliances, computers, peripherals, toys, vehicles, and environmental automation systems;
Jailbreaking personal computing devices, including smartphones, tablets, smartwatches, and
personal assistant devices like the Amazon Echo and the forthcoming Apple HomePod;
Using excerpts from video discs or streaming video for criticism or commentary, without the narrow
limitations on users (noncommercial vidders, documentary filmmakers, certain students) that the Copyright Office now imposes;
Security research on software of all kinds, which can be found in consumer electronics,
medical devices, vehicles, and more;
Lawful uses of video encrypted using High-bandwidth Digital Content Protection (HDCP, which is applied to
content sent over the HDMI cables used by home video equipment).
It would be better if fair use didn’t have to jump through hoops like these, but while it does, it’s important to keep showing how much fair use matters.
This week is Fair Use/Fair Dealing Week, an annual celebration of the important doctrines of fair use and fair dealing. It is
designed to highlight and promote the opportunities presented by fair use and fair dealing, celebrate successful stories, and explain these doctrines.
The Post-TPP Future of Digital Trade in Asia
(Fri, 02 Mar 2018)
On March 8, trade representatives from eleven Pacific rim countries including Canada, Mexico, Japan, and Australia are expected to ratify the Trans-Pacific
Partnership, now known as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). The agreement has been slimmed down both in its content—22 items in the text have been
suspended, including the bulk of the intellectual property chapter—and also in its membership, with the exclusion of the United States, which had been the driver of those suspended provisions.
What remains in the CPTPP is the agreement's Electronic Commerce (also called digital trade) chapter, which will set new, flawed rules for the region on topics such
as the free flow of electronic data, access to software source code, and even rules applicable to domain name privacy and dispute resolution. But it's not the only Asian trade
agreement seeking to set such rules. There's another lesser-known but equally important agreement under negotiation by sixteen countries, called the Regional Comprehensive Economic
Partnership Agreement (RCEP).
Like CPTPP, RCEP would cover issues that are critical to the digital economy, such as customs duties on electronic products, supply of cross-border services, paperless
trading, telecommunications, intellectual property, source code disclosure, privacy and cross-border data flows. But unlike CPTPP, RCEP includes the giants of China and India, meaning
that the agreement would represent a massive
28.5 percent of global trade. While India's commitment to the deal has become somewhat equivocal, RCEP holds an important place in China's ambitions to consolidate its leadership role
in the region.
India Not Ready to Compromise
The RCEP negotiating parties met last month in Indonesia between February 2 and 9, and although continuing secrecy in the negotiation process makes it difficult to
accurately assess progress, a series of missed deadlines point to growing uncertainty about the conclusion of the talks.
One of the sticking points is that countries such as India are pushing for a strong services pact which would facilitate the free movement of professionals, whereas
China, South Korea, Japan, Australia, and New Zealand remain reluctant to commit. Meanwhile, the Indian government is being cautious about opening up its
markets and has an incentive to draw out negotiations, with elections scheduled for next year. India's position on intellectual property also differs from that of other negotiating countries such as
Japan and Korea, which are pushing for a harder, TPP-like line.
As pressure to conclude the deal has intensified, calls for India to exit or block a speedy
conclusion of the agreement have also grown louder. At the Indo-ASEAN meeting in New Delhi, the Indonesian Trade Minister reiterated that the ASEAN bloc
expected India not to block attempts to conclude the RCEP this year. Mounting expectations may lead India to withdraw from the talks, a move that would impact the strategic and
economic value of the agreement.
Can Digital Trade Improve Internet Freedom in China?
With India's continuing participation in doubt, Beijing has thrown its weight behind the agreement. Chinese Foreign Ministry spokesperson Hua Chunying recently
underscored that Beijing attaches great importance to the RCEP talks and plans
to ensure ratification of the agreement by year end. Following the US withdrawal from the TPP, China sees an early conclusion of the RCEP as critical for creating confidence in and
promoting its regional and global trade leadership, especially given its absence from the CPTPP.
Addressing the lack of progress on RCEP has gained urgency as China's trade war with the US has intensified. The US is contemplating legislation that would forbid U.S. government agencies from purchasing ICT equipment
produced by Chinese ICT companies or their subsidiaries and affiliates. If the law is passed, government agencies would also be restricted from doing business with any entity that uses equipment produced by those companies.
Last week, concerns about China banning the
use of Virtual Private Networks (VPNs) as part of its proposed regulation for telecommunications networks prompted the US to demand an intervention from the World Trade
Organization (WTO). What makes this development interesting is that it is the first time that a trade resolution has been sought to address, even incidentally, a serious human rights
issue for Chinese Internet users. It is also interesting that the remedy sought is under the existing WTO rules, which at least raises questions about the added value of the new
generation of digital trade agreements such as CPTPP and RCEP.
As countries head into the next round of RCEP negotiations, the challenge before negotiators is weighing a speedy conclusion against a balanced agreement.
It's going to be difficult to achieve that balance with the current level of secrecy and lack of consultation surrounding the agreement. Just as the same flaws in the negotiation
process for the CPTPP have resulted in an agreement that fails to address users' needs or to preserve their digital rights, RCEP is unlikely to have anything more to offer for
Internet users and innovators.
>> read more
Playboy Drops Misguided Copyright Case Against Boing Boing
(Thu, 01 Mar 2018)
In a victory for journalism and fair use, Playboy Entertainment has given up on its lawsuit against Happy Mutants, LLC, the company behind Boing
Boing. Earlier this month, a federal court dismissed Playboy’s claims
but gave Playboy permission to try again with a new complaint, if it could dig up some new facts. The deadline for filing that new complaint passed this week, and today Playboy
released a statement suggesting that it is standing down. That means both Boing Boing and Playboy can go back to
doing what they do best: producing and reporting on culture and technology.
This case began when Playboy filed suit accusing Boing Boing of copyright infringement
for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. The post in question, from February 2016, reported that someone had uploaded scans of the photos, and noted
they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were
created by Boing Boing.
Together with law firm Durie Tangri, EFF filed a motion to dismiss [PDF]. We explained that Boing Boing did not contribute to the infringement of any Playboy
copyrights by including a link to illustrate its commentary. The judge agreed, dismissing the lawsuit and writing that he was “skeptical that plaintiff has
sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”
It’s hard to understand why Playboy brought this case in the first place, turning its legal firepower on a small news and commentary website that hadn’t uploaded or hosted any
infringing content. We’re also a little perplexed as to why Playboy seems so unhappy that the Boing Boing post is still up when the links it complained about have been dead for almost two years. In any
event, this suit now appears to be over and the Boing Boing team can focus on doing what they love: sharing news, commentary, and awesome things with the world.
Playboy Entertainment Group v. Happy Mutants
>> read more
Stupid Patent of the Month: Buying A Bundle of Diamonds
(Wed, 28 Feb 2018)
This month’s Stupid Patent shows what happens when the patent system strays outside its proper boundaries. US Patent No. 8,706,513 describes a “fungible basket of investment grade gems” for use in “financial instruments.” In other words, it’s a rating
and trading system that attempts to turn diamonds into a tradeable commodity like oil, gold, or corn.
Of course, creating new types of investment vehicles isn’t really an invention. And patents on newfangled financial techniques like this were generally barred following Bilski v. Kappos, a 2010 Supreme Court case that prevents the patenting of purely financial instruments. Since then, the law has become even
less favorable to abstract business method patents like this one. In our view, the ’513 patent would not survive a challenge under Bilski or the Supreme Court’s 2014 decision
in Alice v. CLS Bank.
Despite its clear problems, the ’513 patent is being asserted in court—and one of the people best placed to testify against the patent may not be allowed to.
The public’s right to challenge a patent in court is a critical part of the US patent system, one that has always balanced the exclusive power of a patent. It’s especially important since
patents are often granted by overworked examiners who get an average of 18 hours to review applications.
But there are two types of persons that, increasingly, aren't allowed to challenge problematic patents: inventors of patents, and even partial owners of patents. Under a doctrine
known as “assignor estoppel,” the Federal Circuit has barred inventors from challenging patents that they obtained for a former employer. Assignor estoppel was originally meant to
cover a narrow set of circumstances—inventors who engaged in fraud or bad dealing, for instance—but the nation’s top patent court now routinely applies it to prevent inventors from challenging patents they previously assigned.
Patent scholar Mark Lemley flagged this problem in a 2016 paper, noting assignor estoppel could be used to
control the free movement of employees or quash a legitimate competitor. “Inventors as a class are put under burdens that we apply to no other employee,” he wrote. “If they start a
company, or even go to work for an existing company in the same field, they will not be able to defend a patent suit from their old employer.”
In this case, the Federal Circuit’s expansive view of assignor estoppel may prevent a person who owned just a fraction of a patent from fighting back when that patent gets used in an
attempt to quash a competing business.
Despite the fact that this gemological trading system should never have been granted a patent, so far, it’s being successfully used by its owner to beat up on a competitor—and the
competitor could be barred from even challenging the patent by assignor estoppel.
Competing Diamond Companies
GemShares was created in 2008 to market “diamond investment products.” The original partners were joined in business by a man named Arthur Lipton, who bought 20% of GemShares in 2013.
He struck a deal not to compete with GemShares.
GemShares says [PDF] Lipton broke that deal in 2014, when he started working on his own project, a “secure diamond smart card,”
and filed for patents related to it. But in addition to breach of contract, GemShares sued for patent infringement. They said Lipton’s new business violated the ’513 patent.
The litigation also involves breach of contract claims, and allegations of fraud from Lipton’s former partner. Without getting into the weeds on all that, the defendant in this case
may not even be allowed to argue that the “gem financial product” patent is invalid. Earlier this month, the judge overseeing the case issued an order [PDF] noting that “the Federal Circuit has upheld the doctrine of assignor estoppel, which precludes an inventor-assignor of a patent sued
for infringement from arguing the patent's invalidity.”
The Federal Circuit has made assignor estoppel so powerful, in fact, that Lipton’s 20% ownership contract with GemShares may be enough to stop him and his lawyers from mounting an invalidity defense.
It’s bad policy to stop the public from challenging bad patents, and assignor estoppel should only be used in narrow cases, like outright fraud. As it’s been applied by the Federal
Circuit, it’s destined to be used in exactly the way that Lemley warned it would—as an anticompetitive cudgel.
We agree with the brief signed by Lemley and more than two dozen other law professors [PDF] in EVE-USA, Inc. v. Mentor Graphics Corp., arguing that the Supreme Court should take up this
issue and keep assignor estoppel within the narrow limits originally intended.
>> read more
State Lawmakers Want to Block Pornography at the Expense of Your Free Speech, Privacy, and Hard-Earned Cash
(Wed, 28 Feb 2018)
More than 15 state legislatures are considering the “Human Trafficking Prevention Act” (HTPA).
But don’t let the name fool you: this bill would do nothing to address human trafficking. Instead, it would only threaten your free speech and privacy in a misguided attempt to block
and tax online pornography.
EFF opposed versions of this bill in over a dozen states last
year, and the bill failed in all of them. Now the HTPA is back, and we have again written to lawmakers urging them to oppose it this year.
The gist of the model legislation is this: Device manufacturers would be forced to install "obscenity filters" on cell phones, tablets, computers, and any other
Internet-connected devices. Those filters could only be removed if consumers pay a $20 fee. In addition to violating the First Amendment and burdening consumers and businesses, this
would allow the government to intrude into consumers’ private lives and restrict their control over their own devices.
On top of that, the story of this bill’s provenance is bizarre and highly recommended reading for any lawmakers considering it.
In short, the HTPA is part of a multi-state effort coordinated by the same person behind a bill to delegitimize
same-sex marriages as “parody marriages.” In this post, however, we’ll be focusing on the policy itself.
Read EFF's opposition letter against HB 2422, Missouri's iteration of the Human Trafficking Prevention Act.
HTPA—also sometimes named the Human Trafficking and Child Exploitation Prevention Act—has been introduced in the following states: Hawaii (Version 1, 2), Illinois, Indiana, Iowa, Kansas, Maryland, Mississippi, Missouri, New Mexico, New Jersey (Assembly, Senate), New
York, Rhode Island, South Carolina, Tennessee (House, Senate), Virginia, West Virginia (Senate, House), and Wyoming.
While some versions of the legislation vary, each hits the following points.
Manufacturers of Internet-enabled devices would be required to pre-install filters to block webpages and applications that contain sexual content. Although different versions of
the bill specify this content differently, the end result is the same: an unconstitutional restriction on the lawful speech people can access and engage with on the Internet.
A Censorship Tax
After overriding consumer choice and forcing people to purchase filtering software they don’t necessarily want, the bill would require users to pay a $20 fee per
device to remove the filters and to exercise their First Amendment rights to look at legal content. Between smartphones, tablets, desktop computers, TVs, gaming
consoles, routers, and other Internet-enabled devices, consumers could end up paying a small fortune to unlock all of the devices in their home.
Anyone who wants to unlock the filters on their devices would have to put their request in writing, show ID, and verify that they’ve been shown a “written warning regarding the
potential dangers” of removing the obscenity filter. That means that companies would be maintaining records on everyone who wanted their “Human Trafficking” filters removed. As
EFF Stanton Fellow Camille Fischer explains in our opposition letter:
To be clear, the HTPA’s deactivation process does not simply chill speech; it also requires consumers to sacrifice their privacy and anonymity, as the price of exercising
their First Amendment rights. If enacted, consumers would be forced to identify themselves when making a written request for filter deactivation, creating a humiliating situation
that suggests they want access to controversial sexual material. … In short, HTPA deactivation would be a frightening form of thought-based surveillance.
Unlocking such filters would not just be about accessing pornography. A gamer could be seeking to improve the performance of their computer by deleting unnecessary software. A
parent may want to install premium child safety software that is incompatible with a pre-installed filter. And, of course, many users will simply want to freely surf the Internet
without repeatedly being denied access to legal content.
Building A Censorship Machine
The bill would force the companies we rely upon for open access to the Internet to create a massive, easily abused censorship apparatus. Tech companies would be required to
operate call centers or online reporting centers to monitor complaints about which sites should or should not be filtered.
The technical requirements for this kind of aggressive platform censorship at scale are simply unworkable. If the attempts of social media sites to censor pornographic images
are any indication, we cannot
count on algorithms to distinguish, for example, nude art from medical information from pornography. Facing risk of legal liability, companies would likely over-censor and sweep up
legal content in their censorship net.
Do The Right Thing
Already lawmakers are starting to see through this legislation. In 2018, the bill has died in committees in Mississippi and Virginia. Democratic senators in New Mexico who
introduced the legislation pulled back the bill days after EFF raised the alarm.
Legislators should continue to do the right thing: uphold the Constitution, protect consumers, and not use the real problem of human trafficking as an excuse to deprive users of
their privacy and free speech.
>> read more
Ninth Circuit Court of Appeals Has New Opportunity to Protect Device Privacy at the Border
(Wed, 28 Feb 2018)
The U.S. Court of Appeals for the Ninth Circuit has a new opportunity to strengthen personal privacy at the border. When courts recognize and strengthen our Fourth Amendment rights
against warrantless, suspicionless border searches of our electronic devices, it’s an important check on the government’s power to search anyone, for any or no reason, at
airports and border checkpoints.
EFF recently filed amicus briefs in two cases, U.S.
v. Cano and U.S. v. Caballero, before the Ninth Circuit arguing that the
Constitution requires border agents to have a probable cause warrant to search travelers’ electronic devices.
Border agents, whether from U.S. Customs and Border Protection (CBP) or U.S. Immigration and Customs Enforcement (ICE), regularly search cell phones, laptops, and other electronic
devices that travelers carry across the U.S. border. The number of device searches at the border has increased six-fold in the past five years, with the increase accelerating during the
Trump administration. These searches are authorized by agency policies that generally permit suspicionless searches without any court oversight.
The last significant ruling on device privacy at the border in the Ninth Circuit, whose rulings apply to nine
western states, was in U.S. v. Cotterman (2013). In that case, the court of appeals held that the
Fourth Amendment required border agents to have had reasonable suspicion—a standard between no suspicion and probable cause—before they conducted a “forensic” search, aided by
sophisticated software, of the defendant’s laptop. Unfortunately, the Ninth Circuit also held that a manual search of an electronic device is “routine” and so the traditional border
search exception to the warrant requirement applies—that is, no warrant or any suspicion of wrongdoing is needed.
However, the year after the Ninth Circuit decided Cotterman, the U.S. Supreme Court decided Riley v.
California (2014). Although that case did not involve the border context, its analysis and ultimate holding are highly instructive. The Supreme Court held that, while police
may search those they arrest without a warrant, when it comes to an arrestee’s cell phone they need a probable cause warrant. The court based its holding on the extraordinary privacy
interests that individuals have in the massive amounts of sensitive digital data that their cell phones contain. The court emphasized that electronic devices are nothing like physical
containers, such as wallets.
Similarly, in the border search context, electronic devices are nothing like luggage or other physical items that travelers carry across the border. With the vast amounts and kinds of
personal data that electronic devices contain—data that can reveal our political affiliations, religious beliefs and practices, sexual and romantic lives, financial status, health
conditions, and family and professional associations—EFF argues that the Constitution requires the government to meet a higher burden before accessing this information.
Additionally, we argue that the method of search is irrelevant to the legal analysis of what standards should apply to border searches of electronic devices. Border agents
significantly invade travelers’ privacy when they search a cell phone or laptop—whether by hand or with forensic software. In fact, the cell phone searches in Riley were
manual searches, yet the Supreme Court applied the maximum Fourth Amendment protection available.
The Ninth Circuit has not yet ruled on whether or how Riley applies to border searches of electronic devices. With Cano and Caballero, the court of appeals
has a fresh opportunity to do so—and hopefully will strengthen privacy protections for travelers within its jurisdiction. Affirming the importance of digital privacy, the
Caballero court stated, “If it could, this Court would apply Riley.” Yet both district courts felt constrained by Cotterman and so did not require a warrant.
With these Ninth Circuit briefs, EFF has now filed a total of five amicus briefs since 2015 arguing that border agents
need a probable cause warrant to search electronic devices at the border. All of these cases, like Riley, were criminal cases where the defendants moved to suppress the
evidence obtained from their devices without a warrant. That these were criminal cases should not alter the constitutional analysis. Even though the defendants in Riley were
arrestees reasonably suspected of having committed crimes, the Supreme Court still required a warrant under the Fourth Amendment.
Additionally, our Alasaad v. Nielsen case against CBP and ICE is the first civil case post-Riley
challenging unconstitutional border searches of electronic devices. Our clients are 11 Americans—10 citizens and one lawful permanent resident—who have not been accused of any
wrongdoing. Yet they were subjected to highly intrusive searches of their cell phones and other electronic devices when they tried to re-enter the country.
Thus, whether through our civil case or the criminal appeals where we serve as amicus, we’re hopeful that the courts will explicitly apply Riley to the border and protect the
digital privacy of thousands of travelers from unjustified government intrusion.
>> read more
Second Circuit Gouges TVEyes With Terrible Fair Use Ruling
(Wed, 28 Feb 2018)
In a decision that threatens legitimate fair uses, the Second Circuit ruled against part of the service offered by TVEyes, which creates a
text-searchable database of broadcast content from thousands of television and radio stations in the United States and worldwide. The service is invaluable to people looking to
investigate and analyze the claims made on broadcast television and radio. Sadly, this ruling is likely to interfere with that valuable service.
TVEyes allows subscribers to search through transcripts of broadcast content and gives a time code for what the search returns. It also allows its subscribers to search for, view,
download, and share ten-minute clips. It’s used by exactly who you’d think would need a service like this: journalists, scholars, politicians, and so on in order to monitor what’s
being said in the media. If you’ve ever read a story where a public figure’s words now are contrasted with contradictory things they said in the past, then you’ve seen the effects of a service like TVEyes.
In 2014, the district court hearing the case threw
out a number of arguments made by Fox News and held that a lot of what TVEyes does is fair use, but asked to hear more about customers’ ability to archive video clips, share
links to video clips via email, download clips, and search for clips by date and time (as opposed to keywords). In 2015, the district court found the archiving feature to be a fair use, but found the other
features to be “infringing.”
And now the Second Circuit has reversed [PDF]
the 2015 finding that the archiving was fair use and upheld the finding that the rest of TVEyes’ video features are not fair use. That’s a hugely disappointing outcome
that could lead to a decrease in news analysis and commentary.
Fair use is determined by a look at four factors: the purpose and character of the use (i.e., how “transformative” it is), the nature of the copyrighted work, the amount and
substantiality used, and the effect the use has on the market.
The Second Circuit decision does acknowledge that TVEyes’ functions are transformative “insofar as it enables users to isolate, from an ocean of programming, material that is
responsive to their interests and needs, and to access that material with targeted precision.” Where the court goes wrong is in saying that because that material is delivered
“unaltered from its original form with no new expression, meaning, or message,” this factor weighs only slightly in favor of TVEyes. A researcher or a journalist watching ten minutes
relevant to a specific search is doing something very different from an average television viewer. The new and different purpose being served by TVEyes means this factor should have
favored the service more than just slightly.
The court found that the second factor, not really a big player in this analysis, was neutral. TVEyes argued that it was providing access to facts, which are not copyrightable, so
this factor weighed in their favor. The court replied that just because works are factual doesn’t mean they can be copied and shared wholesale.
The court found that the third factor favored Fox because the ten-minute clips are long relative to the “brevity of the average news segment on a particular topic.” The result, in the
court’s eyes, being that users would see all of the segment on the topic they were searching for, destroying the need to go watch Fox News. The court envisions a future where
media criticism is limited to organizations with the budget and stamina to assign someone to watch Fox News 24 hours a day.
The biggest failure is in the court’s analysis of the fourth factor. The court says that TVEyes successfully charging its subscribers $500 a month shows that it has created a
profitable business that is somehow displacing a channel’s prospective revenue, especially since it allows people to watch content without the owner’s permission. That ignores a
fundamental characteristic of fair use.
If use of someone’s words was contingent on the permission of the person who said them, you would never be able to critique what was being said. Fair use allows the use of copyrighted
material without permission for this very reason. It’s not in the interest of anyone to license out clips of their material for the purpose of it being debunked, which is why
the service provided by TVEyes is so valuable.
Moreover, the markets for a cable news subscription and the market for a service like TVEyes are not the same. And restricting that service to the hands of the copyright holder will
keep important criticism and commentary from being done. Now more than ever we need rulings that reaffirm the importance of news analysis rather than ones that devalue it, as the
Second Circuit did here.
We're disappointed that the court took such a limited view of the importance of this kind of use. It is also incorrect and dangerous to treat licensing to a service like TVEyes as a plausible market: that's circular
reasoning that threatens many traditional fair uses, where one could theoretically get a license but should not have to, because stopping the use isn't a legitimate application
of copyright law.
>> read more
House Vote on FOSTA is a Win for Censorship
(Wed, 28 Feb 2018)
The bill passed today by the U.S. House of Representatives in a 388-25 vote marks an unprecedented push toward Internet censorship, and it does nothing to fight sex traffickers.
H.R. 1865, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), allows for private lawsuits and criminal prosecutions against Internet platforms and websites,
based on the actions of their users. Facing huge new liabilities, platforms will undoubtedly start policing more user speech.
The Internet we know today is possible only because of Section 230 of the Communications Decency Act, which prevents online platforms
from being held liable for their users’ speech, except in certain circumstances. FOSTA would punch a major hole in Section 230, enabling lawsuits and prosecutions against online
platforms—including ones that aren’t even aware that sex trafficking is taking place.
If websites can be sued or prosecuted because of user actions, it creates extreme incentives. Some online services might react by prescreening or filtering user posts. Others might
get sued out of existence. New companies, fearing FOSTA liabilities, may not start up in the first place.
The tragedy is that FOSTA isn’t needed to prosecute or sue sex traffickers. As we’ve said before, Section 230 simply isn’t broken. Right now, there is nothing preventing federal prosecution of an Internet company
that knowingly aids in sex trafficking. That includes anyone hosting advertisements for sex trafficking, which is explicitly a federal crime under 18 U.S.C. § 1591, as amended by the
2015 SAVE Act. The website that produced the most discussion around this issue, Backpage.com, is reportedly under federal investigation.
The array of online services protected by Section 230, and thus hurt by FOSTA, is vast. It includes review sites, online marketplaces, discussion boards, ISPs, even news publications
with comment sections. Even small websites host thousands or millions of users engaged in around-the-clock discussion and commerce. By attempting to add an additional tool to hold
liable the tiny minority of those platforms whose users do awful things, FOSTA does real harm to the overwhelming majority, who will inevitably be subject to censorship.
Websites run by nonprofits or community groups, which have limited resources to police user content, would face the most risk. Perversely, some of the discussions most likely to be
censored could be those by and about victims of sex trafficking. Overzealous moderators, or automated filters, won’t distinguish nuanced conversations and are likely to pursue the
safest, censorial route.
We hope the Senate will reject FOSTA and uphold Section 230, a law that has protected a free and open Internet for more than two decades. Call your senator now and let them know that
online censorship isn’t the solution to fighting sex trafficking.
>> read more
Tell Congress to Protect the Open Internet
(Tue, 27 Feb 2018)
Today, EFF is participating in a national Day of Action to push Congress to preserve the net neutrality rules the FCC repealed in December. With a simple majority, Congress can use
the Congressional Review Act (CRA) to overturn the FCC’s new rule. We’re asking for members of the House and Senate to commit to doing so publicly.
On Thursday, February 22, the FCC’s so-called “Restoring Internet Freedom Order” was published in the Federal Register. Under the CRA, Congress has 60
working days to vote to overturn that Order. We’re asking representatives to publicly commit to doing just that. In the House of Representatives, that means supporting Representative
Mike Doyle’s bill, which has 150
co-sponsors. In the Senate, Senator Ed Markey’s bill is
just one vote away from passing.
Net neutrality means that Internet service providers (ISPs) should treat all data that travels over their networks fairly, without improperly discriminating in favor of particular
apps, sites, or services. For many years, net neutrality principles, in various forms, have forbidden unfair practices like blocking or throttling particular services and sites, as well
as paid prioritization, where an ISP charges content providers for better, faster, or more consistent access to the ISP's customers, or prioritizes its own content over a
competitor’s. Thanks to the hard work of millions of Internet users, these protections were enshrined in the FCC’s 2015 Open Internet Order. The new Order eviscerated those
protections; Congress can use the CRA to bring them back.
Because net neutrality is so popular, politicians often say they support it – but lip service is not enough. A vote to restore the net neutrality protections in the 2015 Open
Internet Order is a clear, concrete thing that you can ask your representatives to do to support real net neutrality.
For that reason, we’re launching Check Your Reps, a website that allows you to see whether or not your representatives are voting yes on
bringing back the 2015 Open Internet Order, email them voicing your support for net neutrality, and share what you’ve learned.
The clock is ticking: make sure you tell your representatives to act.
Take Action: Tell Your Representatives to Bring Back Net Neutrality Protections
>> read more
Can India's Biometric Identity Program Aadhaar Be Fixed?
(Tue, 27 Feb 2018)
The Supreme Court of India has commenced final hearings in the
long-standing challenge to India's massive biometric identity apparatus, Aadhaar. Following last August’s ruling in the Puttaswamy case rejecting the Attorney
General's contention that privacy was not a fundamental right, a five-judge bench is now weighing in on the privacy concerns raised by the unsanctioned use of Aadhaar.
The stakes in the Aadhaar case are huge, given the central government’s ambitions to export the underlying technology to other
countries. Russia, Morocco, Algeria, Tunisia, Malaysia, the Philippines, and Thailand have expressed interest in implementing biometric identification systems inspired by Aadhaar. The Sri
Lankan government has already made plans to
introduce a biometric digital identity for citizens to access services, despite stiff opposition to
the proposal, and similar plans are under consideration in Pakistan, Nepal and Singapore. The outcome of this hearing will impact the acceptance and adoption
of biometric identity across the world.
At home in India, the need for biometric identity is staked on claims that it will improve government savings through efficient, targeted delivery of welfare. But in the years since
its implementation, there is little evidence to back the government's savings claims. A widely-quoted World Bank estimate of $11 billion in annual savings (or potential savings) due to Aadhaar has been challenged by independent researchers.
The architects of Aadhaar also
invoke inclusion to justify the need for creating a centralized identity scheme. Yet, contrary to government claims, there is growing evidence of denial of services for lack of an Aadhaar card, authentication failures that have led to death, starvation, denial of medical services and hospitalization, and denial of public utilities
such as pensions, rations, and cooking gas. During last week's hearings, Aadhaar's governing institution, the Unique Identification Authority of India
(UIDAI), was forced to
clarify that access to entitlements would be maintained until an adequate mechanism for authentication of identity was in place, issuing a statement that "no essential
service or benefit should be denied to a genuine beneficiary for the want of Aadhaar."
Centralized Decision-Making Compromises Aadhaar's Security
The UIDAI was established in 2009 by executive action as the sole decision-making
authority for allocating resources and contracting institutional arrangements for the Aadhaar scheme. With no external or parliamentary oversight over its decision-making, UIDAI
engaged in an opaque process of private contracting
with foreign biometric service providers to provide technical support for the scheme. The government later passed the Aadhaar Act in 2016 to legitimize UIDAI's powers, but used a special maneuver, certifying it as a money bill, that enabled it to bypass the upper house of Parliament, where the government lacked a majority,
and prevented its examination by the Parliamentary Standing Committee. The manner in which Aadhaar Act was passed further weakens the democratic legitimacy
of the Aadhaar scheme as a whole.
The lack of accountability emanating from UIDAI's centralized decision-making is evident in the project's rushed proof-of-concept trial. Security researchers have noted that the trial sampled data from just 20,000 people and
nothing in the UIDAI's report confirms that each electronic identity in the Central Identities Data Repository (CIDR) is unique or that de-duplication could ever be achieved. As mounting evidence
confirms, the decision to create the CIDR was based on an assumption that biometrics cannot
be faked, and that even if they were, any fakes would be caught during deduplication.
It emerged during the Aadhaar hearings that UIDAI has neither access to, nor control of the source code of the software used for Aadhaar CIDR. This means that to date there has
been no independent audit of the software that could identify data-mining backdoors or security flaws. The Indian public has also become concerned about the practices of the foreign
companies embedded in the Aadhaar system. One of three contractors to UIDAI who were provided full access to classified biometric data stored in the Aadhaar database and permitted to
“collect, use, transfer, store and process the data" was US-based L-1 Identity Solutions. The company has since been acquired by a French company, Safran Technologies, which
has been accused of hiding the
provenance of code bought from a Russian firm to boost software performance of US law enforcement computers. The company is also facing a whistleblower lawsuit alleging it
fraudulently took more than $1 billion from US law enforcement agencies.
Compromised Enrollment Scheme
The UIDAI also outsourced the responsibility for enrolling Indians in the Aadhaar system. State government bodies and large private organizations were selected to act as registrars,
who, in turn, appointed enrollment agencies, including private contractors, to set up and operate mobile, temporary, or permanent enrollment centers. UIDAI created an incentive-based model for successful enrollment, whereby registrars would earn Rs 40-50 (about 75 US cents) for every
successful enrollment. Since compensation was tied to successful enrollment, the scheme created the incentive for operators to maximize their earning potential.
By delegating the collection of citizens' biometrics to private contractors, UIDAI created the scope for the enrollment procedure to be compromised. Hacks to work around the
software and hardware soon emerged, and have been employed in scams using cloned fingerprints to create fake
enrollments. Corruption, bribery, and the creation of Aadhaar numbers with unverified, absent or false documents have also marred the rollout of the scheme. In 2016, on being detained
and questioned, a Pakistani spy produced an Aadhaar card bearing his alias and fake address as
proof of identity. The Aadhaar card had been obtained through the enrollment procedure by providing fake identification information.
An India Today investigation has revealed that the misuse
of Aadhaar data is widespread, with agents willing to part with demographic records collected from Aadhaar applicants for Rs 2-5 (less than a cent). Another report from 2015 suggests that the enrollment client allows operators to use their fingerprints
and Aadhaar number to access, update and print demographic details of people without their consent or biometric authentication.
More recently, an investigation by The Tribune
exposed that complete access to the UIDAI database was available for Rs 500 (about $8). The reporter paid to gain access to the data including name, address, postal code, photo, phone
number and email collected by UIDAI. For an additional Rs 300, the service provided access to software which allowed the printing of the Aadhaar card after entering the Aadhaar number
of any individual. A young Bangalore-based engineer has been accused of developing an Android app "Aadhaar e-KYC", downloaded over
50,000 times since its launch in January 2017. The software claimed to be able to access Aadhaar information without authorization.
In light of the unreliability of information in the Aadhaar database and systemic failure of the enrollment process, the biometric data collected before the enactment of the Aadhaar
Act is an important issue before the Supreme Court. The petitioners have sought the destruction of all biometrics and personal information captured between 2009 and 2016 on the grounds
that it was collected without informed consent and may have been compromised.
The original plan for authentication of a person holding an Aadhaar number under Section 2(c) of the Aadhaar Act, 2016 was for the system to return a "Yes" if the person's biometric and demographic data matched those captured during the enrollment process, and a "No" if they did not. But somewhere along the way, this policy changed, and in 2016 the UIDAI introduced a new mode of authentication, whereby submitting biometric information against an Aadhaar number would also return the holder's demographic information.
This has led to a range of public and private institutions using Aadhaar-based authentication for the provision of services. However, authentication failures due to incorrectly captured fingerprints,
or a change in biometric details because of old age or wear and tear, are increasingly common. The capacity for electronic authentication is also limited in India, and therefore printed copies of the Aadhaar number and demographic details are accepted as identification.
There are two main issues with this. First, as Aadhaar copies are just pieces of paper that can be easily faked, the use and acceptance of physical copies creates an avenue for fraud. UIDAI could limit the use of physical copies; however, doing so would deprive beneficiaries whenever authentication fails. Second, Aadhaar numbers are supposed to be secret: using physical copies encourages that number to be revealed and used publicly. For the UIDAI, whose aim is speedy enrollment and provision of services despite authentication failure,
there is no incentive to stop the use of printed Aadhaar numbers.
Data security has also been weakened because institutions using Aadhaar for authentication have not met the standards for processing and storing data. Last year, UIDAI had to get more than 200 Central and State government departments, including educational institutes, to remove lists of Aadhaar beneficiaries, along with their names, addresses, and Aadhaar numbers, that had been uploaded and made available on their public websites.
Can Aadhaar be secured? Not without significant institutional reforms, no. Aadhaar does not have an independent threat-analyzing agency: securing biometric data that has been
collected falls under the purview of UIDAI. The agency does not have a Chief Information Officer (CIO) and has no defined standard operating procedures for data leakages and security
breaches. Demographic information linked to an Aadhaar number, made available to private parties during authentication, is already being collected and stored externally by those
parties; the UIDAI has no legal power or regulatory mechanism to prevent this. The existence of parallel databases means that biometric and demographic information is increasingly
scattered among government departments and private companies, many of which have little conception of, or incentive to ensure, data security.
Second order tasks of oversight and regulatory enforcement serve a critical function in creating accountability. Although UIDAI has issued legally-enforceable rules, there is no
monitoring or enforcement agency, either within UIDAI or outside it, to verify that these rules are being followed. For example, an audit of enrollment centers revealed that UIDAI had no way of knowing whether operators were retaining biometrics, nor for how long.
UIDAI has also neither adopted nor encouraged practices such as reporting software vulnerabilities or testing enrollment hardware. Reporting of security vulnerabilities provides learning opportunities and improves coordination; security researchers can fulfill the critical task of enabling institutions to identify
failures, allowing incremental improvements to the system. But far from encouraging such security research, UIDAI has filed FIRs against researchers and reporters that uncovered flaws in the Aadhaar ecosystem.
As controversies over its ability to keep its data secure have grown, the agency has stuck to its aggressive stance, vehemently denying any suggestion of vulnerabilities in the Aadhaar apparatus. This attitude is perplexing given the number of data breaches and procedural gaps that are being uncovered every day. UIDAI is so confident of its security that it
filed an affidavit before
the Supreme Court in the Aadhaar case claiming that the data cannot be hacked or breached. UIDAI's defiance of its own patchy record hardly provides cause for confidence.
The Way Forward
The current Aadhaar regime is structured to radically centralize the implementation of Indian government and private digital authentication systems. But a credible national
identity system cannot be created by an opaque, unaccountable centralized agency that chooses not to follow democratic procedures when creating its rules. It would have made more
sense to confine UIDAI's role to maintaining the legal structure that secures the individual right over their data, enforces contracts, ensures liability for data breaches, and
performs dispute resolution. In that way, the jurisdictional authority of UIDAI would be limited to tasks where competition cannot be an organizing principle.
The present scheme has created a market of institutions that use Aadhaar for identity authentication in the provision of services, with varying degrees of transparency and privacy.
The central control of the scheme is too rigid in some ways, as the bureaucratic structure of Aadhaar does not facilitate adaptation to security threats, or allow vendors or private
companies to improve data protection practices. Yet in other ways, it is not strong enough, given the security lapses that it has enabled by giving multiple parties free access
to the Aadhaar database.
By making Aadhaar mandatory, UIDAI has taken away the right of individuals to exit these unsatisfactory arrangements. The coercive measures taken by the State to encourage the
adoption of Aadhaar have introduced new risks to individuals' data and national security. Even the efficiency argument has fallen flat, as it is negated by the unreliability of
Aadhaar authentication. The tragedy of Aadhaar is that it not only fails to generate efficiency and justice, but also introduces significant economic and social costs.
All in all, it's hard to see how this mess can be fixed without scrapping the system and—perhaps—starting again from scratch. As drastic as that sounds, the current Supreme Court
challenge may, ironically, provide a golden opportunity to revamp the fatally flawed existing institutional arrangements behind Aadhaar, and provide the Indian government with a
fresh opportunity to learn from the mistakes that brought it to this point.
>> read more
A Technical Deep Dive: Securing the Automation of ACME DNS Challenge Validation
(Tue, 27 Feb 2018)
Earlier this month, Let's Encrypt (the free, automated, open Certificate Authority EFF helped launch two years ago) passed a huge
milestone: issuing over 50 million active certificates. And that
number is just going to keep growing, because in a few weeks Let's Encrypt will also start issuing “wildcard” certificates—a feature many system administrators have been asking for.
What's A Wildcard Certificate?
In order to validate an HTTPS certificate, a user’s browser checks to make sure that the domain name of the website is actually listed in the certificate. For example, a certificate
for www.eff.org has to actually list www.eff.org as a valid domain for that certificate. Certificates can also list multiple domains (e.g., www.eff.org, ssd.eff.org, sec.eff.org,
etc.) if the owner just wants to use one certificate for all of her domains. A wildcard certificate is just a certificate that says “I'm valid for all of the subdomains in this
domain” instead of explicitly listing them all off. (In the certificate, this is denoted by a wildcard character, an asterisk. So if you examine the certificate
for eff.org today, it will say it's valid for *.eff.org.) That way, a system administrator can get a certificate for their entire domain, and use it on new subdomains they hadn't even
thought of when they got the certificate.
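As a rough illustration of that matching rule, a wildcard covers exactly one DNS label. The sketch below is a simplified model of the idea, not the full hostname-validation logic (RFC 6125) that browsers actually implement:

```python
# Simplified sketch: a "*" in a certificate name matches exactly ONE
# label, so "*.eff.org" covers "www.eff.org" but not "a.b.eff.org"
# or the bare "eff.org".
def wildcard_matches(hostname: str, pattern: str) -> bool:
    host_labels = hostname.lower().split(".")
    pattern_labels = pattern.lower().split(".")
    # A wildcard label cannot absorb extra labels, so counts must match.
    if len(host_labels) != len(pattern_labels):
        return False
    return all(p == "*" or p == h
               for h, p in zip(host_labels, pattern_labels))

print(wildcard_matches("www.eff.org", "*.eff.org"))    # True
print(wildcard_matches("a.b.eff.org", "*.eff.org"))    # False
print(wildcard_matches("eff.org", "*.eff.org"))        # False
```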
In order to issue wildcard certificates, Let's Encrypt is going to require users to prove their control over a domain by using a challenge based on DNS, the domain name system that translates domain names like www.eff.org into IP addresses (for example, 203.0.113.7). From the
perspective of a Certificate Authority (CA) like Let's Encrypt, there's no better way to prove that you control a domain than by modifying its DNS records, as controlling the domain
is the very essence of DNS.
But one of the key ideas behind Let's Encrypt is that getting a certificate should be an automatic process. In order to be automatic, though, the software that requests the
certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for
the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means
that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies. In the rest of this post, we'll take a deep dive into
the components involved in that process, and what the options are for making it more secure.
How Does the DNS Challenge Work?
At a high level, the DNS challenge works like all the other automatic challenges that are part of the ACME protocol—the protocol that a Certificate Authority (CA) like Let's Encrypt
and client software like Certbot use to communicate about what certificate a server is requesting, and how the server should prove ownership of the corresponding domain name. In the
DNS challenge, the user requests a certificate from a CA by using ACME client software like Certbot that supports the DNS challenge type. When the client requests a certificate, the
CA asks the client to prove ownership over the domain by adding a specific TXT record to its DNS zone. More specifically, the CA sends a unique random token to the ACME client, and
whoever has control over the domain is expected to put this TXT record into its DNS zone, in the predefined record named "_acme-challenge" under the actual domain the user is trying
to prove ownership of. As an example, if you were trying to validate the domain for *.eff.org, the validation subdomain would be "_acme-challenge.eff.org." When the token value
is added to the DNS zone, the client tells the CA to proceed with validating the challenge, after which the CA will do a DNS query towards the authoritative servers for the domain. If
the authoritative DNS servers reply with a DNS record that contains the correct challenge token, ownership over the domain is proven and the certificate issuance process can continue.
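For the curious, the token value published in that TXT record is not the raw token: under the ACME protocol (RFC 8555), it is the base64url-encoded SHA-256 digest of the "key authorization," i.e. the challenge token joined with a "." to the account key's JWK thumbprint. A minimal sketch, with made-up placeholder values for the token and thumbprint:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Compute the dns-01 TXT record value per RFC 8555:
    base64url(SHA-256(token + "." + account key thumbprint)), unpadded."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs for illustration only; a real ACME client receives
# the token from the CA and derives the thumbprint from its account key.
txt = dns01_txt_value("example-challenge-token",
                      "example-account-thumbprint")
print(txt)  # the value to publish at _acme-challenge.<domain>
```

The 32-byte digest always encodes to a 43-character base64url string, which is why DNS providers sometimes validate that exact length.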
DNS Controls Digital Identity
What makes a DNS zone compromise so dangerous is that DNS is what users’ browsers rely on to know what IP address they should contact when trying to reach your domain. This applies to
every service that uses a resolvable name under your domain, from email to web services. When DNS is compromised, a malicious attacker can easily intercept all the connections
directed toward your email or other protected service, terminate the TLS encryption (since they can now prove ownership over the domain and get their own valid certificates for it),
read the plaintext data, and then re-encrypt the data and pass the connection along to your server. For most people, this would be very hard to detect.
Separate and Limited Privileges
Strictly speaking, in order for the ACME client to handle updates in an automated fashion, the client only needs to have access to credentials that can update the TXT records for
"_acme-challenge" subdomains. Unfortunately, most DNS software and DNS service providers do not offer granular access controls that allow for limiting these privileges, or simply do
not provide an API to handle automating this outside of the basic DNS zone updates or transfers. This leaves the possible automation methods either unusable or insecure.
A simple trick can help maneuver past these kinds of limitations: using the CNAME record. CNAME records essentially act
as links to another DNS record. Let's Encrypt follows the chain of CNAME records and will resolve the challenge validation token from the last record in the chain.
Ways to Mitigate the Issue
Even using CNAME records, the underlying issue exists that the ACME client will still need access to credentials that allow it to modify some DNS record. There are different ways to
mitigate this underlying issue, with varying levels of complexity and security implications in case of a compromise. In the following sections, this post will introduce some of these
methods while trying to explain the possible impact if the credentials get compromised. With one exception, all of them make use of CNAME records.
Only Allow Updates to TXT Records
The first method is to create a set of credentials with privileges that only allow updating of TXT records. In the case of a compromise, this method limits the fallout to the attacker
being able to issue certificates for all domains within the DNS zone (since they could use the DNS credentials to get their own certificates), as well as interrupting mail delivery.
The impact to mail delivery stems from mail-specific TXT records, namely SPF, DKIM, its extension ADSP and DMARC. A compromise of these would also make
it easy to deliver phishing emails impersonating a sender from the compromised domain in question.
Use a "Throwaway" Validation Domain
The second method is to manually create CNAME records for the "_acme-challenge" subdomain and point them towards a validation domain that would reside in a zone controlled by a
different set of credentials. For example, if you want to get a certificate to cover yourdomain.tld and www.yourdomain.tld, you'd have to create two CNAME
records—"_acme-challenge.yourdomain.tld" and "_acme-challenge.www.yourdomain.tld"—and point both of them to an external domain for the validation.
The domain used for the challenge validation should be in an external DNS zone or in a subdelegate DNS zone that has its own set of management credentials. (A subdelegate DNS zone is
defined using NS records and it effectively delegates the complete control over a part of the zone to an external authority.)
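Concretely, the records described above might look like this in the originating zone file (all names here are hypothetical):

```
; In the zone for yourdomain.tld (hypothetical names throughout):
; point each validation label at a record in a separately-credentialed zone.
_acme-challenge.yourdomain.tld.     IN CNAME validation.example-acme-zone.tld.
_acme-challenge.www.yourdomain.tld. IN CNAME validation.example-acme-zone.tld.

; Or delegate a whole subzone to name servers with their own credentials:
acme.yourdomain.tld.                IN NS    ns1.example-acme-zone.tld.
```

Because Let's Encrypt follows the CNAME chain, the challenge TXT token only ever needs to be written into example-acme-zone.tld; the credentials stored on the automated host never touch yourdomain.tld's own zone.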
The impact of compromise for this method is rather limited. Since the actual stored credentials are for an external DNS zone, an attacker who gets the credentials would only gain the
ability to issue certificates for all the domains pointing to records in that zone.
However, figuring out which domains actually do point there is trivial: the attacker would just have to read Certificate
Transparency logs and check if domains in those certificates have a magic subdomain pointing to the compromised DNS zone.
Limited DNS Zone Access
If your DNS software or provider allows for creating permissions tied to a subdomain, this could help you to mitigate the whole issue. Unfortunately, at the time of publication the
only provider we have found that allows this is Microsoft Azure DNS. Dyn supposedly also has
granular privileges, but we were not able to find a lower level of privileges in their service besides “Update records,” which still leaves the zone completely vulnerable.
Route53 and possibly others allow their users to create a subdelegate zone, new user credentials, point NS records towards the new zone, and point the "_acme-challenge" validation
subdomains to them using the CNAME records. It’s a lot of work to do the privilege separation correctly using this method, as one would need to go through all of these steps for each
domain they would like to use DNS challenges for.
As a disclaimer, the software discussed below is written by the author, and it's used as an example of the functionality needed to handle DNS challenge credentials in a secure fashion. The final method is a piece of software called ACME-DNS, written to combat this exact issue, and it's able to mitigate the issue completely. One
downside is that it adds one more thing to your infrastructure to maintain as well as the requirement to have DNS port (53) open to the public internet. ACME-DNS acts as a simple DNS
server with a limited HTTP API. The API itself only allows updating of TXT records of automatically generated random subdomains. There are no methods to request lost credentials,
update or add other records. It provides two endpoints:
/register – This endpoint generates a new subdomain for you to use, accompanied by a username and password. As an optional parameter, the register endpoint takes a list of CIDR
ranges to whitelist updates from.
/update – This endpoint is used to update the actual challenge token to the server.
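As a sketch, the /update call can be prepared with nothing but the Python standard library. The header and field names follow the ACME-DNS HTTP API; the server URL, credentials, and subdomain below are hypothetical, and the request is built but not sent:

```python
import json
import urllib.request

ACME_DNS_SERVER = "https://auth.example.org"  # hypothetical ACME-DNS instance

def build_update_request(username: str, password: str,
                         subdomain: str, txt_value: str) -> urllib.request.Request:
    """Build (but do not send) the /update call that publishes a new
    challenge token to an ACME-DNS server."""
    body = json.dumps({"subdomain": subdomain, "txt": txt_value}).encode()
    return urllib.request.Request(
        f"{ACME_DNS_SERVER}/update",
        data=body,
        headers={
            "X-Api-User": username,   # credential returned by /register
            "X-Api-Key": password,    # credential returned by /register
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical values; /register would supply the real ones.
req = build_update_request("user-uuid", "generated-password",
                           "d420c923-bbd7-4056-ab64-c3ca54c9b3cf",
                           "A" * 43)  # dns-01 TXT values are 43 characters
print(req.full_url, req.get_method())
```

Sending it with `urllib.request.urlopen(req)` would then update the TXT record; since those credentials can only rewrite that one random subdomain's TXT record, a compromise of the host leaks nothing that can alter the rest of the zone.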
In order to use ACME-DNS, you first have to create A/AAAA records for it, and then point NS records towards it to create a delegation node. After that, you simply create a new set of
credentials via the /register endpoint, and point the CNAME record from the "_acme-challenge" validation subdomain of the originating zone towards the newly generated subdomain.
The only credentials saved locally would be the ones for ACME-DNS, and they are only good for updating the exact TXT records for the validation subdomains for the domains on the box.
This effectively limits the impact of a possible compromise to the attacker being able to issue certificates for these domains. For more information, see the ACME-DNS project documentation.
To alleviate the issues with ACME DNS challenge validation, proposals like assisted-DNS have been discussed in the IETF's ACME working group, but currently remain unresolved. Since the only way to limit exposure from a compromise is to limit the DNS zone credential privileges to changing only specific TXT records, the current possibilities for
securely implementing automation for DNS validation are slim. The only sustainable option would be to get DNS software and service providers to either implement methods to create more
fine-grained zone credentials or provide a completely new type of credentials for this exact use case.
>> read more
How Grassroots Activists in Georgia Are Leading the Opposition Against a Dangerous “Computer Crime” Bill
(Mon, 26 Feb 2018)
A misguided bill in Georgia (S.B. 315) threatens to criminalize independent computer security research
and punish ordinary technology users who violate fine-print terms of service clauses. S.B. 315 is currently making its way through the state’s legislature amid uproar and resistance
that its sponsors might not have fully anticipated. At the center of this opposition is a group of concerned citizen-advocates who, through their volunteer advocacy, have drawn
national attention to the industry-wide implications of this bill.
Scott M. Jones and David Merrill from Electronic Frontiers Georgia—a group that participates in the Electronic Frontier Alliance network —spoke to us about their efforts to inform legislators and the public of the harms this bill would cause.
You have most recently been organizing around Georgia Senate Bill 315. What is the bill about, and what are your concerns with it?
Scott: Senate Bill 315 is a computer intrusion bill. Georgia already has on the books some very strong laws against computer intrusion, computer fraud, and the malicious side
of hacking. I think this is pretty well covered in state law as it is.
There was an incident last year at Kennesaw State University. Some of the functions for conducting elections in the state of Georgia were farmed out to KSU and their Election Center,
and there was a data breach there. That was very big in the news. What they didn’t say in the news at the time was that [it was] a security researcher who found a vulnerability and
reported it ethically. As it turns out, the researcher in question was not even targeting KSU election systems, but merely found inappropriate personal information via a Google
search, and then tried to get authorities to act quickly to remove it. This person, as we found out later, was investigated by the FBI and they came up clean. [The FBI] didn’t have
anything to charge them with, so they left.
The state feels very embarrassed by this, and the attorney general’s office has asked for a bill that goes above and beyond the existing statutes that we have against computer crime.
That’s where Senate Bill 315 came from. To use the language that the attorney general’s office used, they want to build it to criminalize so-called “poking around.” Basically, if
you’re looking for vulnerabilities in a non-destructive way, even if you’re ethically reporting them—especially if you’re ethically reporting them—suddenly you’re a criminal if this
bill passes into law.
David: I’ve worked in Atlanta cyber security for about 13 years and it’s a very tight-knit community. People from one company will go to another company, or a lot of the
founders from one company will end up founding another company. A lot of them started from incubators and think tanks at our university system here—a lot of them at Georgia Institute
of Technology. So if you have a chilling effect on one founder or one person who is interested in this kind of topic it can really stifle an entire industry and the whole chain of
people creating all these other organizations.
Other than security researchers, who else needs to be concerned about this bill?
Scott: The other issue with Senate Bill 315 is it’s so broadly written that it could bring in terms of service [enforcement]. Terms of service come from a private company—for
instance, your cable and Internet provider have terms of service. The bill is so broadly written that a violation of terms of service could possibly be construed as a criminal
violation, and that would be improper delegation of powers.
David: S.B. 315 uses the term, “unauthorized access,” which is a very murky term. If you’re trying to go through all the proper channels in advance and get authorization for
something, it’s not always clear who the person who has the authority to give that authorization is. If it’s a website and you’re testing some part of a website’s security you might
think it’s the website administrator, but often it’s not. Often it’s their IT dev ops team or the tech ops team or something else. You may even get permission from one person and
think you’re in the clear, and the next thing you know they say that’s not the correct authorization. With the broadness of the way this bill is written, there are way too many
circumstances where somebody could be in violation of the law just performing their daily duties.
What is your game plan right now for fighting this bill?
Scott: It was voted on by the Senate, so now it goes on to the House and it will be heard in committee. The game plan right now would be to line up support to have a good
showing at the House committee meeting. What we need in addition to ordinary people who do technology every day is some C-level people—CEOs, CIOs, CFOs, CTOs, CISOs, etc.
Electronic Frontiers Georgia participates in the Electronic Frontier Alliance. From that perspective, are there any notable differences between legislative-based organizing
and, say, generally raising awareness of digital rights locally?
Scott: As far as legislative versus non-legislative organizing: Electronic Frontiers Georgia is also very interested in raising general awareness and teaching basic concepts,
but I’m finding that it’s really hard to do both. We’re in legislative mode while the legislature is in session, which is roughly January 1st through about April 1st. After the
legislative season is over we pivot back to educational and social mode. It’s good to do both, but it can be very difficult to do both at the same time. Groups that are actively doing
activism at the state level shouldn’t beat themselves up if they’re not able to keep the same educational schedule up during the busy legislative season.
Electronic Frontiers Georgia has started working with other community groups in the area on the S.B. 315 fight. What advice would you give to grassroots groups who want to
work more collaboratively with each other but have never done so before?
Scott: What I’m finding is that there are a lot of groups in the area but a lot of them are siloed, which is to say that they essentially keep to themselves and don’t mix
with the other groups very frequently. They’re focused on their main core interest, and they just probably haven’t considered some of the issues like S.B. 315. It’s a challenge to
bring disparate groups together, but I’m trying to talk to them. For example, I’m giving a talk on S.B. 315 to DC404, which is the local DEFCON group—an information security group.
We’re also trying to invite in other groups that are not necessarily technology-focused that I think would be interested in this particular fight if they just understood it better.
One of the real struggles with S.B. 315 is trying to convince people who don’t work in technology that this is something they should care about. With news of data breaches every day,
how do you explain to somebody that this is actually going to make security worse rather than make it better? That requires a lot of explaining. Some of these groups are looking for
speakers and content, and that’s an opportunity for us to step in and fill that, and maybe explain our position to a better degree.
For more on Georgia S.B. 315, read here. If you’re advocating for digital rights
within your community, please explore the Electronic Frontier Alliance.
This interview has been lightly edited for length and readability. Additional information about the KSU breach was added after the original interview.
The Problems With FISA, Secrecy, and Automatically Classified Information
(Mon, 26 Feb 2018)
We need to talk about national security secrecy. Right now, there are two memos on everyone’s mind, each with its own version of reality. But the memos are just one piece. How the
memos came to be—and why they continue to roil the waters in Congress—is more important.
On January 19, staff for Representative Devin Nunes (R-CA) wrote a classified memo alleging that the
FBI and DOJ committed surveillance abuses in their applications for and renewal of a
surveillance order against former Trump administration advisor Carter Page. Allegedly, the FBI and DOJ’s surveillance application included biased, politically funded information.
The House Permanent Select Committee on Intelligence, on which Rep. Nunes serves as chairman, later voted to release the memo. What the memo meant, however, depended on who was
talking. Some Republican House members took the memo as fact, claiming it showed “abuse” and efforts
to “undermine our country.” But Rep. Adam Schiff
(D-CA)—who serves as Ranking Member on the House Permanent Select Committee on Intelligence, across from Nunes—called
the memo “profoundly misleading” and, in an opinion for The Washington
Post, said it “cherry-picks facts.”
Even the FBI entered the debate, slamming the memo
and saying the agency had “grave concerns about material omissions of fact that fundamentally impact the memo’s accuracy.” And Assistant Attorney General Stephen Boyd of the DOJ said
releasing the memo without review would be “extraordinarily reckless.” Finally, the president said the
memo “totally vindicates” him from special counsel Robert Mueller’s investigation.
So a lawmaker made serious charges about surveillance abuses and corruption at the highest levels, and the rest of Congress and the public were ensnared in a guessing game: Could they
trust Devin Nunes and what he says? Is the memo he wrote, and the allegations in it, just smoke or is there fire? Unfortunately, the information needed to evaluate his claims is
hidden within multiple, nested layers of secrecy.
The secrecy starts with surveillance applications and secret court opinions, which are protected by classification that requires proper security clearance. Only a handful of lawmakers
can read the materials, but even they can’t openly discuss them in public. They could write a report, but the FBI and Justice Department would ask to redact the report. After
redactions, the report would be subject to a committee vote for release. If the report is cleared by committee, it ordinarily requires the president’s approval.
At any point in the process, this information could have been mislabeled, misidentified, embellished, or obscured, and we’d have almost no way of knowing.
It’s time to talk about FISA again, and the problems with its multi-layered secrecy regime.
We’re going to talk about a surveillance law that, when passed, installed secrecy both in a court system and in Congress, barring the public and their representatives from accessing
important information. When that information is partially revealed, it’s near impossible for the public to trust it.
The Foreign Intelligence Surveillance Act and Its Regime of Secrecy
Passed in 1978, the Foreign Intelligence Surveillance Act (FISA) dictates how the government conducts physical and electronic surveillance for national security purposes against
“foreign powers” and “agents of foreign powers.” FISA allows surveillance against “U.S. persons,” Americans and others in the U.S., so long as the agency doing the surveillance
demonstrates probable cause that the U.S. person is engaged in terrorism, espionage, or other activities on behalf of a foreign power.
Typically when law enforcement conducts a search, the Fourth Amendment requires that they get a search warrant approved by a neutral magistrate, a judge assigned to hear warrant
applications. Under FISA, surveillance orders go through a slightly different review. The statute created an entirely separate court venue filled with 11 judges designated to review
FISA surveillance orders. These judges make up the Foreign Intelligence Surveillance Court (FISC).
Similar to how courts review standard search warrants, FISC judges review FISA surveillance applications out of public view. Judges typically hear arguments from the government and no
one else, court hearings are not public, and the FISA orders themselves are kept secret.
(Notably, this warrant-like review does not happen under Section 702 of FISA, which the NSA uses to collect billions of communications without a warrant, including Americans’
communications. Under Section 702, which you can read about here, FISC judges do not review individual targets of surveillance and instead
sign off on programmatic surveillance policies.)
In the FISC, secrecy in each step is heightened. The court’s opinions and any transcript or record of the proceedings are automatically classified. Even the court’s physical location
is constructed to be “the nation’s most secure courtroom,” with reinforced concrete and
hand scanners to keep unauthorized people out.
This secrecy is hard to unravel after the fact. When recently asked by Rep. Nunes for more information about the renewed FISA surveillance warrant on Carter Page, Rosemary Collyer,
the presiding judge of the FISC, wrote:
“As you know, any such transcripts would be classified. It may also be helpful for me to observe that, in a typical process of considering an application, we make no
systematic record of questions we ask or responses the government gives.”
Although surveillance conducted for run-of-the-mill law enforcement is often shadowy, the FISA process is far more shielded from public view. For example, standard search warrants are
used to gather evidence for later prosecutions that are by default public. That means at some point the government has to face—and knows it has to face—a defense attorney’s efforts to
question the evidence gathered from the search warrant. This is known as a “motion to suppress,” and with typical search warrants, these motions are filed in a public court. When that
court hears a motion to suppress, it usually issues an order discussing why the surveillance violated—or didn’t violate—the law. This is how our legal system is intended to function.
Lawyers and the public actually learn what the law is through this process, because in our system it is the duty of courts to “say what the law is.” For that reason, secret law is a
perversion of our system.
Moreover, the public disclosure of law enforcement search warrants serves important ends outside of any particular legal challenge. For one, they let the public know what police are
doing, both in their name and with their tax dollars. Second, they allow for greater accountability when police overstep their authority or otherwise misbehave.
FISC proceedings routinely fail this test.
FISA orders are for foreign intelligence purposes, so the surveillance is rarely used in a prosecution and rarely challenged in a motion to suppress. Moreover, even if the
fruits of FISA surveillance are used in court, criminal defendants and other litigants are deprived of access to this information, so they have little way of knowing if evidence
brought against them may have come from an improper FISA order. (FISA provides a mechanism for defendants to request this information, but no defendant has succeeded in doing so in
FISA’s 40-year history.) This impedes a defendant’s ability to challenge their prosecution, and it prevents related, public knowledge of these challenges.
But the secrecy in FISA extends much further than FISC, adding further opaque layers between what intelligence agencies and the court do and what the public sees.
Lacking Congressional Oversight
In practice, congressional oversight of the FISA process and the underlying materials is severely constrained. Although they have security clearances by virtue of their office, many
lawmakers are kept far away from classified documents because they do not have cleared staff to assist in processing the information, and their requests are given
lower priority than those of members of the intelligence oversight committees.
Even members of those House and Senate intelligence committees do not always have access to everything. In the case of the Nunes memo, only the “Gang of Eight” congressional leaders
and a handful of others out of the 435 members of the House of Representatives and the 100 members of the Senate reportedly had access to the underlying FISA surveillance applications
and unredacted FISC opinions.
This problem has restricted Congress members before. In 2003, when then-Senate intelligence committee vice chairman Jay Rockefeller learned of the NSA’s unconstitutional spying programs
under President George W. Bush, he had little capability to fight back. He wrote to then-Vice President Dick Cheney:
“As you know, I am neither a technician nor an attorney. Given the security restrictions associated with this information, and my inability to consult staff or counsel on my own, I
feel unable to fully evaluate, much less endorse these activities.”
Rockefeller—who knew of the programs—could not speak of them. For everyone else, reading FISA and FISC materials is close to impossible. Even after Congress passed the USA FREEDOM Act
in 2015 requiring that significant FISC opinions be released to the public, these opinions are still highly redacted and tightly guarded, and no FISA application material has ever
been revealed to the public.
It’s for these reasons that EFF has long called for Congress to reform how it
oversees surveillance activities conducted by the Executive Branch, including by providing all members of Congress with the tools they need to meaningfully understand and challenge
activities that are so often veiled in extreme secrecy.
Why This Matters
FISA’s inherent secrecy causes a chain reaction. Because the FISC’s surveillance orders are kept secret, it is hard to know if they are ever improper. Because criminal defendants are
kept in the dark about what evidence was used to obtain a FISA order, they cannot meaningfully challenge if the order was wrongly issued.
In Congress, because lawmakers are widely excluded from knowing the FISC’s procedures, efforts to fix the process are scarce. And, as we’ve seen with the Nunes memo, because so few
lawmakers can access FISA materials, if one lawmaker uses that access to make extraordinary claims, trying to prove or refute those claims is mostly futile.
Plainly, outsiders do not know who is telling the truth. Because the public cannot read the underlying FISA materials that the memo is based on, they can’t accurately separate fact
from fiction. They cannot see the FISC’s written approval for the order. They cannot see the order itself. And they cannot see the materials that went into the surveillance application.
According to reports, the majority of Congress is in the exact same position. They have not been able to see the FISC’s written approval for the order; they cannot see the order
itself. And they cannot see the materials that went into the surveillance application.
Rep. Adam Schiff, a member of the Gang of Eight, has tried to refute the Nunes memo, relying on the classified FISA order and surveillance application to write a sort of counter-memo.
But Schiff’s counter-memo was originally blocked by the Trump administration, with a lawyer for the president explaining that it “contains numerous properly classified and especially sensitive passages.”
What is sensitive about those passages, we don’t know. Why they are classified, we don’t know. What they could clear up, we don’t know. And we can’t assess the White House’s claim
that this counter-memo is too sensitive to be released, even though it approved release of the Nunes memo.
On February 24, the House Intelligence Committee ignored the White House’s wishes and released Rep. Schiff’s counter memo. The memo offered several claimed rebuttals to many of the
allegations in the original Nunes memo, but it included far more redactions, leaving the public to, yet again, guess at the full picture.
And that’s the problem with FISA. Because of near airtight classification for everything that occurs in the FISC—and a corresponding congressional inaccessibility to that classified
information—it is exceedingly difficult to know when we are being told the truth. A single member of the Gang of Eight could, at any time, present information to the public as truth,
with few opportunities for others to rebut or verify those claims.
These truths should not be held at the mercy of classification, and they should not be a matter of security clearances, committee votes, and personal accusations. These problems are
exacerbated by Congress’ systemic failures to assert its constitutional oversight role. FISA prevents the public from knowing much of what its own government does in national security
investigations, and it prevents much of Congress from being able to stop single bad actors from misrepresenting classified material.
EFF will continue to fight for governmental transparency. It is one of the strongest vehicles we have to ensure that our government is protecting our rights, and that our government’s
members are telling the truth.
San Francisco: Building Community Broadband to Protect Net Neutrality and Online Privacy
(Sat, 24 Feb 2018)
Like many cities around the country, San Francisco is considering an investment in community broadband infrastructure: high-speed fiber that would make Internet access cheaper and
better for city residents. Community broadband can help alleviate a number of
issues with Internet access that we see all over America today. Many Americans have no choice of provider for high-speed Internet, Congress
eliminated user privacy protections in 2017, and the FCC
decided to roll back net neutrality protections in December.
This week, San Francisco published the recommendations of a group of experts, including EFF’s Kit Walsh, on how to protect the privacy and speech of those using community broadband.
The Blue Ribbon Panel on Municipal Fiber released its third report, which tackles competition, security, privacy, net neutrality, and more. It recommends that
San Francisco’s community broadband require net neutrality and privacy protections. Any ISP looking to use the city’s infrastructure would have to adhere to certain standards. The
model of community broadband that EFF favors is sometimes called “dark fiber” or “open access.” In this model, the government invests in fiber infrastructure, then opens it up for
private companies to compete as your ISP. This means the big incumbent ISPs can no longer block new competitors from offering you Internet service. San Francisco is pursuing the “open
access” option, and is quite far along in its process.
The “open access” model is preferable to one in which the government itself acts as the ISP, because of the civil liberties risks posed by a government acting as your conduit to the Internet.
Of course, private ISPs can also abuse your privacy and restrict your opportunities to speak and learn online.
To prevent such harms, the expert panel explained how the city could best operate its network so that competition, as well as legal requirements, would prevent ISPs from violating net
neutrality or the privacy of residents.
That would include, as was found in the 2015 Open Internet Order recently repealed by the FCC, a ban on blocking of sites, content, or applications; a ban on throttling sites,
content, or applications; and a ban on paid prioritization, where ISPs favor themselves or companies who have paid them by giving their content better treatment.
The report also recommends requiring a number of consumer protections that Congress prevented from ever being enacted. If an ISP wants to sell or show a customer’s personal
information to anyone, it would have to get the customer’s permission first. Even the use of data that doesn’t identify someone would require permission. Both of these would have to be “opt-in,” meaning
it would be assumed that there was no consent to use the data. (“Opt-out” would mean that using customer data is assumed to be fine unless the customer figured out how to tell the ISP to stop.)
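The practical difference between the two consent regimes comes down to the default when a customer has said nothing. A minimal sketch (the function names, customer names, and data structure here are hypothetical, purely for illustration):

```python
# Hypothetical sketch of opt-in vs. opt-out consent defaults for
# using customer data. The only difference is what silence means.

def may_use_data_opt_in(consents: dict, customer_id: str) -> bool:
    """Opt-in: absent an explicit 'yes', assume NO consent."""
    return consents.get(customer_id, False)

def may_use_data_opt_out(consents: dict, customer_id: str) -> bool:
    """Opt-out: absent an explicit 'no', assume consent."""
    return consents.get(customer_id, True)

consents = {"alice": True, "bob": False}  # "carol" never responded

# Under opt-in, Carol's silence means her data may not be used.
print(may_use_data_opt_in(consents, "carol"))   # False
# Under opt-out, the very same silence is treated as permission.
print(may_use_data_opt_out(consents, "carol"))  # True
```

The panel’s opt-in recommendation puts the burden on the ISP to obtain a “yes,” rather than on the customer to hunt down the way to say “no.”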
Furthermore, the goal is to build infrastructure that connects every home and business to a fiber optic network, guaranteeing everyone in the city access to fast, reliable Internet.
And while the actual lines will be owned by the city, it will be an “open-access” model—that is, space on the city-owned lines will be leased to private companies, creating
competition and choice.
The report also recommends that San Francisco require ISPs to protect privacy when faced with legal challenges or demands from government agencies. It recommends San Francisco require
ISPs using its network do a number of things (e.g., give up the right to look at customer communications, give up the right to consent to searches of communications, and swear to—if
not prohibited by law—tell customers when they’re being asked to hand over information) to help protect the civil liberties and privacy of users.
With all of these things combined, San Francisco’s community broadband looks to be doing as much as possible to provide choices while also ensuring that all their options lead to safe
and secure connection to a free and open Internet. That’s something we can all work towards in our communities.
The Federal Circuit Should Not Allow Patents on Inventions that Should Belong to the Public
(Fri, 23 Feb 2018)
One of the most fundamental aspects of patent law is that patents should only be awarded for new inventions. That is, not only does someone have to invent something new to them in
order to receive a patent, it must also be new to the world. If someone independently comes up with an idea, that person shouldn’t get a patent if someone else already
came up with the same idea and told the public.
There’s good reason for this: patents are an artificial restraint on trade. They work to increase costs (the patent owner is rewarded with higher prices) and can impede follow-on innovation. Policy makers generally try to justify what would otherwise be
considered a monopoly through the argument that without patents, inventors may never have invested in research or might not want to make their inventions public. Thus, the story goes,
we should give people limited monopolies in the hopes that overall, we end up with more innovation (whether this is actually true, particularly for software, is debatable).
A U.S. Court of Appeals for the Federal Circuit rule, however, upends the patent bargain and allows a second-comer—someone who wasn’t the first inventor—to get a patent under a
particular, albeit fairly limited, circumstance. A new petition challenges this rule, and EFF has
filed an amicus brief in support of undoing the Federal Circuit’s misguided rule.
The rule is based on highly technical details of the Patent Act, which you can read about in our brief along with those of Ariosa (the patent challenger) and a group of law professors. Our brief argues that the Federal Circuit rule is an incorrect understanding
of the law. We ask the Federal Circuit to rehear the issue with the full court, and reverse its current rule.
While the Federal Circuit rule is fairly limited and doesn’t arise in many situations, we have significant concerns about the policy it seems to espouse. Contrary to decades of
Supreme Court precedent, the rule allows, under certain circumstances, someone to get a patent on something that had already been disclosed to the public. We believe that is always bad policy.
FOSTA Would Be a Disaster for Online Communities
(Fri, 23 Feb 2018)
Frankenstein Bill Combines the Worst of SESTA and FOSTA. Tell Your Representative to Reject New Version of H.R. 1865.
The House of Representatives is about to vote on a bill that would force online platforms to censor their users. The Allow States and Victims to Fight Online Sex Trafficking Act
(FOSTA, H.R. 1865) might sound noble, but it would do nothing to stop sex traffickers. What it
would do is force online platforms to police their users’ speech more forcefully than ever before, silencing legitimate voices in the process.
Back in December, we said that while FOSTA was a very dangerous bill, its impact on online spaces would not be as broad as the
Senate bill, the Stop Enabling Sex Traffickers Act (SESTA, S.
1693). That’s about to change.
The House Rules Committee is about to approve a new version of FOSTA [.pdf] that incorporates most of the dangerous components of
SESTA. This new Frankenstein’s Monster of a bill would be a disaster for Internet intermediaries, marginalized communities, and even trafficking victims themselves.
If you don’t want Congress to undermine the online communities we all rely on, please take a moment to call your representative and
urge them to oppose FOSTA.
Take ActionStop FOSTA
Gutting Section 230 Is Not a Solution
The problem with FOSTA and SESTA isn’t a single provision or two; it’s the whole approach.
FOSTA would undermine Section 230, the law protecting online platforms from some types of liability for their users’ speech. As we’ve
explained before, the modern Internet is only possible thanks to a
strong Section 230. Without Section 230, most of the online platforms we use would never have been formed—the risk of liability for their users’ actions would have simply been too high.
Section 230 strikes an important balance for when online platforms can be held liable for their users’ speech. Contrary to FOSTA supporters’ claims, Section 230 does nothing to
protect platforms that break federal criminal law. In particular, if an Internet company knowingly engages in the advertising of sex trafficking, the U.S. Department of Justice can and should prosecute it.
Additionally, Internet companies are not immune from civil liability for user-generated content if plaintiffs can show that a company had a direct hand in creating the illegal content.
The new version of FOSTA would destroy that careful balance, opening platforms to increased criminal and civil liability at both the federal and state levels. This includes a new
federal sex trafficking crime targeted at web platforms (in addition to 18 U.S.C. § 1591)—but which would not require a
platform to have knowledge that people are using it for sex trafficking purposes. This also includes exceptions to Section 230 for state law criminal prosecutions against
online platforms, as well as civil claims under federal law and civil enforcement of federal law by state attorneys general.
Perhaps most disturbingly, the new version of FOSTA would make the changes to Section 230 apply retroactively: a platform could be prosecuted for failing to comply with the law before
it was even passed.
FOSTA Would Chill Innovation
Together, these measures would chill innovation and competition among Internet
companies. Large companies like Google and Facebook may have the budgets to survive the massive increase in litigation and liability that FOSTA would bring. They may also have the
budgets to implement a mix of automated filters and human censors to comply with the law. Small startups don’t. And with the increased risk of litigation, it would be difficult for
new startups ever to find the funding they need to compete with Google.
Today’s large Internet companies would not have grown to prominence without the protections of Section 230. FOSTA would raise the ladder that has allowed those companies to grow,
making it very difficult for newcomers ever to compete with them.
FOSTA Would Censor Victims
Congress should think long and hard before dismantling the very tools that have proven most effective in fighting trafficking.
More dangerous still is the impact that FOSTA would have on online speech. Facing the threat of extreme criminal and civil penalties, web platforms large and small would have little
choice but to silence legitimate voices. Supporters of SESTA and FOSTA pretend that it’s easy to distinguish online postings related to sex trafficking from ones that aren’t. It’s
not—and it’s impossible at the scale needed to police a site as large as Facebook or Reddit. The problem is compounded by FOSTA’s expansion of federal prostitution law. Platforms
would have to take extreme measures to remove a wide range of postings, especially those related to sex.
Some supporters of these bills have argued that platforms can rely on automated filters in order to distinguish sex trafficking ads from legitimate content. That argument is
laughable. It’s difficult for a human to distinguish between a legitimate post and one that supports sex trafficking; a computer certainly could not do it with anything approaching 100% accuracy. Instead, platforms
would have to calibrate their filters to over-censor. When web platforms rely too heavily on automated filters, it often puts marginalized voices at a disadvantage.
Most tragically of all, the first people censored would likely be sex trafficking victims themselves. The very same words and phrases that a filter would use to attempt to delete sex
trafficking content would also be used by victims of trafficking trying to get help or share their experiences.
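This failure mode is easy to demonstrate with a deliberately naive sketch of a keyword filter (the blocked terms and sample posts below are hypothetical, not drawn from any real platform’s rules):

```python
# Illustrative-only sketch of why naive keyword filtering over-censors:
# the same terms appear both in the content a platform wants to remove
# and in a victim's attempt to ask for help.

BLOCKED_TERMS = {"trafficking", "escort"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed by the filter."""
    words = set(post.lower().split())
    return bool(words & BLOCKED_TERMS)

ad = "escort services available tonight"
plea = "i escaped trafficking and need help finding a shelter"

print(naive_filter(ad))    # True -- the intended target
print(naive_filter(plea))  # True -- a cry for help, removed too
```

A platform facing criminal liability has every incentive to tune such a filter aggressively, which is exactly how victims’ own speech ends up swept away alongside the ads.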
There are many, many stories of traffickers being caught by law enforcement thanks
to clues that police officers and others found on online platforms. Congress should think long and hard before dismantling the very tools that have proven most effective in fighting trafficking.
FOSTA Is the Wrong Approach
There is no amendment to FOSTA that would make it effective at fighting online trafficking while respecting the civil liberties of everyone online. That’s because the problem with
FOSTA and SESTA isn’t a single provision or two; it’s the whole approach.
Creating more legal tools to go after online platforms would not punish sex traffickers. It would punish all of us, wrecking the safe online communities that we use every day.
And in the process, it would also undermine the tools that have proven most effective at putting
traffickers in prison. FOSTA is not the right solution, and no trimming around the edges will make it the right solution.
If you care about protecting the safety of our online communities—if you care about protecting everyone’s right to speak online, even about sensitive topics—we urge you to call your representative today and tell them to reject FOSTA.
Take ActionStop FOSTA
The FCC’s Net Neutrality Order Was Just Published, Now the Fight Really Begins
(Thu, 22 Feb 2018)
Today, the FCC’s so-called “Restoring Internet Freedom Order,” which repealed the net neutrality protections the FCC had previously created with the 2015 Open Internet Order, has been officially published. That means the clock has started ticking on all the
ways we can fight back.
While the rule is published today, it doesn’t take effect quite yet. ISPs can’t start blocking, throttling, or engaging in paid prioritization for a little while. So while we still have the
protections of the 2015 Open Internet Order and we finally have a published version of the “Restoring Internet Freedom Order,” it’s time to act.
First, under the Congressional Review Act (CRA), Congress can reverse a change in regulation with a simple majority vote. That would bring the 2015 Open Internet Order back into
effect. Congress has 60 working days—starting from when the rule is published in the official record—to do this. So those 60 days start now.
The Senate bill has 50 supporters, only one away from the majority it
needs to pass. The House of Representatives is a bit further away. By our count, 114 representatives have made public commitments in support of voting for a CRA action. Now that time
is ticking down for the vote, tell Congress to save the existing net neutrality rules.
Second, it is now unambiguous that the lawsuits of 22 states, public interest groups, Mozilla, and the
Internet Association can begin. While the FCC decision said lawsuits had to wait ten days until after the official publication, there was some question about whether federal law
said something else. So while some suits have already been filed, with the 10-day counter from the FCC starting, it’s clear that lawsuits can begin.
And, of course, states and other local governments continue to move forward on their own measures to protect net neutrality. 26 state legislatures are considering net neutrality legislation and five governors have issued executive
orders on net neutrality. EFF has some ideas on how state law
can stand up to the FCC order. Community broadband can also ensure that net
neutrality principles are enacted on a local level. For example, San Francisco is currently looking for proposals
to build an open-access network that would require net neutrality guarantees from any ISP looking to offer services over the city-owned infrastructure.
So while the FCC’s vote in December was in direct contradiction to the wishes of the majority of Americans, the publishing of that order means
that action can really start to be taken.