Deeplinks

Telling the Truth About Defects in Technology Should Never, Ever, Ever Be Illegal. EVER. (Wed, 15 Aug 2018)
Congress has never made a law saying, "Corporations should get to decide who gets to publish truthful information about defects in their products" — and the First Amendment wouldn't allow such a law — but that hasn't stopped corporations from conjuring one out of thin air, and then defending it as though it were a natural right they'd had all along.

Some background: in 1986, Ronald Reagan, spooked by the Matthew Broderick movie WarGames (true story!), worked with Congress to pass a sweeping cybercrime bill called the Computer Fraud and Abuse Act (CFAA) that was exceedingly sloppily drafted. The CFAA makes it a felony to "exceed[] authorized access" on someone else's computer in many instances. Fast forward to 1998, when Bill Clinton and his Congress enacted the Digital Millennium Copyright Act (DMCA), a giant, gnarly hairball of digital copyright law that included Section 1201, which bans bypassing any "technological measure" that "effectively controls access" to copyrighted works, or "traffick[ing]" in devices or services that bypass digital locks.

Notice that neither of these laws bans disclosure of defects, including security disclosures! But decades later, corporate lawyers and federal prosecutors have constructed a body of legal precedents that twist these overbroad laws into a rule that effectively gives corporations the power to decide who gets to tell the truth about flaws and bugs in their products.

Businesses and prosecutors have brought civil and criminal actions against researchers and whistleblowers who violated a company's terms of service in the process of discovering a defect. The argument goes like this: "Our terms of service ban probing our system for security defects. When you log in to our server for that purpose, you 'exceed your authorization,' and that violates the Computer Fraud and Abuse Act."

Likewise, businesses and prosecutors have used Section 1201 of the DMCA to attack researchers who exposed defects in software and hardware. Here's how that argument goes: "We designed our products with a lock that you have to get around to discover the defects in our software. Since our software is copyrighted, that lock is an 'access control for a copyrighted work' and that means that your research is prohibited, and any publication you make explaining how to replicate your findings is illegal speech, because helping other people get around our locks is 'trafficking.'"

The First Amendment would certainly not allow Congress to enact a law that banned making true, technical disclosures, even (especially!) if those disclosures revealed security defects that the public needed to be aware of before deciding whether to trust a product or service. But the presence of these laws has convinced the tech industry — and corporations that have added 'smart' tech to their otherwise 'dumb' products — that it's only natural that they should be the sole arbiters of who gets to embarrass or inconvenience them. The worst of these actors use threats of invoking the CFAA and DMCA 1201 to silence researchers altogether, so the first time you discover that you've been trusting a defective product is when it is so widely exploited by criminals and grifters that it's impossible to keep the problem from becoming widely known. Even the best, most responsible corporate actors get this wrong.
Tech companies like Mozilla, Dropbox and, most recently, Tesla, have crafted "coordinated disclosure" policies in which they make sincere and legally enforceable promises to take security disclosures seriously and act on them within a defined period, and they even promise not to use laws like DMCA 1201 to retaliate against security researchers who follow their guidelines. This is a great start, but it's a late and limited solution to a much bigger problem.

The point is that almost every company is a "tech company" — from medical implant vendors to voting machine companies — and not all of them are as upstanding and public-spirited as Mozilla. Many of these companies do have "coordinated disclosure" policies by which they hope to tempt security researchers into coming to them first when they discover problems with their products and services. But these companies don't make these policies out of the goodness of their hearts: those policies exist because they're the companies' best hope of keeping security researchers from embarrassing them and leaving them scrambling by just publishing the bug without warning.

If corporations can simply silence researchers who don't play ball, we should expect them to do so. There is no shortage of CEOs who are lulling themselves to sleep tonight with fantasies about getting to shut their critics up. EFF is currently suing the US government to invalidate DMCA 1201 and the ACLU is trying to chip away at CFAA, and there will come a day when we succeed, because the idea of suppressing bug reports (even ones made in disrespectful or rude ways) is totally incompatible with the First Amendment.

Rather than crafting a disclosure policy that says "We'll stay away from these unjust and absurd interpretations of these badly written laws, provided you only tell the truth in ways we approve of," companies that want to lead by example could do so by putting something like this in their disclosure policies:

"We believe that conveying truthful warnings about defects in systems is always legal. Of course, we have a strong preference for you to use our disclosure system [LINK] where we promise to investigate your bugs and fix them in a timely manner. But we don't believe we have the right to force you to use our system. Accordingly, we promise to NEVER invoke any statutory right — for example, rights we are granted under trade secret law, anti-hacking law, or anti-circumvention law — against ANYONE who makes a truthful disclosure about a defect in one of our products or services, regardless of the manner of that disclosure. We really do think that the best way to keep our customers safe and our products bug-free is to enter into a cooperative relationship with security researchers and that's why our disclosure system exists and we really hope you'll use it, but we don't think we should have the right to force you to use it."

Companies should not rely on these laws to silence security researchers who displease them with the time and manner of their truthful disclosures — if their threats ever materialize into full-blown lawsuits, there's a reasonable chance that they'll find themselves facing down public-spirited litigators (ahem) who will use those suits as a fast-track to overturning these laws in the courts.
But while we wait for the slow wheels of justice to turn, the specter of legal retaliation haunts the best and most public-spirited security researchers (the researchers who work for cyber-criminals and state surveillance contractors don't have to worry about these laws, because they never make their findings public). That is bad for all of us, because for every Tesla, Dropbox and Mozilla, there are a thousand puny tyrants who are using these good-citizen companies' backhanded insistence that disclosure should be subject to their corporate approval to intimidate their own critics into silence. Those intimidated researchers? They've discovered true facts about why we shouldn't trust systems with our data, our finances, our personal communications, the security of our homes and businesses, and even our lives.

EFF has sued the US government to overturn DMCA 1201 and we just asked the US Copyright Office to reassure security researchers that DMCA 1201 does not prevent them from telling the truth. We're discussing all this in a Reddit AMA next Tuesday, August 21, from 12-3PM Pacific (3-6PM Eastern). We hope you'll come and join us.

Related Cases: Green v. U.S. Department of Justice

Help Send EFF to SXSW 2019 (Wed, 15 Aug 2018)
Want to see the Electronic Frontier Foundation at the annual SXSW conference and festival in 2019? Help us get there by voting for our panels in the SXSW Panel Picker! Every year, the Internet has a chance to choose what panels will be featured at the event. We're asking friends and fans to take a moment to vote for us.

Here's how you can help EFF:
1. Visit the Panel Picker site and log in or register for a new account.
2. Click each of the links below.
3. Click the "Vote up" button on the left of the page, next to the panel description.
4. Share this blog post! Suggested tweet: Help @EFF get to SXSW! You can vote in SXSW's Panel Picker: https://www.eff.org/deeplinks/2018/08/help-send-eff-sxsw-2019

Here are the panels with EFF staff members—please upvote!
8-Bit Policies in a 4K World: Adapting Law to Tech
Fighting Misinformation and Defending the Open Web
Beyond the Surveillance Business Model: Why & How
Untold AI: Is Sci-Fi Telling Us the Right Stories?

With four exciting panel proposals on subjects from combating misinformation on the web to a discussion of whether or not science-fiction is doing a good job at talking about AI, you can help us keep SXSW as an incubator of cutting-edge technologies and digital creativity, and also as a place where experts discuss what those technologies mean for digital rights. Here is more info on the panels we're hoping to join:

8-Bit Policies in a 4K World: Adapting Law to Tech
The speed at which technology is developing is unprecedented in our history, yet politicians are as jammed up and at loggerheads as ever. The Senate hearing with Mark Zuckerberg revealed how little our political leaders actually understand what's going on, but we're still bound by the decisions they make regarding the technology we use on a daily basis. SOPA, PIPA, and the FCC's vote against Net Neutrality are specific instances of politicians being at odds with public opinion, where technology enthusiasts feel the constant struggle to stem the tide of harmful legislation, and many may be left wondering: where is this going?
Speakers:
Alex Shahrestani, Board Member, EFF-Austin, Digital Arts Coalition
Shahid Buttar, Director of Grassroots Advocacy, Electronic Frontier Foundation
Jan Gerlach, Public Policy Manager, Wikimedia Foundation
Join us as we discuss how to engage with our representatives and help them craft flexible policies that address the ever-changing tech landscape.

Fighting Misinformation and Defending the Open Web
The spread of misinformation is becoming an increasing problem in countries around the world. In particular during election times, social media platforms have been used strategically to influence public opinion – from the Philippines to Kenya, from Germany to the USA. Lack of net neutrality and the dominance of platforms like Facebook with its zero rating services are contributing to this becoming an increasing problem for democracy. Internet activists from Africa, Europe and the USA will give insights into different government attempts to introduce new legislation combating the spread of misinformation as well as civil society strategies to defend freedom of speech and promote access to pluralistic information sources.
Speakers:
Geraldine de Bastion, Founder / International Executive Director, Global Innovation Gathering
Nanjira Sambuli, Consultant, Web Foundation
Markus Beckedahl, Founder, Netzpolitik
Jillian York, Director for International Freedom of Expression, EFF

Beyond the Surveillance Business Model: Why & How
It's time to talk about the future – how technology developers and companies can successfully move beyond the surveillance business model. Trump, Cambridge Analytica and the growing scope of cybersecurity crises have been a wake-up call to the public, tech employees, and investors about the high price of the collect-it-all business model and the grave impact it can have on society. New comprehensive European and California privacy laws have changed the landscape and the risk for surveillance business models. Get the inside track from Silicon Valley journalist and author Brad Stone, DuckDuckGo Founder and CEO Gabriel Weinberg, EFF's Executive Director Cindy Cohn, and the ACLU's Nicole Ozer on why and how to build a successful business model beyond surveillance.
Speakers:
Nicole Ozer, Technology & Civil Liberties Director, ACLU of California
Gabriel Weinberg, Founder and CEO, DuckDuckGo
Brad Stone, Senior Executive Editor, Bloomberg Technology
Cindy Cohn, Executive Director, Electronic Frontier Foundation

Untold AI: Is Sci-Fi Telling Us the Right Stories?
How do depictions of Artificial Intelligence in popular science fiction affect how we think about real AI and its future? How has fiction about AI influenced the development of AI technology and policy in the real world? (And do we really have to talk about Terminator's Skynet or 2001's HAL 9000 every damned time we talk about the risks of AI?) Join bestselling sci-fi authors Cory Doctorow and Malka Older, scifiinterfaces.com editor Chris Noessel, along with futurism and AI policy experts as they examine what TV, movies, games, and sci-fi literature are telling us about AI, compare those lessons to real-world AI tech & policy, and identify the stories that we should be telling ourselves about AI, but aren't.
Speakers:
Christopher Noessel, Designer, IBM
Cory Doctorow, Apollo 1201, Electronic Frontier Foundation
Malka Older, Author, Self-employed
Rashida Richardson, Director of Policy Research, AI Now Institute

Thanks for your help!

How Militaries Should Plan for AI (Tue, 14 Aug 2018)
Today we are publishing a new EFF white paper, The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI. This paper analyzes the risks and implications of military AI projects in the wake of Google's decision to discontinue AI assistance to the US military's drone program and adopt AI ethics principles that preclude many forms of military work. The key audiences for this paper are military planners and defense contractors, who may find the objections to military uses of AI from Google's employees and others in Silicon Valley hard to understand.

Hoping to bridge the gap, we urge our key audiences to consider several guiding questions. What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control? What are the appropriate responses that states and militaries can adopt? What kinds of AI are safe for military use, and what kinds aren't?

We are at a critical juncture. Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation. They also lack the basic forms of common sense and judgment on which humans usually rely. Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too. The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run—or they could quickly push us in the opposite direction. We hope this white paper will help them chart the former course.

Part I identifies how military use of AI could create unexpected dangers and risks, laying out four major dangers:

1. Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks including adversarial examples, model stealing, and data poisoning. Until these attacks are better understood and defended against, militaries should avoid ML applications that are exposed to input (either direct input or anticipatable indirect input) by their adversaries.

2. The current balance of power in cybersecurity significantly favors attackers over defenders. Until that changes, AI applications will necessarily be running on insecure platforms, and this is a grave concern for command, control, and intelligence (C2I), as well as autonomous and partially autonomous weapons.

3. Many of the most dramatic and hyped recent AI accomplishments have come from the field of reinforcement learning (RL), but current state-of-the-art RL systems are particularly unpredictable, hard to control, and unsuited to complex real-world deployment.

4. The greatest risk posed by military applications of AI, increasingly autonomous weapons, and algorithmic C2I is that the interactions between the systems deployed will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate.
This is true both of use by a single military over time and, even more importantly, of interactions between the systems of opposing nations. As a result, there is a serious risk of accidental conflict, or accidental escalation of conflict, if ML or algorithmic automation is used in these kinds of military applications.

Part II offers and elaborates on an agenda for mitigating these risks:

- Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
- Focus on machine learning applications that lie outside of the "kill chain," including logistics, system diagnostics and repair, and defensive cybersecurity.
- Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
- Share predictability and safety research with the wider academic and civilian research community.
- Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective, since the security of hardware and software platforms is a precondition for many military uses of AI. The national security community has a key role to play in changing the balance between cyber offense and defense.
- Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to prevent the risks of accidental conflict, and accidental escalation, that increasing automation of weapons systems and C2I would inherently create.

Finally, Part III provides strategic questions to consider in the future that are intended to help the defense community contribute to building safe and controllable AI systems, rather than making vulnerable systems and processes that we may regret in decades to come.

Read the full white paper as a PDF or on the Web.
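The white paper itself is policy analysis, but the first danger listed above, vulnerability to adversarial examples, is easy to demonstrate in miniature. The sketch below is our own illustration and is not drawn from the paper: a toy linear classifier built with NumPy and attacked with the fast-gradient-sign idea, using entirely hypothetical numbers. Real attacks target deep networks and real sensor data, but the underlying mechanism is the same.

import numpy as np

rng = np.random.default_rng(0)
d = 100

# Toy "model": a fixed logistic-regression classifier over d features.
w = rng.normal(size=d)
b = 0.0

def class1_prob(x):
    """Probability the model assigns to class 1 (say, 'valid target')."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input, adjusted so the model scores it confidently as class 1.
x = rng.normal(size=d)
x = x + (3.0 - b - w @ x) * w / (w @ w)  # force the raw score to 3.0
print("clean input:     p(class 1) =", round(float(class1_prob(x)), 3))

# For a linear model, the gradient of the score with respect to the input is
# just w, so an attacker nudges every feature a small step against it: the
# "fast gradient sign" idea behind many adversarial-example attacks.
epsilon = 0.25  # hypothetical per-feature perturbation budget
x_adv = x - epsilon * np.sign(w)
print("perturbed input: p(class 1) =", round(float(class1_prob(x_adv)), 3))
print("largest per-feature change: ", float(np.max(np.abs(x_adv - x))))

With these made-up numbers, the classifier's confidence collapses from roughly 0.95 to nearly zero even though no single feature moved by more than 0.25. Deeper models are harder to analyze but fail in the same qualitative way, which is why the paper urges keeping ML applications away from inputs their adversaries can reach.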

EFF Tells Bay Area Rapid Transit: Reject Proposed Face Surveillance Scheme (Fri, 10 Aug 2018)
Around the country, communities concerned about privacy and surveillance are seeking to secure a robust role for public community oversight to constrain the co-optation of local police departments by electronic surveillance. EFF supported recent victories for community control in Oakland and Berkeley, CA, before recommending today that the Bay Area Rapid Transit (BART) Board reject recent proposals to expand surveillance on the BART system.

The Board considered two proposals today. One was for a hastily crafted "Safety and Security Action Plan," including a provision for a "Physical Security Information Management system" (PSIM) that "would be capable of monitoring thousands of simultaneous video streams and automating response recommendation." The other was for a face surveillance scheme that seems to lack any awareness of the profound threat it could present to privacy, dissent, communities of color, and immigrants.

Facial recognition is an especially menacing surveillance technology, and BART should reject it. Given the wide proliferation of surveillance cameras and the fact that most people show their faces in public, facial recognition technology can enable the government to track all of our movements and activities as we go about our days in public places. If allowed to proceed, a face surveillance system will deter people from engaging in First Amendment activity in public places monitored by surveillance cameras. It will disparately impact people of color, because they are more likely than white people to suffer "false positive" matches, and because of structural inequities in our criminal justice system regarding who is listed in over-inclusive watchlists and error-riddled warrant databases. And it will menace immigrant communities, because federal immigration agencies may seek the massive set of sensitive data captured by these systems.

Finally, both proposals ignore the Board's prior discussions about creating a process for community oversight, and threaten the principles of community control advancing across California and elsewhere across the country. At a time when the federal government's arbitrary uses of surveillance tools have prompted widespread concerns about the rights of vulnerable minorities, BART should place a priority on heeding communities' concerns, rather than on half-baked proposals for new surveillance schemes.

Read the letter we submitted below. https://www.eff.org/document/august-8-2018-eff-letter-bart-board
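As a back-of-the-envelope illustration of why those "false positive" matches loom so large, consider how a face surveillance system behaves when it scans a huge number of riders against a watchlist. The arithmetic below is our own sketch, not a figure from EFF's letter or from BART, and every number in it is hypothetical.

# Hypothetical numbers only: a sketch of watchlist matching at transit scale.
daily_scans = 400_000            # assumed daily face scans across a transit system
listed_share = 1 / 100_000       # assumed share of riders actually on the watchlist
false_positive_rate = 0.01       # assumed 1% false-match rate per scan
true_positive_rate = 0.95        # assumed sensitivity for people who are listed

listed = daily_scans * listed_share
not_listed = daily_scans - listed

true_alarms = listed * true_positive_rate
false_alarms = not_listed * false_positive_rate
total_alarms = true_alarms + false_alarms

print(f"alarms per day: {total_alarms:,.0f}")
print(f"share of alarms pointing at innocent riders: {false_alarms / total_alarms:.1%}")
# With these assumptions, roughly 4,000 alarms fire every day and about 99.9%
# of them point at innocent riders; if any group suffers a higher false-match
# rate, that group absorbs correspondingly more of the resulting stops.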

Topple Track Attacks EFF and Others With Outrageous DMCA Notices (Fri, 10 Aug 2018)
Update August 10, 2018: Google has confirmed that it has removed Topple Track from its Trusted Copyright Removal Program membership due to a pattern of problematic notices. Symphonic Distribution (which runs Topple Track) contacted EFF to apologize for the improper takedown notices. It blamed "bugs within the system that resulted in many whitelisted domains receiving these notices unintentionally." Symphonic Distribution said that it had issued retraction notices and that it was working to resolve the issue. While we appreciate the apology, we are skeptical that its system is fixable, at least via whitelisting domains. Given the sheer volume of errors, the problem appears to be with Topple Track's search algorithm and lack of quality control, not just with which domains they search.

At EFF, we often write about abuse of the Digital Millennium Copyright Act (DMCA) takedown process. We even have a Hall of Shame collecting some of the worst offenders. EFF is not usually the target of bad takedown notices, however. A company called Topple Track has been sending a slew of abusive takedown notices, including false claims of infringement leveled at news organizations, law professors, musicians, and yes, EFF.

Topple Track is a "content protection" service owned by Symphonic Distribution. The company boasts that it is "one of the leading Google Trusted Copyright Program members." It claims:

Once we identify pirated content we send out automated DMCA takedown requests to Google to remove the URLs from their search results and/or the website operators. Links and files are processed and removed as soon as possible because of Topple Track's relationship with Google and file sharing websites that are most commonly involved in the piracy process.

In practice, Topple Track is a poster child for the failure of automated takedown processes. Topple Track's recent DMCA takedown notices target so much speech it is difficult to do justice to the scope of expression it has sought to delist. A sample of recent improper notices can be found here, here, here, and here. Each notice asks Google to delist a collection of URLs. Among others, these notices improperly target:

EFF's case page about EMI v MP3Tunes
The authorized music store on the official homepage of both Beyonce and Bruno Mars
A fundraising page on the Minneapolis Foundation's website
The Graceland page at Paul Simon's official website
A blog post by Professor Eric Goldman about the EMI v MP3Tunes case
A Citizen Lab report about UC Browser
A New Yorker article about nationalism and patriotic songs

Other targets include an article about the DMCA in the NYU Law Review, an NBC News article about anti-virus scams, a Variety article about the Drake-Pusha T feud, and the lyrics to 'Happier' at Ed Sheeran's official website. It goes on and on. If you search for Topple Track's DMCA notices at Lumen, you'll find many more examples.

The DMCA requires that the sender of a takedown notice affirm, under the penalty of perjury, that the sender has a good faith belief that the targeted sites are using the copyrighted material unlawfully. Topple Track's notices are sent on behalf of a variety of musicians, mostly hip-hop artists and DJs. We can identify no link—let alone a plausible claim of infringement—between the pages mentioned above and the copyrighted works referenced in Topple Track's takedown notices.
The notice directed at an EFF page alleges infringement of "My New Boy" by an artist going by the name "Luc Sky." We couldn't find any information about this work online. Assuming this work exists, it certainly isn't infringed by an out-of-date case page that has been languishing on our website for more than eight years. Nor is it infringed by Eric Goldman's blog post (which has more recent news about the EMI v MP3Tunes litigation).

EMI v. MP3Tunes was a case about a now-defunct online storage service called MP3Tunes. The record label EMI sued the platform for copyright infringement based on the alleged actions of some of its users. But none of this has any bearing on Luc Sky. MP3Tunes has been out of business for years.

It is important to remember that even the most ridiculous takedown notices can have real consequences. Many site owners will never even learn that their URL was targeted. For those that do get notice, very few file counternotices. These users may get copyright strikes and thereby risk broader disruptions to their service. Even if counternotices are filed and processed fairly quickly, material is taken down or delisted in the interim. In Professor Goldman's case, Google also disabled AdSense on the blog post until his counternotice became effective.

We cannot comprehend how Topple Track came to target EFF or Eric Goldman on behalf of Luc Sky. But given the other notices we reviewed, it does not appear to be an isolated error. Topple Track's customers should also be asking questions. Presumably they are paying for this defective service.

While Topple Track is a particularly bad example, we have seen many other cases of copyright robots run amok. We reached out to Google to ask if Topple Track remains part of its trusted copyright program but did not hear back. At a minimum, it should be removed from any trusted programs until it can prove that it has fixed its problems.
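Topple Track has not said how its matching works, so the following is only our own sketch of the general failure mode described above: if a bot flags any URL whose title loosely matches a work's artist or title, a whitelist of protected domains merely hides some of the bad matches; it does nothing to make the matching itself more accurate. Every URL, page title, and name below is hypothetical.

# Hypothetical sketch only; nothing here describes Topple Track's actual system.
WORK = {"artist": "Luc Sky", "title": "My New Boy"}

# Imaginary search results such a bot might scrape.
search_results = [
    ("https://lyrics.example/luc-sky-my-new-boy", "Luc Sky - My New Boy lyrics"),
    ("https://eff.org/cases/emi-v-mp3tunes", "EMI v. MP3Tunes and the new world of music lockers"),
    ("https://magazine.example/patriotic-songs", "A New Yorker article about nationalism and patriotic songs"),
]

WHITELIST = {"eff.org"}  # domains the vendor promises never to target

def loosely_matches(page_title: str) -> bool:
    # A sloppy matcher: flag the page if ANY word from the work appears in its title.
    words = (WORK["artist"] + " " + WORK["title"]).lower().split()
    return any(word in page_title.lower() for word in words)

def takedown_candidates():
    for url, title in search_results:
        if not loosely_matches(title):
            continue
        domain = url.split("/")[2]
        if domain in WHITELIST:
            continue  # the whitelist hides this match; it does not make matching smarter
        yield url

print(list(takedown_candidates()))
# The genuine lyrics page and the unrelated magazine article are both flagged;
# the eff.org page is spared only because its domain happens to be whitelisted,
# which is exactly why whitelisting alone cannot fix this kind of system.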

EFF Amicus Brief: The Privacy Act Requires the FBI to Delete Files of Its Internet Speech Surveillance (Thu, 09 Aug 2018)
U.S. law makes clear that the government cannot keep surveillance records on a person or group because of their political views or the way that they express their First Amendment rights. Unfortunately, the FBI has flouted these laws by maintaining records of its probe of two people whose website criticized U.S. policy in the Middle East. EFF is urging a court to make this right.

EFF filed an amicus brief in support of an ACLU of Northern California lawsuit to enforce privacy protections that Congress put in place in the 1970s against government surveillance. Rigorous enforcement of this law is needed to prevent the FBI from maintaining information it collects on the Internet about our First Amendment activity for many years after that information is no longer relevant to an ongoing investigation.

After the FBI tracked Dr. Martin Luther King, Jr. and other civil rights activists, the Army monitored domestic protests, and President Nixon ordered surveillance of his political opponents, Congress stepped in and passed the Privacy Act of 1974, which established rules about what types of information the government can collect and keep about people. The Act gives individuals the right to access records the government has on them and change or even delete that information. One of the most protective provisions is a prohibition against maintaining records of First Amendment activity, but law enforcement was given an exception for "authorized law enforcement purposes."

In this case, plaintiffs Mr. Raimondo and Mr. Garris ran the website antiwar.com, where they wrote pieces criticizing U.S. policy in the Middle East in the early 2000s. After reposting a widely available FBI document, they caught the notice of the FBI, which began tracking the website and the two men through a practice called "threat assessment." The FBI did not find any wrongdoing or basis to further investigate. Nonetheless, the FBI maintained for many years a record of the postings on this advocacy website and its writers. The First Amendment clearly protects their online journalism and advocacy. Now they are requesting that the FBI expunge their surveillance files.

FBI assessments are the lowest level of investigation under the Attorney General's guidelines for FBI investigations. When agents undertake assessments, they aren't supervised, and they don't have to justify opening an assessment based on specific facts. Rather, they just have to assert an authorized purpose in a criminal, national security, or foreign intelligence investigation. The Attorney General's guidelines for this practice actually encourage agents to search the Internet for public information about targets. This may include people's online blogs, posts in a public Facebook group, and even comments on a news article. Then agents store this online First Amendment activity in FBI files that can last in perpetuity.

The Privacy Act's protection for free expression was written out of fear of on-the-ground surveillance of protestors, but the modern law enforcement practice of searching the Internet for an individual's entire online presence is far more invasive. Collecting and then keeping historic records of speech chills the ability of people and organizations to use the Internet as a platform for the open exchange of ideas. And it is all too likely that FBI agents will target this high-tech investigative power against racial minorities and political dissidents.
This fear is heightened by the FBI's ongoing surveillance of Black Lives Matter and Muslim Americans, and the controversy surrounding the FBI's ill-advised designation of non-existent so-called "Black Identity Extremists" as a dangerous movement. That is why we filed an amicus brief in the Ninth Circuit Court of Appeals. EFF is committed to fighting ever-expanding federal surveillance policies. It is critical that we enforce the protections that have already been written into law, like the federal Privacy Act's ban on maintaining records of First Amendment activity. If you've been improperly surveilled, you should have an easily available mechanism to delete records that never should have been collected and kept in the first place.

Large ISPs, Flush with Capital, Blame Consumer Protections for Their Disregard of Rural America (Thu, 09 Aug 2018)
Companies like AT&T, Comcast, and Verizon are going around to state legislatures and telling them that any laws they pass that protect consumers will harm their ability to deploy networks in rural America. They claim that any legislator eager to protect their constituents from the nefarious things that can be done by companies that control access to the Internet is somehow hurting residents most desperate for an Internet connection. But the ISPs' unwillingness to invest has nothing to do with net neutrality or privacy, because today they are nearly completely deregulated, sitting on a mountain of cash, and have shown no intention of connecting rural Americans to high-speed Internet while their smaller competitors take up the challenge.

The Tax Cuts from Congress Gave Them Billions in New Profits Followed by No New Plans to Roll Out New Networks

Congress cut corporate tax rates last year and substantially increased the profit margins of large ISPs. In total, the top three major ISPs expect to receive an additional $8.8 billion in profits just from the tax cuts alone for 2018 (Verizon - $4 billion, AT&T - $3 billion, Comcast - $1.8 billion), on top of the more than $34 billion they are otherwise expected to collect in profits (matching their 2016 total). The vast majority of that new money has not been invested in expanding or upgrading their networks to fiber to the home (FTTH), which is necessary to have a network able to handle the coming advancements in Internet services; it has gone instead to stock buybacks. That is to say that they are not using their money to improve things for their customers but to increase the share of the profits each shareholder gets, all while leaving rural America to languish.

To put the infrastructure potential of $8.8 billion in context: it is more money than the entire budget Congress spent in 2009 to build broadband networks in its economic recovery package, known as the American Recovery and Reinvestment Act. With a little more than $7 billion, Congress was able to fund 553 projects across the country, including fiber optic rollout in rural America. Even then, AT&T and Verizon stated that including consumer protection conditions in federally funded projects would result in fewer networks being built, but thousands of applicants showed up to wire the toughest to serve markets in America.

So we know that these companies have a lot of money. And we know that money is being used to give money to the company's owners and not to better their services. And we know exactly what that money could have done. But somehow, ISPs want to blame net neutrality and privacy for their choices.

ISPs that Support Consumer Protections Are Deploying Next Generation Fiber in Rural Markets

Not only do we know what these large ISPs aren't doing with their money, we know that nothing about consumer protections prevents them from using that money to reach new customers in rural areas. We have concrete examples of smaller ISPs, with substantially less cash on hand, doing just that. In Maine, a small ISP called Axiom has deployed fiber to local communities that major incumbents ignored. When the island community of Chebeague, Maine, approached other ISPs about building a faster alternative to dial-up Internet access, nothing happened. But when local residents, working together, found private and institutional investors, they were able to make progress in a very difficult market.
To date, Axiom has deployed 30 miles of fiber optic gigabit connections and continues to deploy today. In California, a small ISP (Spiral Internet) that supports the state's net neutrality legislation is aggressively working on deploying a fiber optic network in Nevada County, a rural part of the state. These ISPs are part of the small providers responsible for nearly half of the FTTH deployments happening across the entire country, and they are also part of the dozens of small ISPs that opposed the FCC's decision to completely deregulate the industry. That is because regulation of their business practices has nothing to do with their ability to deploy networks in difficult to serve markets. In fact, those protections were important for promoting competition, which in turn promotes greater investment in networks as ISPs fight for customers.

The Challenge to Connecting Rural America is Infrastructure Barriers, Not Business Practice Regulation

The same barriers that stood in the way of near-universal access to electricity, water, telephone service, and roads decades ago apply to high-speed broadband in rural America: rural areas are sparsely populated (and thus have fewer potential customers) and sit in challenging terrain such as forests, mountains, and large open spaces. At no point has deregulation of consumer protections or the absence of consumer protections resulted in a grand investment into rural markets. Rather, the problem is that the industry wants to treat broadband access as a luxury, accessible only to those who can pay ever-rising costs, as opposed to a necessity of life.

Real, and continued, concrete investments of public dollars are needed to address the areas where no private market could make a business case alone. Markets that are profitable to serve but still lack access or upgrades need more competition. Ultimately, meeting this challenge has always been a joint effort of private and public dollars, and it remains so today. Policymakers can also explore new models to distribute fiber optics, such as wholesale models that allow private market actors to spread the costs of deployment, which has lowered the entry costs into rural markets internationally.

Until our policies look at the problem as an infrastructure challenge as opposed to a question of business practice regulation, rural Americans' lack of high-speed Internet access will persist. The worst possible outcome would be for legislators to fall for the shell game perpetrated by the major ISPs where we eventually connect rural Americans but deny them the legal guarantee of the free and open Internet that urban Americans have enjoyed for decades.

Captive Audience: How Florida's Prisons and DRM Made $11.3M Worth of Prisoners' Music Disappear (Thu, 09 Aug 2018)
The Florida Department of Corrections is one of the many state prison systems that rely on private contractors to supply electronic messaging and access to electronic music files and books for prisoners. For seven years, Florida's prisoners have bought music through Access Corrections, a company that took in $11.3 million selling songs at $1.70 each—nearly twice what the typical song costs on the marketplaces available to people who aren't incarcerated. This is hardly exceptional: prisons also charge extremely high rates for phone calls. The FCC briefly capped this at $1/minute (much higher than normal calling rates), only to have the Trump FCC abandon the policy rather than fight a court challenge.

Florida prisoners used Access Corrections' $100 MP3 players to listen to their music purchases and access their other digital files. But the Florida Department of Corrections has terminated its contract with Access Corrections in favor of the notorious industry leader Jpay, a company that once claimed ownership of inmates' correspondence with their families, that had inmates who violated the company's lengthy terms of service punished with solitary confinement, and that became notorious for selling digital postage stamps to prisoners who want to message their loved ones (prisoners need to spend one "postage stamp" per "page" of electronic text, and the price of postage stamps goes up around Mother's Day). (Jpay is a division of Securus, a company notorious for selling and even giving away access to US and Canadian cellphone location data, without a warrant, and without notice to the tracked individuals.)

Neither Jpay nor Access Corrections has offered prisoners any way to move their music purchases from the old devices to the new ones. Prison rules ban prisoners from owning more than one device at a time, so even if prisoners wanted to keep their Access Corrections devices without the ability to buy new music, they'd do so at the cost of not being able to use a Jpay device, which would severely curtail their access to correspondence with family members, in addition to cutting off access to reading material, educational materials, and other electronic resources.

There is no technical reason why the files can't be transferred: the decision to prevent prisoners from keeping the music they bought at a steep markup is a purely commercial one. It may just be a coincidence that Jpay stands to earn fresh millions from prisoners re-purchasing their music, and that the prison system stands to earn millions more in commissions, but whatever the reason, the whole thing is manifestly unfair, and imposes millions in costs on the struggling—and innocent—families of incarcerated people.

The Florida Department of Corrections is already earning record sums from Jpay, taking a cut every time a prisoner's family pays to transfer money into the prisoner's Jpay account. The music-repurchasing bonanza that will follow the Jpay switchover represents an especially lucrative windfall for the department: under the terms of its Jpay deal, excess cash generated by the program goes straight to the department's budget (under the old Access Corrections deal, the excess went to the Florida general treasury). With the incentives thus aligned, the Florida Department of Corrections and Jpay are poised to convert their captive population of prisoners into cash cows, to be milked for every penny their families can spare.
The Jacksonville Times-Union’s Ben Conarck has a detailed look at the deal, including excerpts from the hundreds of prisoner grievances that have been raised since the deal was announced. Conarck’s reporting paints a dystopian picture of how proprietary technologies and official corruption can combine to create an inescapable system of control.

How to Improve the California Consumer Privacy Act of 2018 (Thu, 09 Aug 2018)
On June 28, California enacted the Consumer Privacy Act (A.B. 375), a well-intentioned but flawed new law that seeks to protect the data privacy of technology users and others by imposing new rules on companies that gather, use, and share personal data. There's a lot to like about the Act, but there is substantial room for improvement. Most significantly: The Act allows businesses to charge a higher price to users who exercise their privacy rights. The Act does not provide users the power to bring violators to court, with the exception of a narrow set of businesses if there are data breaches. For data collection, the Act does not require user consent. For data sale, while the Act does require user consent, adults have only opt-out rights, and not more-protective opt-in rights. The Act’s right-to-know should be more granular, extending not just to general categories of sources and recipients of personal data, but also to the specific sources and recipients. Also, the right-to-know should be tailored to avoid news gathering. The law goes into effect in January 2020, which means privacy advocates have 18 months to strengthen it—and to stave off regulated companies' attempts to weaken it. Background to the Act For many years, a growing number of technology users have objected to the myriad ways that companies harvest and monetize their personal data, and users have called on companies and legislators to do a better job at protecting their data privacy. EFF has long supported data privacy protections as well. In March 2018, the Cambridge Analytica scandal broke. The public learned that private data was harvested from more than 50 million Facebook users, without their knowledge and consent, and that the Trump presidential campaign used this private data to target political advertisements. Demand for better data privacy rules increased significantly. In May 2018, supporters of a California ballot initiative on data privacy filed more than 600,000 signatures in support of presenting the initiative to voters, nearly twice the number of signatures required to do so. But ballot initiatives are an imperfect way to make public policy on a complex subject like data privacy. Before enactment, it can be difficult for stakeholders to help improve an initiative’s content. And after enactment, an initiative can be difficult to amend. California legislators hoped to do better, but now they faced a deadline. June 28 was the last day the initiative’s sponsor could remove it from the ballot, and the sponsor told the legislature that he would do so only if they passed data privacy legislation first. Legislators rushed to meet this deadline, but that rush meant privacy advocates didn’t have much chance to weigh in before it was passed. The Basics of the CCPA The CCPA creates four basic rights for California consumers:  A right to know what personal information a business has about them, and where (by category) that personal information came from or was sent. See Sections 100, 110, 115. See also Section 140(c) (defining “business”), and Section 140(o) (defining “personal information”). A right to delete personal information that a business collected from them. See Section 105. While the right-to-know extends to all information a business collected about a consumer, the right-to-delete extends to just the information a business collected from them. A right to opt-out of sale of personal information about them. See Section 120. See also Section 140(t) (defining “sale”). 
A right to receive equal service and pricing from a business, even if they exercise their privacy rights under the Act, but with significant exceptions. See Section 125.

The Act also creates a limited right for consumers to sue businesses for data security breaches, based on California's existing data breach notification law. See Section 150. Most of the Act's enforcement punch, however, rests with the California Attorney General (AG), who can file civil actions against violations of the Act. See Section 155. The AG is also responsible for promulgating regulations to flesh out or update the CCPA framework. See Section 185.

As we explained above, the CCPA was put together quickly, and with many important terms undefined or not clearly defined. As a result, these rights in some cases look better than they really are. Fortunately, the new CCPA is generally understood to be a work in progress. Legislators, privacy advocates, and regulated companies will all be seeking substantive revisions before the law goes into effect. The rest of this post focuses on EFF's suggestions.

Opt-in Consent to Collection

Many online services gather personal data from technology users, without their knowledge or consent, both when users visit their websites, and, by means of tracking tools, when users visit other websites. Many online services monetize this personal data by using it to sell targeted advertising. New legislation could require these online services to obtain the users' opt-in consent to collect personal data, particularly where that collection is not necessary to provide the service.

The CCPA does not require online services to obtain opt-in consent before collecting personal data from users. Nor does it provide users an opportunity to opt-out of collection. The law does require notice, at or before the point of collection, of the categories of collected data, and the purposes of collection. See Section 100(b). But when it comes to users' autonomy to make their own decisions about the privacy of their data, while notice is a start, consent is much better. The legislature should amend the Act to require it.

Some limits are in order. For example, opt-in consent might not be required for a service to perform actions the user themselves have requested (though clear notice should be required). Also, any new regulations should explore ways to avoid the "consent fatigue" that can be caused by a high volume of opt-in consent requests.

"Right to Know" About Data Gathering and Sharing

Technology users should have an affirmative "right to know" what personal data companies have gathered about them, where the companies got it, and with whom the companies shared it, subject to some limits to ensure that the right to know does not impinge on other rights. The CCPA creates a right to know, empowering "consumers" to obtain the following information from "businesses":

The categories of personal information collected. See Sections 100(a), 110(a)(1), 110(c)(1), 115(a)(1).
The categories of sources of the personal information. See Sections 110(a)(2), 110(c)(2).
The purposes for collecting the personal information. See Sections 110(a)(3), 110(c)(3).
The categories of third parties with whom the business shares personal information. See Section 110(a)(4).
The categories of personal information sold. See Sections 115(a)(2), 115(c)(1).

The Act defines a "consumer" as any natural person who resides in California. See Section 140(g).
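To make the shape of such a disclosure concrete, here is a purely hypothetical sketch of what a category-level right-to-know response could look like. Neither the field names nor the values come from the Act or from any real business; the Act specifies what must be disclosed, not a format.

import json

# Hypothetical category-level disclosure; illustrative field names and values only.
right_to_know_response = {
    "request_id": "example-7f3a",
    "categories_of_personal_information_collected": [
        "identifiers", "commercial information", "geolocation data"
    ],
    "categories_of_sources": [
        "directly from the consumer", "advertising networks"
    ],
    "purposes_for_collection": [
        "providing the service", "targeted advertising"
    ],
    "categories_of_third_parties_shared_with": [
        "analytics providers", "data brokers"
    ],
    "categories_of_personal_information_sold": [
        "identifiers", "geolocation data"
    ],
}

# The more granular disclosure EFF favors would replace a category like
# "data brokers" with the specific companies the data actually went to.
print(json.dumps(right_to_know_response, indent=2))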
The Act defines a “business” as a for-profit legal entity with: (i) annual gross revenue of $25 million; (ii) annual receipt or disclosure of the personal information of 50,000 consumers, households, or devices; or (iii) receipt of 50% or more of its annual revenue from selling personal information. See Section 140(c). The Act’s right-to-know would be more effective if it was more granular. It allows people to learn just the “categories” of sources and recipients of their personal data. People should be able to learn the specific sources and recipients. Moreover, the Act’s right-to-know should be tailored to avoid impacting news gathering, which is protected by the First Amendment, when undertaken by professional reporters and lay members of the public alike. For example, if a newspaper tracked visitors to its online edition, the visitors’ right-to-know could cover that tracked information, but should not also extend to a reporters’ investigative file. Data Portability Users generally should have a legal right to “data portability”, that is, to obtain a copy of the data they provided to an online service. People might use this data in myriad ways, including self-publishing their own content, better understanding their service provider, or taking their data to a rival service. The CCPA advances data portability. Consumers may obtain from businesses the “specific pieces” of personal information collected about them. See Sections 100(a), 110(c)(5). Moreover, the Act provides that if “provided electronically, the information shall be in a portable and, to the extent technically feasible, in a readily useable format that allows the consumer to transmit their information to another entity.” See Section 100(d). It will be important to ensure that “technical infeasibility” does not become an exception that swallows the rule. Also, it may be appropriate to address scenarios where multiple users’ data is entangled. For example, suppose Alice posts a photo of herself on social media, under a privacy setting that allows only certain people to see the photo, and Bob (one of those people) posts a comment on the photo. If Bob seeks to obtain a copy of the data he provided to that social media, he should get his comment, but not automatically Alice’s photo. Consent to Data Sharing As discussed above, EFF supports properly tailored legislation that requires companies to get opt-in consent before collecting a user’s personal data. Opt-in consent should also be required before a company shares that data with a third party. The more broadly that personal data is disseminated, the greater the risk of theft by malicious hackers, misuse by company employees, and expanded uses by company managers. Technology users should have the power to control their personal data by deciding when it may be transferred from one entity to another. The CCPA addresses sale of personal data. It defines “sale” to include any data transfer “for monetary or other valuable consideration.” See Section 140(t). Adults have a right to opt-out of sales. See Sections 120(a), 120(c). To facilitate such opt-outs, businesses must provide a “do not sell my personal information” link on their homepages. See Section 135(a)(1). Minors have a right to be free from sales absent their opt-in consent. See Sections 120(c), 120(d). Also, if a third party buys a user’s personal data from a company that acquired it from the user, the third party cannot re-sell that personal data, unless they notify the user and give them an opportunity to opt-out. 
See Section 115(d). However, the Act’s provisions on consent to data sharing are incomplete. First, all users—adults as well as minors—should be free from data sales and re-sales without their opt-in consent. While opt-out consent is good, opt-in consent is a better way to promote user autonomy to make their own decisions about their data privacy. Second, the opt-in consent rules should apply to data transfers that do not yield (in the Act’s words) “valuable consideration.” For example, a company may find it to be in its business interests to give user data away for free. The user should be able to say “no” to such a transfer. Under the current Act, they cannot do so. By contrast, the original ballot initiative defined “sale” to include sharing data with other businesses for free. Notably, the Act empowers the California Attorney General to issue regulations to ensure that the Act’s various notices and information are provided “in a manner that may be easily understood by the average consumer.” See Section 185(a)(6). We hope these regulations will address the risk of “consent fatigue” that can result from opt-in requests. Deletion The CCPA provides that a consumer may compel a business to “delete” personal information that the business collected from the consumer. See Section 105(a). The Act provides several exceptions. Two bear emphasis. First, a business need not delete a consumer’s personal information if the business needs it to “exercise free speech, ensure the right of another consumer to exercise his or her right of free speech, or exercise another right provided for by law.” See Section 105(d)(4). Second, a business may keep personal information “to enable solely internal uses that are reasonably aligned with the expectations of the consumer based on the consumer’s relationship with the business.” See Section 105(d)(7).  Confusingly, another exception uses similar language, and it’s unclear how these interact. See Section 105(d)(9) (“Otherwise use the consumer’s personal information, internally, in a lawful manner that is compatible with the context in which the consumer provided the information”). Deletion is a particularly tricky aspect of data privacy, given the potential countervailing First Amendment rights at issue. For example, suppose that Alice and Bob use the same social media service, that Alice posts a photo of herself, that Bob re-posts it with a caption criticizing what Alice is doing in the photo, and that Alice becomes embarrassed by the photo. A statute empowering Alice to compel the service to delete all copies of the photo might intrude on Bob’s First Amendment interest in continuing to re-post the photo. EFF is working with privacy and speech advocates to find ways to make sure the CCPA ultimately strikes the right balance. But EFF will strongly oppose any provision empowering users to compel third-party services (including search engines) to de-list public information about them. Laws outside the United States that do this are often called the “right to be forgotten.” EFF opposes such laws, because they violate the rights to free speech and to gather information. Many of us may be embarrassed by accurate published reports about us. But it does not follow that we should be able to force other people to forget these reports. Technology users should be free to seek out and locate information they find relevant. 
Non-discrimination The CCPA provides that if a user exercises one of the foregoing statutory data privacy rights (i.e., denial of consent to sell, right to know, data portability, or deletion), then a business may not discriminate against the user by denying service, charging a higher price, or providing lower quality. See Section 125(a)(1). This is a critical provision. Without it, businesses could effectively gut the law by discriminating against users that exercise their rights. Unfortunately, the Act contains a broad exemption that threatens to swallow the non-discrimination rule. Specifically, a business may offer “incentives” to a user to collect and sell their data, including “payments.” See Section 125(b)(1). For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent. This will discourage users from exercising their privacy rights. Also, it will lead to unequal classes of privacy “haves” and “have nots,” depending upon the income of the user. EFF urges the California legislature to repeal this exemption from the non-discrimination rule. This problem is not solved by the Act’s forbidding financial incentives that are “unjust, unreasonable, coercive, or usurious.” See Section 125(b)(4). This will not stop companies from charging more from users who exercise their privacy rights. The Act also allows price and quality differences that are “reasonably related” or “directly related” to “the value provided to the consumer by the consumer’s data.” See Sections 125(a)(2), 125(b)(1). These exemptions from the non-discrimination rule are unclear and potentially far-reaching, and need clarification and limitation. Empowering Users to Enforce the Law One of the most powerful ways to ensure enforcement of a privacy law is to empower users to take violators to court. This is often called a “private cause of action.” Government agencies may fail to enforce privacy laws, for any number of reasons, including lack of resources, competing priorities, or regulatory capture. When a business violates the statutory privacy rights of a user, the user should have the power to decide for themselves whether to enforce the law. Many privacy statutes allow this, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, and cable subscriptions. Unfortunately, the private right of action in the CCPA is woefully inadequate. It may only be brought to remedy certain data breaches. See Section 150(a)(1). The Act does not empower users to sue businesses that sell their data without consent, that refuse to comply with right-to-know requests, and that refuse to comply with data portability requests. EFF urges the California legislature to expand the Act’s private cause of action to cover violations of these privacy rights, too. The Act empowers the California Attorney General to bring suit against a business that violates any provision of the Act. See Section 155(a). As just explained, this is not enough. Waivers Too often, users effectively lose their new rights when they “agree” to fine print in unilateral form contracts with large businesses that have far greater bargaining power. Users may unwittingly waive their privacy rights, or find themselves stuck with mandatory arbitration of their privacy rights (as opposed to their day in an independent court). 
So we are very pleased that the CCPA expressly provides that contract provisions are void if they purport to waive or limit a user's privacy rights and enforcement remedies under the Act. See Section 192. This is an important provision that could be a model for other states as well.

Rule Making

The CCPA empowers the California Attorney General to adopt regulations, after it solicits broad public participation. See Section 185. These regulations will address, among other things, new categories of "personal information," new categories of "unique identifiers" of users, new exceptions to comply with state and federal law, and the clarity of notices. EFF will participate in this regulatory process, to help ensure that new regulations strengthen data privacy without undue burden, particularly for nonprofits and open-source projects.

Next Steps

The CCPA is just a start. Between now and the Act's effective date in January 2020, much work remains to be done. The Act itself makes important findings about the high stakes:

The proliferation of personal information has limited Californians' ability to properly protect and safeguard their privacy. It is almost impossible to apply for a job, raise a child, drive a car, or make an appointment without sharing personal information. . . . Many businesses collect personal information from California consumers. They may know where a consumer lives and how many children a consumer has, how fast a consumer drives, a consumer's personality, sleep habits, biometric and health information, financial information, precise geolocation information, and social networks, to name a few categories. . . . People desire privacy and more control over their information.

EFF looks forward to advocating for improvements to the Act in the months and years to come.

EFF to the FCC: Don’t Let AT&T and Verizon Get a Chokehold on Internet Access Competition (Mi, 08 Aug 2018)
The majority of Americans do not have a choice when it comes to high-speed Internet. People living in rural areas have poor quality and coverage when it comes to even mid-range broadband, and America is lagging behind other countries in fiber optics. There are very few things in place that help address these problems, and big ISPs are asking the FCC to end one of them. But EFF is stepping in to ask the FCC to deny AT&T's and Verizon's petition to give them a further chokehold on Internet access choice.

On August 6, we filed a comment [pdf] opposing US Telecom's (AT&T's and Verizon's trade association) petition for forbearance, the request that the FCC use its authority to repeal a key provision of the 1996 Telecommunications Act. Today, thanks to this provision, a new telecom company doesn't have to raise the huge amounts of money needed to build its own infrastructure from scratch. Existing incumbent telecom companies are required to share their infrastructure with new competitors at established, affordable rates. That lowers the barrier to competing with the big, established telecom companies. And where the new companies appear, customers finally have a choice: they can pick between, say, AT&T's policies and those of a smaller ISP like Sonic.

Not only does that provide much-needed ISP competition, but new ISPs can also make money selling mid-level Internet access over existing copper lines (the FCC decided in 2005 not to extend these sharing rules to fiber). They can then use that capital to build high-speed infrastructure and expand into rural areas that need more and better coverage. Small, local ISPs are also vital for rural areas: 39 percent of rural Americans lack access to middle-level Internet service, and where big ISPs leave a gap in the market through a lack of willingness to upgrade, new local ones can step in to fill it. As we pointed out in our comments to the FCC, small ISPs account for nearly half of fiber-to-the-home deployment in the last few years. But if big ISPs can charge huge amounts for access to copper lines or simply cut off new competition altogether, we'll lose the ISPs working to improve American infrastructure.

The United States lags behind other countries on speed and coverage in a way that is embarrassing. 85 percent of Americans have no choice or only one choice (the local cable monopoly) when it comes to Internet speeds above 100 Mbps. Barely 10 percent of Americans have access to high-speed Internet through fiber optics. The European Union, meanwhile, is mostly on track [pdf] to meet goals of providing everyone with access to 30 Mbps Internet by 2020, with at least half of the EU wired for 100 Mbps and higher. Almost everyone in South Korea has access to fiber. America is stuck at 85 percent of people having access to 25 Mbps.

With their forbearance petition, big ISPs are seeking to end a requirement that creates competition and spurs better and faster Internet coverage. New ISPs use the guaranteed access to copper lines to get a foothold in a market and to build capital, and it's these local ISPs that then build high-speed infrastructure and cover rural areas. These are two things the big ISPs are not doing, and they would have even less incentive to do them if these local ISPs vanished. AT&T and Verizon know that we'll all take bad Internet over no Internet. And that's why EFF is asking the FCC to prevent big telecom from getting a chokehold on our Internet access.

Facebook Deletes Anti-Unite the Right Event, Claiming Foreign Involvement (Fr, 03 Aug 2018)
Correction—August 7, 2018: Although Facebook found connections between accounts linked to Russia's Internet Research Agency (IRA) and the accounts connected to the canceled event, a post by Chief Security Officer Alex Stamos states that Facebook is not attributing the "coordinated inauthentic behavior" of these accounts to a specific group or country. We regret the error.

Facebook stumbled this week—again—in its effort to police "misinformation": it deleted an event page for the anti-fascist protest "No Unite the Right 2 - DC." Facebook justified the deletion by claiming that the event was initially created by an "inauthentic" organization with possible foreign connections. In fact, a number of legitimate local organizations and activists had become involved in administering and planning the event. These activists weren't given an opportunity by Facebook to explain or present evidence that they were involved in what had become a very real protest. Nor were they given a chance to dispute claims that the original organizers had Russian connections.

So what makes a protest "real"? Is it who organizes it, or who attends? And what happens when a bad actor creates an event with the intent to sow discord, but prospective attendees take it seriously and make it their own? These are all questions that Facebook is going to have to grapple with as it cracks down on misinformation ahead of US midterm elections. But first, the company should ask itself how it can reform its content removal policies so that users have a chance to challenge removals before they happen.

The event page for "No Unite the Right 2 - DC" may have been created by Resisters, a group suspected by Facebook of being tied to Russia's Internet Research Agency, but to the organizations that were involved in planning the protest, and the more than two thousand users who had registered to attend, the event was very real. Many of those groups and individuals are now, rightfully, angry that Facebook chose to remove their page without giving them an opportunity to provide explanation or evidence of their involvement in the very real protest. In a press release, Facebook admits that it doesn't "have all the facts," and says that the legitimate groups' pages "unwittingly helped build interest in 'No Unite Right 2 – DC' and posted information about transportation, materials, and locations so people could get to the protests." Facebook doesn't seem to consider that, to the participants, there was nothing unwitting about their involvement in an anti-Unite the Right protest, or what effect the removal of the group pages will have on them.

The decision is reminiscent of another one the company made nearly eight years ago. Just a few months prior to the uprising in Egypt that would eventually topple long-time dictator Hosni Mubarak, Facebook removed a page called "We Are All Khaled Said"—the same page that later called for the January 25 street protests. The decision to remove the page stemmed from the fact that its administrator was using an "inauthentic" name—but after being contacted by NGOs, the company allowed the page to remain up so long as another administrator stepped in. The legitimate administrators of "No Unite Right 2 – DC" weren't given the same option.
A lot has happened between 2010 and today, but one thing remains the same: Facebook’s executives continue to make bad—and potentially influential—decisions about what is “authentic.” In this case, we believe that the legitimate organizers of the event—which reportedly includes 18 different local groups—should have had a say in how their event page was handled, and how prospective attendees were contacted. The Santa Clara Principles, a set of minimum standards for content moderation created by EFF and other free expression advocates, expressly calls on social media platforms to provide human review of content removal and give users meaningful and timely opportunity to present additional information as part of that review. If all it takes going forward to get an event canceled is one bad actor’s involvement in it, then Facebook is likely going to be dealing with this sort of situation again. Therefore, it’s imperative that the company devise a consistent and fair strategy that allows legitimate participants of a group or event to have a stake in how their page is governed.

Internet Publication of 3D Printing Files About Guns: Facts and What’s at Stake (Fr, 03 Aug 2018)
When it comes to guns, nearly everyone has strong views. When it comes to Internet publication of 3D-printed guns, those strong views can push courts and regulators into setting hasty, dangerous legal precedents that will hurt the public's ability to discuss legal, important, and even urgent topics ranging from mass surveillance to treatment of tear gas attacks. Careless responses to 3D-printed guns, even those that will do little to limit their availability, will have long-lasting effects on a host of activities entirely unrelated to guns.

In their responses to 3D-printed guns, the U.S. Department of State and state Attorneys General have sought to brush aside the legal protections that ensure your right to dissent and to publish technological information and software for privacy and other purposes. That's why we're working to make sure that 3D printing cases don't set precedents that chip away at your freedoms to speak and learn online. Here's how we got to this moment.

In 2012, the first order to de-publish the well-known, non-classified 3D design files for guns came about when the federal government decided that it could use existing export regulations to censor technical information whenever it deemed that censorship was "advisable." The regulations, which are not normally aimed at speech, had no objective legal standards, no judicial oversight, and no binding deadlines. This decision was applied to a company called Defense Distributed and its founder, Cody Wilson. Last month, after years of litigation, the federal government decided that, contrary to its view in 2012, the export restrictions should not apply to the publication of 3D printer files for guns on the Internet. In response, state governments have persuaded a federal court to order the takedown of that information from the Internet without any First Amendment analysis. They have also asked the federal government to reinstate the system that gave it total discretion over Internet publication of technical information about 3D-printed guns, which it enforced against Defense Distributed but not against other publishers.

Experts have different views on whether the government could meet the appropriate First Amendment standard with carefully-tailored measures designed to address the 3D printing of guns. But whether or not you think the government could satisfy the necessary legal test in a hypothetical case, it's critical that the government not be able to skip that step and jump straight to the de-publication of speech. Our government has a history of characterizing information (like encryption technology) and ideas (like socialism or Islam) as dangerous and likely to lead to violence. A free society cannot give the government unbridled discretion to make those choices, because of the systematic oppression that such a government can engage in.

A Brief Explainer on Making Guns, via 3D Printer and Otherwise

Most of us are not familiar with the process of manufacturing a gun, but there are many tutorials available both offline and online for doing so, as well as multiple sources for designs that could be used in a 3D printer. Federal law and many states permit a person to engage in gunsmithing, creating an unlicensed, unregistered firearm for their own use. The materials are generally not difficult to buy either.
While making guns is allowed in many places, whether the firearm is made through 3D printing or by simply buying and assembling the materials, it is generally unlawful to sell or distribute the unmarked firearms you make without a license.

Most of the files at issue in the Defense Distributed case are "Computer Aided Design" (CAD) files, a type of file that engineers use to describe three-dimensional objects. Programs like "Slic3r" can interpret these shapes and figure out the path along which a 3D printer would have to move its nozzle, or a milling machine its cutter, to form that object. Slic3r creates a 3D print file that can then be understood by the machine itself and used by its operator to create an object. (A minimal code sketch of this file-to-toolpath idea appears at the end of this explainer.) Once you've got your 3D printer or milling machine, your raw materials, the software to run it, and the design files, you can tell the machine to make whatever shapes you want, including shapes that can be assembled into a gun. You can't print bullets, of course – you need to buy them or acquire gunpowder to make your own. So, following all these steps, it is possible to 3D print or CNC mill a gun, go acquire bullets, and fire it.

Your 3D-printed gun will likely be made of plastic. This is not an ideal material for a gun, because it is weak and it melts, but plastic guns are capable of firing. The plastic part will not be detectable by metal detectors, but would be detectable by the scanners at airport security. And it's illegal under the Undetectable Firearms Act to manufacture an entirely plastic gun unless you insert a bar of metal that can be detected by a metal detector. A CNC-milled gun can be made of metal, and this is the more relevant technology because metal is more suitable for guns. Most of the parts of guns are unregulated, so realistically, a person would buy the unregulated parts, print the regulated ones, and then assemble the weapon. A CNC mill that can generate the regulated lower receiver of an AR-15, for instance, costs about $1700. The raw metal for the lower receiver costs under $30. Neither CNC nor 3D printing is needed to make guns, however. As a simpler alternative to milling the entire shape yourself, you can purchase an unregulated lower receiver that is not quite finished for about $75 and drill some simple holes and a trough into it with an inexpensive drill press, without the need for an automatic milling machine.

If someone wants to use the more complex, more expensive 3D printing or CNC process to make a gun, however, the files that describe the gun shapes you would need to print are available in several places on the Internet, both inside and outside the U.S. The simplest designs have been around for over seven years.
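As a concrete (and deliberately trivial) illustration of what these design files contain and what a slicer does with them, here is a short Python sketch using an STL mesh, one common exchange format for such designs. It is not Slic3r's code or algorithm; the tiny STL text and the function names are our own invention, and the "slicing" step is reduced to counting which triangles cross each layer plane.

```python
# A toy illustration of a mesh file and the first step of slicing.
# The STL text, parser, and layer logic below are hypothetical examples,
# not Slic3r's implementation.

# A minimal ASCII STL: a solid described as a list of triangles (here, a tetrahedron).
STL_TEXT = """solid tetra
facet normal 0 0 -1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
facet normal 0 -1 0
 outer loop
  vertex 0 0 0
  vertex 0 0 1
  vertex 1 0 0
 endloop
endfacet
facet normal -1 0 0
 outer loop
  vertex 0 0 0
  vertex 0 1 0
  vertex 0 0 1
 endloop
endfacet
facet normal 1 1 1
 outer loop
  vertex 1 0 0
  vertex 0 0 1
  vertex 0 1 0
 endloop
endfacet
endsolid tetra
"""

def parse_ascii_stl(text):
    """Return a list of triangles; each triangle is three (x, y, z) vertex tuples."""
    triangles, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "vertex":
            current.append(tuple(float(v) for v in parts[1:4]))
            if len(current) == 3:
                triangles.append(current)
                current = []
    return triangles

def layers_crossed(triangles, layer_height):
    """For each horizontal layer plane, count the triangles that cross it."""
    zs = [v[2] for tri in triangles for v in tri]
    z, result = min(zs), []
    while z <= max(zs):
        crossing = [tri for tri in triangles
                    if min(v[2] for v in tri) <= z <= max(v[2] for v in tri)]
        result.append((round(z, 3), len(crossing)))
        z += layer_height
    return result

if __name__ == "__main__":
    tris = parse_ascii_stl(STL_TEXT)
    print(f"{len(tris)} triangles in the mesh")
    for z, n in layers_crossed(tris, 0.25):
        print(f"layer z={z}: {n} triangles cross this plane")
```

The point is only that a design file is nothing more exotic than a list of triangles, and a slicer's job is to turn that geometry into machine movements; the hard policy questions are about what the finished object can be, not about the math.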
The process surrounding the publication and de-publication of these designs sets a precedent that is simply dangerous to speech. It allows the government to use export regulations to censor speech on the Internet. Granting a censorship power that broad will lead to speech being taken down for political reasons, and a mechanism must be in place to prevent that. Following that power with requests to remove the information globally extends the potential for harm to speech.

Export Regulations Gave the Government Unchecked Power to Censor Technical Speech Online

In 2012, the government told Cody Wilson's Defense Distributed that it could not publish designs for firearms online. Despite the fact that many others had already published similar information, the government told Defense Distributed that it had to apply for an export license in order to publish the computer files, because some of the files can be interpreted by a 3D printer to create a gun. The government's reasoning stemmed from an interpretation of the International Traffic in Arms Regulations (ITAR), which gave the government authority to restrict the export of technologies with potential military applications. Under the government's definition, "export" encompasses not only sending physical items overseas, but also publishing information on the Internet about certain technologies. If you wanted to publish online about gun designs, or how to diagnose a biological weapon attack, or treat chemical weapon injuries, then under the government's reasoning, you were supposed to ask permission first. The Internet, argued the government, is not the "public domain" because it is accessible to foreigners, so publishing there constitutes an export. The Department of State would then decide – with no binding legal standards, no deadline for a decision, and no judicial oversight – whether to permit you to publish or not.

The massive list of covered technologies encompasses certain medical information and devices, certain types of GPS technologies, and jet engines, just to name a few. The materials on the list have obvious, legitimate applications for researchers, manufacturers, journalists, hobbyists, and many others. There are no rules ensuring that the government doesn't unfairly bar certain speakers for political reasons, and there's no opportunity to appeal the government's decision to a court.

Defense Distributed applied for a license (EFF helped advise the company at this juncture and helped it to get experienced export counsel). After a lengthy delay, the government denied the license, and the appeal dragged on without any binding deadline. After waiting for an answer for many months, Defense Distributed finally sued, and lost preliminary arguments in both the District Court and the Court of Appeals (EFF did not represent them, and instead filed an amicus brief addressing the First Amendment issues posed by the speech-licensing regime). Last month, the government reversed course and not only granted Defense Distributed a license, but changed the regulations to allow publication of Defense Distributed's materials.

Broad Censorship Powers Lead to Politically Motivated Takedowns

It's dangerous for the Executive Branch to have so much control over the public's right to share information online. Without meaningful restrictions on how and when the State Department can exercise its power, the risk of politically motivated censorship is extremely high. Indeed, it is quite possible that both the previous administration's decision to deny Defense Distributed a license and the current government's change in policy were motivated in part by Wilson's political opinions and often inflammatory comments. It's telling that other groups were publishing similar information online at the same time that Defense Distributed was barred from it. In the absence of laws dictating when the government can and can't use this power, politically motivated censorship is unavoidable.
As EFF argued in our amicus brief, echoing concerns raised by the Supreme Court, "Human nature creates an unacceptably high risk that excessive discretion will be used unconstitutionally, and such violations would be very difficult to prove on a case-by-case basis." Under the same law, the government could try to bar activists from sharing instructions for treating the effects of tear gas and other chemical weapons, or researchers from spreading information about the government's use of mass surveillance tools. Or it could bar technologists from publishing the encryption technologies that we all use to protect ourselves from criminals online. In the 1990s, EFF successfully argued that it was unconstitutional for the government to use these export regulations to ban the online distribution of computer code used for effective encryption. Two decades later, the government has again used this unconstitutional export control regime in a way that gives it broad control over who can share information about a wide range of technologies online, with no safeguards ensuring that it doesn't ban certain speakers for political reasons.

New Lawsuits Seek Global Takedown Orders That Would Erode Protections for All Instructive Speech

The new lawsuits brought by state Attorneys General are concerning for a different reason: they ask the courts to remove the published files because other people might use the information they contain to make guns illegally, or to make legal guns and use them illegally. The cases are not based on gun control laws, because states can't impose their own law on the rest of the country and federal gun control law permits personal gunsmithing. Instead, the state claims include common law nuisance and negligence, while the claims in federal court argue that the Department of State did not follow the Administrative Procedure Act or justify why it changed its mind.

Normally, you cannot be prevented from saying something merely because someone else might use that information to commit a crime, or even because they might be persuaded to commit a crime. Unless your speech rises to the level of a conspiracy to commit a crime or speech that incites people to immediate violence, the legal responsibility falls on the people who decide to break the law. Even when weighty interests such as national security or physical harm are potentially at stake, the government has a heavy burden to prove the urgency of the harm and the appropriateness of a speech restriction as the proper remedy. It's generally not appropriate to order one person not to publish material that is readily available elsewhere.

The government has a history of characterizing technologies and ideas as dangerous in an attempt to suppress speech about them. First Amendment standards ensure that speech cannot be suppressed as an easy measure of first resort, or where speech constraints aren't necessary to address a proven harm or effective at addressing that harm. If the states in this case are successful, they will bypass legal doctrines that we rely on to protect your right to encrypt and your right to advocate for social change. The states' arguments are clear on this point: they argue that the government should be required to prevent publication because foreigners abroad might do things that the U.S. opposes, and that the courts themselves should order the designs to be kept offline because people might make the guns and use them in domestic crimes.
These arguments are dangerous because they threaten to empower current (and future) U.S. government officials to play pre-publication gatekeeper of what information you can publish online based on the barest, unproven claim of national interest or the possibility that others might use your information to further crimes. It could bar us from publishing and discussing artificial intelligence technologies, something that has increasing importance to our online lives and even how the government makes decisions about bail and sentencing. It could censor information about how to survive a chemical weapons attack. It could force us to compromise our secure communications technologies, making our personal information vulnerable to unlawful surveillance and identity theft. EFF will continue to protect your freedom to teach one another new skills and share code with each other, so that others can learn and benefit from your ingenuity. We will continue to protect your freedom to advocate for ideas the government labels as dangerous. Not because we agree with every idea that’s out there, but because of the clear danger posed by a government that grants itself unbridled power to decide whose ideas are dangerous and what knowledge should be deleted from the Internet. Related Cases:  Defense Distributed v. United States Department of State

Behind the Octopus: The Hidden Race to Dismantle Global Law Enforcement Privacy Protections (Mi, 01 Aug 2018)
Last month, 360 cyber crime experts from 95 countries gathered in Strasbourg to attend the Octopus Conference. The event sounds like something from James Bond, and when you look at the attendee list—which includes senior figures from the United States Department of Justice, national police forces across the world, and senior figures from companies like Facebook, Microsoft, Apple and Cloudflare—it's easy to imagine a covert machination or two. As it happens, Octopus is one of the more open and transparent elements in the world of global law enforcement and cybersecurity. Civil society groups like EFF and EDRi were invited to speak, and this year it was our primary chance to comment on a new initiative by the event's organizers, the Council of Europe: an additional protocol to their Cybercrime Convention (also known as the Budapest Convention on Cybercrime), which will dictate how Parties to the Convention from around the world can cooperate across borders to fight Internet crime.

Our conclusion: the Council of Europe (CoE) needs to stand more firmly against a global trend to undermine everyone's privacy in the pursuit of faster and easier investigations. As conversations at Octopus showed, the many long arms of the world's law-enforcers are coming for user data, and the CoE needs to stand firm that they obey international human rights law, in particular Article 15 of the Budapest Convention, when they reach across borders.

The CoE is an international organization that grew out of a post-World War II initiative to build human rights into European decision-making. It's older and has more member states than the European Union (EU), with which it is often confused (you can blame this confusion on the EU, because they poached the original CoE logo for their flag, and even named one of their major institutions "The European Council"). Nowadays, the CoE (among other roles) acts as a forum for developing international treaties. The organization recently celebrated an update to Convention 108, its 1981 treaty on data protection that was the forerunner of the GDPR.

Currently, the CoE Cybercrime Committee (TC-Y), comprised of State Parties, Observers, and international governmental representatives from around the world, is working on a second additional protocol to the Budapest Convention, in order to spell out the practices of countries when allowing cross-border law enforcement access to subscriber data held by big tech companies like Google and Facebook, as well as smaller companies and startups. The TC-Y proposal is part of a general push by governments around the world to speed up and widen access, in international criminal investigations, to online data held in other countries, seen most recently in the United States' passage of the CLOUD Act, as well as the E-Evidence draft proposals by the European Union. We, along with civil liberties groups across Europe and Canada, have been strong critics of the EU and U.S. initiatives, saying that rather than create judicial short-cuts for law enforcement, as these laws would do, countries should put more resources into making the existing mutual legal assistance treaty (MLAT) system, which has built-in protections for privacy, run more effectively.
Some of the proposals introduced at July's Octopus conference, unfortunately, fit some of these same patterns, such as allowing "direct cooperation with providers across jurisdictions and extending searches to access evidence in the cloud with the necessary rule of law safeguards." Before Octopus, we, along with EDRi, Access, CIPPIC, IFEX, and a coalition of global civil society organizations from around the world, had already expressed our concern with the TC-Y's direction, but it's been hard to hammer out the details, primarily because civil society is excluded from the CoE's drafting meetings, which take place a few days before Octopus assembles. If we'd been in those meetings, we would have highlighted the same problems that have weakened all of these attempts so far:

First, as mentioned before, we continue to question whether such drastic reforms are truly necessary. The existing system of mutual legal assistance among countries certainly needs to be improved—but bypassing MLATs by going directly to service providers for electronic data, as all these new initiatives offer, is not the answer. Considerable procedural and human rights safeguards would be lost in such a move. Instead, civil society from around the world, including EFF and EDRi, has consistently recommended: offering technical training for law enforcement authorities; simplifying and standardizing data request forms; creating single points of contact for data requests; and, most importantly, increasing resources, especially in the United States, where the bulk of the requests end up. We've seen this work first-hand: thanks to a recent U.S. MLAT reform program, which increased the resources available to handle MLATs, the U.S. Department of Justice has already reduced the number of pending cases by a third.

Second, if you are going to circumvent MLATs, the replacement protocol needs to cope with some major difficulties in protecting human rights between states. One of the biggest challenges in the CoE TC-Y drafting process—a challenge that was evident in the initial Cybercrime Convention itself—is a presumption that signatory parties share (and will continue to share) a common baseline of understanding with respect to the scope and nature of human rights protections, including privacy. Unfortunately, there is not yet a harmonized legal framework among the countries participating in the negotiations and, more importantly, not a shared human rights understanding. Experience shows there is a need for countries to bridge the gap between national legal frameworks and practices on the one hand, and human rights standards established by the case law of the highest courts on the other. That's especially true in the digital domain, where key human rights decisions have still not completely propagated globally—or even within their own jurisdictions. For example, the Court of Justice of the European Union (CJEU) has held on several occasions that blanket data retention is illegal under EU law. Yet several EU Member States still have blanket data retention laws, which are a basis for accessing data. Other states involved in the protocol negotiations, such as Australia, Mexico, and Colombia, have implemented precisely the type of sweeping, unchecked, and indiscriminate data retention regime that the CJEU has ruled out.
Because the Cybercrime Convention's Parties lack a harmonized human rights and legal safeguards standard, we think the forthcoming protocol proposals risk:

- bypassing the critical human rights vetting mechanisms inherent in the current MLAT system, which are currently used to, among other things, navigate the conflicts in fundamental human rights and legal safeguards that inevitably arise between countries;

- seeking to encode practices that fall below the minimum standards being established in various jurisdictions, by ignoring human rights safeguards established primarily by the case law of the European Court of Human Rights, the Court of Justice of the European Union, the Inter-American Commission on Human Rights, and the Inter-American Court on Human Rights, among others; and

- including few substantial limits, relying instead on the legal systems of signatories to include enough safeguards to ensure human rights are not violated in cross-border access situations, plus a general and non-specific requirement that signatories ensure adequate safeguards (see Article 15 of the Cybercrime Convention).

Finally, we would urge the authors of the forthcoming protocol not to create a mandatory or voluntary mechanism to obtain data from companies directly. While the CoE's current proposals seem to be limited to subscriber data, there are serious risks that the interpretation of what constitutes subscriber data might be expanded to include metadata, such as IP addresses.

Maryant Fernandez, EDRi's Senior Policy Analyst, and Katitza Rodriguez, EFF's International Rights Director, spoke up at Octopus and made all of these points and more. But speaking up isn't enough. It's imperative that civil society be present for the drafting meetings themselves, so we can fix and correct these problems as they arise. Without civil society participation, we're concerned the proposed Protocol will lack the strong data protections and critical human rights vetting mechanisms that are embedded in the current MLAT system. There are some places the long arm of the law—even the many arms of the global law enforcement Octopus—just shouldn't reach without real oversight and meaningful safeguards.

An Open Letter to Assemblymember Lorena Gonzalez Fletcher, Chair of the California Assembly Appropriations Committee (Mi, 01 Aug 2018)
Dear Assemblymember Lorena Gonzalez Fletcher,

We live in dangerous times. The rights of people of color, immigrants, workers, women, and asylum seekers are threatened every day. As history has repeatedly shown, one of the most powerful tools of oppression is surveillance. California has the opportunity to ensure public control and oversight of the spying technologies that law enforcement is most likely to abuse. Right now, the power is in your hands to ensure it moves forward.

With each year, civil rights advocates have watched technology advance amidst a climate of growing secrecy, allowing authorities to collect more and more personal data from more and more people and store it indefinitely, without parameters for how it can be used, with whom it can be shared, or what to do if it is misused or abused. We ask you, as chair of the California Assembly Appropriations Committee, to pass S.B. 1186 out of the committee without further amendments. We have reached the point where unchecked surveillance may pose a public safety risk as great as the ones the technology is meant to address.

Over the past decade, high-tech government surveillance has expanded well beyond national intelligence agencies based in the Beltway. Tools like aerial surveillance drones, automatic license plate readers, cell-site simulators, and face recognition algorithms—many originally developed for military application in foreign battlegrounds—are finding their way onto the streets of cities across our state. In San Diego and other border regions, the acceleration is acute as the Trump administration seeks to ramp up deportations, build "the Wall," and recruit local law enforcement to assist in its "zero tolerance" schemes. As has been documented time and time again, surveillance disproportionately impacts communities of color, immigrants, and religious minorities.

S.B. 1186 is a straightforward accountability measure: it requires law enforcement agencies to go through a public process before they may obtain surveillance technology. City councils and county boards would have the authority to either approve the acquisition, or reject it if they find that it isn't justified, or that the agency's policy does not sufficiently respect civil liberties and civil rights.

There is a lot for progressives and conservatives to like about S.B. 1186. Fiscal hawks will appreciate that it gives elected legislative bodies a chance to prevent wasteful spending. At the same time, it would also allow local officials to oversee—or even block—law enforcement data-sharing with federal deportation forces. It would also give public employees a chance to weigh in on the surveillance policies that may be used to monitor them in the workplace.

Here's how former Lemon Grove Mayor and SANDAG Public Safety Committee Chair Mary Sessom described her experience overseeing surveillance:

I wanted to know: what technologies did we have, how were these technologies used, how were they funded, who controlled it, and who retained the data and for how long? But, even finding answers to these basic questions proved frustrating. If I asked for a record, law enforcement would provide it — but I had to know that it already existed. If I didn't know a record existed, nobody volunteered to tell me it was available.
Under S.B. 1186, city councils and county boards across the state would have answers to those questions before law enforcement could deploy a new surveillance technology.

The problem is not hypothetical. The San Diego region, in particular, has suffered the consequences of surveillance technology being deployed without adequate scrutiny or oversight. The San Diego District Attorney's office distributed computer surveillance software to families that it later had to publicly warn was unsafe. In addition, the San Diego Police Department rolled out a patrol-car camera system that Voice of San Diego found to be so dysfunctional as to be hilarious. More recently, Voice of San Diego journalist Andrew Keatts reported that SDPD was sharing location data collected with automated license plate readers with hundreds of agencies around the country (including DHS), and likely violating a state law by failing to adequately document searches of its data. Across the state, law enforcement personnel misused sensitive databases more than 140 times last year alone—including 11 times in your home county. While the nature of those abuses has not been revealed, such breaches are often related to domestic abuse.

S.B. 1186 would not hamper criminal investigations or make the work of peace officers more difficult. Accountable policing is good policing. It is important to have clear rules and restrictions for surveillance technology, just as there must be clear policies for use of force or strip searches. In addition, when the public is involved in surveillance decisions, it builds community trust with public safety officers. In essence, the bill would shine sunlight on important decisions that are currently being made behind closed doors. It will help enable a long overdue public discussion about how communities can best stay safe, and how police can operate within constitutional limits that respect the privacy of law-abiding Americans.

This conversation has begun on the local level. Indeed, the very first reform of this kind in the nation was in Santa Clara County, which adopted a similar measure in 2016. The cities of Oakland, Davis, and Berkeley have followed suit, in addition to jurisdictions from Seattle, Washington to Somerville, Massachusetts. California would be the first state to pass such a measure. As we have on so many other pressing issues, California should lead the way on ensuring community control over surveillance technology by local law enforcement—especially given the danger that the sensitive information collected by these powerful technologies will be shared with the current administration in Washington.

TAKE ACTION: Stand up for community control over police surveillance.

You have long been a champion of human rights. We urge you to join us in this fight by voting "Aye" on S.B. 1186 and ensuring its passage without further amendments.

Sincerely,
Shahid Buttar
Director of Grassroots Advocacy
Electronic Frontier Foundation

Summer Is the Season for Visiting Members of Congress (Mi, 01 Aug 2018)
August has just begun, and that means the start of the summer recess for Congress. During that recess, most members of Congress—specifically members of the House of Representatives—will be coming home. And that means that you have the opportunity to meet and talk to them without traveling to Washington, D.C. Let's make sure that Representatives hear about net neutrality, innovation, and privacy while they're back home. These discussions will play a huge role in determining the Congressional agenda in the following months.

Constituents can request meetings with members of Congress either by filling out a meeting request form on the member's official website or by contacting their local office in your state or district. You can look up your member's address and phone number on the member's official website. Make sure to check who your representative is, since members prefer hearing from their own constituents. Though it will depend on timing, you will hopefully get a meeting with the actual member of Congress. If not, meeting with Congressional staff will still get your concerns to the member.

Calling the local office will also help you find out if your member of Congress is planning any town halls. The staff may be able to give you the information over the phone, and the member's official website and social media accounts may also post the location and time of any town halls one to three days beforehand. Make sure to carefully follow any instructions listed about parking and security, and look to see if you need to register ahead of time to attend. Be aware that registering may mean including your name and contact information, and that failing to register may mean you can't get into a town hall with heightened security. While speaking in person is the best way to be heard, you can always send an email, letter, or call instead.

Town halls and meetings matter a lot. When members hear repeatedly from their own constituents in person about how issues are affecting people in the district, those conversations travel with the members back to DC. Especially if the members think that the issue could generate enough controversy and press, local stories can influence votes, legislation, and private conversations with other members. With so many issues vital to digital rights looming in the congressional calendar, this August is a critical time for Internet users to pressure Congress to do the right thing on net neutrality, surveillance, copyright, and preventing government agencies from having the power to shoot down drones. Here are some key issues to bring up this August, whether in meetings, town halls, calls, or letters to your member of Congress—or when writing for public audiences.

Net Neutrality

In 2017, the FCC under Ajit Pai voted to repeal the 2015 Open Internet Order, which created clear, enforceable protections for net neutrality. Using the Congressional Review Act (CRA), a simple congressional majority can overturn the FCC's decision. It's already passed in the Senate, and now we need the House of Representatives to follow suit. The vote has to happen in this session, so August is a key time to check where your representative stands and, if they haven't committed to voting for the CRA, ask them why not. We've prepared a guide for how to talk about net neutrality with your representative when they're home for the recess. You can also adapt parts of that guide—how to set up meetings and how to write op-eds, for example—for any of these issues. Ask your representatives to vote for the CRA.
Surveillance Technology on the Border

Time and again in the past year, members of Congress met to negotiate and craft several bills to revamp many aspects of the country's immigration process. And time and again, those bills required increased, high-tech government surveillance of citizens and immigrants alike. Any bill that would create a path to citizenship should not include invasive, high-tech surveillance. Please let your members of Congress know that increased surveillance should not be the price of immigration.

Extending and Complicating Copyright

S. 2823 is a bill that combines the Music Modernization Act (MMA) with the CLASSICS Act, and now both are being called the MMA. The combined package has already passed the House and is now pending in the Senate. The original MMA simply created a new system for compensating songwriters and music publishers for songs played on digital streaming services. CLASSICS, on the other hand, attaches new federal rights and penalties to sound recordings made before 1972, but doesn't apply the federal rules about copyright term to those recordings. Most of those sound recordings won't be in the public domain until 2067, meaning some will have copyrights lasting 144 years or more. And CLASSICS leaves the long state copyright terms in place, but creates a national law to collect money on them. This complicated way of approaching copyright is the new frontier of assaults on the public domain, and members of Congress should be told not to fall for it. Please tell your elected officials not to vote for bills that keep music under the control of a few legacy companies while giving nothing back to the public.

Government Agencies Eliminating Drones with No Proper Procedures

When government agencies refuse to let the public know what they're doing and where, drones can be an important tool to hold them accountable. However, the Preventing Emerging Threats Act of 2018 (S. 2836) would authorize the Department of Justice and the Department of Homeland Security to "track," "disrupt," "control," "seize or otherwise confiscate," or even "destroy" unmanned aircraft that pose a "threat" to certain facilities or areas in the U.S. The bill also authorizes the government to "intercept" or acquire communications around the drone for these purposes, which could be read to include capturing video footage sent from the drone. This expansion of powers does not require these agencies to follow the Wiretap Act, the Electronic Communications Privacy Act, or the Computer Fraud and Abuse Act before they take down a drone. And it appears that some legislators will try to get this language passed however they can. This measure raises large First and Fourth Amendment concerns that must be addressed. Please ask your members of Congress not to extend the broad authority to destroy or hack drones to the Department of Justice and the Department of Homeland Security.

Transparency Win: Federal Circuit Makes Briefs Immediately Available to the Public (Mi, 01 Aug 2018)
In a modest victory for public access, the Federal Circuit has changed its policies to allow the public to immediately access briefs. Previously, the court had marked briefs as “tendered” and withheld them from the public pending review by the Clerk’s Office. That process sometimes took a number of days. EFF wrote a letter [PDF] asking the court to make briefs available as soon as they are filed. The court has now changed its policies to allow immediate access. Earlier this month, the Federal Circuit announced a new compliance review policy. While that new policy [PDF] doesn’t specifically mention the practice of withholding briefs as “tendered,” we have confirmed with the Clerk’s Office that briefs are now available on PACER as soon as they are filed. Our review of recent dockets also suggests that briefs are now available to the public right away.  While this is perhaps a small change, we appreciate that the Federal Circuit is making briefs available upon filing. We had encountered delays of 7 days or more (this meant that the parties’ briefs were sometimes not available until after supporting amicus briefs were due). Ultimately, the public’s right of access to courts includes a right to timely access. The Federal Circuit is the federal court of appeal that hears appeals in patent cases from all across the country, and many of its cases are of interest to the public at large.  The Federal Circuit’s former practice was at odds with its other actions on transparency. The court has issued rulings making it clear that it will only allow material to be sealed for good reason. Federal Circuit rules were also recently changed to require parties to file a separate motion if they want to seal more than 15 consecutive words in a motion or a brief. With federal district courts routinely allowing parties to seal records that should be public, we hope that more courts take the Federal Circuit’s lead on public access.

Eight AT&T Buildings and Ten Years of Litigation: Shining a Light on NSA Surveillance (Di, 31 Jul 2018)
Two reporters recently identified eight AT&T locations in the United States—towering, multi-story buildings—where NSA surveillance occurs on the backbone of the Internet. Their article showed how the agency taps into cables, routers, and switches that handle vast quantities of Internet traffic around the world. Published by The Intercept, the report shines a light on the NSA's expansive Internet surveillance network housed inside these sometimes-opaque buildings.

EFF has been shining its own light on NSA Internet surveillance for years with our landmark case, Jewel v. NSA. In more than 10 years of litigation, we've made significant strides. We've had our case dismissed, but we fought the decision and it was reversed on appeal. We've overcome multiple delays. We've forced the NSA to produce evidence about whether our plaintiffs were harmed by mass, warrantless surveillance. And earlier this year, the former NSA director finally submitted a 193-page declaration in response to our questions, in addition to producing thousands of pages of other evidence concerning the NSA's spying program for the court to review. No case challenging NSA surveillance has ever pushed this far.

As the years press on, the picture becomes clear: the NSA's mass surveillance operation is deeply embedded inside our country's Internet and telecommunications infrastructure. Now, thanks to The Intercept's reporting, we have a better idea of where this surveillance takes place. For many of us, it's in our own backyards.

According to The Intercept, the NSA is siphoning data out of eight large AT&T buildings, known as "service node routing complexes," located in Washington, D.C., New York, Atlanta, San Francisco, Dallas, Chicago, Seattle, and Los Angeles. These centers handle not only AT&T's Internet traffic, but also the traffic of other phone and Internet providers. At these complexes, the NSA and AT&T copy and analyze large swaths of domestic and international Internet traffic. This information includes the content of your emails and online chats, along with your browsing history. Some amount of this traffic is processed and stored by the NSA, sometimes for years.

This network of warrantless surveillance, described by The Intercept, is quite familiar to EFF. In 2006, former AT&T technician Mark Klein told our organization about what he believed was NSA surveillance taking place inside one of the company's San Francisco buildings. As he stated in 2006, through his work at AT&T, he learned that the NSA had installed surveillance devices in AT&T facilities in other cities on the West Coast, like Los Angeles and Seattle, just as The Intercept confirmed. In 2012, technical expert J. Scott Marcus reviewed the Klein evidence and offered his opinion on what it all meant. He agreed with Klein that the evidence suggested the NSA had installed surveillance devices in AT&T facilities across the country, including Atlanta, as The Intercept confirmed.
Even in 2012, after reviewing the evidence, Marcus concluded: "AT&T has constructed an extensive—and expensive—collection of infrastructure that collectively has all the capability necessary to conduct large scale covert gathering of IP-based communications information, not only for communications to overseas locations, but for purely domestic communications as well."

Equipped with expert testimony, verified technical diagrams, and investigative reporting like The Intercept's that increasingly bolsters our arguments, EFF's signature lawsuit against NSA surveillance is looking stronger by the day.

Jewel v. NSA in 2018

The new year got off to a promising start. After three deadline delays spanning more than a year, the government's lawyers were finally expected to comply with our "discovery" requests. (These are inquiries for evidence as lawsuits advance towards trial. In many lawsuits, this process can take months. In Jewel v. NSA, simply forcing the government to begin the process took eight years.) But the government missed the first deadline of 2018 and received one last extension. The new deadline to respond to our discovery questions was February 16. The court also told the government that it would need to comply with a previous order to "marshal all evidence" it had about our plaintiffs' ability to prove harm and deliver it directly to the court, away from public view, by April 1.

Finally, the government complied, submitting reams of evidence about its vast surveillance operation. That included declarations from former NSA director Michael Rogers and Principal Deputy Director of National Intelligence Susan Gordon. But yet again, we met obstacles. Much of the submitted evidence is classified, requiring security clearances that our lawyers do not possess. We asked the court for temporary approval to help review the evidence, but our requests were denied. Now, the court will have to sort through this information on its own. However long that takes, we're confident the evidence will show that the NSA has been collecting and searching the communications of millions of innocent Americans for decades.

Despite the government's years-long stonewalling, EFF is committed to continuing its fight against the NSA's mass, warrantless surveillance. Multiple newspapers and publications, like The Intercept, are equally committed. We thank them for investigating and writing stories that confirm what we've said in our Jewel suit, and for continuing to expose the enormous breadth of NSA surveillance to the public.

Related Cases: Jewel v. NSA

County Welfare Office Violated Accountability Rules While Surveilling Benefits Recipients (Di, 31 Jul 2018)
California law is crystal clear: any entity—including government agencies—that accesses data collected by automated license plate readers (ALPRs) must implement a privacy and usage policy. This policy must ensure all use of this sensitive information "is consistent with respect for individuals' privacy and civil liberties." The policy must include a process for periodic audits, and every time the data is looked up, a purpose for the search must be recorded.

From June 2016 until July 2018, the Sacramento County Department of Human Assistance (DHA) failed to abide by these basic legal requirements, according to documents obtained by EFF through the California Public Records Act. The county allowed 22 employees working in the welfare fraud department to search ALPR data collected by other agencies and private companies more than 1,000 times without any of these mandated accountability measures. No policies were written or posted online, as required by law. No audits were conducted. And, according to the logs, no purposes were recorded for the ALPR data searches.

ALPRs are high-speed camera systems that capture images of license plates of vehicles that pass into view. The systems convert the plates into machine-readable numbers and letters, attach the time, date, and GPS coordinates, and upload the information to a searchable database that can be used to establish travel patterns of drivers and visitors to certain locations. ALPR technology collects data on all vehicles, regardless of whether they are connected to criminal activity. (A simplified sketch of the kinds of queries such a database supports appears below.)

Because DHA handles welfare fraud investigations, it is most likely that the data was used to spy on the travel patterns of food stamp and other benefits recipients—i.e. mostly people below the poverty line and disproportionately African-American. However, certain DHA employees used the "stakeout" feature in the ALPR database, which allows the user to view license plates and other information about every single vehicle that visited a particular location, regardless of whether the vehicles' owners were recipients of benefits.

DHA does not operate its own ALPR cameras, and instead accesses data collected by other entities that is shared through a system called LEARN, operated by the private company Vigilant Solutions. Since the spring, EFF and MuckRock have been filing public records requests around the country with Vigilant Solutions customers—public agencies—to reveal who is sharing data with whom. So far, we have identified 19 agencies sharing data with DHA—mostly in California, but also agencies in other states, such as the Cedar Rapids Police Department in Iowa and the Austin Police Department in Texas.

To give a sense of the scale of this data: DHA had access to data collected by the Sacramento County Sheriff's Office and Sacramento Police Department, a combined 196 million license plate scans in 2016-2017. Vigilant Solutions also offers billions of license plate scans collected by commercial operators, such as tow truck drivers. Supporting public records are available at DocumentCloud.org (DocumentCloud's privacy policy applies).

Records show that the 22 employees—all criminal investigators or investigative assistants—in the DHA Program Integrity Division accessed ALPR data through Vigilant Solutions' LEARN system on 1,110 occasions between June 2016 and July 2018. Some employees only ran a single search, while others ran more than 100. One employee ran 214 searches over the course of 20 months.
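To make concrete what these "searches" involve, below is a minimal, hypothetical sketch of the two kinds of queries described above: a plate-history lookup and a location "stakeout." The data model, field names, and functions are our own illustration, not Vigilant Solutions' LEARN system or its API.

```python
# Hypothetical illustration of ALPR queries; not Vigilant Solutions' LEARN API.
from datetime import datetime

# Each scan record: plate text, capture time, and coordinates of the camera.
scans = [
    {"plate": "7ABC123", "time": datetime(2017, 3, 1, 8, 15), "lat": 38.58, "lon": -121.49},
    {"plate": "7ABC123", "time": datetime(2017, 3, 1, 17, 40), "lat": 38.55, "lon": -121.47},
    {"plate": "4XYZ987", "time": datetime(2017, 3, 1, 9, 5), "lat": 38.58, "lon": -121.49},
]

def plate_history(records, plate):
    """Every time and place a given plate was photographed, in time order."""
    return sorted((r for r in records if r["plate"] == plate), key=lambda r: r["time"])

def stakeout(records, lat, lon, radius_deg=0.01):
    """Every vehicle seen near a location, whether or not its driver is a suspect."""
    return [r for r in records
            if abs(r["lat"] - lat) <= radius_deg and abs(r["lon"] - lon) <= radius_deg]

for r in plate_history(scans, "7ABC123"):
    print(r["time"], r["lat"], r["lon"])           # one plate's travel pattern for the day
print([r["plate"] for r in stakeout(scans, 38.58, -121.49)])  # everyone seen at one spot
```

Even this toy version shows why the statute requires a recorded purpose for each lookup: a single query can reconstruct a person's movements, and a "stakeout" query sweeps in every vehicle near a location regardless of any suspicion.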
In emails and phone calls with EFF, DHA officials acknowledged that the agency did not know that, as an “end-user,” it had any legal obligations regarding the use of ALPR data. After receiving our request, the agency spent a week creating a policy and then uploaded it to its website. The policy now includes a process for monthly audits. EFF obtained contracts and invoices between DHA and Vigilant Solutions that show the county paid a little more than $10,000 for access to the data, while bypassing the competitive bidding process. DHA also signed an agreement forbidding the agency from talking to the media about the ALPR program without Vigilant Solutions’ written permission. The agency also agreed not to use information about Vigilant Solutions in “any manner that is disparaging.” The use of powerful ALPR data is disproportionate to the need. As Sacramento DHA officials acknowledged in 2013: “the percentage of fraud cases is statistically low.” In 2012, DHA found fraud in only about 0.25% of all welfare cases: 500 of the roughly 8,000 fraud referrals, out of nearly 200,000 people receiving assistance in Sacramento. EFF calls on Sacramento County to cancel this program immediately and launch an internal investigation to determine the extent to which privacy and civil liberties were violated and what disciplinary measures are appropriate for failing to comply with state law. The Sacramento County Board of Supervisors should further pass an ordinance requiring all surveillance technology acquisitions and associated use policies to come up for a full, public vote before being approved. Other local governments in the region have already passed such ordinances, including Santa Clara County and the cities of Oakland, Berkeley, and Davis. The county may believe it’s a priority to investigate individuals for breaking welfare rules, but it must hold its own department accountable when it breaks the law too. Related Cases: Automated License Plate Readers (ALPR)

Stupid Patent of the Month: Upaid Sues “Offending Laundromats” For Using Prepaid Cards (Di, 31 Jul 2018)
When patent trolls threaten and sue small businesses, their actions draw the public’s attention to the worst abuses of the patent system. In 2013, a company called MPHJ Technology got called out in a U.S. Senate hearing as a “bottom feeder” engaged in “garden-variety extortion” after it sent out thousands of demand letters seeking payments from small businesses that dared to use printers with “scan-to-email” functions. Lawmakers, understandably, found it incomprehensible that broad, stupid patents were being used to sue burger stands and grocery stores. There’s a good reason for that concern. It’s hard to see how lawsuits against small businesses using basic technology do anything to “promote the progress of science and the useful arts.” By contrast, it is easy to see how these lawsuits harm companies and consumers by increasing the costs and risks of doing business. But the intermittent public attention hasn’t stopped this most basic abuse of the patent system. Upaid Ltd., a shell company based in the British Virgin Islands, has been filing patent infringement lawsuits throughout 2018, including 14 against laundromats—yes, laundromats—from California to Massachusetts. Upaid says that laundromats are infringing U.S. Patent No. 8,976,947. Claim 1 of the patent describes a computer system that performs “pre-authorized communication services and transactions,” after checking an account to see if a user “has a sufficient amount currently available for the … transaction.” It’s essentially a patent on having a prepaid account for—well, anything. Right now, Upaid lawyers are focused on systems run by Card Concepts, Inc., a service provider that markets a system called Laundry Card to laundromats. Many of Upaid’s complaints simply point to online photos of the laundromats and the relevant card dispensers as evidence of infringement. This incredibly broad patent was granted in 2015, but dates to a series of applications stretching back to 1998. Even in 1998, a prepaid account was not an inventive concept. It’s a basic and longstanding idea that isn’t improved by adding verbiage about a “plurality of external networks” and a “computer readable medium.” And that’s exactly the argument that lawyers for Card Concepts Inc. made [.pdf] when they got sued by Upaid last year. CCI has rightly argued that the patent should be invalidated as abstract under the Alice decision. CCI’s motion may well succeed in defending their customers—at some point. Meanwhile, though, Upaid has unleashed 14 lawsuits against laundromats in different states, and has promised more. Faced with the prospect of paying a lawyer, even if just to buy time, some of those small businesses are likely to pay unjustified licensing fees for this patent. In fact, it has begun to happen. Last week, Upaid put out a press release boasting that a Houston-based facility called 24 Hour Laundry had agreed to pay them. Laundromats in Kansas, Massachusetts, and Monterey, California are next up on the list. “When required, we will strenuously enforce our rights through litigation against offending laundromats,” warned Upaid CEO Simon Joyce. “Our recent settlement reveals that many parties are not aware that the card equipment critical to their successful laundry business infringes our patents.” Upaid’s behavior is brazen, but it is not an anomaly. Other patent trolls have waged campaigns against small businesses that merely use off-the-shelf technology.
For example, Innovatio IP Ventures sent thousands of letters targeting hotels and cafes that provide Wi-Fi for customers. In Upaid’s case, the company’s website doesn’t list any products or services, but states that it is engaged in “ongoing development” of “intellectual property related to mobile commerce systems.” Lawsuits against small, non-technology businesses show how trolls exploit the patent system. The costs to challenge a wrongly granted patent are high—defending a patent lawsuit through a jury trial can cost millions of dollars. Faced with the possibility of that kind of “winning,” small businesses will often fold. Yet this year, patent maximalists are actually talking about rolling back the key changes to patent law that give small businesses a fighting chance. The Alice Corp. v. CLS Bank decision has stopped hundreds of “do it on a computer” style patents in their tracks. Meanwhile, inter partes review, a process that can get wrongly issued patents thrown out at a lower cost, is also under attack. Instead of considering patent bills that move in exactly the wrong direction, like last year’s STRONGER Patents Act, Congress should consider legislation focused on protecting the smallest businesses from being roped into unjustified and expensive patent disputes.

Sextortion Scam: What to Do If You Get the Latest Phishing Spam Demanding Bitcoin (Di, 31 Jul 2018)
You may have arrived at this post because you received an email from a purported hacker who is demanding payment or else they will send compromising information—such as pictures sexual in nature—to all your friends and family. You’re searching for what to do in this frightening situation. Don’t panic. Contrary to the claims in your email, you haven't been hacked (or at least, that's not what prompted that email). This is merely a new variation on an old scam that is popularly being called "sextortion." This is a type of online phishing that is targeting people around the world and preying on digital-age fears. We’ll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom. We have pasted a few examples of these emails at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually in bitcoin. What makes the email especially alarming is that, to prove their authenticity, they begin the emails by showing you a password you once used or currently use. Again, this still doesn't mean you've been hacked. The scammers in this case likely matched up a database of emails and stolen passwords and sent this scam out to potentially millions of people, hoping that enough of them would be worried enough to pay out that the scam would become profitable. EFF researched some of the bitcoin wallets being used by the scammers. Of the five wallets we looked at, only one had received any bitcoin: about 0.5 bitcoin in total, or roughly $4,000 at the time of this writing. It’s hard to say how much the scammers have received in total at this point since they appear to be using different bitcoin addresses for each attack, but it’s clear that at least some people are already falling for this scam. Here are some quick answers to the questions many people ask after receiving these emails. They have my password! How did they get my password? Unfortunately, in the modern age, data breaches are common and massive sets of passwords make their way to the criminal corners of the Internet. Scammers likely obtained such a list for the express purpose of including a kernel of truth in an otherwise boilerplate mass email. If the password emailed to you is one that you still use, in any context whatsoever, STOP USING IT and change it NOW! And regardless of whether or not you still use that password, it's always a good idea to use a password manager. And of course, you should always change your password when you’re alerted that your information has been leaked in a breach. You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps (there is a short, illustrative script for this kind of check at the end of this post). Should I respond to the email? Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally, this isn't that much different from the old Nigerian prince scam, just with a different hook. By default they expect most people will not even open the email, let alone read it. But once they get a response—and a conversation is initiated—they will likely move into a more advanced stage of the scam. It’s better to not respond at all. So, I shouldn’t pay the ransom?
You should not pay the ransom. If you pay the ransom, you’re not only losing money but you’re encouraging the scammers to continue phishing other people. If you do pay, then the scammers may also use that as a pressure point to continue to blackmail you, knowing that you’re susceptible. What should I do instead? As we said before, definitely stop using the password that the scammer used in the phishing email, and consider employing a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever that is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online. One other thing to do to protect yourself is to apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do. We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good password hygiene going forward! Example 1 I am aware one of your passphrase: password. Lets get directly to point. Not a single person has compensated me to investigate about you. You do not know me and you are probably wondering why you're getting this e mail?actually, I actually installed a software on the adult vids (sex sites) site and you know what, you visited this web site to have fun (you know what I mean). When you were viewing videos, your internet browser initiated working as a Remote control Desktop that has a key logger which provided me access to your display screen and also web cam. Right after that, my software program collected your complete contacts from your Messenger, FB, and email . After that I created a double-screen video. 1st part shows the video you were viewing (you've got a good taste haha . . .), and 2nd part shows the view of your webcam, and its u. You do have only 2 alternatives. We are going to understand these types of choices in aspects:1st solution is to disregard this message. In this case, I am going to send your actual video clip to just about all of your contacts and thus you can easily imagine about the disgrace you feel. Not to mention should you be in a relationship, just how it will eventually affect?Number two choice will be to pay me $3000. We will think of it as a donation. As a consequence, I most certainly will without delay eliminate your videotape. You will keep going on your daily life like this never happened and you will not hear back again from me.You'll make the payment through Bitcoin (if you do not know this, search for "how to buy bitcoin" in Google). Example 2 Hi, victim.I write yоu becаusе I put а mаlware оn the wеb раge with porn whiсh yоu hаve visitеd.My virus grаbbed all your рersonal infо аnd turnеd on yоur сamеrа which сaрtured the рroсеss оf your onаnism. Just aftеr that the soft savеd yоur соntaсt list.I will dеlеte thе сompromising video and infо if you pаy me 999 USD in bitcoin. This is address fоr рaymеnt : 1K2jNTLdbHEwaALQWKMeGoKLWD67Cb6q8BI give yоu 30 hоurs aftеr you ореn my mеssаge for making the trаnsactiоn.As sоon аs yоu reаd the mеssаgе I'll see it right awаy.It is nоt necessary tо tell mе thаt you hаve sеnt money to me.
This address is соnneсtеd tо yоu, my systеm will dеlete еverything automаtically aftеr trаnsfer соnfirmаtiоn.If yоu nееd 48 h just reрly оn this letter with +.Yоu сan visit thе pоlicе stаtion but nobоdy cаn hеlp yоu.If you try to dеceive mе , I'll sеe it right аway !I dont live in yоur соuntry. So they саn nоt track my lосаtiоn evеn for 9 months.Goodbyе. Dоnt fоrget аbоut thе shame and tо ignore, Yоur life can be ruined. Example 3 𝕨hat's up.If you were more vigilant while playing with yourself, I wouldn't worry you. I don't think that playing with yourself is very bad, but when all colleagues, relatives and friends get video record of it- it is obviously for u.I adjusted virus on a porn web-site which you have visited. When the victim press on a play button, device begins recording the screen and all cameras on your device starts working.мoreover, my program makes a dedicated desktop supplied with key logger function from your device , so I could get all contacts from ya e-mail, messengers and other social networks. I've chosen this e-mail cuz It's your working address, so u should read it.Ì think that 730 usd is pretty enough for this little false. I made a split screen vid(records from screen (u have interesting tastes ) and camera ooooooh... its awful ᾷF)Ŝo its your choice, if u want me to erase this сompromising evidence use my ƅitсȯin wᾷllеt aďdrеss-  1JEjgJzaWAYYXsyVvU2kTTgvR9ENCAGJ35 Ƴou have one day after opening my message, I put the special tracking pixel in it, so when you will open it I will know.If ya want me to share proofs with ya, reply on this message and I will send my creation to five contacts that I've got from ur contacts.P.S... You can try to complain to cops, but I don't think that they can solve ur problem, the investigation will last for several months- I'm from Estonia - so I dgf LOL Example 4 I know, password, is your pass word. You may not know me and you're most likely wondering why you are getting this e mail, correct?In fact, I placed a malware on the adult vids (porn material) web-site and you know what, you visited this website to have fun (you know what I mean). While you were watching video clips, your internet browser initiated operating as a RDP (Remote Desktop) that has a keylogger which provided me access to your screen and also webcam. Immediately after that, my software program gathered your entire contacts from your Messenger, social networks, as well as email.What did I do?I made a double-screen video. 1st part shows the video you were watching (you have a good taste lmao), and 2nd part shows the recording of your webcam.exactly what should you do? Well, I believe, $2900 is a fair price for our little secret. You'll make the payment by Bitcoin (if you don't know this, search "how to buy bitcoin" in Google).BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge(It is cAsE sensitive, so copy and paste it)Note:You have one day in order to make the payment. (I have a specific pixel in this email message, and at this moment I know that you have read through this email message). If I do not get the BitCoins, I will definitely send out your video recording to all of your contacts including family members, coworkers, etc. However, if I do get paid, I'll destroy the video immidiately. If you want to have evidence, reply with "Yes!" and I will certainly send out your video to your 14 contacts. This is the non-negotiable offer, so please don't waste my personal time and yours by responding to this email message.
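As a practical aside on the password advice above: you can check whether a password has shown up in known breaches without ever sending the password itself anywhere. The sketch below is a minimal illustration (plain Python 3, standard library only) that queries the public Have I Been Pwned "Pwned Passwords" range API; only the first five characters of the password's SHA-1 hash ever leave your machine. It assumes that public API remains available in its current form, and it is no substitute for a password manager and unique passwords.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach dumps.

    Uses the Have I Been Pwned "Pwned Passwords" range API with k-anonymity:
    only the first five hex characters of the SHA-1 hash are sent over the
    network, never the password itself.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # The response is a list of "HASH-SUFFIX:COUNT" lines for that prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    import getpass
    pw = getpass.getpass("Password to check: ")
    hits = pwned_count(pw)
    print(f"Seen in breaches {hits} times." if hits else "Not found in known breaches.")
```

If a password comes back with any hits at all, treat it as burned: change it everywhere you use it.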

Moving Your Site From "Not Secure" to Secure (Mo, 30 Jul 2018)
Maybe you’re a beginner to web development, but you’ve done the hard work: you taught yourself what you needed to know, and you’ve lovingly made that website and filled it with precious content. But one last task remains: you don’t have that little green padlock with the word “secure” beside your website’s address. You don’t yet have that magical “S” after “HTTP”. You might have heard or noticed recently that something is different on Google Chrome: if your website does not have an HTTPS certificate, your visitors will see a warning on your pages, cautioning them about your page’s security. This is because the Google Chrome browser is now marking unencrypted websites that don’t provide HTTPS as “Not Secure.” If you want to mark your website as secure; retain visitors to your website and boost search engine optimization; provide privacy to your site visitors; keep out nosey neighbors peeping on your and your users’ connections; prevent malicious actors from tampering with content on your site; prove that your site is not being impersonated (or prevent some malicious actor from pretending to be you); and do this all for free, then this post about getting an HTTPS certificate is for you! If transport-layer security, certificate authorities, and HTTPS are new concepts for you, check out this comic from How HTTPS Works: https://howhttps.works/. The details about how to enable HTTPS on your site depend crucially on your hosting environment. Depending on the provider and software your site is hosted with, HTTPS setup could range anywhere from automatic, to a single click, to impossible (if your hosting provider specifically doesn’t allow HTTPS). For many website owners, the most challenging or unfamiliar step in enabling HTTPS is getting a certificate, a document issued by a publicly-trusted certificate authority. A valid certificate is required for browsers to confirm that encrypted connections to your site are secure. EFF helped create a free, automated, publicly-trusted certificate authority called Let’s Encrypt, which is now the most-used certificate authority on the web. In this post, we’re going to provide advice about the process of getting a certificate from Let’s Encrypt. It’s a convenient option in many cases because it doesn’t charge money for certificates, its certificates are accepted by all mainstream browsers, and the renewal process can often be automated with EFF’s tool Certbot. There are also many other certificate authorities (CAs), which have different policies and procedures for getting certificates. Most will expect you to pay for a certificate unless you have some other relationship with them (for example, through a university that gets free certificates from a particular CA, or if you use a web host that has a commercial relationship with a CA to let subscribers get certificates at no additional charge). For most purposes, you won’t get a different level of privacy or security protection by choosing one CA rather than another, so you can choose whichever public CA you conclude best meets your needs. We’ve compiled some resources that we’re sharing here for beginners who are new to getting their own HTTPS certificates from the Let’s Encrypt Certificate Authority. This blog post isn’t a full tutorial, but it is intended to help you get started on the journey to get an HTTPS certificate: (1) find whether your web hosting provider already provides free HTTPS certificates; (2) confirm with your web hosting provider to see what options are available for HTTPS;
(3) learn what system and software your server uses; (4) troubleshoot until you find an appropriate tutorial to get HTTPS certificates for your site; and (5) check that HTTPS is working! We’re trying to improve this process to encrypt the web. When Let’s Encrypt first launched in 2016, only 40% of website connections were encrypted. Today, that number is as high as 73%. Help websites get to 100% encryption and make the Internet more secure for everyone. 1. Find whether your web hosting provider already provides free HTTPS certificates. Do you use Tumblr? GitHub Pages? Weebly? Or a variety of other hosting providers? There’s a chance that your web host already provides an option to obtain a certificate automatically, either from Let’s Encrypt or a different CA. Check if this is already described in your web host’s site or administrative interface. You can also check if they’re on this master list of web hosts supporting Let’s Encrypt, and if they have up-to-date instructions. If you find your web host on the list of supported providers, or you already know that it has a tutorial or guide for using its HTTPS support, follow their instructions for enabling HTTPS on your site. If your host is not supported, proceed below. 2. Confirm with your web hosting provider to see what options are available for HTTPS. See if your site administration page has an option to enable HTTPS. A lot of providers—including many that aren't on that community list—use software like cPanel on some of their hosting plans to let subscribers configure their hosting services. cPanel normally has a feature to let the subscriber automatically get a certificate for free (which may be either from Let's Encrypt or another CA). Some of cPanel's competitors, such as Plesk, also have this configurable option. However, some hosts may be running outdated software or have deliberately disabled the ability to get a free certificate. Get in touch with your provider and ask about their options for HTTPS support. Many providers are already working on making HTTPS available or may already provide an HTTPS feature, so it’s worth contacting them to ask whether this might be an option. “Dear [company], I would like to obtain a free HTTPS certificate for my site. I was wondering if this is already in the works? Thank you.” Your provider may then be able to guide you about whether your hosting plan allows you administrative access to the server (in which case a tool like Certbot may be relevant for you). See the next step if this is your circumstance. 3. Learn what system and software your server uses. If your hosting provider doesn’t integrate Let’s Encrypt but you do have administrative access to your server, you can use software to obtain and install a certificate. This depends on what software your web server is running and what operating system your server runs on. If the above sounds like unfamiliar jargon and you’re not sure what software or system you’re using, don’t worry! You can email your web host to get that information. Try using the following language in an email to your web host (adapted from Matt Mitchell). “Dear [company], I am using your hosting service. I’m interested in using Certbot to get a free certificate from Let’s Encrypt. Can you send me the support webpage on how to do this? In particular, I’m wondering how I can SSH into your server from my computer. I need to know what software the server is using, and what system the server is on.
Thank you.” If you know what software and operating system your web server is on and know how to use the command line, Certbot might be a good tool for you. Check EFF’s Certbot site to generate instructions for getting Let’s Encrypt certificates on Unix servers that you administer. If you don’t see your server’s software and operating system reflected on Certbot, or are unable to get a certificate by following the Certbot instructions for your configuration, proceed to step 4. 4. Troubleshoot until you find an appropriate tutorial to get HTTPS certificates for your site. This is the messy part: there are many, many tutorials out there for many possible situations. If you’re new to using the command line, we recommend calling a friend with experience configuring a Let’s Encrypt certificate on their own site to help. Be prepared to copy and paste error messages, and spend some time troubleshooting. Try checking the service https://letsdebug.net/ for an analysis of your setup that can help point out a number of common problems. Try searching the Let’s Encrypt Community Forum for similar questions. If you don’t find the answer in the community’s responses, try submitting your own question to the Let’s Encrypt Community Forum, or calling a friend. Some other things to look for as you set up HTTPS include: Get the certificate to renew automatically (Let’s Encrypt certificates expire after 90 days). This means you won’t have to go through the pain of configuring a new HTTPS certificate manually, or leave your site showing an expired certificate warning in web browsers if you forget to repeat these steps three months from now. Redirect your sites to HTTPS by default, so that visitors don’t fall back to the unencrypted HTTP connection. Check with your site host if a wildcard certificate is available for you. This just means that it’ll apply to all your sites that are subdomains of the same domain (if the domain is “example.com”, the subdomains “transactions.example.com” and “email.example.com” will be covered by a “*.example.com” wildcard certificate). Once you’ve found a tutorial and enabled HTTPS, you’re almost there! 5. Check that HTTPS is working! Now, visit your site in your own browser and troubleshoot the HTTPS configuration for your site to make sure it’s working. If you have problems, some resources include: For checking the certificate itself: https://www.ssllabs.com/ssltest/index.html For checking the reason for security error messages in your browser: https://www.whynopadlock.com/
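If you'd rather sanity-check things from the command line than from a web service, a short script can cover the basics of step 5. The sketch below is a minimal illustration (plain Python 3, standard library only; "example.com" is a placeholder you would swap for your own domain): it opens a verified TLS connection to report how long the certificate has left, then confirms that plain-HTTP requests end up redirected to HTTPS. It is not a replacement for the fuller checks at SSL Labs.

```python
import socket
import ssl
import urllib.request
from datetime import datetime, timezone

HOSTNAME = "example.com"  # placeholder: replace with your own domain

def days_until_expiry(hostname: str) -> int:
    """Open a verified TLS connection and return how many days the certificate has left."""
    context = ssl.create_default_context()  # uses your system's trusted CA store
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def redirects_to_https(hostname: str) -> bool:
    """Request the plain-HTTP version of the site and see where we end up."""
    final_url = urllib.request.urlopen(f"http://{hostname}/", timeout=10).geturl()
    return final_url.startswith("https://")

if __name__ == "__main__":
    print(f"{HOSTNAME}: certificate expires in {days_until_expiry(HOSTNAME)} days")
    print(f"{HOSTNAME}: HTTP redirects to HTTPS" if redirects_to_https(HOSTNAME)
          else f"{HOSTNAME}: still serving plain HTTP")
```

If the first call raises a certificate verification error, that by itself is a useful signal that something in your certificate chain or configuration needs attention.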

Fair-Use Champion Stephanie Lenz, European Digital Rights Leader Joe McNamee, and Groundbreaking Content-Moderation Researcher Sarah T. Roberts Win EFF’s Pioneer Awards (Mo, 30 Jul 2018)
Government Whistleblower Daniel Ellsberg Keynotes September 27th Ceremony, Dedicated to EFF Co-Founder John Perry Barlow San Francisco – The Electronic Frontier Foundation (EFF) is honored to announce the winners of its 2018 Pioneer Awards: fair use champion Stephanie Lenz, European digital rights leader Joe McNamee, and groundbreaking content moderation researcher Sarah T. Roberts. The ceremony will be held September 27th in San Francisco. This year’s Pioneer Awards will be dedicated to Internet visionary and EFF co-founder John Perry Barlow, who died earlier this year. EFF has renamed the statuette awarded to winners the “Barlow” in recognition of the indelible mark he left on digital rights. The keynote speaker for this year’s ceremony will be one of Barlow’s many friends, Daniel Ellsberg. Ellsberg co-founded the Freedom of the Press Foundation with Barlow, and is known for his years of work advocating for government transparency, including his release of the Pentagon Papers. Tickets for the Pioneer Awards are $65 for current EFF members, or $75 for non-members. Stephanie Lenz’s activism over a home video posted online helped strengthen fair use law and brought nationwide attention to copyright controversies stemming from new, easy-to-use digital movie-making and sharing technologies. It all started in 2007, when Lenz posted a 29-second YouTube video of her then-toddler-aged son dancing while Prince’s song “Let's Go Crazy” played in the background. Universal Music Group used copyright claims to get the link disabled, and with the assistance of EFF, Lenz sued UMG for the bogus takedown. After more than 10 years of litigation, the case finally ended earlier this year, with many wins along the way establishing fair use as an affirmative public right. Lenz lives in western Pennsylvania with her family, and is the managing editor and a founder of Toasted Cheese, one of the earliest exclusively-online literary journals. Joe McNamee, Executive Director of European Digital Rights (EDRi), claimed a space in Brussels and the heart of the European Union for digital fundamental rights to be heard. EDRi has fought excessive copyright regulations in the EU—most recently against Articles 13 and 11. EDRi has also worked for Europe’s net neutrality rules, against privatised law enforcement, and was instrumental in the bruising lobbying battle over the GDPR, the “General Data Protection Regulation” that increased digital privacy for people in Europe and beyond. McNamee joined EDRi in 2009, at a time when there were no digital rights advocacy groups based in Brussels, despite the importance of EU decision-making for global digital freedom. During the nine years since, EDRi has grown to become an established part of digital rights policy-making. Prior to joining EDRi, McNamee worked for eleven years on Internet policy, including for the European Internet Services Providers Association. He started his Internet career working on the CompuServe UK helpdesk in 1995. Sarah T. Roberts coined the term “commercial content moderation” (CCM), and her research has been key to understanding how social media companies farm out content takedown decisions to low-wage laborers globally. 
Roberts has spent the past eight years identifying, describing, and documenting how people from Mountain View to Manila screen user-generated Internet content to see if it meets various platforms’ often opaque guidelines, demonstrating the effect that this work has on free expression as well as on the mental and physical health of the screeners. Currently a researcher and Assistant Professor in the Department of Information Studies at UCLA, Roberts is preparing a monograph for Yale University Press based on her findings, due to be published in 2019. She is also the recipient of a 2018 Carnegie Fellowship to support her ongoing work. “We need an Internet that is free for us to discuss, debate, share, and celebrate what’s going on in our lives and around the world,” said EFF Executive Director Cindy Cohn. “Over the years we’ve seen this freedom threatened by bad laws, terrible corporate policies, and invasive tracking. Stephanie, Joe, and Sarah have all worked many years to make the Internet a better place for us all, and we are thrilled to honor them this year.” Awarded every year since 1992, EFF’s Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Chelsea Manning, Vint Cerf, Laura Poitras, and the Mozilla Foundation. Sponsors of the 2018 Pioneer Awards include Anonyome Labs, Dropbox, Gandi.net, and Ron Reed. To buy tickets to the Pioneer Awards: https://www.eff.org/awards/pioneer/2018

Google Chrome Now Marks HTTP Sites "Not Secure" (Mo, 30 Jul 2018)
Last week, the movement to encrypt the web achieved another milestone: Google’s Chrome browser made good on its promise to mark all HTTP sites “not secure.” EFF welcomes this move, and we are calling on other browsers to follow suit. This is the latest in the web’s massive shift from non-secure HTTP to the more secure, encrypted HTTPS protocol. All web servers use one of these two protocols to get web pages from the server to your browser. HTTP has serious problems that make it vulnerable to eavesdropping and content hijacking. HTTPS fixes most of these problems. That’s why EFF and others have been working to encourage websites to offer HTTPS by default. And browsers have been an important part of the equation to push secure browsing forward. Last year, Chrome and Firefox started showing users “Not secure” warnings when HTTP websites asked them to submit password or credit card information. And last October, Chrome expanded the warning to cover all input fields, as well as all pages viewed over HTTP in Incognito mode. Chrome’s most recent move to show “not secure” warnings on all HTTP pages reflects an important, ongoing shift in user expectations: users should be able to expect HTTPS encryption—and the privacy and integrity it ensures—by default. Looking ahead, Chrome plans to remove the “Secure” indicator next to HTTPS sites, indicating that encrypted HTTPS connections are increasingly the norm (even on sites that don’t accept user input). For website owners and administrators, these changes come at a time when offering HTTPS is easier and cheaper than ever thanks to certificate authorities like Let’s Encrypt. Certificate Authorities (CAs) issue signed, digital certificates to website owners that help web users and their browsers independently verify the association between a particular HTTPS site and a cryptographic key. Let's Encrypt stands out because it offers these certificates for free and in a manner that facilitates automation. And, with EFF’s Certbot and other Let’s Encrypt client applications, certificates are easier than ever for webmasters and website administrators to get. What Website Owners and Users Can Do If you’re a website owner or administrator new to getting your own HTTPS certificate, check out these resources for moving your site from “not secure” to secure. If you're a user, you can take steps to protect your browsing. Download HTTPS Everywhere to make sure your browser uses an encrypted HTTPS connection wherever possible.
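If you're curious to see that CA-issued verification for yourself, you can pull up the certificate a site presents and look at who issued it. The snippet below is only an illustration (Python 3 standard library, with www.eff.org as the example host; any HTTPS site would do), not part of Chrome or Certbot.

```python
import socket
import ssl

HOSTNAME = "www.eff.org"  # any HTTPS site works; this one is just an example

# Open a verified TLS connection and inspect the certificate the server presents.
context = ssl.create_default_context()  # trusts the publicly-trusted CAs on your system
with socket.create_connection((HOSTNAME, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        cert = tls.getpeercert()

# "subject" names the site the certificate was issued to; "issuer" is the
# certificate authority (often Let's Encrypt these days) vouching for that binding.
subject = dict(pair[0] for pair in cert["subject"])
issuer = dict(pair[0] for pair in cert["issuer"])
print("Issued to:", subject.get("commonName"))
print("Issued by:", issuer.get("organizationName"), "-", issuer.get("commonName"))
```

Your browser performs the same kind of check, plus signature verification against its list of trusted CAs, every time it shows you an HTTPS connection.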

Defending Users: Initial Ideas for Cryptocurrency Exchanges, Payment Processors, and Other Choke Points Within the Blockchain Ecosystem (Mo, 30 Jul 2018)
The blockchain ecosystem has drastically changed over the last nine years, and the realities of today don’t closely resemble how many early enthusiasts imagined Bitcoin would evolve. People are no longer mining Bitcoin on their home laptops, and most people aren’t storing private keys on their own hard drives and then sending Bitcoin directly to friends and merchants. Instead, we’ve seen the rise of companies building software that handles these and other tasks on behalf of users. At the same time, creators are developing dozens of new tools to interact with the Bitcoin blockchain and many alternative blockchains. This in turn has inspired a wide array of new companies that mine, store, and exchange these alternative coins. The result? Many users in the cryptocurrency space have traded banks and credit card networks for cryptocurrency exchanges, wallet providers, payment processors, and other software tools and companies that are relatively young and untested. Each of these stakeholders sets policies for how and when they’ll allow cryptocurrency storage or exchange, who is allowed to have an account, how and when accounts can be frozen, and how they’ll react to government regulation and demands for user data. While blockchain protocols may be designed to favor censorship-resistance and autonomy, the real-world experiences of most of the users of cryptocurrencies are dictated by the policies of a few, centralized corporate intermediaries. This post addresses policy concerns that companies and startups in this space should be thinking about early in their development. It speaks specifically to startups within the larger blockchain ecosystem that work at the transactional layer—including the multitude of tools, businesses, and services being created atop distributed ledgers, such as communications platforms, methods for tracking assets, smart contracts, and other innovative projects within the blockchain space. It isn’t aimed at projects working only at the protocol layer, since projects focused on developing decentralized protocols are dedicated (if they are genuine) to solving the issues of centralization and power that we'll be addressing in this post. Some of the ideas in this post may also apply to other projects within the larger decentralized web space, where ideals of autonomy and decentralization are running up against the practical realities of businesses bringing products to market in a way that requires minimal effort from users. The problem with centralization is that it creates points of potential failure. It can mean that a small number of entities can be more easily pressured to censor speech or spy on users. Governments frequently apply that pressure, pushing companies either directly through regulation or legal demands or indirectly through scrutiny, threats, or requests for assistance. Pressure can also include the government asking companies to build backdoors into their software to facilitate surveillance, requiring companies to shut down specific accounts or types of accounts, asking companies to keep open or freeze certain accounts or types of accounts, requiring or requesting detailed data on users, and more. Make no mistake: governments are far from the only parties that can pressure a blockchain startup. Pressure may come from investors, advertising and business partners, outside advocacy groups, external pundits, users of the service, and even from people who work at the company itself.
Blockchain companies that host content, open user accounts, and hold customer funds should take great care about when and how they’ll cave to pressures. The networks they are built on top of may be decentralized and censorship-resistant but they—as exchanges, merchant processors, or hosted wallet providers—are powerful choke points, capable of betraying the trust of their users. It’s vital for leaders within these companies to examine their values and philosophies early on, before there is an emergency or significant public pressure. This will allow for ample opportunity to discuss how the company can stand up for users, where it will embrace transparency, and how it will fulfill legal and ethical obligations to respond to government requests, all while there is still time for a nuanced and thoughtful analysis. We encourage leaders at blockchain startups to consider the unique challenges facing their own company, and to stretch themselves to take affirmative, strong steps to defend civil liberties in writing their policies. While there are countless ways blockchain startups could consider user rights when developing policies, we offer the following initial concepts as a good starting place: Transparency reports Transparency reports are public reports from a company providing an overview of how many government requests the company received in a set period of time (such as a year). A company may also include other details, such as how many requests it complied with, how many accounts were affected, and any requests to censor or take down accounts. Transparency reports have become standard practice among Internet giants like Google, Facebook, and Twitter. If blockchain startups want to merit the trust of users, they should be embracing at least that level of transparency. Applying the practice of transparency reporting to the blockchain space takes some creativity and flexibility. For example, many blockchain companies that store or transfer cryptocurrency on behalf of users may be required to file Suspicious Activity Reports with the U.S. government. While not traditionally in a transparency report, including information about how many such reports are filed would be vital for the public’s understanding of the company’s relationship to the government. At minimum, any company that has to file such reports should clearly and prominently tell their users that they are required by law to provide user data to the government and explain the circumstances under which they do so. Other companies may rarely get demands for user data but could face less official requests to shut down certain accounts. Finding creative ways to best reflect these types of censorship requests—especially if the company complies with them—could help draw attention to the ways in which blockchain companies are facing pressure to stifle and surveil users. Notifying users of government requests When the government seeks access to user data, sometimes the user herself is the last to find out. That’s why EFF has long applauded policies that commit to notifying users when the government seeks access to their data. For example, in 2011 Twitter successfully fought for the right to tell Twitter users that the government was seeking access to their data as part of its WikiLeaks investigation. From this decision, we were able to learn that Birgitta Jónsdóttir was a target of this government data demand, and EFF took her as a client and fought for her privacy in the case. 
Since then, we’ve been urging other companies to adopt similar policies of informing users about government data demands. The gold standard is to tell users before a company surrenders the data, with enough time that the user can secure legal counsel and challenge the request in court. There are exceptions to this—such as emergencies where someone’s in grave physical danger, or when an account has been compromised and so notice would be useless—but even when notice can’t be provided to users before data is shared with the government, it’s still a best practice to promise to notify a user after an emergency has ended. There will also be times when a company can’t provide notice because of a gag order, or because doing so would violate another law. Many blockchain companies may think they don’t have user data that may be of interest to the government, especially because they may think that the most important data related to their application is already shared publicly in a blockchain. But there are many types of data and metadata that could attract attention. Do you have a list of the email addresses or device IDs of everyone who has downloaded your app? Do you have IP addresses in web server logs? Do you store records about who has used your service to submit or query specific pieces of data from a blockchain? If someone accesses your service from a mobile device, will you collect data about their geolocation or what services they accessed and when? And if you host communications, don’t assume that end-to-end encryption obviates all government interest; law enforcement may still seek to know with whom your users are conversing, when, and how often. Do you have logs of any of that? Even startups in the blockchain space that think of themselves as small today, who perhaps may not have received their first demand for an account shutdown or warrant for user content, should start thinking through their notification practices now. Making affirmative commitments to notify users about government demands for their data now ensures you’ll know how you’ll react the first time you get a warrant or subpoena for identifiable user content. Commitment to freedom of expression One of the key characteristics of decentralized blockchain technologies is that they are built to resist censorship. Editing out data from the blockchain is extremely costly, to the point of being near-impossible. It creates a history of records both difficult to erase and difficult to falsify. But for the multitude of corporations who are using blockchain technologies, the issue is not so simple. Could a content-addressing system used by blockchain companies, such as IPFS, have default blacklists, supplied by governments or the IP lobbies? Could a government force a mining pool to reject a transaction? Could an exchange like Kraken be pressured to freeze accounts for controversial online writers, whether they are known for publishing erotica or polarizing political viewpoints? Each project within the larger ecosystem will need to think through its own policies around speech individually. But it’s useful to recognize early on that some of your users will be people you disagree with and that some of them will use your tools for purposes other than your initial intentions. Deciding how you’ll handle those circumstances early on, before you’re faced with a user you find annoying or abhorrent, could help you create fair policies that promote speech over personalities.
Swift, transparent appeals process In May, EFF and a coalition of other civil liberties groups called on technology companies to adopt transparency and accountability around account shutdowns and content censorship. Some of the ideas in the Santa Clara Principles, as they are called, could also apply to many companies in the larger blockchain ecosystem. In particular, we urged companies to “provide human review of content removal by someone not involved in the initial decision, and enable users to engage in a meaningful and timely appeals process for any content removals or account suspensions.” Though these principles are focused on content removal, we think a transparent appeals process is an important value to consider when limiting or shutting down user accounts or other ways that users can participate in an online community or service. We recognize that there are many different reasons for account closures and freezes, including government demands, fraud, and terms of service violations. We could imagine cryptocurrency exchanges that automatically limit or freeze accounts that exhibit certain behaviors, or wallet providers that flag accounts that receive an unusually large influx of assets from different sources in a brief period of time. But an account closure for a small business or individual can have potentially disastrous effects. Creating transparent and swift methods of appealing automated decisions can stave off some of the worst scenarios. Minimizing data collection Finally, one of the best tools that blockchain startups have when it comes to protecting users of their service is limiting how much data is collected. Taking steps such as collecting only the minimum data necessary for the service, deleting unnecessary data, allowing users to delete accounts fully, ensuring any backups are encrypted and deleted when appropriate, allowing user-side encryption where possible and appropriate, and otherwise limiting data collection can ensure that the government doesn’t see your service as a honeypot for surveillance. In addition, your users may actively choose your service because you’ve made commitments around data protection and deletion. There are types of data you may need to collect—such as data you are required to collect or keep by law, data that users need to provide in order for the service to function, and data necessary to prevent abuses of the system and fraud. Every blockchain startup will need to analyze its own data protection practices and make individual decisions about how and when to collect and keep info. But when you set up these systems, we urge you to remember that anything you keep could well be sought for unintended purposes, by everyone from government snoops to civil litigants. We urge you to seek ways to minimize unnecessary data collection and retention, and to tell users what data you are keeping. These five concepts are far from exhaustive, but they do represent a few basic tenets of defending user rights that too often get lost in the shuffle during the early growth spurts of new companies. As blockchain technology kickstarts a range of innovations, we urge developers to remember that different actors within the space may have wildly different incentives to stand up for users, and those incentives may change over time. Do not assume that just because a blockchain is designed to be censorship-resistant, these concerns are irrelevant to the blockchain community.
Instead, think of every company in the space as another potential pressure point that could be targeted by those who would squelch the speech, autonomy, and privacy of users. Proactive, pro-user policies adopted today can create the norms for the future of the larger blockchain economy and ecosystem. Note that we’ll be hosting a workshop on these ideas at the upcoming Decentralized Web Summit. Please be sure to register today and attend the workshop on Defending Your Users, featuring Protocol Labs’ Marvin Ammori, Human Rights Watch’s Cynthia Wong, and EFF’s Rainey Reitman. A number of people provided helpful insights, ideas, and feedback on early versions of this blog post. Deep thanks to Seth Schoen, Sydney Li, Joe Bonneau, and Peter Van Valkenburgh.

Egypt Sentences Tourist to Eight Years in Jail for Complaining about Vacation Online (Fr, 27 Jul 2018)
When she went to Egypt for vacation, Mona el-Mazbouh surely didn’t expect to end up in prison. But after the 24-year-old Lebanese tourist posted a video in which she complained of sexual harassment—calling Egypt a lowly, dirty country and its citizens “pimps and prostitutes”—el-Mazbouh was arrested at Cairo’s airport and found guilty of deliberately spreading false rumors that would harm society, attacking religion, and public indecency. She was sentenced to eight years in prison. The video that el-Mazbouh posted was ten minutes long, and went viral on Facebook, causing an uproar in Egypt. In the video, el-Mazbouh also expressed anger about poor restaurant service during Ramadan and complained of her belongings being stolen. Egyptian men and women posted videos in response to her original video, prompting el-Mazbouh to delete the original video and post a second video on Facebook apologizing to Egyptians. Nevertheless, Mona was arrested at the end of her trip at the Cairo airport on May 31, 2018, and charged with “spreading false rumors that aim to undermine society, attack religions, and public indecency”. Under Egyptian law, “defaming and insulting the Egyptian people” is illegal. Mona was originally sentenced to 11 years in prison, but her sentence was reduced to eight years after her lawyer presented evidence that a 2006 surgery removing a blood clot from her brain impaired her ability to control anger. An appeals court is set to hear her case on July 29th. Unhappy tourists have always criticized the conditions of the countries they visit; doing so online, or on video, is no different from the centuries of similar complaints that preceded them offline or in written reviews. Beyond the injustice of applying a more vicious standard to online speech than to offline speech, this case also punishes Mona for a reaction that was beyond her control. Mona had no influence over whether her video went viral. She did not intend her language or her actions to reach a wider audience or become a national topic of discussion. It was angry commenters' reactions and social media algorithms that made the video go viral and gave it significance beyond a few angry throwaway insults. The conviction of Mona el-Mazbouh is just one of many in a series of disproportionate actions taken by General Abdel Fattah El Sisi’s administration against dissent, including similar cases such as the detainment of Egyptian activist Amal Fathy. Sisi’s administration has so far fostered a zero-tolerance policy towards any kind of dissent, involving regressive legislation surrounding freedom of expression, reinstating a state of emergency, and detaining hundreds of dissidents without proper due process. Many of the administration’s actions have fallen under the pretext of “preventing terrorism”, including a much-dreaded anti-terrorism cybersecurity bill that will put Egyptian freedom of expression even more at risk. Mona el-Mazbouh is just one of many innocent Internet users who have been caught up in the Egyptian government's attempts to vilify and control the domestic use of online media. At minimum, she should be released from her ordeal and returned to her country immediately. But more widely, Egypt's leaders need to pull back from their hysterical and arbitrary enforcement of repressive laws, before more people — including the foreign visitors on whom much of Egypt's economy is based — are hurt.

California Should Provide Public Access to Police Body Cam Footage (Do, 26 Jul 2018)
These days, more police officers are using body-worn cameras, or BWCs. That's why it's more important than ever that we have clear guidelines around the public's right to access those police recordings. To that end, EFF is supporting [PDF] A.B. 748, a bill currently pending in the California legislature that would mandate public access to police recordings of so-called “critical incidents.” In 2015, following high-profile police shootings of civilians, a survey found that 95 percent of large police departments were planning to use body-worn cameras in the future. Body-worn cameras can serve a valuable function in increasing police accountability. Without proper policies, though, they can also be used to surveil people who interact with police, or those who may not be aware that filming is taking place. BWCs can’t function as a proper police accountability tool unless the public has a clear right of access to police video and audio recordings. Unfortunately, some of the first departments to embrace the cameras also had inadequate policies that utterly failed to ensure accountability or transparency. When the Los Angeles Police Department announced in 2015 that it would roll out several thousand BWCs for officer use, the department’s policy provided for no public access whatsoever. The department not only allowed, but required, officers to review video before writing up their reports. The review took place even before officers provided initial statements to investigators in cases where they were accused of misconduct. The LAPD policy was so poor that we urged the Department of Justice not to fund the city’s BWC experiment. Earlier this year, the LAPD made a major change in its policy, and now California is set to follow suit with A.B. 748. A.B. 748 generally requires public access to video or audio recordings related to a “critical incident,” which is defined as an officer use of force, or a legal or policy violation. Police can withhold recordings related to an active investigation, but only for 45 days. After that time period, the agency must disclose a recording, unless it can prove by clear and convincing evidence that the disclosure would interfere with the investigation. The bill is a good start, but not a complete solution, for police BWC policies. Local agencies can and should go further. A proper policy should also make provisions for preventing BWC recording of public protests and similar First Amendment-protected activity; not allow officers to review footage before writing reports; and discipline officers who do not use their cameras to record when they should. Finally, a BWC policy should apply to all police use of cameras, not just the “critical incidents” defined in A.B. 748. We’re supporting A.B. 748 as a first step in the right direction and hope the California legislature passes it in the coming months. The proposed law will serve as a critical “floor” for public access to police recordings, and prevent the kind of no-disclosure policies that were, until recently, used in Los Angeles.

CLASSICS Is the Future of Assaults Against the Public Domain (Do, 26 Jul 2018)
January 1, 2019 will be the first time in twenty years that works in the United States will once again join the public domain through copyright expiration. A growing public domain means more access to works and the ability of other artists to build on what came before. And as we get closer and closer to finally growing the public domain, big content holders are going to push harder and harder to lock it all down again. CLASSICS is the first step in that direction. CLASSICS is a very bad bill that has been bundled with the largely-good Music Modernization Act (MMA). That bundle was passed in the House of Representatives and is currently sitting in the Senate. The original text of MMA created a new way to compensate songwriters and publishers for music played on digital services. CLASSICS, on the other hand, took advantage of a messy and confusing situation—not unusual in copyright—in order to let labels find new ways to make money off of music that should be in the public domain.   The situation is this: sound recordings didn’t used to be protected by federal copyright law. As a result, states came up with their own laws, creating a patchwork. Congress did eventually get around to bringing sound recordings under federal copyright law, but only for recordings made in 1972 and later. Older recordings remained under the old crazy quilt of state law. This meant they did not enter the public domain when they should have. State laws continue to govern the pre-1972 sound recordings until 2067. Music from World War I is locked under copyright until nearly the 150th anniversary of the war. After so much time, even finding the rightsholders to ask for permission to copy a recording is a daunting task. CLASSICS doesn’t fix the problem of sound recordings being kept out of the public domain. What it does do is create a way for music labels—and some lucky recording artists—to collect money from streaming services for these recordings. It also makes it legally risky for music libraries, archives, and fans to digitize music that is decades old, raising the possibility of massive, unpredictable federal copyright penalties. CLASSICS simply does not fit the purpose of copyright, as 42 intellectual property scholars explained in a letter to Congress [pdf]. CLASSICS leaves the current state copyrights in place, some lasting more than 144 years, while simultaneously creating a federal system to collect money that federal copyright might not entitle them to. Of course, these recordings could just join the public domain on the same schedule as everything else. That’s what Senator Ron Wyden’s ACCESS to Recordings Act does: applies the federal rules to all recordings. It’s a far superior solution to CLASSICS. Big rightsholders—studios, labels, and so on—don’t want to see creative works enter the public domain and exit their control. We, as people, benefit from shorter copyright terms and a robust public domain. It means rare books can be copied and distributed without risk, saving them from the dustbin. It means that artists can build on existing work to further enrich culture. It means information can be more easily shared. It is far easier to simply monetize existing works forever than to create new works. Or to be the only ones who can use a certain story, song, or character, as opposed to having to be the one using it best. And so, as the public domain approaches again, we can expect to see big content owners working to undermine it. 
It may take the form of seeking, once again, a law that extends the copyright term. It may take the form of legislation in the SOPA/PIPA vein. But, in all likelihood, it will take the form of legislation in these more esoteric, slogan-unfriendly areas of copyright law. They’re hoping it’s harder to mount resistance to things like CLASSICS than to blatant term extensions like the Sonny Bono Act. So we, the “public” part of the “public domain,” need to make sure that they learn we’ll keep fighting.
Take Action: Tell the Senate to Vote No on CLASSICS

Facing Facebook: Data Portability and Interoperability Are Anti-Monopoly Medicine (Mi, 25 Jul 2018)
Social media has a competition problem, and its name is Facebook. Today, Facebook and its subsidiaries are over ten times more valuable than the next two largest social media companies outside China—Twitter and Snapchat—combined. It has cemented its dominance by buying out potential competitors before they’ve had a chance to grow (like Instagram) and waging wars of attrition against others (like Snapchat) when it can’t. Because of its massive reach across much of the world, the platform can effectively censor public speech, perform psychological experiments, and potentially sway elections on the scale of a nation-state. And if users don’t like the way Facebook wields this power, there is nowhere else as ubiquitous or as well-populated for them to go. It’s going to take multiple changes to fix the problems, in free expression and elsewhere, caused by Facebook’s dominance. If we’re going to have a real shot at it, one thing that needs to change is giving users meaningful control of their own data. Facebook’s trove of user data is its most valuable asset, which presents a dilemma. Thanks to network effects, every user who joins a social network makes it more valuable for advertisers and more useful to everyone else. Without some access to the data Facebook has, it’s virtually impossible for upstart platforms to compete with the behemoth now used by nearly a third of the world. At the same time, the ways Facebook chooses to share its data often go terribly wrong. Users have been rightfully outraged to learn about Facebook’s troublesome use and misuse of their data in the past, including the recent Cambridge Analytica scandal. Since these breaches of trust were often enabled by Facebook’s third-party Application Programming Interfaces (APIs), some analysts have come to the conclusion that there’s an unavoidable trade-off between interoperability and privacy. There’s some truth to that, but it’s too simplistic. And it leads to appointing Facebook the keeper and protector of the world’s data. We believe there’s another way to look at it. Facebook should let users take back control of their own data. This doesn’t raise the same privacy problems as letting third parties suck up everything about all of us. If done with care, it can be accomplished without opening the door to shady actors like Cambridge Analytica. In addition, Facebook has to start thinking differently about how it interacts with third-party developers. Instead of granting them access to data but forcing them to work within its walled garden, Facebook should serve as a hub, allowing developers to create new experiences for users that build off of the core service it offers and hosts. Ultimately, Facebook does not have to be any less diligent about protecting users from malicious actors. It just has to stop “protecting” them from legitimate competitors. Facebook already recognizes that it is under pressure to improve its data portability story. Last week it announced that, along with Twitter, it was joining Microsoft and Google’s data portability initiative, the Data Transfer Project. While we applaud that move by all of these tech companies, our concern is that this and similar projects will be used to fend off regulation without substantially changing the status quo. The Data Transfer Project is a set of tools and standards to make it technologically easier to move data from one place to another.
However, without substantive changes to Facebook’s policies and processes, this project alone won’t give us meaningful portability away from the tech giants or tools that empower end-users themselves. We think Facebook should:
- Give users a tool for real data portability. That includes a way to export the rich contact list that Facebook hosts and the tracking data Facebook collects without meaningful consent.
- Open up its platform policy to enable competitors, cooperators, and follow-on innovators. Allow developers to use Facebook’s APIs for software that modifies or competes with the core Facebook experience.
- Interoperate with the next generation of social networks via open standards. Adapt Facebook’s APIs to use the W3C’s social web protocols where appropriate, and allow open, federated services like Mastodon to work with Facebook as partners.
Let’s go into more detail on each of these.
Make data truly portable
Data portability allows a user to take their data and move it to a different platform. Many tech companies have long supported data portability as a core value. Facebook, however, has a history of taking advantage of the data portability features offered by other companies as a means to an end: growing its own network. In its early years, for example, Facebook benefited immensely from Google's portability efforts. Facebook encouraged users to download their contacts lists from Gmail, then upload them to Facebook, in order to build out its social network. At the same time, Facebook has always dragged its feet when it comes to portability from its own platform. In its early years, Facebook displayed users’ email addresses on their profile pages, not as text, but as images, making it frustratingly difficult to download lists of friends’ contact information, or even to copy and paste a single address into an email client. Until recently, Facebook’s data export tool provided users with an inscrutable, unparseable mess of text and HTML. Europe’s General Data Protection Regulation (GDPR), which took effect May 25, 2018, declares data portability a basic right for all European citizens. In accordance with GDPR, Facebook’s newest export tool allows users everywhere to download their data in the machine-readable JSON format. But Facebook’s export tool only includes a small subset of the data the company actually has about its users, and it falls short of empowering users to pick up their data and take their business elsewhere. Facebook’s promise to work with the Data Transfer Project is encouraging, but if it doesn’t use the project as an opportunity to address wider issues with its current tools, it won’t mean substantial change for users.
Free the friends list
Facebook’s newest data-export tool exports friends lists—the building blocks of the social network, and arguably the data most critical to its competitive advantage—in the form of plain-text names without unique identifiers. This makes it impossible for a user to take their list of friends to a competing service. Any social network trying to parse Facebook’s list won’t be able to tell whether “John Smith” refers to John Smith in Haight-Ashbury, John Smith in Sri Lanka, or John Smith the 17th-century British explorer. Facebook has claimed it doesn’t want to let users export their friends’ email addresses for privacy reasons, but remember that Facebook was more than happy to take advantage of Gmail’s tool to grow its own network. Facebook could build a better export tool without raising tough privacy questions.
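As a rough sketch of the kind of machine-readable export that would avoid this problem, imagine a friends-list file in which every display name is paired with a stable, opaque identifier. The format and field names below are purely illustrative (they are not Facebook's actual export schema), but they show how little extra data is needed:

    import json

    # Hypothetical export format: each friend carries an opaque, stable ID
    # alongside the display name, so a competing service can tell two
    # "John Smith"s apart without ever seeing an email address or phone number.
    friends_export = {
        "exported_at": "2018-07-25T12:00:00Z",
        "friends": [
            {"id": "100372813", "name": "John Smith"},
            {"id": "208417355", "name": "John Smith"},
            {"id": "309284771", "name": "Jane Doe"},
        ],
    }

    # Write the export to disk in machine-readable JSON.
    with open("friends.json", "w") as f:
        json.dump(friends_export, f, indent=2)

A competing service importing a file like this could key its records on the opaque identifiers and match them later only if the same people choose to connect there; plain-text names alone cannot support that.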
Associating names with unique identifiers, like “John Smith, user number 100372813,” would allow competing services to disambiguate common names. It’s not just competing social media companies that would benefit. Facebook friends lists are essentially rich “contact lists” that give its other products, especially WhatsApp and Messenger, a distinct competitive advantage. Facebook was built on data ported from the incumbent services of its time. Now, it’s time to return the favor.
Let users see how they’re being tracked
Another area where Facebook’s tool is seriously lacking is in the advertisement data it exports. The company tracks you whenever you use one of the hundreds of thousands of websites and apps that use Facebook technology, and it uses those data to target ads. You can see a list of plain-text “topics” that Facebook believes you’re interested in, but there is no record of the browsing data the company used to determine those interests. With these trackers, Facebook is engaged in massive, nonconsensual surveillance of its users’ habits both on the web and on their phones. You can’t opt out of collection or delete these data—the best you can do is to stop it with a tracker blocker like Privacy Badger. We think Facebook should stop this entirely, but the least it can do is let you see what it knows: its detailed record of where you’ve been, what sites you’ve visited, and what advertisers have paid Facebook for your eyeballs. Recently, Facebook has hinted at giving users the ability to delete historical tracking data about them. This would be a great step forward, but ultimately, users deserve full control of when and how they are tracked. That includes first-class access to the detailed data Facebook has.
Interoperate, Federate, Innovate
Interoperability is the extent to which one platform’s infrastructure can work with others. In software parlance, interoperability is usually achieved through Application Programming Interfaces (APIs)—interfaces that allow other developers to interact with an existing software service. For example, Facebook’s APIs allow third-party apps to verify a user’s identity, access their data, and even post on their behalf with that user’s permission. When big companies build interoperable platforms, it’s often a boon to everyone. “Follow-on innovators” can leverage the tools that a platform has pioneered to make better experiences for the platform’s users, to offer novel tools that build on the platform’s strengths, and to allow users to interact with multiple major services at the same time. For example, PadMapper started by organizing data about rental housing pulled from Craigslist posts and presenting it in a useful way; Trillian allowed users to use multiple IM services through the same client and added features like encryption on top of AIM, Skype, and email. On a larger scale, digital interoperability enables decentralized, federated services like email, modern telephony networks, and the World Wide Web. Facebook’s lack of true interoperability, especially for enhancing or competing services, is one of the ways it has cemented its position.
Use app review to protect users, not stifle innovation
The Cambridge Analytica scandal was a result of Facebook offering extremely powerful APIs to third-party apps. Facebook made it too easy for apps to request data about users and all of their friends, and too easy for users to agree to sharing data without understanding the implications.
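The difference that matters here is between APIs that hand an app everything about a user and their friends, and APIs that grant only what the user explicitly approved. Here is a minimal sketch of the narrower model, using Python's requests library and entirely hypothetical endpoints and scopes (this is not Facebook's real Graph API):

    import requests

    API_BASE = "https://social.example/api"  # hypothetical service
    ACCESS_TOKEN = "token-from-an-oauth-consent-screen"  # placeholder

    # The token encodes only the scopes the user approved -- say, reading
    # their own profile and posting on their behalf -- not their friends' data.
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    # Read the authorizing user's own profile.
    profile = requests.get(f"{API_BASE}/me", headers=headers).json()

    # Post on the user's behalf, something they explicitly consented to.
    requests.post(
        f"{API_BASE}/me/posts",
        headers=headers,
        json={"message": f"Hello from a third-party client, {profile['name']}!"},
    )

An interface scoped this way lets follow-on innovators build useful clients without ever being handed the kind of friends-of-friends data that Cambridge Analytica exploited.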
In response to the scandal, Facebook has tightened control over its interoperable tools across the board and removed some of the more problematic APIs altogether. However, the scandal has also given the company an excuse to make life more difficult for would-be innovators. We must detangle the two if we’re going to reduce Facebook’s power. Currently, the “platform policy” that Facebook requires developers to agree to in order to use its APIs is designed to protect Facebook’s interests as much as, if not more than, its users’. For example, Section 4.2 prevents offering “experiences that change the way Facebook looks and functions.” This explicitly prevents app developers from trying to improve the UI, or even from allowing users to customize it for themselves. Other clauses, like “respect the limits we’ve placed on Facebook functionality,” similarly reflect Facebook’s desire to maintain tight control over the ways its users interact with their data on the platform. Furthermore, Section 4.1 states, “Don’t replicate core functionality that Facebook already provides.” This gives the company grounds to reject any competitive social network that would federate its service with Facebook. App review is an important practice, and Facebook should continue working to prevent malicious developers from leveraging its platform to harm users. However, the company should allow others to build on and differ from what it has created in meaningful ways. A platform as vast and powerful as Facebook should be a jumping-off point for innovators, not a means for the company to impose a single experience on everyone in its network.
Interface with the next generation of federated social networks
Successful interoperability is almost always powered by open standards. Email is a good example. Thanks to widely-adopted protocols like SMTP and IMAP, you can sign up for an account on FastMail and send messages to your friends who use Gmail, Yahoo, AOL, and Microsoft seamlessly. Email is a federated service: it comprises many decentralized, independent service providers that communicate with a set of common, open standards. As a result, users get to choose both the company they trust to host their messages and the software they use to access them. Facebook should adapt its APIs to work with the World Wide Web Consortium’s recently-developed Social Web Protocols, like ActivityStreams and ActivityPub. This would give developers a stable, flexible interface to Facebook’s platform and make it possible for Facebook to interoperate with the next generation of federated services like Mastodon. In the future, Facebook could become just one of a vast network of independent social servers. Users could choose to host their data on the service with the features and policies that they preferred and still be able to interact with their friends on Facebook and elsewhere.
It’s Not Just Facebook
There was a time when data portability was seen as a positive goal for emerging tech companies: Mark Zuckerberg said recently — with a tone of regret — that it was something Facebook engineers considered, many years ago:
“I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.”
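To make the open-standards point above concrete, here is roughly the shape of a single post expressed as an ActivityStreams 2.0 activity, the W3C JSON vocabulary that ActivityPub servers such as Mastodon exchange. The actor and addressing values are hypothetical; only the vocabulary terms come from the specs:

    import json

    # A minimal ActivityStreams 2.0 "Create" activity wrapping a "Note" --
    # the basic unit that ActivityPub servers deliver to one another's inboxes.
    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://social.example/users/alice",  # hypothetical account
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": "Note",
            "attributedTo": "https://social.example/users/alice",
            "content": "Hello from a federated server!",
        },
    }

    # Serialize the activity as it would appear on the wire.
    print(json.dumps(activity, indent=2))

Any server that speaks this vocabulary (a Mastodon instance today, Facebook hypothetically) could accept and display the post, which is what would let users follow their Facebook friends from an account hosted elsewhere.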
We disagree with Zuckerberg’s reading of the feedback Facebook is receiving from users and lawmakers. What users want is both security and real control over their experience — and they don’t want to cede those decisions exclusively to whatever Mark Zuckerberg decides is appropriate. But, to be clear, a choice between having Facebook decide what your needs are and having Google or Twitter or Microsoft do it can’t be enough either. When we’re talking about nearly 8 billion people, it’s inconceivable that a handful of (largely American) companies will be able to deliver that balance for everybody. With initiatives like the Data Transfer Project, we are concerned that these companies are already acting as though the most important portability is between their own shared conception of what an Internet service is. The Data Transfer Project will enable users to move data directly between two services, but it remains to be seen what data Facebook lets you transfer and where it lets those data go. We know there’s more to the Internet than this decade’s version of a successful, venture-capital funded, West Coast company. All of these companies need to hand back over control of people’s data, so we can decide what we want to do with it.
How We Get There
Facebook has the power to make all of these changes on its own. Doing so would mean a more socially responsible company, a better experience for its users, and a more level playing field for its competition. If the company isn’t willing to help, governments may have to step in. Congress can pass laws that mandate real data portability, as the GDPR already does in Europe. Such laws should require that data like a user’s friends list be accessible to that user in a useful format. The Federal Trade Commission (FTC) has authority to impose “behavioral remedies” on companies that illegally maintain market power. Fixes for data portability and interoperability could be part of an antitrust remedy or negotiated settlement with the FTC. And if some politicians are reluctant to regulate the tech industry, there’s still work that could be done to empower users to fix the problem themselves — by fixing laws that prevent users from taking their data back. Facebook and the other tech giants fence off their data, even from end-users, through their narrow terms of service that seek to limit what users — or their tools — can do on a site or service. Users shouldn’t have to fear losing access to their account just because they have decided to download or scrape their own data from it. And no one in the Internet ecosystem should fear prosecution under statutes like the CFAA for empowering users to move, delete or examine the data that big tech has on them. None of these solutions is a panacea. Facebook wields other kinds of market power which smaller companies may not be able to overcome, and which we may need to address in other ways. But data portability and interoperability could help transform Facebook from an obstacle to a catalyst for innovation. If it were more feasible for users to take their data and move elsewhere, Facebook would need to compete on the strength of its product rather than on the difficulty of starting over. And if the platform were more interoperable, smaller companies could work with the infrastructure Facebook has already created to build innovative new experiences and open up new markets. Users are trapped in a stagnant, sick system. Freeing their data and giving them control are the first steps towards a cure.
This summer, EFF is looking at how corporate concentration is harming the Internet and how to fix it. Interoperability and data portability are just one set of tools, and Facebook is just one company. Stay tuned for more discussion about what’s wrong, who’s responsible, and the right and wrong ways to address it.
Related Cases: Facebook v. Power Ventures

California Supreme Court Strengthens Section 230 Protections for Online Speech (Mi, 25 Jul 2018)
Special thanks to legal intern Miranda Rutherford who was the lead author of this post. If someone sues you for a review you wrote on Yelp, can a court force Yelp to take down the review? This month, the California Supreme Court said “no” in the case Hassell v. Bird. This case concerned a dangerous misinterpretation of Section 230, the law that protects online platforms from liability for their users’ speech. We’re glad that the California Supreme Court corrected this misinterpretation and upheld Section 230’s protections for online speech. It all started with a bad review. As many of us have, Ava Bird had a bad experience with a business—a law office headed by a lawyer, Dawn Hassell—and wrote a review on Yelp detailing her frustrations. Hassell wasn’t happy with the review and sued Bird for defamation. After Bird didn’t appear in court to defend the case, the trial judge entered a default judgment in Hassell’s favor, and ordered both Bird and Yelp to remove Bird’s Yelp review. Yelp refused to take down the review and filed a motion to have the default judgment and removal order vacated. Yelp argued that the court order violated due process because Hassell hadn’t named the company as a defendant in the lawsuit and so the company hadn’t had its own day in court. Yelp also argued that the court order violated Section 230 (47 U.S.C. § 230). Section 230 is the federal law that provides online platforms with broad immunity from liability for user-generated content. This means that a website like Yelp generally can’t be found liable for what its reviewers post. Congress first passed Section 230 in 1996 with the goal of promoting online free speech and innovation. Section 230 gives Internet intermediaries legal breathing room to create new products and services and to host online speech (in the words of the Supreme Court) “as diverse as human thought.” The law ensures that online platforms aren’t treated as accomplices to their users’ actions. The trial court determined that Yelp was acting as an agent of Bird and thus, along with Bird, could be ordered to remove the review. Yelp appealed to the California Court of Appeal, which also found that Yelp could be ordered to remove the review. The Court of Appeal found no due process problem and held that Section 230 didn’t apply because the removal order didn’t count as holding Yelp liable for user-generated content. Yelp then appealed to the California Supreme Court. We filed an amicus brief supporting Yelp’s arguments; in particular, that it was still covered under Section 230 even though Hassell hadn’t originally sued Yelp. The majority of the California Supreme Court agreed that the trial court’s removal order against Yelp was barred by Section 230. Chief Justice Cantil-Sakauye noted that Hassell tried to make a “procedural end-run” around Section 230. Hassell’s lawyers admitted that they had specifically avoided suing Yelp directly because they knew that Yelp could immediately invoke Section 230 to get out of the case. By not naming Yelp as a defendant, but then getting the trial court to issue a removal order against Yelp, Hassell thought she could bypass Section 230. Fortunately, the California Supreme Court saw right through this sneaky tactic. The Court held that by ordering Yelp to take down the review, the lower courts were treating Yelp as if it had written the review itself, rather than acknowledging that it was simply a host for Bird's speech. This is exactly the situation Section 230 was designed to cover.
This case was unique precisely because Yelp was not named as a defendant. Section 230, in part, protects online platforms from “liability” under state law for user-generated content. In a typical Section 230 case, the statute’s immunity applies where a website is itself sued for user speech (often alongside the user), which is a clear case of potentially imposing liability on the online platform. The California Supreme Court was the first state supreme court to consider whether a removal order against a non-party online platform also meets the liability requirement. The court agreed with Yelp that it does. If it didn't take down the review, Yelp could be found in contempt of the court, resulting in monetary sanctions. The California Supreme Court also noted that Section 230 is not just meant to protect Internet intermediaries from legal liability, but also from the more general burdens of defending a case. The Court stated that such a removal order “can impose substantial burdens,” because it can harm an online platform’s business and lead to even more litigation. The Court also expressed concern that such removal orders could lead to fraud. In short, as the Chief Justice stated, “the extension of injunctions to these otherwise immunized nonparties would be particularly conducive to stifling, skewing, or otherwise manipulating online discourse.” In a separate concurrence, Justice Kruger focused on Yelp’s due process arguments. She concluded that the only way the removal order against Yelp, as a non-party to the lawsuit, would be valid is if Yelp were considered to be Bird’s agent—that is, acting on her behalf. However, Justice Kruger said that there was no evidence that Yelp was acting as Bird’s agent or obstructing Bird’s ability to take down her review herself. The mere fact that an online platform hosts the speech of users does not turn the company into their legal agent. Furthermore, if there was a dispute as to whether Yelp had a legal obligation as Bird’s agent, due process rules would require Hassell to separately sue Yelp so that Yelp could have its own day in court. Yet the Chief Justice wrote that while a non-party to a lawsuit may sometimes have to comply with a court order, when that non-party is an online platform, Section 230 must still be considered. And if the court order treats the online platform as the speaker or publisher of speech that originated with a user, then Section 230 bars the order. This is a critical point in the Supreme Court’s decision. Otherwise, some courts might consider non-party online platforms to be agents of their users (for example, in the words of one of the dissenting judges, because they use “algorithms” to distribute the users’ content) without also undergoing a Section 230 analysis. This would be a truly damaging end-run around Section 230 and would surely result in a flood of takedown orders, creating a new era of Internet censorship. This isn’t just a great decision for free speech online; it’s also a good example of how an online platform can step up to the plate for its users. Online platforms should serve as the gatekeepers between users and what can be frivolous takedown orders, making sure that any orders to censor their users’ speech are carefully considered. Yelp did exactly that, and the California Supreme Court’s decision in Hassell v. Bird will enable even more companies to follow in its footsteps.

EFF Files Amicus Brief in Seventh Circuit Supporting Warrant for Border Searches of Electronic Devices (Di, 24 Jul 2018)
EFF, joined by ACLU, filed an amicus brief in the U.S. Court of Appeals for the Seventh Circuit arguing that border agents need a probable cause warrant before searching personal electronic devices like cell phones and laptops. We filed our brief in a criminal case involving Donald Wanjiku, who, in June 2015, landed at Chicago’s O’Hare International Airport after returning from a trip to the Philippines. Without getting a warrant from a judge that was based on probable cause of criminality, border agents searched Wanjiku’s cell phone manually—using their hands to navigate the phone’s interface; and forensically—using external software to search the phone’s files. Border agents also forensically searched Wanjiku’s laptop and external hard drive. He was ultimately charged with transporting child pornography. Wanjiku asked the district court in U.S. v. Wanjiku to suppress evidence obtained from the warrantless border searches of his electronic devices, but the judge denied his motion. He then appealed to the Seventh Circuit. In our amicus brief, we argued that the Supreme Court’s decision in Riley v. California (2014) supports the conclusion that border agents need a warrant before searching electronic devices because of the unprecedented and significant privacy interests travelers have in their digital data. In Riley, the Supreme Court followed similar reasoning and held that police must obtain a warrant to search the cell phone of an arrestee. We also cited the Supreme Court’s recent decision in U.S. v. Carpenter (2018) holding that the government needs a warrant to obtain historical cell phone location information. In our amicus brief, we explained that historical location information can be obtained from a border search of a cell phone. Citing Riley, the Supreme Court in Carpenter stated, “When confronting new concerns wrought by digital technology, this Court has been careful not to uncritically extend existing precedents.” Similarly, EFF’s longstanding position is that the traditional border search exception to the Fourth Amendment, which generally permits warrantless and suspicionless “routine” searches of items travelers carry across the border (like luggage), should not extend to personal electronic devices. A reasonable exception in one context isn’t necessarily appropriate in another. While the district court judge denied Wanjiku’s motion to suppress, she did not do so because she agreed with the government’s argument that electronic devices fall within the border search exception. Rather she stated that she was “inclined to agree with defendant” that suspicionless border searches of electronic devices violate the Fourth Amendment. The judge also stated that she: agree[s] that the [Supreme] Court’s decision in Riley rejects the government’s claim that searches of cell phones or other electronic devices are analytically equivalent to searches of physical items, and may indeed suggest the Court’s willingness to reevaluate, in the age of modern cell phones, whether the balance of interests should continue to be ‘struck much more favorably to the Government at the border’ where digital searches are concerned…. Unfortunately, the district court judge concluded “that this is not the appropriate case in which to wrestle these difficult issues to the ground.” She denied Wanjiku’s motion to suppress because she found that the border agents had reasonable suspicion—a lower standard than probable cause—that Wanjiku was involved in criminal activity. 
She declined to go further in her analysis given that the “Seventh Circuit has not defined the level of suspicion required to conduct an electronics search at the border.” We hope the Seventh Circuit takes this opportunity to apply the highest level of Fourth Amendment protection to border searches of electronic devices—a probable cause warrant. We’re also optimistic that we can win such a ruling in the First Circuit in our civil case with ACLU against the U.S. Department of Homeland Security, Alasaad v. Nielsen.

The Next Supreme Court Justice: Here's What the Senate Should Ask About New Technologies and the Internet (Di, 24 Jul 2018)
Brett Kavanaugh’s nomination has sparked a great deal of discussion about his views on reproductive rights and executive authority. But the Supreme Court tackles a broad range of issues, including the present and future of digital rights and innovation. As Congress plays its crucial constitutional role in scrutinizing judicial nominees, Senators should take care to press the nominee for his views on how the law should address new technologies and the Internet. As an initial matter, any nominee to the Supreme Court must appreciate how the Court’s rulings may impact digital rights now and far into the future. In a 1979 case called Smith v. Maryland, for instance, the Supreme Court ruled that people do not have a privacy interest in information they hand over to third parties (like the numbers you dial on a telephone). That case—where police had reasonable suspicion that a single individual was committing a specific crime—has shaped police practice in the digital age, and provided a contorted legal defense for mass domestic surveillance programs like the NSA’s call-records program, even though they subject millions of people to continuous monitoring based on no suspicion of any particular crime. But the Court is starting to understand how much the Internet and the ubiquity of mobile devices have changed daily life in the United States. In Packingham, the Court acknowledged that social media has become the “modern public square,” and in Riley the Court ruled that law enforcement cannot search cell phones at the time of arrest without a warrant because of the vast quantities of personal data they store. And just a few weeks ago, in Carpenter the Supreme Court ruled that the Fourth Amendment applies to cell-phone-based location tracking—so if law enforcement wants historical customer location information from cell-phone providers, they will now have to get a warrant. We hope this is a trend, and that the Court will do its part to ensure that constitutional protections extend to our digital landscape. To better predict whether Kavanaugh is likely to help or hinder, here are a few questions the Senate should ask him, keeping in mind that nominees traditionally steer clear of commenting on specific pending cases.
Mass Surveillance
In 2015, the D.C. Circuit refused to hear a case challenging the NSA mass telephone surveillance program. Kavanaugh issued a concurrence saying: “The Government’s collection of telephony metadata from a third party such as a telecommunications service provider is not considered a search under the Fourth Amendment, at least under the Supreme Court’s decision in Smith v. Maryland, 442 U.S. 735 (1979)...” And that even if the collection is a search, it is reasonable because: “The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States. See The 9/11 COMMISSION REPORT (2004). In my view, that critical national security need outweighs the impact on privacy occasioned by this program.” Given this broad assertion, the Senate should ask: Fourth Amendment jurisprudence requires the government to have individualized suspicion before intruding on a person’s privacy.
How would the Framers view mass data collection by the government—for example, copying or viewing all Internet activity routed through a service provider? How should the Constitution address those who are impacted by, but not targeted by, surveillance? Do you believe that the government can collect digital information from individuals without that collection constituting a “search” for Fourth Amendment purposes? Do people have a privacy interest in metadata that can be used to create a detailed timeline of someone’s actions? Do bulk surveillance programs that create detailed pictures of the lives of millions of Americans, where they go, and who they associate with, implicate rights guaranteed under the First Amendment? Are there any constitutional limits on the executive branch’s national security authority? What are they? You have written that the government’s bulk collection program is a special need. What factual showing should the government make to use this doctrine? Is there a distinction between special needs exceptions for national security and law enforcement purposes?
Law Enforcement Access to Digital Information
When US v. Jones was on appeal before the DC Circuit, Kavanaugh issued a dissent arguing that a person has no reasonable expectation of privacy in their “public movements,” but law enforcement nonetheless violated the Fourth Amendment by tampering with Jones’ car. To better understand Kavanaugh's view on digital privacy, the Senate should ask: Do you believe that a person has a reasonable expectation of privacy when they move about in public? Does tracking the location or other information about a subject over long periods of time implicate any further interests? Is the reasonable expectation of privacy a “failed experiment”? Do rights to privacy extend beyond a person’s property interests? Do you agree with the well-settled law long established in this area? Do Terms of Service agreements and other contracts that caution users that their information may be shared with the police affect a person’s privacy interest? Law enforcement is now using technologies like Automated License Plate Readers to track people as they move in their cars. Can the volume of data become a privacy harm, or a harm to First Amendment principles such as freedom of association, speech, and assembly? With “smart cities” on the rise – cities that are beginning expansive government and third party data collection programs to offer more tailored services to the public – do constitutional safeguards against unreasonable searches extend to data the government has collected for a non-law enforcement purpose?
Net Neutrality
In a dissenting opinion, Kavanaugh decried the DC Circuit's decision upholding the 2015 Open Internet Order (an order, for which millions of Internet users fought long and hard, that forbade practices such as throttling, blocking, and paid prioritization), saying that the Federal Communications Commission did not have clear authority from Congress to issue the 2015 Order. He also insisted that the rule infringed upon Internet service providers’ First Amendment rights. In fact, as EFF and ACLU explained in an amicus brief, while the ISPs do have First Amendment rights, the 2015 net neutrality rules appropriately balanced those rights against the public interest in a neutral Internet.
In light of this case and the multiple ongoing efforts to rescue net neutrality after the FCC abdicated its role in protecting the Open Internet, the Senate should ask: Can paid prioritization practices run afoul of consumer protection or civil rights laws? How should the broadband Internet market be analyzed under current competition laws? Does the Federal Communications Commission have the authority to determine the classification of broadband internet service providers? Does the Communications Act occupy the field and preempt states and municipalities from passing their own laws blocking throttling, paid prioritization, and zero rating by broadband internet service providers? How would the Constitution view Federal attempts to limit State broadband regulation?
Innovation
Recent Supreme Court rulings have provided some balance to a patent system that many thought had gotten out of control. For example, in a 9-0 decision in Alice Corp. v. CLS Bank the court invalidated an abstract software patent, essentially ruling that adding “on a computer” to an abstract idea does not make it patentable. The Court also ended rampant venue shopping that had led to more than 1,000 patent cases a year being filed in the courtroom of a single federal judge in East Texas. Thanks to decisions like these, many small businesses have been able to stave off unfounded legal threats. Patent cases continue to appear on the Court’s docket, many of which will have consequences for software patents. To figure out Kavanaugh’s views, the Senate should ask: Some people say that the U.S. patent office issues too many bad patents, allowing patent trolls to threaten operating companies trying to innovate, especially small start-ups. Others say the Supreme Court has gone too far in its recent cases that cut back patent protection on abstract ideas. What do you think are the purposes of our patent system? Do you think that patent protection should extend to laws of nature or abstract ideas? Should small businesses have ways to protect themselves from unmeritorious patent claims, other than paying litigation cost-based settlements? In addition, the long-running Oracle v. Google case may finally make its way to the Court, potentially giving the Justices a chance to opine on both the scope of copyright in software and the application of the fair use doctrine. The central question in the case is whether Oracle can claim a copyright on Java APIs and, if so, whether Google infringed these copyrights. Many, including EFF, argued that the APIs in question were not copyrightable in the first place. If the Court decides to review the case, its decision could affect software development for many years to come. Again, Kavanaugh won’t comment on pending cases, but the question of the scope of copyright is likely to come up one way or another. Given the impact of this area of law on digital innovation and expression, the Senators might ask: What is the purpose of copyright generally? Do copyrights (and patents) exist primarily to reward their owners, or should their grant benefit the public generally? Are there situations in which copyright may disserve innovation and expression? How should courts deal with such a situation? Is fair use an affirmative right as opposed to a narrow defense? Should companies that want to use a small portion of another’s copyrighted work be required to get a license, rather than rely on fair use?
One of the judges for whom Kavanaugh clerked, Alex Kozinski, has publicly stated that a license should be required, rather than using an unlicensed work under circumstances that are fair use. Do you agree? Can software be covered by both patent and copyright? Are there limits to this?
Competition
Finally, many widely relied-upon Internet functions are now controlled by a few giant companies, and the dominance of these companies has proven to be sticky. It’s still easy and cheap to put up a website, build an app, or organize a group of people online—but a few large corporations dominate the key resources needed to do those things, and basic Internet access as well. That, in turn, gives those companies extraordinary power over speech, privacy, and innovation. Against this background, policymakers are considering whether and how to recreate conditions for competition. Many are looking to antitrust law for a solution, but it’s not yet clear how antitrust law will apply to these circumstances. Meanwhile, the DOJ’s challenge to the AT&T-Time Warner merger is on appeal. If the DOJ or the FTC decides to take on the tech giants, and/or the appeal reaches the Supreme Court, its ruling could reshape competition law. The Senate might ask: Does antitrust law authorize courts to remedy harms caused by the lack of competition in a given market, or is it limited to ensuring a narrow measure of consumer welfare? For instance, can regulators and courts scrutinize the acts of a monopolistic enterprise that lowers prices for consumers but also undermines competition? Senate confirmation hearings for Supreme Court justices have been described as a “Kabuki” performance. With the next generation of American jurisprudence—and technology—hanging in the balance, we encourage Senators to thoughtfully and rigorously challenge the nominee to share his views.

Federal Circuit Rejects Pharmaceutical Company’s Attempt to Dodge Review of its Patents (Mo, 23 Jul 2018)
The Federal Circuit has prevented a private company from using a Native American tribe’s rights to bar the Patent Office from reviewing its patents. The case involves a pharmaceutical company, Allergan, that paid the Saint Regis Mohawk Tribe to “own” its patents, and then assert sovereign immunity to avoid inter partes review (IPR). Congress created IPR proceedings to improve patent quality by giving third parties the opportunity to challenge patents at the Patent Office. Emphasizing the public interest in the nature of IPRs as proceedings before an administrative agency—the Patent Office—the appeals court found that tribal immunity could not be asserted to end these proceedings. This case began when Allergan sued a number of generic pharmaceutical companies, including Mylan, for infringing its patents related to Restasis, a treatment for symptoms of chronic dry eye. Mylan responded by filing IPRs challenging the patents. Allergan responded in turn by “selling” these patents to the Saint Regis Mohawk Tribe (which received millions of dollars from Allergan as part of the transfer). After the patents were transferred, the tribe asserted sovereign immunity and asked the Patent Office to terminate the IPRs. Generally, sovereign immunity refers to the concept that a sovereign entity (here, the Saint Regis Mohawk Tribe) can’t be subject to the jurisdiction of another sovereign (here, the U.S., in the form of the Patent Office) unless the first entity agrees. The deal with Allergan required the tribe to assert sovereign immunity in an attempt to end the Patent Office proceedings before the Restasis patents were revoked. Stated more bluntly, Allergan paid to use tribal sovereignty in order to block efforts to have its patents invalidated. After administrative judges at the Patent Office denied the request to terminate the IPRs, the Saint Regis Mohawk Tribe appealed to the Federal Circuit. The appeal generated a lot of public interest and many amicus briefs were filed on both sides, including a brief [PDF] from the R-Street Institute and EFF. We explained that if Allergan’s tactic were allowed, it would undermine the IPR process and lead to many other companies using the same scheme to avoid review of their patents. The Federal Circuit has now ruled [PDF] that tribal sovereign immunity cannot be used to shield patents from IPR proceedings. In a unanimous decision, the court held that tribal sovereign immunity does not apply in IPR proceedings because they are more like executive branch enforcement actions—where tribal sovereignty does not apply—than federal court litigation between private parties. The appeals court relied on a number of factors. For example, the panel noted that the Director of the Patent Office has discretion whether or not to institute review, and therefore, “if IPR proceeds on patents owned by a tribe, it is because a politically accountable, federal official has authorized the institution of that proceeding.” The court also noted that petitioners do not exercise all-embracing control over the course of IPR proceedings because, once initiated, “the Board may choose to continue review even if the petitioner chooses not to participate.” Moreover, an “IPR is an act by the agency in reconsidering its own grant of a public franchise.” Judge Dyk joined the majority opinion in full, but also filed a concurring opinion that emphasizes the important purpose of IPR proceedings: allowing the Patent Office to weed out those patents it issues in error. 
The Patent Office receives more than 600,000 applications a year, giving individual examiners approximately 22 hours to review each application. Some type of reconsideration is necessary because “[r]esource constraints in the initial examination period inevitably result in erroneously granted patents.” That's long been recognized. Before IPRs existed, there was a different reexamination process in which patents could be given a post-grant review, and nobody argued—before or during the St. Regis case—that sovereign immunity barred the Patent Office from engaging in such review. Tribal sovereignty is an important issue with significant implications for Native American self-determination and justice. But we were concerned to see private companies effectively attempt to rent sovereign immunity in order to avoid administrative processes created to protect the public interest. In this case, the tactic was part of an attempt to use bad patents to prop up drug prices for everyone. We are glad the Federal Circuit has rejected the tactic.

California Can Pioneer Local Community Oversight of Police Surveillance (Fr, 20 Jul 2018)
For nearly a decade, a company known as Harris Corp. managed to sell sophisticated military surveillance equipment to police departments across the U.S. without any elected policymakers knowing that their tools even existed. A proposed law in Sacramento could ensure that this history never repeats itself.
Corporate secrets subvert transparency
The particular tools built by Harris Corp. are cell-site simulators, sometimes described as a “Stingray” (after the trade name of an early version). They monitor cell phone networks by mimicking a cell tower and capturing transmissions from cell phones near it, thereby exposing the phones’ locations and unique identifiers (such as an IMSI number), and enabling capture of metadata and unencrypted voice and text content. While originally developed for use in military theaters, cell-site simulators are also increasingly purchased—and deployed without civilian oversight—by police departments across the U.S. While presented as tools to enhance public safety, cell-site simulators have been used in the U.S. to monitor peaceful activists. One early example involved a 2003 demonstration in Miami (in which the author of this post participated) opposing global corporate governance via undemocratic international trade agreements. At the time, no one outside law enforcement, the military, and corporate contractors even knew that cell-site simulators existed, since the FBI had insisted on non-disclosure agreements to keep civilians—including local officials and judges—in the dark. But in spite of corporate and government secrecy undermining public oversight and trust, they eventually came to light after a jailhouse lawyer serving a prison sentence for credit card fraud took it upon himself to investigate, discovering a racket with tentacles gripping local police departments across the U.S. This saga reflects how executive secrecy can extend from Washington all the way down to the local level.
Public oversight offers a solution
Due to local laws secured by grassroots campaigns, several cities around the country—including Oakland, Berkeley, and Seattle, as well as Somerville, MA—now bar local police from obtaining surveillance equipment without approval from independent local policymakers, following meaningful oversight of the technology and proposed policies. Most importantly, before the city council votes on whether to approve the spy tech, the public must be notified of the issue and given an opportunity to be heard. In most cities and towns around the country, however, police unilaterally decide whether to acquire surveillance technologies. Local elected officials and the public are cut out from the oversight process. The opportunity to strengthen local oversight is one reason that EFF has supported SB 1186, a proposed measure currently pending before the California state legislature that would increase transparency in the acquisition of surveillance technology by local police departments. In particular, the bill would require law enforcement agencies to develop use policies, and justify the public safety rationale, before acquiring powerful tools that could easily be misused—and even more easily undermine the rights of community members.
Recent changes preserve community control over acquisition, if not deployment
This year’s SB 1186 is modeled on a similar proposal from last year’s legislative session, SB 21, which we also supported.
We were disappointed by recent amendments watering down the bill’s provisions for ongoing oversight. Previous versions required law enforcement agencies that acquired surveillance tech to periodically report to civilian authorities on how they used it. By eliminating these critical annual use reports, the recent amendments limit opportunities for public oversight of how surveillance tech is deployed. On the other hand, the measure would vastly strengthen oversight relative to the indefensible and completely opaque status quo. Because the bill would expand transparency and community control over the acquisition of surveillance technology, we continue to support it. We encourage members of the state legislature to enact it—without further amendments—for the Governor to sign into law, and also encourage Californians to raise their voices in support of the measure. Wherever local coalitions have organized around community control principles, they will continue to enjoy opportunities to propose more demanding limits, such as post-acquisition reporting requirements. Local coalitions also will have opportunities to demand substantive limits on spying technologies, such as judicial warrant requirements for targeted surveillance methods, or limits on information retention, dissemination, or use to prevent surveillance tools from being used for trivial—or worse yet, corrupt—reasons.
Take Action: Support S.B. 1186
All too often, and in far too many places across California, police routinely acquire and use sophisticated spying technologies in secret, without public oversight. In a nation facing such profound divisions and questions about the legitimacy of law enforcement tactics, every community in California could use a little sunlight.

Between You, Me, and Google: Problems With Gmail's “Confidential Mode” (Fr, 20 Jul 2018)
With Gmail’s new design rolled out to more and more users, many have had a chance to try out its new “Confidential Mode.” While many of its features sound promising, what “Confidential Mode” provides isn’t confidentiality. At best, the new mode might create expectations around security and privacy in Gmail that it fails to meet. We fear that Confidential Mode will make it less likely for users to find and use other, more secure communication alternatives. And at worst, Confidential Mode will push users further into Google’s own walled garden while giving them what we believe are misleading assurances of privacy and security. With its new Confidential Mode, Google purports to allow you to restrict how the emails you send can be viewed and shared: the recipient of your Confidential Mode email will not be able to forward or print it. You can also set an “expiration date” at which time the email will be deleted from your recipient’s inbox, and even require a text message code as an added layer of security before the email can be viewed. Unfortunately, each of these “security” features comes with serious security problems for users.
DRM for Email
It’s important to note at the outset that because Confidential Mode emails are not end-to-end encrypted, Google can see the contents of your messages and has the technical capability to store them indefinitely, regardless of any “expiration date” you set. In other words, Confidential Mode provides zero confidentiality with regard to Google. But despite its lack of end-to-end encryption, Google promises that with Confidential Mode, you’ll be able to send people unprintable, unforwardable, uncopyable emails thanks to something called “Information Rights Management” (IRM), a term coined by Microsoft more than a decade ago. (Microsoft also uses the term “Azure Information Protection.”) Here’s how IRM works: companies make a locked-down version of a product that checks documents for flags like “don’t allow printing” or “don’t allow forwarding” and, if it finds these flags, the program disables the corresponding features. To prevent rivals from making their own interoperable products that might simply ignore these restrictions, the program encrypts the user’s documents, and hides the decryption keys where users aren’t supposed to be able to find them. This is a very brittle sort of security: if you send someone an email or a document that they can open on their own computer, on their own premises, nothing prevents that person from taking a screenshot or a photo of their screen that can then be forwarded, printed, or otherwise copied. But that’s only the beginning of the problems with Gmail’s new built-in IRM. Indeed, the security properties of the system depend not on the tech, but instead on a Clinton-era copyright statute. Under Section 1201 of the 1998 Digital Millennium Copyright Act (“DMCA 1201”), making a commercial product that bypasses IRM is a potential felony, carrying a five-year prison sentence and a $500,000 fine for a first offense. DMCA 1201 is so broad and sloppily drafted that just revealing defects in Google’s IRM could land you in court. We think that “security” products shouldn’t have to rely on the courts to enforce their supposed guarantees, but rather on technologies such as end-to-end encryption which provide actual mathematical assurances of confidentiality. We believe that using the term “Confidential Mode” for a feature that doesn’t provide confidentiality as that term is understood in infosec is misleading.
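For contrast, a message is confidential in the infosec sense when the mail provider cannot read it at all, because it is encrypted to the recipient's key before it leaves the sender's device. Here is a minimal sketch of that idea using the PyNaCl library; the email plumbing is left out, and in practice tools built on OpenPGP or S/MIME handle key exchange and message formatting:

    from nacl.public import PrivateKey, SealedBox

    # The recipient generates a keypair and publishes only the public half.
    recipient_key = PrivateKey.generate()
    recipient_public = recipient_key.public_key

    # The sender encrypts to the recipient's public key before sending.
    # Whoever carries the ciphertext -- Gmail included -- sees only opaque bytes.
    ciphertext = SealedBox(recipient_public).encrypt(b"The audit found the flaw in module X.")

    # Only the holder of the matching private key can recover the message.
    plaintext = SealedBox(recipient_key).decrypt(ciphertext)
    assert plaintext == b"The audit found the flaw in module X."

No "expiration date" or IRM flag is needed to keep the provider out of the message; the encryption does that, and that is the assurance Confidential Mode does not offer.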
“Expiring” Messages
Similarly, we believe that Confidential Mode’s option to set an “expiration date” for sensitive emails could lead users to believe that their messages will completely disappear or self-destruct after the date they set. But the reality is more complicated. Also sometimes called “ephemeral” or “disappearing” messages, features like Confidential Mode’s “expiring” messages are not a privacy panacea. From a technical perspective, there are plenty of ways to get around expiring messages: a recipient could screenshot the message or take a picture of it before it expires. But Google’s implementation has a further flaw. Contrary to what the “expiring” name might suggest, these messages actually continue to hang around long after their expiration date: for instance, in your Sent folder. This Google “feature” eliminates one of the key security properties of ephemeral messaging: an assurance that in the normal course of business, an expired message will be irretrievable by either party. Because messages sent with Confidential Mode are still retrievable—by the sender and by Google—after the “expiration date,” we think that calling them expired is misleading.
Exposing Phone Numbers
If you choose the “SMS passcode” option, your recipient will need a two-factor authentication-like code to read your email. Google generates and texts this code to your recipient, which means you might need to tell Google your recipient’s phone number—potentially without your recipient’s consent. If Google doesn’t already have that information, using the SMS passcode option effectively gives Google a new way to link two pieces of potentially identifying information: an email address and a phone number. This “privacy” feature can be harmful to users with a need for private and secure communications, and could lead to unpleasant surprises for recipients who may not want their phone number exposed.
Not So Confidential
Ultimately, for the reasons we outlined above, in EFF’s opinion calling this new Gmail mode “confidential” is misleading. There is nothing confidential about unencrypted email in general and about Gmail’s new “Confidential Mode” in particular. While the new mode might make sense in narrow enterprise or company settings, it lacks the privacy guarantees and features to be considered a reliable secure communications option for most users.

Undermining Mobile Phone Users’ Privacy Won’t Make Us Safer (Mi, 18 Jul 2018)
The Kelsey Smith Act Would Force Cell Providers to Turn Private User Data Over to Law Enforcement

Tragedies often bring political proposals that would do more harm than help—undermining our right to secure communications, for example, or our right to gather online. It is in these moments that we face legislative gambits that are too often willing to trade our privacy for assumed security. It is in these moments that we should be careful about what could be taken from us. The Kelsey Smith Act (H.R. 5983) tries to correct a tragedy that occurred a decade ago by expanding government surveillance authorities. It is a mis-correction. The bill would force cell phone companies to disclose the location of a person’s device at the request of police who believe that person is in distress. On its face, that’s not unreasonable. But if the police make a mistake—or abuse their power—the bill offers almost no legal recourse for someone whose location privacy was wrongfully invaded. As the Supreme Court recently recognized in Carpenter, cell phone location information is incredibly sensitive data. It provides “an intimate window into a person’s life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’” These location records “hold for many Americans the ‘privacies of life.’” With this in mind, any legislative attempt to intrude on this private data must be undertaken extremely carefully. The Kelsey Smith Act fails to do that.

History of the Kelsey Smith Act

The first version of the bill came after the 2007 kidnapping of 17-year-old Overland Park, Kansas resident Kelsey Smith. After Smith had been reported missing, police asked Verizon to disclose the location of her cell phone. But Verizon first required the police to submit a subpoena before complying with the request (as it’s allowed to do under current law). The police eventually discovered Smith was killed the same day she was kidnapped. Kansas lawmakers responded with a bill that traded privacy for perceived protection. The original bill was signed into Kansas law in 2009, and similar versions of the Kelsey Smith Act have since been adopted in 21 states around the country. Today, U.S. Senators Jerry Moran and Pat Roberts and Representative Kevin Yoder believe the law should go national. The three Kansas-based lawmakers introduced the federal Kelsey Smith Act into the Senate and the House of Representatives in May, but for Rep. Yoder, this is his third attempt. His prior bills failed to pass in 2015 and 2016. (EFF opposed the 2016 bill.) The latest Kelsey Smith Act is no better than its past iterations.

Cell Phone Providers Can Already Provide Location Information in an Emergency—Keep it That Way

Many bills and laws have their own set of emergency carve-outs—situations where the laws can be bent to respond to immediate threats that pose a serious risk of death or physical injury. But the Kelsey Smith Act turns that notion on its head, requiring disclosure when the government claims an emergency exists. And it goes too far. Under the current bill, police could force telecommunications providers—like Verizon, Sprint, and AT&T—to disclose the location of a device simply by asserting one of two things.
Police can show that the device being sought was used in the last 48 hours to call 9-1-1 to request emergency assistance, or they can show “reasonable suspicion that the device is in the possession of an individual who is involved in an emergency situation that involves the risk of death or serious physical harm.” To start, EFF is troubled by the bill’s expansive definition of an “emergency.” The Kelsey Smith Act allows law enforcement agents to access the location of any cell phone that has dialed 9-1-1 for emergency assistance in the last 48 hours. Almost by definition, that’s not an emergency. Emergencies are of-the-moment crises, requiring immediate responses. If you call 9-1-1 today to request emergency assistance, law enforcement shouldn’t be able to get your location information 48 hours later without showing that the call relates to a current emergency. In addition to being far too broad, this expanded definition could further deter marginalized communities from calling 9-1-1—communities that are already hesitant to seek emergency assistance from law enforcement in the first place. Under current law, cell phone location information can already be requested by law enforcement agents from telecom companies during emergency situations. But the law allows telecom providers to have a say—they can assess what is and isn’t a real emergency and protect their users’ privacy by denying an invalid request. The Kelsey Smith Act would effectively bar providers from protecting their users. The potential for law enforcement agents to misuse emergency requests is more than theoretical. According to written testimony from ACLU attorney Nathan Wessler (the same attorney who argued in Carpenter), police in California, Texas, New York, and Maryland have made emergency requests for information when in fact there was no true emergency: “Police in Anderson, California, coerced a person seeking a restraining order into saying she had been held against her will for six hours, and then sent a false emergency request for location information to the purported kidnapper’s cellular service provider.” Also, “Police in Rochester, New York, obtained location information about a suspect’s cell phone when they already knew the suspect’s location but wanted to build a better case by obtaining information from the phone.” Those situations involved local police fraudulently claiming life-threatening situations to obtain cell phone location information. But emergency disclosure authorities can be abused under other circumstances, like making a fake claim about national security. In a 2010 report, the Department of Justice’s Inspector General found systemic misuse of emergency requests for call record information by the FBI. The report found that emergency requests were used in entirely non-life-threatening situations, including three “media leak investigations,” one of which resulted in the collection of telephone records from Washington Post and New York Times reporters. Those reporters whose privacy was wrongfully invaded were eventually told about it. No similar notification safeguards are required under the Kelsey Smith Act. While EFF sympathizes with the bill’s intended purpose, creating an overly broad route for law enforcement to demand people’s personal information is not the answer. EFF urges Congress to reject the Kelsey Smith Act.

Win for Public Right to Know: Court Vacates Injunction Against Publishing the Law (Di, 17 Jul 2018)
Industry Groups Want to Control Access to Legal Rules and Regulations San Francisco – A federal appeals court today ruled that industry groups cannot control publication of binding laws and standards. This decision protects the work of Public.Resource.org (PRO), a nonprofit organization that works to improve access to government documents. PRO is represented by the Electronic Frontier Foundation (EFF), the law firm of Fenwick & West, and attorney David Halperin. Six large industry groups that work on building and product safety, energy efficiency, and educational testing filed suit against PRO in 2013. These groups publish thousands of standards that are developed by industry and government employees. Some of those standards are incorporated into federal and state regulations, becoming binding law. As part of helping the public access the law, PRO posts those binding standards on its website. The industry groups, known as standards development organizations, accused PRO of copyright and trademark infringement for posting those standards online. In effect, they claimed the right to decide who can copy, share, and speak the law. The federal district court for the District of Columbia ruled in favor of the standards organizations in 2017, and ordered PRO not to post the standards. Today, a three-judge panel of the Court of Appeals for the D.C. Circuit reversed that decision, ruling that the district court did not properly consider copyright’s fair use doctrine. It rejected the injunction and sent the case back to district court for further consideration of the fair use factors at play. “[I]n many cases,” wrote the court, “it may be fair use for PRO to reproduce part or all of a technical standard in order to inform the public about the law.” “Our mission at PRO is to give citizens access to the laws that govern our society,” said PRO founder Carl Malamud. “We can’t let private industry control how we access, share, and speak the law. I’m grateful that the court recognized the importance of fair use to our archive.” This is an important ruling for the common-sense rights of all people. As Judge Katsas wrote in his concurrence, the demands of the industry groups for exclusive control of the law "cannot be right: access to the law cannot be conditioned on the consent of a private party." Based on today’s unanimous ruling, EFF is confident we can demonstrate that Public Resource's posting of these standards is protected fair use. “Imagine a world where big companies can charge you to know the rules and regulations you must follow,” said EFF Legal Director Corynne McSherry. “The law belongs to all of us. We all have a right to read, understand and share it.” For the full opinion: https://www.eff.org/document/opinion-4   For more on ASTM v. Public.Resource.org: https://www.eff.org/cases/publicresource-freeingthelaw Contact:  Corynne McSherry Legal Director corynne@eff.org Mitch Stoltz Senior Staff Attorney mitch@eff.org

Hearing Thursday: EFF Asks Court to Block Enforcement of FOSTA While Lawsuit Proceeds (Di, 17 Jul 2018)
Law Is Causing Online Censorship and Removal of Protected Speech

Washington, D.C.—On Thursday, July 19, at 4 pm, the Electronic Frontier Foundation (EFF) will urge a federal judge to put enforcement of FOSTA on hold during the pendency of its lawsuit challenging the constitutionality of the federal law. The hold is needed, in part, to allow plaintiff Woodhull Freedom Foundation to organize and publicize its annual conference, held August 2-5. FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, was passed by Congress in March. But despite its name, FOSTA attacks online speakers who speak favorably about sex work by imposing harsh penalties on any website that might be seen to “facilitate” prostitution or “contribute to sex trafficking.” In Woodhull Freedom Foundation v. U.S., filed on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, EFF maintains the law is unconstitutional because it muzzles constitutionally protected speech that protects and advocates for sex workers and forces speakers and platforms to censor themselves. Enforcement of the law should be suspended because the plaintiffs are likely to win the case and because it has caused, and will continue to cause, irreparable harm to the plaintiffs, EFF co-counsel Bob Corn-Revere of Davis Wright Tremaine will tell the court at a hearing this week on the plaintiffs' request for a preliminary injunction. Because of the risk of criminal penalties, the plaintiffs have had their ads removed from Craigslist and have censored information on their own websites. Plaintiff Woodhull Freedom Foundation has censored publication of information that could assist sex workers negatively impacted by the law. FOSTA threatens Woodhull’s ability to engage in protected online speech, including livestreaming and live tweeting its August meeting, unless FOSTA is put on hold.

What: Hearing on plaintiffs' motion for preliminary injunction in Woodhull Freedom Foundation v. U.S.
When: Thursday, July 19, 4 pm
Where: U.S. District Court for the District of Columbia, Courtroom 18, 6th Floor, 333 Constitution Avenue N.W., Washington D.C. 20001

For more on this case: https://www.eff.org/cases/woodhull-freedom-foundation-et-al-v-united-states
For the motion for preliminary injunction: https://www.eff.org/document/woodhull-freedom-foundation-et-al-v-united-states-motion-preliminary-injunction-and

Contact:
David Greene, Civil Liberties Director, davidg@eff.org
Aaron Mackey, Staff Attorney, amackey@eff.org

A Key Victory Against European Copyright Filters and Link Taxes - But What's Next? (Mo, 16 Jul 2018)
Against all the odds, but with the support of nearly a million Europeans, MEPs voted earlier this month to reject the EU's proposed copyright reform—including controversial proposals to create a new "snippet" right for news publishers, and mandatory copyright filters for sites that publish user-uploaded content. The change was testimony to how powerful and fast-moving Net activists can be. Four weeks ago, few knew that these crazy provisions were even being considered. By the June 20th vote, Internet experts were weighing in, and wider conversations were starting on sites like Reddit. The result was a vote of all MEPs on July 5th, which ended in a 318 to 278 victory in favour of withdrawing the Parliament's support for the language. Now all MEPs will have a chance in September to submit new amendments and vote on a final text—or reject the directive entirely. While re-opening the text was a surprising setback for Articles 13 and 11, the battle isn't over: the language to be discussed in September will be based on the original proposal by the European Commission, from two years ago—which included the first versions of the copyright filters, and snippet rights. German MEP Axel Voss's controversial modifications will also be included in the debate, and there may well be a flood of other proposals, good and bad, from the rest of the European Parliament. There's still sizeable support for the original text: the loudest proponents of Articles 11 and 13, led by Voss, persuaded many MEPs to support them by arguing that these new powers would restore the balance between American tech giants and Europe's newspaper and creative industries—or "close the value gap", as their arguments have it. But using mandatory algorithmic censors and new intellectual property rights to restore balance is like Darth Vader bringing balance to the Force: the fight may involve a handful of brawling big players, but it's everybody else who would have to deal with the painful consequences. That's why it remains so vital for MEPs to hear voices that represent the wider public interest. Librarians, academics, redditors, small Internet businesses, and celebrity YouTubers all spoke up in a way that was impossible for the Parliament to ignore. The same Net-savvy MEPs and activists that wrote and fought for the GDPR put their names to challenging the idea that these laws would rein in American tech companies. Wikipedians stood up and were counted: seven independent, European-language encyclopedias reached consensus to shut down on the day of the vote. European alternatives to Google, Facebook and Twitter argued that this would set back their cause. And European artists spoke up to say that the EU shouldn't be setting up censorship and ridiculous link rights in their name. To make sure the right amendments pass in September, we need to keep that conversation going. Read on to find out what you can do, and who you should be speaking to.

Who Spoke Up In The European Parliament?

As we noted last week, the decision to challenge the JURI committee's language on Articles 13 and 11 was not automatic -- a minimum of 78 MEPs needed to petition for it to be put to the vote. Here's the list of those MEPs who actively stepped forward to stop the bill.
Also heavily involved were Julia Reda, the Pirate Party MEP who worked so hard on making the rest of the proposed directive so positive for copyright reform, and then re-dedicated herself to stopping the worst excesses of the JURI language, and Marietje Schaake, the Parliament's foremost advocate for human rights online. These are the core of the opposition to Articles 13 and 11. A look at that list, and the final list of votes on July 5th, shows that the proposals have opponents in every corner of Europe's political map. It also shows that every MEP who voted for Articles 13 and 11 has someone close to them politically who knows why it's wrong.

What happens now?

In the next few weeks, those deep in the minutiae of the Copyright Directive will be crafting amendments for MEPs to vote on in September. The tentative schedule is that amendments are accepted until Wednesday September 5th, with a vote at 12:00 Central European Time on Wednesday September 12th. The European Parliament has a fine tradition of producing a rich supply of amendments (the GDPR had thousands). We'll need to coalesce support around a few key fixes that will keep the directive free of censorship filters and snippet rights language, and replace them with something less harmful to the wider Net. Julia Reda has already proposed amendments. And one of Voss' strongest critics in the latest vote was Catherine Stihler, the Scottish MEP who had created and passed consumer-friendly directive language in her committee, which Voss ignored. (Here's her barnstorming speech before the final vote.) While we wait for those amendments to appear, the next step is to keep the pressure on MEPs to remember what's at stake—no mandatory copyright filters, and no new ancillary rights on snippets of text. In particular, if you talk to your MEP, it's important to convey how you feel these proposals will affect you. MEPs are hearing from giant tech and media companies. But they are only just beginning to hear from a broader camp: the people of the Internet.

FBI Wish List: An App That Can Recognize the Meaning of Your Tattoos (Mo, 16 Jul 2018)
We’ve long known that the FBI is heavily invested in developing face recognition technology as a key component in its criminal investigations. But new records, obtained by EFF through a Freedom of Information Act (FOIA) lawsuit, show that’s not the only biometric marker the agency has its eyes on. The FBI’s wish list also includes image recognition technology and mobile devices to attempt to use tattoos to map out people’s relationships and identify their beliefs. EFF began looking at tattoo recognition technology in 2015, after discovering that the National Institute of Standards and Technology (NIST), in collaboration with the FBI, was promoting experiments using tattoo images gathered involuntarily from prison inmates and arrestees. The agencies had provided a dataset of thousands of prisoner tattoos to some 19 outside groups, including companies and academic institutions, that are developing image recognition and biometric technology. Government officials instructed the groups to demonstrate how the technology could be used to identify people by their tattoos and match tattoos with similar imagery. Our investigation found that NIST was targeting people who shared common beliefs, with a heavy emphasis on religious imagery. NIST researchers, we discovered, had also bypassed basic oversight measures. Despite rigid requirements designed to protect prisoners who might be used as subjects in government research, the researchers failed to seek sign-off from the in-house watchdog before embarking on the project. Following our report, NIST stopped responding to EFF’s FOIA requests. The agency also rushed to retroactively alter its documents to downplay the nature of the research. In a statement issued to the press, NIST denied our findings, claiming that its goal was simply to evaluate the effectiveness of tattoo recognition algorithms and “not about the many complex law enforcement policies or approaches that may be related to images of tattoos.” This claim rings especially hollow now that the FBI has released email communications and slide presentations with NIST in response to our FOIA suit. Read the FBI records provided in response to our FOIA litigation. Among the records were two previously withheld presentations from FBI divisions focused on deciphering tattoos that were delivered at a special event organized by NIST for participants in the research project.

What the FBI “Wants” and “Needs”

While the FBI has long used a rudimentary system called TAG-IMAGE to identify people by their tattoos, according to the presentation, "It does NOT [sic] attempt to answer the question 'What does it mean?'" Officials told the roomful of government, academic, and corporate researchers assembled by NIST that the FBI is seeking to establish “affiliation” using tattoo images. This means identifying “gang membership, terrorist relevance, location, [and] symbols description/meaning.” One particularly alarming presentation by the chief of the FBI’s Cryptanalysis & Racketeering Records Unit was titled, “Tattoo, Graffiti, and Symbol Recognition: A Codebreakers Perspective.” It began with a graphic of exclusively religious symbols, including Christian, Jewish, Hindu, and Taoist iconography. A subsequent slide was even more ominous. The headline read: “We want a one stop database to tell us what a symbol means.” The presentation went on to explain that the FBI currently uses open-source resources to decipher the meanings of symbols and tattoos.
These include the Anti-Defamation League’s Hate on Display database, the University of Michigan’s Science Fiction and Fantasy “Dictionary of Symbolism,” and the U.S. Patent and Trademark Office’s website. The heavily redacted presentation included blacked-out slides that would have shown the extent of the FBI’s current efforts, as well as several individual examples that posed the question, “What does this tattoo mean?” Further in the presentation, the FBI elaborated on its goals: “The Technology We Need: web accessible user populated database with instant i2i matching” (i2i stands for “image to image”). The slide included a photograph of a New York Police Department officer tinkering with a smartphone. This technology is not science fiction: several years ago the Department of Homeland Security promoted a controversial program at Purdue University to crowdsource gang graffiti and tattoo images for a mobile recognition app. A prototype technology was deployed in Indiana and Illinois and introduced to the FBI in 2014. However, very little information is available about this pilot program after 2016.

Tattoos and Meaning

Most biometric technology, such as face, iris, or fingerprint recognition, is designed to establish the identity of otherwise unknown suspects or victims. Tattoo recognition is different: not only can tattoos be used for identification, they can also reveal further information about the individual. For example, a tattoo could say a lot about a person’s politics, religious beliefs, who their family members are, or even their favorite recording artist. It’s this utility that raises civil liberties concerns, particularly in an age where law enforcement may be using tattoos to add individuals to gang databases or prioritize immigrants for deportation proceedings. For example, while the FBI says that the database could link members of a particular gang, it is also true that it could be used to compile a list of individuals who subscribe to a particular religion. It’s also important to note the risk of erroneous interpretations of tattoos, whether by human or technology. A swastika tattoo, for example, could refer to the Neo-Nazi movement, but it’s also a symbol used by Native Americans. A six-pointed star is often associated with Judaism, but it can also signify a particular street gang. In one high-profile case, federal officials attempted to deport a Mexican man after accusing him of having a gang-affiliated tattoo. A judge found that claim to be unfounded [PDF] after prosecutors were unable to counter a well-respected gang expert’s testimony that he had “never seen a gang member with a similar tattoo nor would [he] attribute this tattoo to have any gang-related meaning.” Even if a person has a gang tattoo, it does not mean they are part of that gang: often people who leave gangs cannot afford to have tattoos removed. In some cases, gang members have forcibly tattooed women against their will. NIST’s research itself also illustrates how dangerous this technology can be: none of the third parties were able to produce better than 15% accuracy in matching tattoos based on imagery.

More Records on the Way

The new documents from the FBI begin to bare the truth about the agency’s tattoo recognition plans; however, we are still waiting on the government to fully provide the records we requested. For example, EFF has demanded NIST and the FBI provide a list of the 19 companies and research institutions that received copies of the images collected from inmates.
Tattoo recognition technology is still in its early stages, but as we see increased interest from federal agencies in using tattoos as an excuse to persecute immigrants, it’s more important than ever to expose this technology before it reaches maturation.
Related Cases: NIST Tattoo Recognition Technology Program FOIA

EFF to Japan: Reject Website Blocking (Fr, 13 Jul 2018)
Website blocking to deal with alleged copyright infringement is like cutting off your hand to deal with a papercut. Sure, you don’t have a papercut anymore, but you’ve also lost a lot more than you’ve gained. The latest country to consider a website blocking proposal is Japan, and EFF has responded to the call for comment by sharing all the reasons that cutting off websites is a terrible solution for copyright violations. In response to infringement of copyrighted material, specifically citing a concern for manga, the government of Japan began work on a proposal that would make certain websites inaccessible in Japan. We’ve seen proposals like this before, most recently in the European Union’s Article 13. In response to Japan’s proposal, EFF explained that website blocking is not effective at the stated goal of protecting artists and their work. First, it can be easily circumvented. Second, it ends up capturing a lot of lawful expression. Blocking an entire website does not distinguish between legal and illegal content, punishing both equally. Blocking and filtering by governments has frequently been found to violate national and international principles of free expression [pdf]. EFF also shared the research that leading Internet engineers did in response to a potential U.S. law that would have enabled website blocking. They said that website blocking would lead to network errors and security problems. According to numerous studies, the best answer to the problem of online infringement is providing easy, lawful alternatives. Doing this also has the benefit of not penalizing legitimate expression the way blocking does. Quite simply, website blocking doesn’t work, violates the right to free expression, and breaks the Internet. Japan shouldn’t go down this path, but should instead look to proven alternatives.
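As one concrete illustration of how easily blocking can be circumvented: suppose an order is implemented at the DNS level by a user's ISP, which is one common approach (an assumption made here purely for the sake of the example). A user can simply ask a resolver outside that ISP instead. A minimal sketch using the third-party dnspython library, with example.com standing in for a hypothetically blocked site:

```python
# Minimal sketch, assuming the block is applied only by the ISP's DNS resolver:
# querying any resolver outside the ISP returns the site's real address.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

blocked_domain = "example.com"  # placeholder standing in for a blocked site

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]  # a public resolver outside the blocking ISP

answer = resolver.resolve(blocked_domain, "A")
print([record.to_text() for record in answer])
```

Other blocking techniques, such as IP blocking or deep packet inspection, are similarly routed around with VPNs or proxies, which is part of why the engineers mentioned above warned that blocking mandates cause network errors and security problems for ordinary users without stopping determined infringers.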

EFF to Patent Office: Don't Make it Harder to Kill Bad Patents (Fr, 13 Jul 2018)
It’s already much too difficult to invalidate bad patents—the kind that never should have been issued in the first place. Now, unfortunately, the Patent Office has proposed regulation changes that will make it even harder. That’s the wrong path to take. This week, EFF submitted comments [PDF] opposing the Patent Office’s proposal. Congress created some new kinds of Patent Office proceedings as part of the America Invents Act (AIA) of 2011. That was done with the goal of improving patent quality by giving third parties the opportunity to challenge patents at the Patent Trial and Appeal Board, or PTAB. EFF used one of these proceedings, known as inter partes review, to successfully challenge a patent that had been used to sue podcasters. Congress didn’t explicitly say how these judges should interpret patent claims in AIA proceedings. But the Patent Office, until recently, read the statute as EFF still does: it requires the office to interpret patent claims in PTAB challenges the same way it does in all other proceedings. That approach requires giving the words of a patent claim their broadest reasonable interpretation (BRI). That’s different than the approach used in federal courts, which apply a standard that can produce a claim of narrower scope. Using the BRI approach in AIA proceedings makes sense. Critically, it ensures the Patent Office reviews a wide pool of prior art (publications and products that pre-date the patent application). If the patent owner thinks this pool is too broad, it can amend claims to narrow their scope and avoid invalidating prior art. Requiring patent owners to amend their claims to avoid invalidating prior art encourages innovation and deters baseless litigation by giving the public clearer notice about what the patent does and does not claim. But you don’t have to take our word for it. Barely two years ago, the Patent Office made the same argument to the Supreme Court to justify the agency’s use of the BRI approach in AIA proceedings. The Supreme Court agreed. In Cuozzo v. Lee [PDF], the court upheld the agency’s approach based on the text and structure of the AIA, a century of agency practice, and considerations of fairness and efficiency. After successfully convincing the Supreme Court that the BRI standard should apply in AIA proceedings, why has the PTO changed its mind? Unfortunately, the Patent Office’s notice says little to explain its sudden change of course. Nor does it offer any reasons why this change would improve patent quality, or the efficiency of patent litigation. Apparently, the Patent Office assumes minimizing differences between two deliberately different types of proceedings will be more efficient. That assumption is flawed. The PTAB’s interpretation of claim language will only be relevant to a district court if similar terms are in dispute. If not, the change will only ensure more lawsuits, based on bad patents, clog up the courts. The timing of the Patent Office’s proposal may hint at its impetus. When the agency adopted and argued for the BRI standard, the Director was Michelle Lee. On February 8, 2018, Andrei Iancu became Director. Three months later, on May 9, the Patent Office proposed abandoning the BRI standard. In his keynote speech, Director Iancu referenced unfounded criticisms of AIA proceedings, from “some” who, “pointing to the high invalidation rates . . . 
hate the new system with vigor, arguing that it’s an unfair process that tilts too much in favor of the petitioner.” The Patent Office’s sudden change of view on this topic may be a capitulation to these unfounded criticisms and a sign of further policy changes to come. We hope the Patent Office will reconsider its proposal, after considering our comments, as well as those submitted by the R Street Institute and CCIA, a technology trade group. Administrative judges must remain empowered to weed out those patents that should never have issued in the first place.    

Should Your Company Help ICE? “Know Your Customer” Standards for Evaluating Domestic Sales of Surveillance Equipment (Fr, 13 Jul 2018)
Employees at Google, Microsoft, and Amazon have raised public concerns about those companies assisting U.S. military, law enforcement, and the Immigration and Customs Enforcement Agency (ICE) in deploying various kinds of surveillance technologies. These public calls from employees raise important questions: what steps should a company take to ensure that government entities who purchase or license their technologies don’t misuse them? When should they refuse to sell to a governmental entity? Tech companies must step up and ensure that they aren’t assisting governments in committing human rights abuses. While the specific context of U.S. law enforcement using new surveillance technologies is more recent, the underlying questions aren’t. In 2011, EFF proposed a basic Know Your Customer framework for these questions. The context then was foreign repressive governments’ use of the technology from U.S. and European companies to facilitate human rights abuses. EFF’s framework was cited favorably by the European Commission in its implementation guide for technology companies for the United Nations' Guiding Principles on Business and Human Rights. Now, those same basic ideas about investigation, auditing, and accountability can be, and should be, deployed domestically. Put simply, tech companies, especially those selling surveillance equipment, must step up and ensure that they aren’t assisting governments in committing human rights, civil rights and civil liberties abuses. This obligation applies whether those governments are foreign or domestic, federal or local. One way tech companies can navigate this difficult issue is by adopting a robust Know Your Customer program, modeled on requirements that companies already have to follow in the export control and anti-bribery context. Below, we outline our proposal for sales to foreign governments from 2011, with a few updates to reflect shifting from an international to domestic focus. Employees at companies that sell to government agencies, especially agencies with a record as troubling as ICE, may want to advocate for this as a process to protect against future corporate complicity. We propose a simple framework: Companies selling surveillance technologies to governments need to affirmatively investigate and "know your customer" before and during a sale. We suggest customer investigations similar to what many of these companies are already required to do under the Foreign Corrupt Practices Act and the export regulations for their foreign customers. Companies need to refrain from participating in transactions where their "know your customer" investigations reveal either objective evidence or credible concerns that the technologies provided by the company will be used to facilitate governmental human or civil rights or civil liberties violations. This framework can be implemented voluntarily, and should include independent review and auditors, employee participation, and public reporting. A voluntary approach can be more flexible as technologies change and situations around the world shift. Nokia Siemens Networks has already adopted a Human Rights Policy that incorporates some of these guidelines. In a more recent example, Google's AI principles contain many of these steps along with guidance about how they should be applied.  If companies don’t act on their own, however, and don’t act with convincing transparency and commitment, then a legal approach may be necessary. 
Microsoft has already indicated that it not only would be open to a legal (rather than voluntary) approach, but that such an approach is necessary. For technology companies to be truly accountable, a legal approach can and should include extending liability to companies that knowingly and actively facilitate governmental abuses, for example through aiding and abetting liability. EFF has long advocated for corporate liability for aiding governmental surveillance, including in the Doe v. Cisco case internationally and in our Hepting v. AT&T case domestically.

Elaborating on the basic framework above, here are some guidelines: [Note: These guidelines use key terms—Technologies, Transaction, Company, and Government—that are defined at the bottom and capitalized throughout.]

Affirmatively Investigate: The Company must have a process, led by a specifically designated person, to engage in an ongoing evaluation of whether Technologies or Transactions will be, or are being, used to aid, facilitate, or cover up human rights, civil rights, and civil liberties abuses (“governmental abuses”). This process needs to be more than lip service and needs to be verifiable (and verified) by independent outsiders. It should also include concerned employees, who deserve to have a voice in ensuring that the tools they develop are not misused by governments. This must be an organizational commitment, with effective enforcement mechanisms in place. It must include tools, training, and education of personnel, plus career consequences when the process is not followed. In addition, in order to build transparency and solidarity, a Company that decides to refuse (or continue) further service on the basis of these standards should, where possible, report that decision publicly so that the public understands the decisions and other companies can have the benefit of their evaluation. The investigation process should include, at a minimum:

Review what the purchasing Government and Government agents, and Company personnel and agents, are saying about the use of the Technologies, both before and during any Transaction. This includes, among other things, review of sales and marketing materials, technical discussions and questions, presentations, technical and contractual specifications, and technical support conversations or requests. For machine learning or AI applications, it must include review of training data and mechanisms to identify what questions the technology will be asked to answer or learn about. Examples include: Evidence in the Doe v. Cisco case, arising from Cisco’s participation with the Chinese government in building surveillance tools aimed at identifying Falun Gong, includes the presentations made by Cisco employees that brag about how their technology can help the Chinese Government combat the “Falun Gong Evil Religion.” In 2016, the ACLU of Northern California published a report outlining how Geofeedia advertised that its location-based, social media surveillance system could be used by government offices and the police to monitor the protest activities of activists, including specifically activists of color, raising core First Amendment concerns.

Review the capabilities of the Technology for human rights abuses and consider possible mitigation measures, both technical and contractual. For instance, the fact that facial recognition software misidentifies people of color at a much higher rate than white people is a clear signal that the Technology is highly vulnerable to governmental abuses.
Note that we do not believe that Companies should be held responsible merely for selling general purpose or even dual-use products to the government that are later misused, as long as the Company conducted a sufficient investigation that did not reveal governmental abuse as a serious risk.

Review the Government’s laws, regulations, and practices regarding surveillance, including approval of purchase of surveillance equipment, laws concerning interception of communications, access to stored communications, due process requirements, and other relevant legal process. For sellers of machine learning and artificial intelligence tools, the issue of whether the tool can be subject to true due process requirements–that is, whether a person impacted by a system's decision can have sufficient access to be able to determine how an adverse decision was made–should be a key factor. For instance, Nokia Siemens says that it will only provide core lawful intercept (i.e. surveillance) capabilities that are legally required and are "based on clear standards and a transparent foundation in law and practice." In some instances, as with AI, this review may include interpreting and applying legal and ethics principles, rather than simply waiting for “generally accepted” ones to emerge, since law enforcement often implements technologies before those rules are clear. EFF and a broad international coalition have already interpreted key international legal doctrines on mass surveillance in the Necessary and Proportionate Principles. For domestic uses, this review must include an evaluation of whether sufficient local control is in place. EFF and the ACLU have worked to ensure this with a set of proposals called Community Control Over Police Surveillance (CCOPS). If local control and protections are not yet in place, the company should decline to provide the technology until they are, especially in locations in which the population is already at risk from surveillance.

Review credible reports about the Government and its human rights record, including news or other reports from nongovernmental sources or local sources that indicate whether the Government engages in the use or misuse of surveillance capabilities to conduct human rights abuses. Internationally, this can include U.S. State Department reports as well as other governmental and U.N. reports, as well as those by well-respected NGOs and journalists. Domestically, this can include all of the above, plus Department of Justice reports about police departments, like the ones issued about Ferguson, MO, and San Francisco, CA. For both, this review can and should include nongovernmental and journalist sources as well.

Refrain from Participation: The Company must not participate in, or continue to participate in, a Transaction or provide a Technology if it appears reasonably foreseeable that the Transaction or Technology will directly or indirectly facilitate governmental abuses. This includes cases in which: The portion of the Transaction that the Company is involved in or the specific Technology provided includes building, customizing, configuring, or integrating into a system that is known or is reasonably foreseen to be used for governmental abuses, whether done by the Company or by others. The portion of the Government that is engaging in the Transaction or overseeing the Technologies has been recognized as committing governmental abuses using or relying on similar Technologies.
The Government's overall record on human rights generally raises credible concerns that the Technology or Transaction will be used to facilitate governmental abuses. The Government refuses to incorporate contractual terms confirming the intended use or uses of the Technology, confirming local control similar to the CCOPS Proposals, or allowing the auditing of its use by the Government purchasers in sales of surveillance Technologies. The investigation reveals that the technology is not capable of operating in a way that protects against abuses, such as when due process cannot be guaranteed in AI/ML decision-making, or bias in training data or facial recognition outcome is endemic or cannot be corrected.

Key Definitions and the Scope of the Process

Who should undertake these steps? The field is actually pretty small: Companies engaging in Transactions to sell or lease surveillance Technologies to Governments, defined as follows:

“Governmental Abuses” includes violations of international human rights law and international humanitarian law, domestic civil rights and civil liberties violations, and other legal violations that involve governments doing harm to people. As noted above, in some instances involving new or evolving technology or uses of technology, this may include interpreting and applying those principles and laws, rather than simply waiting for legal interpretations to emerge.

“Transaction” includes all sales, leases, rental or other types of arrangements where a Company, in exchange for any form of payment or other consideration, either provides or assists in providing Technologies, personnel or non-technological support to a Government. This also includes providing any ongoing support to Governments, such as software or hardware upgrades, consulting or similar services.

“Technologies” include all systems, technologies, consulting services, and software that, through marketing, customization, government contracting processes, or otherwise are known to the company to be used or be reasonably likely to be used to surveil third parties. This includes technologies that intercept communications, packet-sniffing software, deep packet inspection technologies, facial recognition systems, artificial intelligence and machine learning systems aimed at facilitating surveillance, certain biometrics devices and systems, voting systems, and smart meters. Note that EFF does not believe that general purpose technologies should be included in this, unless the Company has a clear reason to believe that they will be used for surveillance. Surveillance technologies like facial recognition systems are generally not sold to Governments off the shelf. Technology providers are almost inevitably involved in training, supporting, and developing these tools for specific governmental end users, like a specific law enforcement agency.

“Company” includes subsidiaries, joint ventures (especially joint ventures directly with government entities), and other corporate structures where the Company has significant holdings or has operational control.

“Government” includes all segments of government: local law enforcement, state law enforcement, and federal and even military agencies. It includes formal, recognized governments, including State parties to the United Nations.
It also includes governing or government-like entities, such as the Chinese Communist Party or the Taliban and other nongovernmental entities that effectively exercise governing powers over a country or a portion of a country. For these purposes, sales to a “Government” include indirect sales through a broker, reseller, systems integrator, contractor, or other intermediary or multiple intermediaries if the Company is aware or should know that the final recipient of the Technology is a Government.

This framework is similar to the one in the current U.S. export controls and also to the steps required of Companies under the Foreign Corrupt Practices Act. It is based on the recognition that companies involved in domestic government contracting, especially for the kinds of expensive, service-heavy surveillance systems provided by technology companies, are already participating in a highly regulatory process with many requirements. For larger federal contractors, these include providing complex cost or pricing data, doing immigration checks and conducting drug testing. Asking these companies to ensure that they are not facilitating governmental abuses is not a heavy additional lift. Regardless of how tech companies get there, if they want to be part of making the world better, not worse, they must commit to making business decisions that consider potential governmental abuses. No reasonable company wants to be known as the company that knowingly helps facilitate governmental abuses. Technology workers are making it clear that they don’t want to work for those companies either. While the blog posts and public statements from a few of the tech giants are a good start, it’s time for all tech companies to take real, enforceable steps to ensure that they aren’t serving as "abuse’s little helpers."
Related Cases: Doe I v. Cisco

EFF Responds to Vigilant Solutions’ Accusations About EFF ALPR Report (Do, 12 Jul 2018)
On Tuesday, we wrote a report about how the Irvine Company, a private real estate development company, has collected automated license plate reader (ALPR) data from patrons of several of its shopping centers, and is providing the collected data to Vigilant Solutions, a contractor notorious for its contracts with state and federal law enforcement agencies across the country.  The Irvine Company initially declined to respond to EFF’s questions, but after we published our report, the company told the media that it only collects information at three malls in Orange County (Irvine Spectrum Center, Fashion Island, and The Marketplace) and that Vigilant Solutions only provides the data to three local police departments (the Irvine, Newport Beach, and Tustin police departments).  The next day, Vigilant Solutions issued a press release claiming that the Irvine Company ALPR data actually had more restricted access (in particular, denying transfers to the U.S. Immigration & Customs Enforcement [ICE] agency), and demanding EFF retract the report and apologize. As we explain below, the EFF report is a fair read of the published ALPR policies of both the Irvine Company and Vigilant Solutions. Those policies continue to permit broad uses of the ALPR data, far beyond the limits that Vigilant now claims exist.    Vigilant Solutions’ press release states that the Irvine Company’s ALPR data "is shared with select law enforcement agencies to ensure the security of mall patrons," and that those agencies "do not have the ability in Vigilant Solutions' system to electronically copy this data or share this data with other persons or agencies, such as ICE."   However, neither Vigilant Solutions nor the Irvine Company have updated their published ALPR policies to reflect these restrictions.  Pursuant to California Civil Code § 1798.90.51(d), an ALPR operator "shall" implement and publish a usage and privacy policy that includes the "restrictions on, the sale, sharing, or transfer of ALPR information to other persons." This is important because the published policies are extremely broad. To begin with, the Irvine Company policy explains that "[t]he automatic license plate readers used by Irvine or its contractors are programmed to transmit the ALPR Information to" "a searchable database of information from multiple sources ('ALPR System') operated by Vigilant Solutions, LLC" "upon collection."  Moreover, the Irvine Company policy still says that Vigilant Solutions "may access and use the ALPR System for any of the following purposes: (i) to provide ALPR Information to law enforcement agencies (e.g., for identifying stolen vehicles, locating suspected criminals or witnesses, etc.); or (ii) to cooperate with law enforcement agencies, government requests, subpoenas, court orders or legal process." Under this policy, the use of ALPR data is not limited only to uses that "ensure the security of mall patrons," nor even to any particular set of law enforcement agencies, select or otherwise. The policy doesn’t even require legal process; instead it allows access where the "government requests."  Likewise, Vigilant Solutions’ policy states that the "authorized uses of the ALPR system" include the very broad category of "law enforcement agencies for law enforcement purposes," and—unlike the policy it claims to have in their press release—does not state any restriction on access by any particular law enforcement agency or to any particular law enforcement purpose. ICE is a law enforcement agency.  
We appreciate that Vigilant Solutions is now saying that the information collected from customers of the Irvine Spectrum Center, Fashion Island, and The Marketplace will never be transited to ICE and will only be used to ensure the security of mall patrons. But if they want to put that issue to rest, they should, at a minimum, update their published ALPR policies. Better yet, given the inherent risks with maintaining databases of sensitive information, Irvine and Vigilant Solutions should stop collecting information about mall patrons and destroy all the collected information. As a mass-surveillance technology, ALPR can be used to gather information on sensitive populations, such as immigrant drivers, and may be misused. Further, once collected, ALPR may be accessible by other government entities—including ICE—through various legal processes.  In addition, Vigilant Solutions’ press release takes issue with EFF’s statement that "Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide."  According to Vigilant Solutions press release, "Vigilant Solutions does not share any law enforcement data. The assertion is simply untrue. Law enforcement agencies own their own ALPR data and if they choose to share it with other jurisdictions, the[y] can elect to do so."    This is a distinction without a difference. As Vigilant Solutions’ policy section on "Sale, Sharing or Transfer of LPR Data" (emphasis added) states, "the company licenses our commercially collected LPR data to customers," "shares the results of specific queries for use by its customers" and "allows law enforcement agencies to query the system directly for law enforcement purposes." The only restriction is that, for information collected by law enforcement agencies, "we facilitate sharing that data only with other LEAs … if sharing is consistent with the policy of the agency which collected the data."  If Vigilant Solutions only meant to dispute "sharing" with respect to information collected by law enforcement, this is a non-sequitur, as the Irvine Company is not a law enforcement agency. Nevertheless, Vigilant Solutions’ dispute over whether it truly "shares" information puts an Irvine Company letter published yesterday in an interesting light. The Irvine Company reportedly wrote to Vigilant Solutions to confirm that "Vigilant has not shared any LPR Data generated by Irvine with any person or agency other than the Irvine, Newport Beach and Tustin police departments and, more specifically you have not shared any such data with U.S. Immigration and Customs Enforcement (ICE)."  Under the cramped "sharing" definition in the Vigilant Solutions press release, any such "confirmation" would not prevent Vigilant from licensing the Irvine data, sharing results of specific queries, allowing law enforcement to query the system directly, or "facilitate sharing" with ICE if the police department policies allowed it. If Irvine and Vigilant didn’t mean to allow this ambiguity, they should be more clear and transparent about the actual policies and restrictions.  The rest of the press release doesn’t really need much of a response, but we must take issue with one further claim. Vigilant Solutions complains that, while EFF reached out several times to the Irvine Company (with no substantive response), EFF did not reach out to them directly about the story. This assertion is both misleading and ironic.  A year ago, EFF sent a letter to Vigilant Solutions with 31 questions about its policies and practices. 
To date, Vigilant Solutions has not responded to a single question. In addition, Vigilant Solutions had already told the press, "as policy, Vigilant Solutions is not at liberty to discuss or share any contractual details. This is a standard agreement between our company, our partners, and our clients."  Indeed, Vigilant Solutions has quite a history of fighting EFF’s effort to shine a light on ALPR practices, issuing an open letter to police agencies taking EFF to task for using Freedom of Information Act and Public Records Act requests to uncover information on how public agencies collect and share data. A common Vigilant Solutions contract has provisions where the law enforcement agency "also agrees not to voluntarily provide ANY information, including interviews, related to Vigilant, its products or its services to any member of the media without the express written consent of Vigilant."  Vigilant Solutions has built its business on gathering sensitive information on the private activities of civilians, packaging it, and making it easily available to law enforcement. It’s deeply ironic that Vigilant gets so upset when someone wants to take a closer look at its own practices.

Egypt's Draconian New Cybercrime Bill Will Only Increase Censorship (Do, 12 Jul 2018)
The hope that filled Egypt's Internet after the January 25, 2011 uprising has long since faded away. In recent years, the country's military government has instead created a digital dystopia, pushing once-thriving political and journalism communities into closed spaces or offline, blocking dozens of websites, and arresting a large number of activists who once relied upon digital media for their work. In the past two years, we’ve witnessed the targeting of digital rights defenders, journalists, crusaders against sexual harassment, and even poets, often on trumped-up grounds of association with a terrorist organization or “spreading false news.” Now, the government has put forward a new law that will give it the ability to target and persecute just about anyone who uses digital technology. The new 45-article cybercrime law, named the Anti-Cyber and Information Technology Crimes law, is divided into two parts. The first part of the bill stipulates that service providers are obligated to retain user information (i.e. tracking data) in the event of a crime, whereas the second part of the bill covers a variety of cybercrimes under overly broad language (such as “threat to national security”). Article 7 of the law, in particular, grants the state the authority to shut down Egyptian or foreign-based websites that “incite against the Egyptian state” or “threaten national security” through the use of any digital content, media, or advertising. Article 2 of the law authorizes broad surveillance capabilities, requiring telecommunications companies to retain and store users’ data for 180 days. And Article 4 explicitly enables foreign governments to obtain access to information on Egyptian citizens and does not make mention of requirements that the requesting country have substantive data protection laws. The implications of these articles are described in detail in a piece written by the Association for Freedom of Thought and Expression (AFTE) and Access Now. In the piece, the organizations state “These laws serve to close space for civil society and deprive citizens of their rights, especially the right to freedom of expression and of association” and call for the immediate withdrawal of the law. We agree—the law must be withdrawn. It would appear that the bill’s underlying goal is to set up legal frameworks to block undesirable websites, intimidate social media users, and solidify state control over websites. By expanding the government’s power to block websites, target individuals for their speech, and surveil citizens, the Egyptian parliament is helping the already-authoritarian executive branch inch ever closer toward a goal of repressing anyone who dares speak their mind. The overly broad language contained throughout the law will lead to the persecution of individuals who engage in online speech and create an atmosphere of self-censorship, as others shy away from using language that may be perceived as threatening to the government. The Egyptian law comes at a time of increased repression throughout the Middle East. In the wake of the 2011 uprisings, a number of countries in the region began to crack down on online speech, implementing cybercrime-related laws that utilize broad language to ensure that anyone who steps out of line can be punished.
In a 2015 piece for the Committee to Protect Journalists, Courtney Radsch wrote: “Cybercrime legislation, publicly justified as a means of preventing terrorism and protecting children, is a growing concern for journalists because the laws are also used to restrict legitimate speech, especially when it is critical or embarrassing to authorities.” A June 2018 report from the Gulf Center for Human Rights maps both legal frameworks and violations of freedom of expression in the six Gulf states, as well as Jordan, Syria, and Lebanon, noting that “The general trend for prosecution was that digital rights and freedoms were penalised and ruled as 'cybercrime' cases delegated to general courts. Verdicts in these cases have been either based on an existing penal code where cybercrime laws are absent, in the process of being drafted, or under the penal code and a cybercrime law.” These are difficult times for free expression in the region. EFF continues to monitor the development of cybercrime and other relevant laws and offers our support to the many organizations in the region fighting back against these draconian laws.

Don’t Give the DHS Free Rein to Shoot Down Private Drones (Do, 12 Jul 2018)
When government agencies refuse to let members of the public watch what they're doing, drones can be a crucial journalistic tool. But now, some members of Congress want to give the federal government the power to destroy private drones it deems to be an undefined "threat." Even worse, they're trying to slip this new, expanded power into unrelated, must-pass legislation without a full public hearing. Worst of all, the power to shoot these drones down would be given to agencies notorious for their lack of transparency, denial of access to journalists, and absence of oversight. Back in June, the Senate Homeland Security and Governmental Affairs Committee held a hearing on the Preventing Emerging Threats Act of 2018 (S. 2836), which would give the Department of Homeland Security and the Department of Justice sweeping new authority to counter privately owned drones. Congress shouldn't grant DHS and DOJ such broad, vague authorities that allow them to sidestep current surveillance law. Now, Chairman Ron Johnson is working to include language similar to this bill in the National Defense Authorization Act (NDAA). EFF is opposed to this idea, for many reasons. The NDAA is a complex annual bill that reauthorizes military programs and is wholly unrelated to both DHS and DOJ. Hiding language in unrelated bills is rarely a good way to make public policy, especially when the whole Congress hasn't had a chance to vet the policy. But most importantly, expanding the agencies' authorities without requiring that they follow the Wiretap Act, the Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act raises serious First and Fourth Amendment concerns that must be addressed. Drones are a powerful tool for journalism and transparency. Today, the Department of Homeland Security routinely denies reporters access to detention centers along the southwest border. On the rare occasions DHS does allow entry, the visitors are not permitted to take photos or record video. Without other ways to report on these activities, drones have provided crucial documentation of the facilities being constructed to hold children. Congress should think twice before granting the DHS the authority to shoot drones down, especially without appropriate oversight and limitations. If S. 2836 is rolled into the NDAA, it would give DHS the ability to "track," "disrupt," "control," "seize or otherwise confiscate" any drone that the government deems to be a "threat," without a warrant or due process. DHS and DOJ might interpret this vague and overbroad language to include the power to stop journalists from using drones to document government malfeasance at these controversial children's detention facilities. As we said before, the government may have legitimate reasons for engaging drones that pose an actual, imminent, and narrowly defined "threat." The Department of Defense already has the authority to take down drones, but only in much more narrowly circumscribed areas directly related to enumerated defense missions. DHS and DOJ have not made it clear why existing exigent circumstance authorities aren't enough. But even if Congress agrees that DHS and DOJ need expanded authority, that authority must be carefully balanced so as not to curb people's right to use drones for journalism, free expression, and other non-criminal purposes. EFF has been concerned about government misuse of drones for a long time.
But drones also represent an important tool for journalism and activism in the face of a less-than-transparent government. We can’t hand the unchecked power to destroy drones to agencies not known for self-restraint, and we certainly can’t let Congress give them that power through an opaque, backroom process.

All Hands on Deck: Join EFF (Di, 10 Jul 2018)
It's easy to feel adrift these days. The rising tide of social unrest and political extremism can be overwhelming, but on EFF's 28th birthday our purpose has never been more clear. With the strength of our numbers, we can fight against the scourge of pervasive surveillance, government and corporate overreach, and laws that stifle creativity and speech. That's why today we're launching the Shipshape Security membership drive with a goal of 1,500 new and renewing members. For two weeks only, you can join EFF for as little as $20 and get special member swag that will remind you to keep your digital cabin shipshape. Online freedom begins with you. Digital security anchors your ability to express yourself, challenge ideas, and have candid conversations. It's why EFF members fight for uncompromised online tools and ecosystems: the world can no longer resist tyranny without them. We also know that our impact is amplified when we approach security together and support one another. The future of online privacy and free expression depends on our actions today. If you know people who care about online freedom, the Shipshape Security drive is a great time to encourage them to join EFF. On the occasion of our birthday, EFF has also released a new member t-shirt featuring our fresh-from-the-oven logo. Members support EFF's work educating policymakers and the public with crucial analysis of the law, developing educational resources like Surveillance Self-Defense, building software tools like Privacy Badger, empowering you with a robust action center, and doing incisive work in the courts to protect the public interest. Before the rise of the Internet, a crew of pioneers established EFF to help the world navigate the great promise and dangerous possibilities of digital communications. Today, precisely 28 years later, EFF is the flagship nonprofit leading a tenacious movement to protect online rights. Support from the public makes it possible, and EFF refuses to back down. Come hell or high water, EFF is fighting for your rights online. Lend your support and join us today.

DNA Collection is Not the Answer to Reuniting Families Split Apart by Trump’s “Zero Tolerance” Program (Di, 10 Jul 2018)
The Trump Administration's "zero tolerance" program of criminally prosecuting all undocumented adult immigrants who cross the U.S.-Mexico border has had the disastrous result of separating as many as 3,000 children—many no older than toddlers—from their parents and family members. The federal government doesn't appear to have kept track of where each family member has ended up. Now politicians, agency officials, and private companies argue DNA collection is the way to bring these families back together. DNA is not the answer. Two main DNA-related proposals appear to be on the table. First, in response to requests from U.S. Representative Jackie Speier, two private commercial DNA-collection companies proposed donating DNA sampling kits to verify familial relationships between children and their parents. Second, the federal Department of Health and Human Services has said it is either planning to or has already started collecting DNA from immigrants, also to verify kinship. Both of these proposals threaten not just the privacy, security, and liberty of undocumented immigrants swept up in Trump's Zero Tolerance program but also the privacy, security, and liberty of everyone related to them. Jennifer Falcon, communications director at RAICES, an organization that provides free and low-cost legal services to immigrant children, families, and refugees in Texas, succinctly summarized the problem: These are already vulnerable communities, and this would potentially put their information at risk with the very people detaining them. They're looking to solve one violation of civil rights with something that could cause another violation of civil rights. Why is this a problem? DNA reveals an extraordinary amount of private information about us. Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get various genetically-determined diseases. Researchers have also theorized that DNA may predict race, intelligence, criminality, sexual orientation, and even political ideology. DNA collected from one person can be used to track down and implicate family members, even if those family members have never willingly donated their own DNA to a database. In 2012, researchers used genetic genealogy databases and publicly available information to identify nearly 50 people from just three original anonymized samples. The police have used familial DNA searching to tie family members to unsolved crimes. Once the federal government collects a DNA sample—no matter which agency does the collecting—the sample is sent to the FBI for storage, and the extracted profile is incorporated into the FBI's massive CODIS database, which already contains over 13 million "offender" profiles ("detainees" are classified as "offenders"). It is next to impossible to get DNA expunged from the database, and once it's in CODIS it is subject to repeated warrantless searches from all levels of state and federal law enforcement. Those searches have implicated people for crimes they didn't commit.
Unanswered Questions
Both of the proposals to use DNA to verify kinship between separated family members raise many unanswered questions. Here are a few we should be asking:
Who is actually collecting the DNA samples from parents and children? Is it the federal government? If so, which agency? If it's a private entity, which entity?
What legal authority do they have to collect DNA samples? DHS still doesn't appear to have legal authority to collect DNA samples from anyone younger than 14. Children younger than 14 should not be deemed to have consented to DNA collection. And under these circumstances, parents cannot consent to the collection of DNA from their children because the federal government has admitted it has already lost track of which children are related to which adults.
How are they collecting and processing the DNA? Are they collecting a sample via a swab of the cheek? Is collection coerced or is it with the consent and assistance of the undocumented person? Once the sample is collected, how is it processed? Is it processed in a certified lab? Is it processed using a Rapid DNA machine? How is chain of custody tracked, and how is the collecting entity ensuring samples aren't getting mixed up?
What happens to the DNA samples after they are collected, and who has access to them? Are samples deleted after a match is found? If not, and if they are collected by a private genetics or genetic genealogy company like 23andMe or MyHeritage, do these companies get to hold onto the samples and add them to their databanks? Are there any limits on who can access them and for what purpose? If the federal government collects the samples, where is it storing them and who has access to them?
Will the DNA profiles extracted from the samples end up in the FBI's vast CODIS criminal DNA database? Currently DHS does not have its own DNA database. Any DNA it collects goes to the FBI, where it may be searched by any criminal agency in the country.
Will the collected DNA be shared with foreign governments? The U.S. government shares biometric data with its foreign partners. Will it share immigrant DNA? Will this be used to target immigrants if or when they are sent back home?
What if the separated family members aren't genetically related or don't represent a parent-child relationship? How is the U.S. government planning to determine who is a "family member" once agencies have lost track of the families who traveled here together? What if the parent is a step-parent or legal guardian? What if the child is adopted? What if the adult traveling with the child is a more distant relative? Will they still be allowed to be reunited with their children?
Undocumented families shouldn't have to trade one civil rights violation for another
These proposals to use DNA to reunite immigrant families aren't new. In 2008, the United Nations High Commissioner for Refugees (UNHCR) looked at this exact problem. In a document titled DNA Testing to Establish Family Relationships in the Refugee Context, it recognized that DNA testing "can have serious implications for the right to privacy and family unity" and should be used only as a "last resort." In 2012, we raised alarms about DHS's proposals at that time to use DNA to verify claimed kinship in the refugee and asylum context. The concerns raised by DNA collection ten years ago are only heightened today. The Trump administration shouldn't be allowed to capitalize on the family separation crisis it created to blind us to these concerns. And well-meaning people who want to reunite families should consider other solutions to this crisis. Immigrant families shouldn't have to trade the civil rights violation of being separated from their family members for the very real threats to privacy and civil liberties posed by DNA collection.
Related Cases: Maryland v. King; Federal DNA Collection
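To make that last question concrete, here is a deliberately simplified, purely illustrative sketch of the kind of genetic kinship check these proposals contemplate. It is not how any agency or DNA company actually verifies parentage (real testing compares many STR loci and uses statistical likelihood ratios), and every name, locus, and allele value below is hypothetical. Its only point is that a purely genetic test confirms biological relatedness and nothing else, so step-parents, legal guardians, and adoptive or more distant relatives would "fail" it even though they are family.

```python
# Illustrative only: a toy model of parent-child DNA comparison.
# Real forensic kinship testing uses ~20 STR loci and likelihood ratios,
# not the naive all-loci rule shown here.
Profile = dict[str, tuple[int, int]]  # locus name -> pair of allele values

def shares_allele(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """True if the two allele pairs have at least one value in common."""
    return bool(set(a) & set(b))

def naive_parent_child_match(parent: Profile, child: Profile) -> bool:
    """A biological child inherits one allele per locus from each parent,
    so parent and child should share an allele at every compared locus."""
    common_loci = set(parent) & set(child)
    return bool(common_loci) and all(
        shares_allele(parent[locus], child[locus]) for locus in common_loci
    )

# Hypothetical profiles; allele values are invented for illustration.
adult = {"D8S1179": (12, 13), "TH01": (6, 9), "FGA": (21, 24)}
biological_child = {"D8S1179": (13, 15), "TH01": (7, 9), "FGA": (22, 24)}
adopted_child = {"D8S1179": (10, 11), "TH01": (5, 8), "FGA": (19, 20)}

print(naive_parent_child_match(adult, biological_child))  # True
print(naive_parent_child_match(adult, adopted_child))     # False, yet still a family
```

Even in this toy form, the check answers only a biological question; it says nothing about custody, guardianship, or the many non-genetic family relationships the government would still need to honor.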

Grassroots Group Confronts Privacy-Invasive WiFi Kiosks in New York  (Di, 10 Jul 2018)
Free WiFi all across New York City? It might have sounded like a dream to many New Yorkers, until the public learned that it wasn't "free" at all. LinkNYC, a communications network that is replacing public pay phones with WiFi kiosks across New York City, is paid for by advertising that tracks users, is festooned with cameras and microphones, and has questionable processes for allowing the public to influence its data handling policies. These kiosks also gave birth to ReThink LinkNYC, a grassroots community group that's uniting New Yorkers from different backgrounds in standing up for their privacy. In a recent interview with EFF, organizers Adsila Amani and Mari Dej described the organization as a "hodgepodge of New Yorkers" who were shocked by the surveillance-fueled WiFi kiosks entering their neighborhoods. More importantly, they saw opportunity. As Dej described, "As we began scratching the surface, [we] saw that this was an opportunity as well to highlight some of the problems that are largely invisible with data brokers and surveillance capitalism." ReThink LinkNYC, which has launched an informational website and hosts events across New York, has been pushing city officials for transparency and accountability. They have demanded a halt to construction on the kiosks until adequate privacy safeguards are enacted. The group has already had some successes. As Dej described it, "We certainly got the attention of LinkNYC, and that itself is a victory – [they] know that there is an organized group of everyday peeps unhappy with the lack of transparency around the LinkNYC 'spy kiosks.'" But Amani cautioned that it was too early to know whether early changes in response to the group's advocacy—including a revised LinkNYC privacy policy, the creation of a Chief Privacy Officer role for the city, and a new city taskforce—will actually advance the privacy concerns of New Yorkers. "We would like to see the end of individualized tracking of location, faces, and all biometric data on the kiosks," Amani offered, "With LinkNYC having the means to collect this data and still not having figured out the path for community oversight of the hardware and software, it's saying trust us, we won't hurt you. That's naive, especially in these times." ReThink LinkNYC has thrived in part because it actively cultivated partnerships, and not just with the tech community. Dej noted, "Inasmuch as the structure of surveillance affects us all, all of us deserve to be aware, and welcomed into action. A movement needs to extend beyond the tech community." To other groups around the country that might be interested in campaigning to defend civil liberties in their own communities, Amani advised organizers to examine the power structures they are opposing and cultivate personal connections: "Civic involvement remains a more or less fringe activity for a majority of people. So appeal to what human community is—feelings of connection, acceptance, creating a safe world for our children, and a chance to be creative, 'seen', and given a sense that one's participation is valued. If we'd like our tech future to be cooperative (versus dominated by wealth or authoritarian styles), then that's how we organize.
If we dedicate ourselves to unlearning the hierarchical behavioral model, we can more easily sense our power.” Dej agreed, adding “We have the power, we just have yet to realize it.”  ReThink LinkNYC joined the Electronic Frontier Alliance (EFA) over a year ago, and has used the network to help connect with other digital rights activists in New York City, get assistance with event promotion, and discuss strategies. Dej shared that EFA has been useful for connecting with other activists, saying, “It helps us connect to other people and other parts of this issue that you wouldn’t think of right off the bat, like Cryptoparty, who gave us insight into the technology part of all this… It’s also good to see people working and that we’re not the only ones going through this struggle. There are other people fighting different parts of this system as hard as they can.”  The Electronic Frontier Alliance was launched in March 2016 to help inspire and connect community and campus groups across the United States to defend digital rights. While each group is independent and has its own focus areas, every member group upholds five principles: Free expression: people should be able to speak their minds to whomever will listen. Security: technology should be trustworthy and answer to its users. Privacy: technology should allow private and anonymous speech, and allow users to set their own parameters about what to share with whom. Creativity: technology should promote progress by allowing people to build on the ideas, creations, and inventions of others. Access to knowledge: curiosity should be rewarded, not stifled. To learn more about the Electronic Frontier Alliance, find groups in your area, or join the alliance, check out our website.  To learn more about ReThink LinkNYC, visit their website. Interviews with ReThink LinkNYC were conducted by phone with follow up over email, and responses edited lightly for clarity.

California Shopping Centers Are Spying for an ICE Contractor (Di, 10 Jul 2018)
Update, July 12, 2018: On July 11, Vigilant Solutions issued a press release disputing EFF's report. We have posted the details and our response in a new post.
Update, 10:45 a.m., July 11, 2018: The Irvine Company has disclosed that the three shopping centers are Irvine Spectrum Center, Fashion Island, and The Marketplace. The local police departments are the Irvine, Newport Beach, and Tustin police departments.
Update, 7:30 p.m., July 10, 2018: The Irvine Company provided The Verge with the following response: "Irvine Company is a customer of Vigilant Solutions. Vigilant employs ALPR technology at our three Orange County regional shopping centers. Vigilant is required by contract, and have assured us, that ALPR data collected at these locations is only shared with local police departments as part of their efforts to keep the local community safe." EFF urges the Irvine Company to release the names of the three regional shopping centers that are under surveillance and to provide a copy of the contract indicating the data is only shared with local police. The company should also release the names of which local agencies are accessing its data. We remain concerned and skeptical. EFF would appreciate any information that would clear up this matter. The public deserves greater transparency from The Irvine Company and Vigilant Solutions.
A company that operates 46 shopping centers up and down California has been providing sensitive information collected by automated license plate readers (ALPRs) to Vigilant Solutions, a surveillance technology vendor that in turn sells location data to Immigration & Customs Enforcement. The Irvine Company—a real estate company that operates malls and mini-malls in Irvine, La Jolla, Newport Beach, Redwood City, San Jose, Santa Clara and Sunnyvale—has been conducting the ALPR surveillance since just before Christmas 2016, according to an ALPR Usage and Privacy Policy published on its website (archived version). The policy does not say which of its shopping centers use the technology, only disclosing that the company and its contractors operate ALPRs at "one or more" of its locations. Automated license plate recognition is a form of mass surveillance in which cameras capture images of license plates, convert the plate into plaintext characters, and append a time, date, and GPS location. This data is usually fed into a database, allowing the operator to search for a particular vehicle's travel patterns or identify visitors to a particular location. By adding certain vehicles to a "hot list," an ALPR operator can receive near-real-time alerts on a person's whereabouts. EFF contacted the Irvine Company with a series of questions about the surveillance program, including which malls deploy ALPRs and how much data has been collected and shared about its customers and employees. After accepting the questions by phone, the Irvine Company did not provide any further response or answer the questions.
The Irvine Company's Shopping Centers in California: [embedded Google map of the company's mall locations]
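To illustrate the ALPR mechanics described above, here is a minimal, hypothetical sketch of how such a pipeline handles a single plate read: the camera's output is reduced to a plaintext plate plus a timestamp and GPS coordinates, every read is appended to a searchable database, and any read matching a "hot list" triggers a near-real-time alert. This is not Vigilant Solutions' software or any real vendor's API; the PlateRead record, the ingest and travel_pattern functions, and all values are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    """One ALPR detection: the plate as plaintext, plus time, place, and source camera."""
    plate: str          # e.g. "7ABC123" (hypothetical)
    timestamp: datetime
    latitude: float
    longitude: float
    camera_id: str

# A "hot list" is simply the set of plates the operator wants to be alerted about.
hot_list = {"7ABC123"}

# Every read is stored, making historical travel patterns searchable later.
database: list[PlateRead] = []

def ingest(read: PlateRead) -> None:
    """Store the read and fire a near-real-time alert if the plate is on the hot list."""
    database.append(read)
    if read.plate in hot_list:
        print(f"ALERT: {read.plate} seen at ({read.latitude}, {read.longitude}) "
              f"at {read.timestamp.isoformat()} by camera {read.camera_id}")

def travel_pattern(plate: str) -> list[PlateRead]:
    """Reconstruct one vehicle's movements from every stored read of its plate."""
    return sorted((r for r in database if r.plate == plate), key=lambda r: r.timestamp)

# Example: a single read in a shopping-center parking lot (coordinates are made up).
ingest(PlateRead("7ABC123", datetime.now(timezone.utc), 33.65, -117.74, "lot-cam-04"))
```

Even this toy version makes the stakes clear: once reads are retained indefinitely, the database is no longer a parking-lot security tool but a location history of everyone who ever drove past a camera.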
The Irvine Company's policy describes a troubling relationship between the retail world and the surveillance state. The cooperation between the two companies allows the government to examine the travel patterns of consumers on private property with little transparency and no consent from those being tracked. As private businesses, Vigilant Solutions and the Irvine Company are generally shielded from transparency measures such as the California Public Records Act. The information only came to light due to a 2015 law passed in California that requires ALPR operators—both public and private alike—to post their ALPR policies online. Malls in other states where no such law exists could well be engaged in similar violations of customer privacy without any public accountability. In December 2017, ICE signed a contract with Vigilant Solutions to access its license-plate reader database. Data from the Irvine Company's malls feeds directly into Vigilant Solutions' database system, according to the policy. This could mean that ICE can spy on mall visitors without their knowledge and receive near-real-time alerts when a targeted vehicle is spotted in a shopping center's parking lot. Vigilant Solutions' dealings with ICE have come under growing scrutiny in California as the Trump administration accelerates its immigration enforcement. The City of Alameda rejected a contract with Vigilant Solutions following community outcry over its contracts with ICE. The City of San Pablo put an expansion of its surveillance network on hold due to the same concerns. But ICE isn't the only agency accessing ALPR data. Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide. Through its sister company, Digital Recognition Network, Vigilant Solutions also sells ALPR data to financial lenders, insurance companies, and debt collectors. "Irvine is committed to limiting the access and use of ALPR Information in a manner that is consistent with respect for individuals' privacy and civil liberties," the Irvine Company writes in its policy. "Accordingly, contractors used to collect ALPR Information on Irvine's behalf and Irvine employees are not authorized to access or use the ALPR Information or ALPR System." And the Irvine Company says it deletes the data once it has been transmitted to Vigilant Solutions. Although the Irvine Company pays lip service to civil liberties, the company undermines that position by allowing Vigilant Solutions to apply its own policy to the data. Vigilant Solutions does not purge data on a regular basis and instead "retains LPR data as long as it has commercial value." The Irvine Company must shut down its ALPR system immediately. By conducting this location surveillance and working with Vigilant Solutions, the company is not only putting immigrants at risk, but also invading the privacy of its customers by allowing a third party to hold onto their data indefinitely. We will update this post if and when the Irvine Company decides to respond to our questions. Special thanks to Zoe Wheatcroft, the EFF volunteer who first spotted The Irvine Company's ALPR policy.

Announcing EFF’s New Logo (and Member Shirt) (Di, 10 Jul 2018)
EFF was founded on this day, exactly 28 years ago. Since that time, EFF's logo has remained more or less unchanged. This helped us develop a consistent identity — people in the digital rights world instantly recognize our big red circle and the heavy black "E." But the logo has some downsides. It's hard to read, doesn't say much about our organization, and looks a bit out of date. Today, we are finally getting around to a new look for our organization, thanks to the generosity of the top design firm Pentagram. We've launched a new logo nicknamed "Insider," created for us by Pentagram under the leadership of the amazing Michael Bierut. To celebrate, we're releasing our new EFF member shirt featuring the new logo. It's a simple black shirt with the logo in bright red and white. Join us or renew your membership and get a new shirt today! [Photos: the front and back of the Insider EFF member shirt, and two EFF staffers wearing the new design.] There's a good story behind how this new logo came about. Last year, EFF defended Kate Wagner, the blogger behind McMansion Hell, a popular blog devoted to the many flaws and failures of so-called "McMansions," those oversized suburban tract homes that many people love to hate. The online real estate database Zillow objected to Wagner's use of its photos and threatened her with legal action. EFF stepped in to defend Wagner, sending a letter on the blogger's behalf explaining that her use of Zillow's images was a textbook example of fair use. Zillow backed down, and her many supporters let out a collective cheer. One of those supporters was Michael Bierut, who also happens to be one of the best logo designers on the planet. (You have probably seen some of his work: among his many recognizable designs are logos for MIT's Media Lab, MasterCard, and Hillary Clinton.) Bierut said he loved EFF's letter, recognized it as great legal writing, and also saw that EFF needed a new logo. He and his team at Pentagram offered to make us a new one, pro bono. We were really touched and pleased by his offer. Over subsequent months, we worked with Bierut and his team to come up with something new. In describing what we were looking for, we told Pentagram that we wanted something simple and classic that matched the boldness of our vision for the Internet. After several rounds and revisions, they came up with this new logo, Insider. One of the great things about it is that, in true Pentagram fashion, it is really a logo system: it can be reconfigured and adjusted in multiple ways, allowing us to tailor our look for many purposes. This logo will look as good on a formal legal letter as it does in an activist campaign. It also uses a great open source typeface called League Gothic! You can access your own copies of this logo in various configurations and file formats from our logo page. Please feel free to use them for any legal purpose. We hope you like the new logo as much as we do—and that when you see it, wear it, or display it, it continues to convey our history of working for your online rights, and our plan to keep up the fight long into the future. A happy ending, shared with Kate Wagner and Michael Bierut's consent: "That is so exciting!!!!" "I'm at a total loss for words. First, I'm totally freaking out (in a good way) because I've been a huge fan of yours and Pentagram's (who hasn't??)
since I developed a consciousness of contemporary graphic design, so the fact that I was the impetus for such a generous gift for a group of people who basically saved my livelihood is a massive honor. I'm of course more than happy to be acknowledged in this epic story of brand transformation."