Deeplinks

Telnet Is Not A Crime: Unconvincing Prosecution Screenshot Leaked in Ola Bini Case (Sat, 24 Aug 2019)
Since EFF visited Ecuador three weeks ago, the investigation into open source developer Ola Bini has proceeded as we described then: drawn out, with little evidence of wrong-doing, but potentially compromised by acts of political theater outside the bounds of due process and a fair trial. Last week — shortly after prosecutors successfully extended the investigation for another 30 days and informed Bini that they would also be opening new investigations into his taxes and visa status — Ecuadorian TV and newspapers published leaked imagery and conversations from evidence collected in the trial, together with claims from sources that this imagery proved Bini hacked the systems of Ecuador’s national communications provider, CNT. The evidence offered was a screenshot, said to be taken from Bini’s mobile phone. The press reported that the phone was unlocked by police after seized security footage revealed Bini’s PIN when he used his phone in his own office elevator.

Telnet Is Not A Crime

Cursory examination of the actual screen capture reveals that both the leaker and the media misunderstand what the new evidence shows. Rather than demonstrating that Bini intruded into the Ecuadorian telephone network’s systems, it shows the trail of someone who paid a visit to a publicly accessible server — and then politely obeyed the server’s warnings about usage and access. Here’s the screenshot (with our annotations), taken from the evidence as it was finally submitted to the court.

[Picture of a telnet session]

Those knowledgeable about Unix-style command line shells and utilities will recognize this as the photograph of a laptop screen, showing a telnet session (telnet is an insecure communication protocol that has largely been abandoned for public-facing technologies). Command line interactions generally flow down the page chronologically, from top to bottom, including both textual commands typed by the user, and the responses from the programs the user runs. The image shows, in order, someone (presumably Bini, given that his local computer prompt shows “/home/olabini”) requesting a connection, via Tor, to an open telnet service run on a remote computer. Telnet is a text-only communication system, and the local program echoes the remote service’s warning against unauthorized access. The remote service then asks for a username as authorization. The connection is then closed by the remote system with a “timeout” error, because the person connecting has not responded. The last line on the screen capture shows the telnet program exiting, and returning the user to their own computer’s command line prompt. This is not demonstrative of anything beyond the normal procedures that computer security professionals conduct as part of their work. A user discovers an open telnet service, and connects to it out of curiosity or concern. The remote machine responds with a message by the owner of the device, with a warning not to log on without authorization. The user chooses to respect the warning and not proceed. It’s the Internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and moving on. It’s notable also what was not leaked: the complete context surrounding the screenshot. The picture allegedly came from a series of messages between Ola and his system administrator, Ricardo Arguello, a well-known figure in the Ecuadorian networking and free software communities.
The full conversation was omitted, except that Bini sent this screenshot, to which Arguello replied “It’s a router. I’ll talk to my contact at CNT.” If you found a service that was insecurely open to telnet access on the wider Internet, that’s what you might reasonably and responsibly do: send a message to someone who might be able to inform its owner, with evidence that the system is open to anyone to connect. And under those conditions, Arguello’s response is just what a colleague might say back — that they would get in touch with someone who might be able to take the potentially insecure telnet service offline, or put it behind a firewall. Certainly, that explanation fits the facts of this screenshot far better than the press reports claiming this is proof that Bini invaded the “entire network” of Ecuador’s national oil company, Petroecuador, and the former National Intelligence Secretariat. EFF’s conclusions from our Ecuador mission were that — from its very beginnings in a hasty press conference held by the Interior Minister that spoke of Russian hackers and Wikileaks members undermining the Ecuadorian state — political actors, including the prosecution, have recklessly tied their reputations to a case with little or no real evidence. It’s disappointing, but not surprising, that Ola Bini’s prosecution continues to be publicly fought in the Ecuadorian press, with misleading and partial leaks and distractions, instead of in a courtroom, before a judge. We trust that, when and if this evidence is presented in court, the judge will examine it more skeptically, and with better technical advice, than the prosecution or the media have until now.
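For readers who have never used telnet, here is a minimal sketch (in TypeScript, for Node.js) of the same kind of interaction the screenshot records: connect to a telnet service, read its banner, and disconnect without ever sending credentials. The host name is a hypothetical placeholder, not any address from the case, and the sketch omits the Tor routing described above.

```typescript
// A "connect, read the banner, walk away" telnet interaction, sketched with
// Node's built-in net module. The host below is a hypothetical placeholder.
import * as net from "net";

const HOST = "router.example.net"; // placeholder, not an address from the case
const TELNET_PORT = 23;

const socket = net.createConnection({ host: HOST, port: TELNET_PORT }, () => {
  console.log(`Connected to ${HOST}:${TELNET_PORT}`);
});

socket.on("data", (chunk: Buffer) => {
  // A telnet service typically greets you with a banner -- often a warning
  // against unauthorized access -- followed by a "Username:" prompt.
  process.stdout.write(chunk.toString("utf8"));
  // Respect the warning: hang up without typing a username. (In the leaked
  // screenshot, the idle connection was instead closed by the remote side
  // with a timeout.)
  socket.end();
});

socket.on("close", () => console.log("\nDisconnected without authenticating."));
socket.on("error", (err: Error) => console.error(`Connection failed: ${err.message}`));
```

Nothing in this exchange authenticates to, probes, or alters the remote system; it only reads the text the server volunteers to every visitor, which is the point the article makes about the screenshot.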

Ninth Circuit Goes a Step Further to Protect Privacy in Border Device Searches (Fri, 23 Aug 2019)
The U.S. Court of Appeals for the Ninth Circuit issued a new ruling in U.S. v. Cano [.pdf] that offers greater privacy protection for people crossing the border with their electronic devices, but it doesn’t go as far as we sought in our amicus brief. Cano had attempted to cross the border near San Diego when cocaine was found in his car. He was arrested at the port of entry and border agents manually and forensically searched his cell phone. He was prosecuted for importing illegal drugs and moved to suppress the evidence found on his phone. The Ninth Circuit held that the searches of his cell phone violated the Fourth Amendment and vacated his conviction. In U.S. v. Cotterman (2013), the Ninth Circuit had circumscribed the border search exception as it applies to electronic devices. The court held that the Fourth Amendment required border agents to have had reasonable suspicion—a standard between no suspicion and probable cause—before they conducted a forensic search, aided by sophisticated software, of the defendant’s laptop. Unfortunately, the Cotterman court also held that a manual search of a laptop is “routine” and so the border search exception applies: no warrant or any suspicion of wrongdoing is needed. In Cano, it was disappointing though not surprising that the three-judge panel reaffirmed Cotterman’s en banc rule and held that a manual search of a cell phone requires no suspicion while a forensic search requires reasonable suspicion. We argued in our amicus brief that the Ninth Circuit should revisit this issue and require a probable cause warrant for all border device searches, in light of the Supreme Court’s decision in Riley v. California (2014). In that watershed case, the Court acknowledged the extraordinary privacy interests people have in their cell phones, irrespective of how the devices are searched, and held that police must obtain a warrant to search the cell phone of an arrestee. On the bright side, the Cano court further held that warrantless, suspicionless border device searches—both manual and forensic—are only permissible under the Fourth Amendment to determine whether the device contains digital contraband. The court agreed with the arguments we presented in our amicus brief that the border search exception is “narrow,” being justified by the purpose of interdicting contraband and not simply finding evidence of illegal activity. Additionally, the court held with respect to forensic searches, “We clarify Cotterman by holding that ‘reasonable suspicion’ in this context means that officials must reasonably suspect that the cell phone contains digital contraband.” While we still believe that electronic devices should fall outside the border search exception and thus require a warrant for search, limiting the scope of all device searches under the border search exception to looking for digital contraband is a good pro-privacy rule. The Cano court emphasized that border agents may not conduct warrantless, suspicionless border device searches “for evidence of past or future border-related crimes.” This is striking because we know from our civil case against the government, Alasaad v. Nielsen, that CBP and ICE agents do regularly conduct device searches (under the border search exception, they argue) to look for mere evidence of border-related crimes and in support of general law enforcement. 
The Cano rule means that border agents within the Ninth Circuit states can’t conduct broad-ranging fishing expeditions for digital data such as correspondence between the traveler and his associates, or metadata like location information. Such data might be evidence, but is not itself contraband. It’s important to note, however, that emails and text messages are not totally off limits. The Cano court noted that child pornography may be sent via email or text message, and so border device searches for digital contraband within these kinds of cell phone data are reasonable under the Fourth Amendment. As for Cano himself, the Ninth Circuit held that the recording of phone numbers and text messages during a manual search “had no connection whatsoever to digital contraband.” And while border agents “had reason to suspect that Cano’s cell phone would contain evidence leading to additional drugs,” the forensic search was unconstitutional because “the record does not give rise to any objectively reasonable suspicion that the digital data in the phone contained contraband.” The Cano court also stated that “the detection-of-contraband justification” for warrantless, suspicionless border device searches “would rarely seem to apply to an electronic search of a cell phone outside the context of child pornography.” We will advocate for courts to narrowly define the “digital contraband” that, under Cano, is the outer limit of the scope of warrantless, suspicionless border device searches. We will also continue to advocate for a warrant requirement.

Browsers Take a Stand Against Kazakhstan’s Invasive Internet Surveillance (Thu, 22 Aug 2019)
Yesterday, Google Chrome, Mozilla Firefox, and Apple’s Safari browsers started blocking a security certificate previously used by Kazakh ISPs to compromise their users’ security and perform dragnet surveillance. We encourage other browsers to take similar security measures. Since the fix has been implemented upstream in Chromium, it shouldn’t take long for other Chromium-based browsers, like Brave, Opera, and Microsoft’s Edge, to do the same.

What Happened, and Why Is It a Problem?

Back in July, Kazakhtelecom, Kazakhstan’s state telecommunications operator, began regularly intercepting encrypted web (HTTPS) connections. Usually, this kind of attack on encrypted HTTPS connections is detectable and leads to loud and visible browser warnings or other safeguards that prevent users from continuing. These security measures work because the certificate used is not trusted by user devices or browsers. However, Kazakh ISPs also sent instructions telling users to compromise their own security by manually trusting the certificate on their devices and browsers, bypassing the security checks that are built into most devices. This two-step, in which ISPs deploy an untrusted certificate and users manually trust it, allows the ISPs to read and even alter the online communication of any of their users, including sensitive user data, messages, emails, and passwords sent over the web. Research and monitoring from Censored Planet found around 40 domains that were being regularly intercepted, including Google services, Facebook services, Twitter, and VK (a Russian social media site). The government of Kazakhstan had expressed its intention to perform dragnet surveillance like this in the past but, following widespread backlash, failed to act on those statements. Now, it seems the Kazakh authorities were serious about undermining the privacy of their entire country's communications — even if it meant forcing individual Internet users to manually compromise their devices’ own built-in privacy protections.

What’s Next?

Earlier this month, Kazakhstan’s National Security Committee stated that Kazakhstan had halted the program. The announcement, along with a tweet from the president of Kazakhstan, called the program a successful pilot, claiming it was mounted to detect and counteract external security threats, even though the government’s actions primarily compromised the security of Kazakhstan’s own citizens. The announcement also stated that the program may be deployed again in the future. Censored Planet’s live monitoring indicates that the system was turned off after the first week of August. This step by Google, Mozilla, and Apple to block the particular certificate that Kazakh ISPs used for traffic interception prevents the government of Kazakhstan from resuming this invasive program, and it sets a precedent for browsers to take similar action against network attacks of this nature in the future. Without strong pushback, it’s likely that Kazakhstan, or other states, might try to repeat their “pilot,” so we also encourage browser vendors, device manufacturers, and operating systems to improve the warnings and tighten the flow around manually trusting new certificates. Kazakhstan’s actions were a drastic response to the slowly improving security of end-user devices and end-to-end communication online, but they and other countries could take even more invasive steps.
Faced with just a handful of secure browsers, the government could next push its citizens to use a browser that does not currently implement this safeguard. We encourage other browsers to take the same steps and stand in solidarity against the government of Kazakhstan’s decision to compromise the Internet security of its entire population. What’s more, designers of user software should anticipate such intrusive state action in future threat models.
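As an illustration of why this kind of interception is detectable in the first place, here is a minimal sketch (TypeScript, Node.js) that compares the certificate the network presents against a known-good SHA-256 fingerprint obtained out of band. The domain and fingerprint are hypothetical placeholders; a manually trusted ISP certificate, like the one Kazakh users were told to install, would pass the device's trust check but still fail a comparison like this.

```typescript
// Sketch: detect TLS interception by pinning a certificate fingerprint.
// Both the host and the expected fingerprint below are placeholders.
import * as tls from "tls";

const HOST = "www.example.com";
const EXPECTED_FINGERPRINT256 = "AA:BB:CC:DD:..."; // known-good value, obtained out of band

const socket = tls.connect({ host: HOST, port: 443, servername: HOST }, () => {
  const cert = socket.getPeerCertificate();
  if (cert.fingerprint256 === EXPECTED_FINGERPRINT256) {
    console.log("Certificate matches the pinned fingerprint.");
  } else {
    // A mismatch here is what an interposed, ISP-injected certificate looks
    // like, even when the device has been told to trust it.
    console.warn(`Unexpected certificate fingerprint: ${cert.fingerprint256}`);
  }
  socket.end();
});

socket.on("error", (err: Error) => console.error(`TLS error: ${err.message}`));
```

This is essentially what the browser vendors' blocklists automate: the interception certificate is recognized and rejected no matter what the local trust store says.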

YouTube's New Lawsuit Shows Just How Far Copyright Trolls Have to Go Before They're Stopped (Wed, 21 Aug 2019)
YouTube has taken a stand against a particularly pernicious copyright troll who was not only abusing the takedown system to remove content but was also using it in an extortion scam. While this brings the weight—and resources—of a large corporation to a fight that will benefit users, it also serves as a reminder of how flawed part of the DMCA is. The “safe harbor” provision of the DMCA protects platforms like YouTube from liability for copyright infringement done by users. As long as these services do certain things, they cannot be held liable for damages. These requirements include having a registered agent to receive copyright complaints, promptly removing content after receiving a complaint, a counter-notice system where the person whose content has been removed can get it restored, and—most at issue in YouTube’s case—a way to deal with “repeat” infringers. YouTube’s way of fulfilling that last requirement is its copyright strikes system. Getting DMCA complaints filed against you means accumulating strikes. If an account has three active strikes at the same time, YouTube will terminate the account and remove all of its videos. People who make their living through their videos, and have worked hard to build an audience there, will suddenly find their lives ruined. Unsurprisingly, videomakers will go to a lot of effort to avoid getting strikes. In this case, YouTube v. Brady, Christopher Brady is alleged to have filed false DMCA claims—claiming he either owned things he did not or claiming videos were infringing that were not. That alone is prohibited by the DMCA, which requires the person sending the takedown to affirm that they either own the work or are an agent of the owner and that the notice is being sent in good faith—that is, that they actually believe it is infringement. Someone who thinks a video is fair use but doesn’t like what it’s saying is not allowed to send a DMCA claim. And people who send DMCA claims can’t pretend that fair use rights don’t exist—they have to consider whether the uploader may have been protected by fair use. Brady’s scheme is also alleged to go further than false claims. Brady apparently sent messages to the people he filed claims against, promising to withdraw the claims in exchange for money. For those targeted, going along with it might seem better than chancing ending up with a copyright strike. Under YouTube’s system, strikes only go away if they are withdrawn, if 90 days pass and the user completes “copyright school,” or if the user goes through the DMCA’s counter-notification process. And counter-notifications can be quite intimidating, as they require users to turn over identifying information. Plus, resolving a situation this way can take a lot of time, during which YouTubers can’t upload or monetize their existing videos. Three strikes can cause YouTube to suspend a person’s account, which can be a serious blow to YouTubers’ livelihoods. YouTube claims that Brady not only filed false notices, not only extorted users, but also abused the personal information contained in the counter-notices. According to YouTube’s complaint, shortly after one of the users Brady targeted did the thing they are supposed to do in the face of a bogus claim—send a counter-notice—they were swatted [pdf]. (Swatting is a harassment technique that consists of calling in a fake emergency to 911, resulting in a large number of police officers, often with guns drawn, showing up to the target’s home.)
In other words, it’s suspected that Brady was able to swat someone only because of the information contained in the counter-notice. Personal information from a counter-notice being used for harassment further disincentivizes people from taking advantage of their legal right to respond to bad takedowns. And counter-notices have been shown to be fairly rare. DMCA abuse is not a new problem, obviously. EFF has long documented and fought instances where takedown notices were sent erroneously, misused to silence criticism, or sent using a process, automated or otherwise, that doesn’t take fair use into account. And then there are complaints from copyright holders of improper counter-notices [pdf] and cases of counter-notices likewise being sent in bad faith, frustrating the ability of the rightsholder to sue. There’s a provision of the DMCA that’s supposed to provide a disincentive for abuse. It’s section 512(f). This is the basis for YouTube’s lawsuit. If someone knowingly misrepresents information in a notice or counter-notice, 512(f) allows the sender of the notice, the receiver of the notice, or the service provider to sue, if they’ve been injured. Successful 512(f) suits, on behalf of people who have received bogus notices or bogus counter-notices, are thin on the ground. Ones by service providers? Basically nonexistent. YouTube makes clear that trying to investigate and stop the abuse of the DMCA by Brady cost it “substantial” sums. Moreover, some of the DMCA claims were filed under false identities, making it even harder for YouTube to track that they were coming from a troll, and impeding its ability to put a stop to future abuses. While the YouTubers that Brady may have targeted don’t have the resources to go to court, YouTube does. It’s taking a stand against the worst kind of abuse a copyright troll can commit. However, for small creators, it’s hard to fund this kind of lawsuit. Unlike copyright holders claiming infringement, who can receive automatic “statutory damages,” people defending against takedown abuse have to prove the extent of their harm. Therefore, whether 512(f) is an effective deterrent is very much up for debate. YouTube is going after someone who isn’t simply alleged to have used the DMCA to take down a legal video they don’t like. The extortion and harassment elevate this case, and it’s great to see YouTube standing up for its users in court. But 512(f) needs real teeth and real enforcement, not just for extreme cases but for the more typical abuses. Until it does, it’s not working as intended.

The DOJ Should Keep Its Historic Role Guarding Competition and Innovation in the Music Business (Wed, 21 Aug 2019)
If you want to play music as part of your business, either live or recorded, chances are you are going to have to pay the two big performing rights organizations. The American Society of Composers, Authors and Publishers (ASCAP) and Broadcast Music, Inc. (BMI) license the rights to a lot of music, and without safeguards in place, could easily abuse their position. They’ve done so before. That’s why the Department of Justice should keep up its historic role overseeing those licensing societies. In June, the DOJ announced that it was reviewing its “consent decrees” with ASCAP and BMI, the two major performance rights organizations. The consent decrees are agreements with the U.S. government. They were originally entered in 1941 to settle antitrust lawsuits, and they have been modified several times over the years. The federal district court in Manhattan (the Southern District of New York) has jurisdiction over the consent decrees and has the authority to accept or reject any changes. These consent decrees impose important limits on ASCAP and BMI’s ability to restrict competition and access to licenses that allow public performances of music compositions. Most importantly, the decrees require ASCAP and BMI to set license fees fairly and to charge uniform fees to similarly situated users. That helps copyright law serve its ultimate goal of spurring creativity and public access to creative works. Given the importance of the limits the consent decrees impose, EFF joined allies including Public Knowledge, the Consumer Technology Association, and the R Street Institute in voicing its opposition to any changes that would chip away at the consent decrees’ protections against anti-competitive conduct by ASCAP or BMI. The consent decrees have become an integral part of the music publishing industry and continue to promote competition. There’s simply no good reason to get rid of these structural mechanisms that allow markets to thrive while limiting opportunities for anti-competitive conduct by dominant firms. Admittedly, the current music licensing system is not perfect. But nonetheless, the consent decrees are as necessary today as ever. ASCAP and BMI still control over 90% of the licensing market for music performance rights and remain the predominant players, wielding tremendous market power. At the same time, music publishers, who own most of the copyrights in musical works that ASCAP and BMI license, have grown more concentrated over the years. The three biggest music publishers now own a majority of the copyrights on popular music. It might be possible to replace the consent decrees with oversight by the Copyright Office and the Copyright Royalty Board. That would put rate-setting for music performances in the hands of the same body that sets rates for other important copyright licenses, including the licenses for digital audio streaming and cable transmission of broadcast TV channels. But today’s system has important advantages. Today, anyone who wants a license to perform music can challenge the price of those licenses set by ASCAP and BMI by going to the federal court. It’s relatively quick and happens often. The Copyright Royalty Board, in contrast, typically sets rates for each type of license only once every few years. And the Board’s proceedings are not easily accessible to the public, because they tend to operate under blanket protective orders that put a cloak of secrecy over most of the evidence and arguments presented there.
Congress could change all this by moving rate-setting functions to the Royalty Board, while perhaps leaving the DOJ in the role of antitrust watchdog. But that’s not a change that the DOJ can make on its own, and it shouldn’t move to end the consent decrees if Congress doesn’t step in. It's an odd accident of history that an important part of the creative economy is governed by a 70-year-old court settlement rather than a law or agency regulation. But in practice, the consent decrees that govern ASCAP and BMI work reasonably well and are as necessary as ever.

Communities Across the Country Reject Automated License Plate Readers (Wed, 21 Aug 2019)
Recent months have seen a wave of cities and counties around the country rejecting the use of automated license plate readers in their communities, citing privacy concerns posed by the technology. Added to recent local-level victories barring the use of face recognition technologies, it is encouraging to see local governments across the nation lead the way in proactively stemming the tide of invasive surveillance technologies. Automated license plate readers (ALPRs) are camera systems that scan vehicle license plates and build a searchable database of drivers’ historical travel patterns. ALPRs, often installed on patrol vehicles, streetlights, freeway overpasses and the like, indiscriminately scan every vehicle in a given area and collect data on all drivers, regardless of whether their vehicle is under suspicion. Location-based information collected over time can reveal intimate details of a person’s life, such as where they work and live, where they pray, where they seek medical treatment, and who their friends or romantic partners are. In June, the California State Auditor launched a probe into the use of ALPRs by local law enforcement agencies, which is a valuable step towards improved information about how government agencies are collecting, using, and storing ALPR data. Even better, there’s been a recent surge of cities and counties rejecting ALPRs. This shows the power that local governments and residents have when they speak up and voice concerns over threats to their privacy. That’s how a community can curb the spread of this dangerous technology.

Half Moon Bay, California

In July, the City of Half Moon Bay, California halted considerations regarding ALPRs until the state’s audit is complete. As reported by the Half Moon Bay Review, the City Council began considering the adoption of ALPRs in May. But as planning unfolded, and in light of the state’s audit announcement, many members of the public, as well as city officials, raised privacy concerns. Of particular concern to city councilmembers was the possibility of data collected by local ALPRs being accessed by federal immigration enforcement officials.

Delano, California

The City of Delano, California, recently determined it will no longer pursue using ALPRs, after a series of meetings between law enforcement officials and community members. Kern Sol News reports that for the past several months, the city’s police chief had been considering ALPRs. The chief held a series of community meetings to discuss residents’ privacy concerns. Delano residents and privacy advocates, including the ACLU, voiced concerns about the data being accessed by ICE. They explained that the primary ALPR vendor, Vigilant Solutions, has an active contract with ICE that enables the agency to access data collected by the systems. Community members objected that the technology would make the county’s undocumented immigrants vulnerable to deportation. At a city council meeting on July 1, the police chief announced the department would no longer seek ALPRs, because losing public trust outweighed any possible benefits the technology would offer.

Michigan City, Indiana

Also last month, Michigan City, Indiana, chose not to advance a local ordinance that would have allowed the city’s police department to purchase ALPR systems, due to opposition from city residents. According to the La Porte County Herald-Argus, the ordinance would have allowed the Michigan City Police Department to purchase ALPRs.
When the ordinance was brought up for discussion at a city council meeting, nearly every speaker opposed ALPRs. Residents objected that the technology is invasive, and would be used to disproportionately profile and harass people of color. During these deliberations, members of the city council asked local law enforcement to identify the policies that would regulate the access, retention, and purging of the collected ALPR data. Their response? There weren’t any such controls. The city council voted unanimously to amend the proposed ordinance to cut ALPR authorization.

Suffolk County, New York

In New York, Suffolk County postponed accepting a million-dollar state grant for the purchase of approximately 70 new ALPR cameras, after listening to concerns from local civil liberties groups and residents. Newsday reported that the ALPRs were to be installed in two communities that have seen high rates of gang violence in the past, though gang activity has decreased in these areas in recent years. Local civil liberties groups questioned how collecting and storing data on every car on the road would help combat gang violence. They also objected that it could be used as a dragnet to target low-income residents, people of color, and immigrants. In early July, county officials postponed accepting the grants, citing questions about how data would be stored and the adequacy of ALPRs in reducing crime.

Pima County, Arizona

In Arizona’s border region, the Pima County Board of Supervisors, responding to pressure from privacy advocates, recently rejected federal funds designated for ALPRs. A $1.8 million package of federal Operation Stonegarden grants for border enforcement included funds to purchase ALPRs, according to the Tucson Sentinel. The county already uses several ALPRs for its drug interdiction and auto-theft units, but the new federally-funded ALPRs were intended for indiscriminate collection of location data. Privacy advocates objected that increased use of these technologies could strengthen the ability of private companies, advertisers, debt collectors, and ICE to target vulnerable communities. While the supervisors approved the overall package, it excluded funding for plate readers. These scenes from around the country demonstrate a growing public awareness of the threats to privacy and civil rights posed by tools of mass surveillance. The actions taken in these five forward-looking cities and counties also show the increasingly powerful role that concerned residents can play in thwarting the spread of invasive technologies in our communities.

Related Cases: Automated License Plate Readers (ALPR)

Apple's New WebKit Policy Takes a Hard Line for User Privacy (Tue, 20 Aug 2019)
Ever since mid-2017, Apple has been tackling web tracking in a big way. Various iterations of its Intelligent Tracking Prevention (ITP) technology have been introduced over the past few years in WebKit, the browser engine for Safari. ITP already protects users from tracking in various ways, but it left open a number of questions about the guidelines it uses to determine just who Apple considers a tracker, and what behavior is indicative of tracking. Last week, Apple answered these questions with its WebKit Tracking Prevention Policy, which also includes an extraordinary and newsworthy clause: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities.”

Treating Trackers like Hackers?

The past decade has seen companies taking product security increasingly seriously. Apple announced its own bug bounty program in 2016 with a maximum pay-out of $200,000. Yet a certain privacy nihilism has prevailed when it comes to companies brokering our personal information. Both big-name social media companies such as Facebook and little-known targeted advertisers such as Criteo have been using a wide variety of techniques to siphon our personal information, including advanced techniques such as fingerprinting and exploiting browser login managers. Until recently, privacy advocates were making precious little headway in convincing browsers to prioritize anti-tracking. This statement by Apple (inspired by a similar anti-tracking policy for Firefox introduced by Mozilla earlier in the year) sends a strong message to trackers: we have zero tolerance for attempts to extract user information without their consent. We applaud Apple for taking this strong stance for user privacy.

Intelligent Tracking Prevention (ITP)

Even before ITP, Apple had been blocking third-party cookies and using cache partitioning to mitigate the effects of third-party resource cache-based tracking. ITP uses a number of novel techniques to stymie the efforts of trackers even further. For example, it expires cookies when users haven't interacted with a website for 30 days. It uses the Storage Access API, which requires meaningful interaction between a user and third-party services before the service is allowed to access its first-party cookies. This means that a third-party service (or a tracker) won't be able to access a stateful, cross-site, persistent identifier in the form of a cookie that they've stored on your browser unless you've actually, say, clicked on that "like" button. And without that identifier, they'll have a hard time linking your visit to `site-with-a-like-button.com` to your Facebook account. ITP most recently also expires cookies that have been set via link decoration. All this amounts to an impressive and powerful set of tracking protections for Safari users.

Striking a Balance with Developers

Apple's careful roll-out of these technologies has tried to protect users while ensuring that well-meaning web developers aren't caught in the cross-fire. This is a tricky balance to strike: many of the web technologies that enable trackers are also used by non-tracking developers to power the feature-rich web. Outright disabling of a technology such as WebRTC may limit the effectiveness of fingerprinting, but it also disables innovative services such as Google Hangouts, Jitsi Meet, and WebTorrent. WebRTC is just one example: the web is replete with technologies that are being used by both good and bad actors.
For this reason, it's extraordinarily difficult to remove or limit technologies that enable tracking without causing anger among developers when an application that doesn't track users stops working. Apple has taken a measured approach, introducing technologies and iteratively addressing developers’ concerns.

Diving Deep: Some Points of Interest in the Policy

In addition to defining exactly what Apple means by the term "tracking," the new policy also enumerates different forms of tracking, including the use of tracking cookies, fingerprinting, HSTS supercookies, and several other examples. The inclusion of HSTS as a tracking technology is significant. HSTS, or HTTP Strict Transport Security, is a web header that sites can use to indicate that they should only be accessed over the secure HTTPS transport layer in the future. Your browser will cache this response and ensure that future requests are not made over insecure HTTP. However, trackers can use this cache to piece together a supercookie that can identify your browser across multiple websites. Safari limits this by only respecting HSTS under certain conditions. For this reason, researchers have lately been suggesting the use of EFF's own HTTPS Everywhere, which maintains a list of HTTPS-supporting sites, as an alternative to caching HSTS headers. Another interesting part of the policy reads: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.” Apple is reserving a great deal of latitude for itself in this clause. We can speculate that this will cause companies whose business model is partially based on tracking to reconsider their practices, for fear of being blocked by Safari users universally. This may cause companies to self-police the shadier side of their revenue stream, if they value the visits of Safari users. The policy ends with the clause: “We want to see a healthy web ecosystem, with privacy by design.” We couldn’t agree more. We sincerely hope more browsers, such as Google's Chrome, adopt the tenet of "privacy by design" as well.
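To make the Storage Access API mechanics described above concrete, here is a minimal TypeScript sketch of how a third-party frame asks for access to its cookies under ITP. document.hasStorageAccess() and document.requestStorageAccess() are the documented API; the button id and the surrounding wiring are hypothetical.

```typescript
// Sketch: an embedded third-party iframe requesting its first-party cookies
// under ITP via the Storage Access API. The browser only grants access in
// response to a user gesture, such as the click handled below.
async function requestCookieAccess(): Promise<void> {
  if (await document.hasStorageAccess()) {
    console.log("This frame already has cookie access.");
    return;
  }
  try {
    // Must run inside a user-gesture handler; Safari may additionally require
    // prior meaningful interaction with this site as a first party.
    await document.requestStorageAccess();
    console.log("Cookie access granted for this frame.");
  } catch {
    console.log("Cookie access denied; storage stays partitioned.");
  }
}

// Hypothetical wiring: a "like"-style button rendered inside the embedded frame.
document.getElementById("like-button")?.addEventListener("click", () => {
  void requestCookieAccess();
});
```

The design burden this places on trackers is the point: an identifier-bearing cookie is only reachable after the user deliberately interacts with the embedded service, not merely because its widget was loaded on the page.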

Don't Renew Section 215 Indefinitely (Tue, 20 Aug 2019)
The New York Times reported that the Trump administration wants Section 215, the legal authority that allows the National Security Agency to collect Americans’ telephone records, renewed indefinitely. That’s despite earlier reports that the NSA had shuttered its Call Detail Records (CDR) program because it ran afoul of the law, violated the privacy of scores of Americans, and reportedly failed to produce useful intelligence. In a letter to Congress, outgoing Director of National Intelligence Dan Coats argued for permanently reauthorizing the legal authority, which also allows the government to collect a vast array of “tangible things” in national security investigations, as well as other provisions of the Patriot Act that are set to expire in December. For years, the government relied on Section 215 of the USA Patriot Act to conduct a dragnet surveillance program that collected billions of phone records documenting who a person called and for how long they called them—more than enough information for analysts to infer very personal details about a person, including who they have relationships with, and the private nature of those relationships. In 2015, a federal appeals court held that the NSA’s interpretation of Section 215 to conduct this surveillance dragnet was “unprecedented and unwarranted.” Despite the passage of the 2015 USA Freedom Act, which gave the government more limited authority to conduct the CDR program, the government continued to collect hundreds of millions of records. And in 2018, the NSA was compelled to delete millions of records after it learned that some of the data had been collected from phone service providers without legal authority or authorization. If the program does not help ensure the safety of Americans, cannot stay within the law, and violates our privacy, then why should Congress reauthorize it? After all, as of now, the NSA isn’t even using it. This December, rather than permanently renew the authorization that allows the NSA to use an invasive program, it’s important that we push Congress to end the Call Detail Records program once and for all and enact other important reforms.

Take Action: Tell Congress to End the CDR Program

Related Cases: Jewel v. NSA

IPANDETEC Rates Panama’s ISPs in its First ¿Quién Defiende Tus Datos? Report (Tue, 20 Aug 2019)
It's Panama’s turn to take a closer look at the practices of its most prominent Internet Service Providers, and how their policies support their users’ privacy. IPANDETEC, the leading digital rights NGO in Panama, has launched its first "Who Defends Your Data" (¿Quién Defiende Tus Datos?) report. The survey shines a light on the privacy practices of the main ISPs of the country: Claro (America Movil), Movistar (Telefonica), Digicel, and Más Móvil (a joint operation between Cable & Wireless Communications and the Panamanian state, which owns 49% of the shares). This year, while all companies surveyed received a low score, Movistar (Telefonica) led the pack in protecting its customers, with Digicel right behind. Movistar is the only company that published both a transparency report and law enforcement guidelines, but unfortunately, it did so only on its parent company’s site. Digicel is the only ISP to publish its privacy policy on its Panamanian website; Claro came close, but its policy was limited to the company’s website, not its wider privacy practices. Más Móvil and Movistar direct visitors to their parent company’s privacy policy. Movistar and Claro, through their parent companies, both assured their users that they require judicial authorization before authorities can access consumer data. Más Móvil and Digicel do not. Movistar and Claro were the only ISPs that proactively responded to IPANDETEC’s survey. Más Móvil and Digicel, on the other hand, did not respond when contacted. This is a missed opportunity. At their heart, ¿Quién Defiende Tus Datos? reports are a chance for civil society groups and ISPs to understand each other's work. The report will be published each year, and aims to capture ISPs’ progress as they improve. The final results of the study are summarized below. For more information on each company and Panama’s ICT sector, you can find the full report in Spanish on IPANDETEC’s website.

Evaluation Criteria

Data Protection: Does the company post a document detailing its collection, use, disclosure, and management of personal customer data?
- The data protection policy is published on its website
- The policy is written in clear and easily accessible language
- The policy details what data is collected
- The policy establishes the retention period for user data

Transparency: Does the company post an annual transparency report listing the number of government requests for customer data they’ve received, and how many were accepted and rejected?
- The company publishes a transparency report on its website
- The report is written in clear and easily accessible language
- The report contains data related to the number and type of requests received, and how many were accepted

User Notification: Does the company promise to notify users when the government requests their data?
- The company states it will notify users when the government accesses their information, as soon as the law allows
- The company supports public policy that gives users the right to prior notification, allowing them to contest the government request

Judicial Authorization: Does the company explicitly state it will only comply with authorities’ requests for user data if they have a warrant?
- The company states in its policies that it requires a warrant before law enforcement can access the content of users' communications
- The company rejects requests by law enforcement that violate legal requirements

Defense of Human Rights: Does the company publicly promote and defend the human rights of its users, specifically the privacy of their communications and the protection of their personal data?
- The company promotes user privacy and data security through campaigns or initiatives
- The company supports legislation, impact litigation, or programs favoring user privacy and data security
- The company participates in cross-sector agreements promoting human rights as a core tenet of their business

Digital Security: Are the company’s website and online payment service secure?
- The company uses HTTPS on its website
- The company uses HTTPS when processing payments online

Law Enforcement Guidelines: Does the company outline the procedures, guidelines, and legal requirements for law enforcement requesting customer data?
- The company publishes guidelines for law enforcement data requests

Main Findings

[Results chart: see the full report on IPANDETEC’s website.]

Conclusions

While all four companies received relatively low scores, Movistar is comfortably in the lead in protecting its customers, with Digicel not far behind. We hope to see all four ISPs engage in a conversation with IPANDETEC to improve their privacy practices in preparation for next year’s report. This project is only one piece of a much larger initiative across Latin America and Spain. EFF’s Who Has Your Back? has held U.S. internet companies accountable for their privacy policies and processes. Now EFF’s partners around the world are doing the same.
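As an illustration of how a criterion like the Digital Security check above might be verified in practice, here is a minimal sketch (TypeScript, Node.js 18+ for the built-in fetch) that tests whether a site answers over HTTPS and whether plain HTTP redirects to it. The domain is a placeholder, and this is our own illustrative check, not IPANDETEC's actual methodology.

```typescript
// Sketch: scripting a digital-security spot check. Does the site serve HTTPS,
// and does plain HTTP redirect visitors to HTTPS? The domain is a placeholder.
const DOMAIN = "example.com";

async function checkHttps(domain: string): Promise<void> {
  // 1. Does the site answer over HTTPS at all?
  const httpsResp = await fetch(`https://${domain}/`, { redirect: "manual" });
  console.log(`HTTPS response status: ${httpsResp.status}`);

  // 2. Does plain HTTP send the visitor to HTTPS?
  const httpResp = await fetch(`http://${domain}/`, { redirect: "manual" });
  const location = httpResp.headers.get("location") ?? "";
  const redirects =
    httpResp.status >= 300 && httpResp.status < 400 && location.startsWith("https://");
  console.log(`HTTP redirects to HTTPS: ${redirects}`);
}

checkHttps(DOMAIN).catch((err) => console.error(`Check failed: ${err}`));
```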

Court Rules That “Patent Troll” is Opinion, Not Defamation (Tue, 20 Aug 2019)
Free speech in the patent world saw a big win on Friday, when the New Hampshire Supreme Court held that calling someone a “patent troll” doesn’t constitute defamation. The court’s opinion [PDF] is good news for critics of abusive patent litigation, and anyone who values robust public debate around patent policy. The opinion represents a loss for Automated Transactions, LLC (ATL), a patent assertion entity that sued [PDF] more than a dozen people and trade groups claiming it was defamed. EFF worked together with the ACLU of New Hampshire to file an amicus brief [PDF] in this case, explaining that the lower court judge got this case right when he ruled against ATL. That decision gave wide latitude for public debate about important policy issues—even when the debate veers into harsh language. We’re glad the New Hampshire Supreme Court agreed. Last week’s ruling notes that “patent troll” is a phrase used to describe “a class of patent owners who do not provide end products or services themselves, but who do demand royalties as a price for authorizing the work of others.” However, the justices note that “patent troll” has no clear settled definition. For instance, some observers of the patent world would exclude particular entities, like individual inventors or universities, from the moniker “patent troll.” Because of this, when ATL’s many critics call it a “patent troll,” they are expressing their subjective opinions. Differences of opinion about many things—including patent lawsuits—cannot and should not be settled with a defamation lawsuit. “We conclude that the challenged statement, that ATL is a well-known patent troll, is one of opinion rather than fact,” write the New Hampshire justices. “As the slideshow demonstrates, the statement is an assertion that, among other things, ATL is a patent troll because its patent-enforcement activity is ‘aggressive.’ This statement cannot be proven true or false because whether given behavior is ‘aggressive’ cannot be objectively verified.” The court ruling also upheld tough talk about ATL’s behavior beyond the phrase “patent troll.” For instance, the court looked at statements referring to ATL’s actions as “extortive,” and rejected defamation claims on that basis, finding that was rhetorical hyperbole. Another ATL critic had complained that ATL’s efforts “cost them only postage and the paper their demand letters are written on.” This, too, was hyperbole, part of the give-and-take of a public debate. This case has its origins in the patents of inventor David Barcelou, who claims he came up with the idea of connecting ATMs to the Internet. As Barcelou describes in his defamation lawsuit, he saw “his business efforts fail,” before he went on to transfer patent rights to ATL and create a patent assertion business. ATL began suing banks and credit unions that were allegedly using Barcelou’s patents in their ATMs. In all, about 200 different companies paid ATL a total of $3 million in licensing fees to avoid litigation—that’s an average of $15,000 per company. But when they were finally examined by judges, ATL’s patents failed to hold up. The Federal Circuit invalidated several patent claims owned by ATL, and further found that the defendants’ ATMs did not infringe the Barcelou patents. After that court loss, ATL had a steep drop in licensing revenue. That’s when ATL launched its defamation lawsuit, blaming its critics for its setbacks.
For software developers and small business owners who bear the brunt of patent troll demands and lawsuits, the New Hampshire decision sends a clear message. If you’re upset about the abuses inherent in our current patent system, it’s okay to speak out by using the term “patent troll.” Calling out bad actors in the system is part and parcel of the debate around our patent and innovation policies. Related Cases:  Abstract Patent Litigation

EFF Calls on California to End Vendor-Driven ALPR Training (Mon, 19 Aug 2019)
A single surveillance vendor has garnered a monopoly on training law enforcement in California on the use of automated license plate readers (ALPRs)—a mass surveillance technology used to track the movements of drivers. After examining the course materials, EFF is now calling on the state body that oversees police standards to revoke the training certification. In a letter to the California Commission on Peace Officer Standards and Training (POST) sent today, EFF raises a variety of concerns related to the factual accuracy of its ALPR training on legal matters. Additionally, we are concerned about the apparent conflict of interest and threat to civil liberties that occurs when a sales-driven company also provides instruction on “best practices” to police. ALPRs are camera systems that capture license plates and use character-recognition software to document the travel patterns of vehicles. The cameras are often attached to fixed locations, such as streetlights and overpasses, and to police cars, which collect data while patrolling neighborhoods. This data is uploaded to a central database that investigators can use to analyze a driver’s travel patterns, identify visitors to particular destinations, predict individuals’ locations, and track targeted vehicles in real-time. ALPR is a mass surveillance technology in the sense that the systems collect information on every driver—regardless of whether the vehicles have a nexus to a criminal investigation. In California, Vigilant Solutions offers ALPR training through a program it calls the “Vigilant Solutions Law Enforcement Academy,” which advertises training courses that come with free trial accounts for the company’s ALPR and face recognition platforms. Vigilant has garnered controversy due to its data-sharing contracts with ICE and its business model, which includes selling data collected with its own ALPR cameras to the private sector in addition to law enforcement. The company also has a history of requiring government agencies to sign agreements prohibiting them from talking publicly without the company’s sign-off in an effort to control media messaging. Vigilant claims to be the sole entity capable of providing POST-certified training on ALPR to law enforcement agencies. Through the California Public Records Act, EFF obtained copies of the training, as well as the submission materials seeking certification. These records triggered several concerns. Most notably, the training presentation instructs police that there are no laws in California regulating the use of ALPR. While that may have been true in 2014, it has not been the case for nearly four years. In 2015, California passed a law, S.B. 34, regulating the use of ALPR systems and data collected by ALPR. These regulations include developing policies that protect civil liberties and privacy, as well as a long list of requirements related to cybersecurity and transparency. The training also does not touch on the California Values Act, a law passed in 2017 to protect California resources and data from being used in immigration enforcement. Additionally, the training module includes outdated information on case law, such as the claim that EFF and the ACLU lost a lawsuit over public access to ALPR data. The California Supreme Court ultimately reversed the lower court rulings outlined in the presentation. In emails to EFF, Vigilant has indicated that it may have updated the presentation.
But if so, that version was not resubmitted for certification, as required by POST regulations, according to records obtained by EFF. POST should investigate whether Vigilant is providing its own interpretation of recent developments in law, and if so, whether that instruction serves the public interest. When a surveillance vendor offers cloud storage and sharing services, it has a profit incentive when police collect more data and share it widely. Troublingly, Vigilant Solutions uses the ALPR training as a platform to sell its products. The training materials are filled with promotion, such as a pitch for its ALPR databases consisting of law enforcement and commercial data, and its mobile software that comes with face recognition capabilities. By having a monopoly on ALPR training, Vigilant is able to promote its products and its version of the law surrounding ALPR at the expense of protecting civil liberties and privacy. Over the last few years, EFF has filed public records requests with hundreds of agencies throughout California and found widespread failure to comply with state law regulating ALPR technology. These failures necessitate an examination of whether agencies are being properly trained on the use of ALPR. So far, EFF’s research has led the legislature to order the California State Auditor to initiate a statewide investigation into the use of ALPR, including deep audits of entities using Vigilant’s products. In this case, EFF urges POST to initiate decertification proceedings for the Vigilant course and encourages law enforcement agencies to seek alternatives to Vigilant’s training.

Related Cases: Automated License Plate Readers (ALPR)

A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition (Mon, 19 Aug 2019)
As long as we've had electronic mass media, audiences and creators have benefited from periods of technological upheaval that force old gatekeepers to compete with brash newcomers with new ideas about what constitutes acceptable culture and art. Those newcomers eventually became gatekeepers themselves, who then faced their own crop of revolutionaries. But today, the cycle is broken: as media, telecoms, and tech have all grown concentrated, the markets have become winner-take-all clashes among titans who seek to dominate our culture, our discourse, and our communications. How did the cycle end? Can we bring it back? To understand the answers to these questions, we need to consider how the cycle worked — back when it was still working.

How Things Used to Work

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own "community antenna television" networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born. The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment. The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get cable TV in our homes. By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony's Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were "capable of substantial noninfringing uses." It's hard to imagine how controversial the VCR was in its day. MPAA president Jack Valenti made history by attending a congressional hearing where he thundered, "I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone." Despite that unequivocal condemnation, home recording is so normal today that your cable operator likely offers to bundle a digital recorder with your subscription.
Just as the record companies made peace with broadcasters, and broadcasters made peace with cable, cable has made its peace with home recording. It's easy to imagine that this is the general cycle of technology: a new technology comes along and rudely shoulders its way into the marketplace, pouring the old wine of the old guard into its shiny new bottles. The old guard insists that these brash newcomers are mere criminals, and demands justice. The public flocks to the new technology, and, before you know it, the old guard and the newcomers are toasting one another at banquets and getting ready to sue the next vulgarian who has the temerity to enter their market and pour their old wine into even newer bottles. That's how it used to work, but the cycle has been interrupted. The Cycle is Broken In 1998, Congress passed the Digital Millennium Copyright Act, whose Section 1201 bans bypassing a "technical measure" that "controls access" to copyrighted works. The statute does not make an exemption for people who need to bypass a copyright lock to do something legal, so traditional acts of "adversarial interoperability" (making a new thing that plugs into an old thing without asking for permission) can be headed off before they even get started. Once a company adds a digital lock to its products, it can scare away other companies that want to give it the same treatment that broadcasters gave the record companies, that cable gave the broadcasters, and that the VCR gave cable. These challengers will have to overcome their fear that "trafficking" in a "circumvention device" could trigger DMCA 1201's civil damages or even criminal penalties—a $500,000 fine and 5 years in prison...for a first offense. When companies like Sony made the first analog TV recorders, they focused on what their customers wanted, not what the winners of last year's technological battle thought was proper. That's how we got VCRs that could record off the air or cable (so you could record any show, even major Hollywood movies getting their first broadcast airing) and that allowed recordings made on one VCR to be played on another recorder (so you could bring that movie over to a friend's house to watch with a bowl of popcorn). Today's digital video products are different. Cable TV, satellite TV, DVDs/HD DVDs/Blu-Ray, and streaming services all use digital locks that scramble their videos. This allows them to threaten any would-be adversarial interoperators with legal reprisals under DMCA 1201, should they have the temerity to make a user-focused recorder for their products. That stifles a lot of common-sense ideas: for example, a recorder that works on all the programs your cable delivers (even pay-per-views and blockbusters); a recorder that lets you store the Christmas videos that Netflix and Amazon Prime take out of rotation at Christmastime in hopes that you'll pay an upcharge to watch them when they're most relevant; or a recorder that lets you record a video and take it over to a friend's house or transfer it to an archival drive so you can be sure you can watch it ten years (or even ten minutes) from now. Since the first record players, every generation of entertainment technology has been overtaken by a new generation—one that allowed new artists to find new audiences, that overturned the biases and preconceptions of the executives who controlled the industry, and that allowed for new modes of expression and new ideas.
Today, as markets concentrate—cable, telecoms, movie studios, and tech platforms—the competition is shifting from the short-lived drive to produce the best TV possible to a long-term strategy of figuring out how to use a few successful shows to sell bundles of mediocre ones. In a world where the cycle that led to the rise of cable and streaming was still in effect, you could record your favorite shows before they were locked behind a rival's paywalls. You could search all the streaming services' catalogs from a single interface and figure out how to make your dollar go farther by automatically assembling a mix of one-off payments and subscriptions. You could stream the videos your home devices received to your phone while you were on the road...and more. And just as last year's pirates — the broadcasters, the cable operators, the VCR makers — became this year's admirals, the companies that got their start by making new services that centered your satisfaction instead of the goodwill of the entrenched industries would someday grow to be tomorrow's Goliaths, facing a new army of Davids. Fatalistic explanations for the unchecked rise of today's monopolized markets—things like network effects and first-mover advantage—are not the whole story. They are not unstoppable forces of nature. The cycle of concentration and renewal in media-tech shows us that, whatever role the forces of first-mover advantage and network effects are playing in market concentration, they are abetted by some badly written and oft-abused legal rules. DMCA 1201 lets companies declare certain kinds of competition illegal: adversarial interoperability, one of the most historically tried-and-true methods for challenging dominant companies, can be made into a crime simply by designing products so that connecting to them requires you to bypass a copyright lock. Since DMCA 1201 bans this "circumvention," it also bans any competition that requires circumvention. That's why we're challenging DMCA 1201 in court: we don't think that companies should be able to make up their own laws, because inevitably, these turn into "Felony Contempt of Business Model." DMCA 1201 is just one of the laws and policies that have created the thicket that would-be adversarial interoperators run up against when they seek to upend the established hierarchy: software patents, overreaching license agreements, and theories of tortious interference with contractual relations are all so broadly worded and interpreted that they can be used to intimidate would-be competitors no matter how exciting their products are and no matter how big the market for them would be.

Victory! California Supreme Court Blocks Sweeping Search Condition of Minors’ Electronic Devices and Social Media Accounts (Fri, 16 Aug 2019)
The California Supreme Court just rejected the government’s attempt to require a youth probationer, as a condition of release, to submit to random searches of his electronic devices and social media accounts. The trial court had imposed the condition because the judge believed teenagers “typically will brag” about drug use on the Internet—even though there was no evidence that the minor in this case, Ricardo P., had ever used any electronic devices in connection with any drugs or illegal activity, let alone ever previously bragged about drug use online. EFF and the ACLU filed an amicus brief in the case back in 2016, warning that the search condition imposed here was highly invasive, unconstitutional, and in violation of the California Supreme Court’s own standard for probation conditions—which requires that search conditions be “reasonably related to future criminality.” We also warned of the far-reaching privacy implications of allowing courts to impose such broad electronic search conditions. We’re pleased that the California Supreme Court heeded our warnings and recognized the substantial burden this “sweeping probation condition” imposed on Ricardo’s privacy. The court recognized that the probation condition would give Ricardo’s probation officers “full access, day or night, not only to his social media accounts but also to the contents of his e-mails, text messages, and search histories, all photographs and videos stored on his devices, as well as any other data accessible using electronic devices, which could include anything from banking information to private health or financial information to dating profiles.” And by allowing remote access to Ricardo’s online accounts, the condition would potentially allow his probation officers to monitor his communications in real time. According to the court: “If we were to find this record sufficient to sustain the probation condition at issue, it is difficult to conceive of any case in which a comparable condition could not be imposed, especially given the constant and pervasive use of electronic devices and social media by juveniles today.” The court noted, for example, that if it were to hold—as the California Attorney General argued—that any search condition facilitating supervision of probationers was “reasonably related to future criminality,” it might be obligated to uphold “a condition mandating that probationers wear 24-hour body cameras or permit a probation officer to accompany them at all times.” This is a critical ruling. The search condition imposed in this case was not unique, but one that many juvenile probationers have been subject to in California in recent years, under the same unsupported reasoning that the trial judge offered here. The California Supreme Court’s decision not only resolves a split in the lower courts regarding the legality of such probation conditions, but it sends a clear message: probation conditions that have “a very heavy burden on privacy with a very limited justification” are not entitled to deference. We applaud the California Supreme Court for recognizing the serious privacy invasion imposed by the search condition issued in this case and for striking down the condition as invalid.

Trailblazing Tech Scholar danah boyd, Groundbreaking Cyberpunk Author William Gibson, and Influential Surveillance Fighters Oakland Privacy Win EFF’s Pioneer Awards (Thu, 15 Aug 2019)
‘Savage Builds’ Star and Maker Advocate Adam Savage to Keynote September 12th Ceremony San Francisco – The Electronic Frontier Foundation (EFF) is honored to announce the winners of its 2019 Pioneer Awards: trailblazing tech scholar danah boyd, groundbreaking cyberpunk author William Gibson, and the influential surveillance-fighting group Oakland Privacy. The ceremony will be held September 12th in San Francisco. The keynote speaker for this year’s awards will be “Savage Builds” and Tested.com star—and all-around advocate for makers—Adam Savage. Tickets for the Pioneer Awards are $65 for current EFF members, or $75 for non-members. danah boyd has consistently been one of the world’s smartest researchers, thinkers, and writers about how technology impacts society, especially for teens and young people. Currently, boyd is focused on detecting and mitigating vulnerabilities in sociotechnical systems. To better understand these vulnerabilities, boyd has been examining the challenges surrounding the 2020 U.S. Census. In 2013, boyd created Data & Society, an independent nonprofit research institute that is committed to identifying thorny problems at the intersection of technology, culture, and community, and advances understanding of the implications of data technologies and automation. danah’s most recent books—“It’s Complicated: The Social Lives of Networked Teens” and “Participatory Culture in a Networked Age”—examine the intersection of everyday life and social media, and have helped families around the world navigate technologies like Facebook, Twitter, YouTube, and Instagram. In addition to her work as a partner researcher at Data & Society, boyd is also Principal Researcher at Microsoft Research and a Visiting Professor at New York University. William Gibson coined the term “cyberspace.” Neuromancer, his first novel, won the Hugo Award, the Nebula Award, and the Philip K. Dick Award in 1984, and is a groundbreaking portrayal of an unforgiving high-tech future with heroes that are thoroughly flawed human beings who nonetheless resist corporate power by seizing the means of computation. His work presents an incisive look at how technology shapes identity, with sharp, prescient depictions of everything from reality TV to wearable computers. Gibson's canon includes such New York Times bestsellers as the Sprawl trilogy, the Bridge trilogy, the Blue Ant trilogy, and The Peripheral. Gibson’s newest novel, Agency, will be published in January of 2020. Oakland Privacy is the group behind many influential anti-surveillance fights in Oakland, California and beyond. Oakland Privacy was born in 2013 when activists discovered a Homeland Security project called the Domain Awareness Center (DAC). DAC was meant to be an Oakland-wide surveillance gauntlet—with cameras, microphones, license plate readers—and a local data center to put it all together. But after Oakland Privacy led a ten-month campaign of opposition, the DAC was finally cancelled. Later, Oakland Privacy was one of the primary organizations behind the Oakland City Council’s creation of the first municipal privacy commission in the country, and then continued to be instrumental in bolstering opposition to surveillance around the San Francisco Bay Area and across the United States. For example, Oakland Privacy helped develop a comprehensive surveillance transparency regulatory law mandating use policies, civil rights impact reports, and annual audits, and pushed for its passage in multiple jurisdictions. 
The model is now in use in three Bay Area cities and other jurisdictions like Seattle, Nashville, and Cambridge, Massachusetts. Most recently, Oakland Privacy successfully worked to ban facial recognition in San Francisco and Oakland—two of the three cities in the country to enact such a ban. The Pioneer Award winners will be awarded a “Barlow,” a statuette named after EFF’s late co-founder John Perry Barlow and the indelible mark he left on digital rights. “John Perry Barlow knew that you had to visualize the future of technology—both the promise and the perils—in order to create the world we want. All of our winners this year have done just that,” said EFF Executive Director Cindy Cohn. “I’m so proud to be honoring these bold thinkers and brave activists.” Awarded every year since 1992, EFF’s Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Vint Cerf, Mitchell Baker and the Mozilla Foundation, Aaron Swartz, and Chelsea Manning. Sponsors of the 2019 Pioneer Awards include Dropbox, O'Reilly Media, Matthew Prince, Medium, Ridder, Costa & Johnstone LLP, and Ron Reed. For tickets and event details: https://supporters.eff.org/civicrm/event/register?id=230&reset=1 For more on the Pioneer Awards: https://www.eff.org/awards/pioneer/2019 Contact:  Rebecca Jeschke Media Relations Director and Digital Rights Analyst press@eff.org

EFF Joins Latin American Organizations Opposing the Prosecution of Ola Bini (Wed, 14 Aug 2019)
This Monday marks four months since the start of the prosecution of Ola Bini, the open source developer currently under investigation by Ecuadorian authorities. Prosecutors have yet to reveal any real evidence supporting the accusations made against Bini. Following the 12th Latin American and Caribbean Regional Internet Governance Forum (LACIGF) last week, civil society organizations from the region released a statement highlighting the due process irregularities and political pressures that have marked the case so far. EFF joins them. After traveling to Quito to speak with journalists, politicians, lawyers, and academics, as well as with Bini himself and his defense team, we reached similar conclusions: Bini's prosecution is a political case, not a criminal one. We likewise oppose the misuse of his prosecution in the service of political interests, which compromises his right to a fair trial. Since EFF's founding in 1990, we have worked to ensure that security researchers and experts like Bini can do their work without being misunderstood or persecuted by those in power, work that improves everyone's security online. Bini's work is not only legal: it helps improve everyone's privacy and security online, as we explained in our 2018 Coders' Rights in Latin America report, which connects that work to the fundamental rights of its practitioners and beneficiaries in the region. For more information, see the statement below: Against the Political Persecution of Ola Bini. Ola Bini is a renowned free software activist and digital security expert. Since April 11, 2019, he has been subject to judicial proceedings in Ecuador, accused of having compromised computer systems. That proceeding, however, has been widely questioned for the many irregularities committed and for the countless political pressures placed upon it. The first point has been confirmed by the habeas corpus granted last June by a tribunal of the Provincial Court of Pichincha, and by the statements made at the time by the Special Rapporteurs for Freedom of Expression of the Organization of American States (OAS) and the United Nations (UN).[1] [2] For its part, the international mission that the Electronic Frontier Foundation (EFF) recently sent to Ecuador concluded, after discussing the situation with politicians, academics, and journalists across the political spectrum, that the motivation behind Ola Bini's case is political, not criminal.[3] In fact, it is still unknown which computer systems he was originally accused of breaching. On top of this, a series of recent developments has raised new alarms. First, a new person was tied to the case solely for maintaining a professional relationship with Bini, even though the legal elements required for that step were not presented at the corresponding hearing. In addition, the prosecutor in charge of the case decided to open two new lines of investigation against Ola Bini, for "tax fraud" and "influence peddling." The prosecution thus now intends to investigate the activist for up to two more years.
This latest decision suggests that there is no evidence to support the accusations originally made against Bini, and that the attention of Ecuador's justice system and government is focused not on a crime but on a person. This confirms the fear expressed by several international organizations working for human rights on the Internet, which warned from the moment of Ola Bini's detention about a spiral of political persecution against an internationally renowned activist whose work protecting privacy is recognized around the world. In light of the above, and of the conversations held during the 12th Latin American and Caribbean Internet Governance Forum (LACIGF), the undersigned reject the persecutory scenario mounted against Bini, demand that every branch of the State respect due process, and urge political actors to stop interfering in the justice system. Asociación para el Progreso de las Comunicaciones Derechos Digitales Electronic Frontier Foundation Internet Bolivia Intervozes Karisma [1] https://cnnespanol.cnn.com/2019/06/20/tribunal-de-ecuador-acepta-recurso-de-habeas-corpus-para-ola-bini/ [2] https://www.eluniverso.com/noticias/2019/04/15/nota/7287350/relatorias-onu-oea-cuestionan-detencion-ola-bini [3] https://www.eff.org/es/deeplinks/2019/08/ecuador-political-actors-must-step-away-ola-binis-case

Interoperability and Privacy: Squaring the Circle (Tue, 13 Aug 2019)
Last summer, we published a comprehensive look at the ways that Facebook could and should open up its data so that users could control their experience on the service and to make it easier for competing services to thrive. In the time since, Facebook has continued to be rocked by scandals: privacy breaches, livestreamed terrorist attacks, harassment, and more. At the same time, competition regulators, scholars, and technologists have stepped up calls for Facebook to create and/or adopt interoperability standards to open up its messenger products (and others) to competitors. To make matters more complex, there is an increasing appetite in both the USA and Europe to hold Facebook and other online services directly accountable for the actions of their users: both in terms of what those users make available (copyright infringement, political extremism, incitements to violence, etc.) and in how they treat each other (harassment, stalking, etc.). Fool me twice... Facebook execs have complained that these goals are in conflict: they say that for the company to detect and block undesirable user behaviors and to interdict future Cambridge Analytica-style data-hijacking, they need to be able to observe and analyze everything every user does, both to train automated filters and to allow them to block abusers. But by allowing third parties to both inject data into their network and pull data out of it--that is, by allowing interoperability--the company's ability to monitor and control its users' bad behavior will be weakened. There is a good deal of truth to this, but buried in that truth is a critical (and highly debatable) assumption: "If you believe that Facebook has the will and ability to stop 2.3 billion people from abusing its systems and each other, then weakening Facebook's control over these 2.3 billion people might limit the company's ability to make that happen." But if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook. After all, Facebook's moderation efforts to date have been a mess of backfiring, overblocking, and self-censorship, a "solution" that no one is happy with. Which is why interoperability is an important piece of the puzzle when it comes to addressing the very real harms of market concentration in the tech sector, including Facebook's dominance over social media. Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden. Interoperability presents a means for people to remain partially on Facebook while using third-party tools designed to respond to their idiosyncratic needs. While it seems likely that no one is able to build a single system that protects 2.3 billion users, it's certainly possible to build a service whose social norms and technological rules are suited to smaller groups.
Facebook can't figure out how to serve every individual and community's needs--but those individuals and communities might be able to do so for themselves, especially if they get to choose which toolsmith's tools they use to mediate their Facebook experience. Standards-washing: the lesson of Bush v Gore But not all interoperability is created equal. Companies have historically shown themselves to be more than capable of subverting mandates to adhere to standards and allow for interconnection. A good historic example of this is the drive to standardize voting machines in the wake of the Supreme Court's decision in Bush v Gore. Ambiguous results from voting machines produced an election whose outcome had to be determined by the Supreme Court, which led Congress to pass the Help America Vote Act, mandating standards for voting machines. The process did include a top-tier standards development organization to oversee the work: the Institute of Electrical and Electronics Engineers (IEEE), which set about creating a standard for voting machines. But rather than creating a "performance standard" describing how a voting machine should process ballots, the industry sneakily tried to get the IEEE to adopt a "design standard" that largely described the machines vendors had already sold to local election officials: in other words, rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work, with some small changes in configuration. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem...without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot. Big Tech is even more concentrated than the voting machine industry is today, and far more concentrated than that industry was in 2003 (most industries are more concentrated now than they were then). Legislatures, courts, or regulators that seek to define "interoperability" should be aware of the real risk of the definition being hijacked by the dominant players (who are already very skilled at subverting standardization processes). Any interoperability standard developed without recognizing Facebook's current power and interest is at risk of standardizing the parts of Facebook's business that it does not view as competitive risks, while leaving the company's core business (and its bad business practices) untouched. Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that's not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like "tortious interference with contract."
Everything not forbidden is mandatory In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge. Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company's existing products or services, without seeking permission to do so. Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory to shut down Power Ventures, a competitor that allowed Facebook users who opted in to use multiple social networks from a single interface. Facebook argued that by letting users log in and view Facebook through a different interface, even after receiving a cease-and-desist letter telling it to stop, Power Ventures had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, the heart of the supposedly illegal conduct was simply upsetting Facebook. Adversarial interoperability flips the script Clearing this legal thicket would go a long way toward allowing online communities to self-govern by federating their discussions with Facebook without relying on Facebook's privacy tools and practices. Software vendors could create tools that allowed community members to communicate in private, using encrypted messages that are unintelligible to Facebook's data-mining tools, but whose potential members could still discover and join the group using Facebook. This could allow new entrants to flip the script on Facebook's "network effects" advantage: today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and it would even supply you with the tools you need to target those people. Revenge of Carterfone There is good historic precedent for using a mix of interoperability mandates and a legal right to interoperate beyond those mandates to reduce monopoly power. The FCC has imposed a series of interoperability obligations on incumbent phone companies: for example, the rules that allow phone subscribers to choose their own long-distance carriers. At the same time, federal agencies and courts have also stripped away many of the legal tools that phone companies once used to punish third parties who plugged gear into their networks. The incumbent telecom companies historically argued that they couldn't maintain a reliable phone network if they didn't get to specify which devices were connected to it, a position that also allowed the companies to extract rental payments for home phones for decades, selling you the same phone dozens or even hundreds of times over.
When agencies and courts cleared the legal thicket around adversarial interoperability in the phone network, it did not mean that the phone companies had to help new entrants connect stuff to their wires: manufacturers of modems, answering machines, and switchboards sometimes had to contend with technical changes in the Bell system that broke their products. Sometimes, this was an accident of some unrelated technical administration of the system; sometimes it seemed like a deliberate bid to harm a competitor. Often, it was ambiguous. Monopolists don't have a monopoly on talent But it turns out that you don't need the phone company's cooperation to design a device that works with its system. Careful reverse-engineering and diligent product updates meant that even devices that the phone companies hated--devices that eroded their most profitable markets--had long and profitable runs in the market, with devoted customers. Those customers are key to the success of adversarial interoperators. Remember that the audience for a legitimate adversarial interoperability product is the customers of the existing service that it connects to. Anything that the Bell system did to block third-party phone devices ultimately punished the customers who bought those devices, creating ill will. And when a critical mass of an incumbent giant's customer base depends on--and enjoys--a competitor's product, even the most jealous and uncooperative giants are often convinced to change tactics and support the businesses they've been trying to destroy. In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers. Is Facebook better than everyone else? Facebook is one of the largest companies in the world. Many of the world's most talented engineers and security experts already work there, and many others aspire to do so. Given that, is it realistic to think that a would-be adversarial interoperator could design a service that plugs into Facebook without Facebook's permission? Ultimately, this is not a question with an empirical answer. It's true that few have tried to pull this off since Power Ventures was destroyed by Facebook litigation, but it's not clear whether the competitive vacuum is the result of potential competitors who are too timid to lock engineering horns with Facebook's brain-trust, or potential competitors and investors whose legal departments won't let them even try. But it is instructive to look at the history of the Bell system after Carterfone and Hush-a-Phone: though Bell was the single biggest employer of telephone technicians in the world and represented the best, safest, highest-paid opportunities for would-be telecoms innovators, its rivals proceeded to make device after device that extended the capabilities of the phone network, without permission, overcoming the impediments that the network's operator put in their way. Closer to home, remember that when Facebook wanted to get Power Ventures out of its network, its primary tool of choice wasn't technical measures--Facebook didn't (or couldn't) use API changes or firewall rules alone to keep Power Ventures off the service--it was mainly lawsuits. Perhaps that's because Facebook wanted to set an example for later challengers by winning a definitive legal battle, but it's very telling that the company that operated the network didn't (or couldn't!)
just kick its rival out, and instead went through a lengthy, expensive, and risky legal battle when simple IP blocking didn't work. Facebook has a lot of talented engineers, but it doesn't have all of them. Being a defender is hard Facebook's problem with would-be future challengers is a familiar one: in security, it is easier to attack than to defend. For Facebook to keep a potential competitor off its network, it has to make no mistakes. For a third party to bypass Facebook's defenses and interoperate with Facebook without permission, it has only to find and exploit a single mistake. And Facebook labors under other constraints: like the Bell system fending off Hush-a-Phone, the things that Facebook does to make life hard for competitors who are helping its users get more out of its service also make life harder for all its users. For example, any tripwire that blocks logins by suspected bots will also block users whose behaviors appear bot-like: the stricter the bot-detector is, the more actual humans it will catch. Here again, Facebook's dizzying user-base works against it: with billions of users, a one-in-a-million event is going to happen thousands of times every day (a back-of-the-envelope calculation at the end of this post illustrates the scale), so Facebook has to accommodate a wide variety of use-cases, and some of those behaviors will be sufficiently weird to allow a rival's bot to slip through. Back to privacy Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers, and script-blockers to prevent Facebook from tracking them when they're not on Facebook, which it does by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service. Facebook users can also watch what they say on Facebook, hoping that they won't slip up and put something compromising on the service that will come back to haunt them (though this isn't always easy to predict). But even if people do all this, they're still exposing themselves to Facebook's scrutiny when they use Facebook, which monitors how they use the service, every click and mouse-movement. What's more, anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics. If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they could avoid much (but not all) of this surveillance. Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues, or fellow travelers are not ready to join them, a service like this could let Facebook refuseniks get out of the Facebook pool while still leaving a toe in its waters. What's more, it lets their friends follow them, by creating alternatives to Facebook where the people they want to talk to are still reachable. One user at a time, Facebook's rivals could siphon off whole communities.
As Facebook's market power dwindled, so would the pressure that Web publishers feel to embed Facebook trackers on their sites, meaning that non-Facebook users would be less likely to be tracked as they browse the Web. Third-party tools could automate the process of encrypting conversations, allowing users to communicate in private without having to trust Facebook's promises about its security (a rough sketch of how such a tool might work appears at the end of this post). Finally, such a system would put real competitive pressure on Facebook. Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook. But if there were a constellation of third-party services constantly carving escape hatches in Facebook's walled garden, Facebook would have to contend with the very real possibility that a scandal could result in the permanent departure of its users. Just the possibility would change the way that Facebook makes decisions: product designers and other internal personnel who argued for treating users with respect on ethical grounds would be able to add an instrumental benefit to being "good guys": failing to do so could trigger yet another exodus from the platform. Lower and upper bounds It's clear that online services need rules about privacy and interoperability setting out how they should treat their users, including those users who want to use a competing service. The danger is that these rules will become the ceiling on competition and privacy, rather than the floor. For users who have privacy needs--and other needs--beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist, and individual) who are willing to fill those needs. None of this means that we should have an online free-for-all. A rival of Facebook that bypassed its safeguards to raid user data should still get in trouble (just as Facebook should get in trouble for privacy violations, inadequate security, or other bad activity). Shouldering your way into Facebook in order to break the law is, and should remain, illegal, and the power of the courts and even law enforcement should remain a check on those activities. But helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged, even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet.
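A quick aside on the defender's arithmetic mentioned above. The numbers below are our own illustrative assumptions (each of 2.3 billion users logging in once a day, and a range of plausible false-positive rates), not Facebook's real figures; the point is only how fast "one in a million" adds up at that scale:

```python
# Illustrative sketch: how many real humans would a bot-detector
# wrongly flag at Facebook-like scale? All numbers are assumptions.
daily_logins = 2_300_000_000  # assume 2.3B users, one login each per day

for false_positive_rate in (1e-6, 1e-5, 1e-4):
    wrongly_flagged = daily_logins * false_positive_rate
    print(f"false-positive rate {false_positive_rate:.0e}: "
          f"{wrongly_flagged:,.0f} real people flagged per day")

# Even a one-in-a-million error rate flags 2,300 humans every day, and
# each notch of added strictness multiplies that number tenfold.
```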
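And here is the encryption idea in miniature: a hedged sketch of how a third-party tool might encrypt a message on the user's own device so that the relaying platform only ever handles ciphertext. It assumes the PyNaCl library, invents a hypothetical user ("alice_key"), and hand-waves key distribution, which is the genuinely hard part of any real design; nothing here reflects an actual Facebook API:

```python
# Minimal sketch (not a product): encrypt client-side so the platform
# that relays the message can't read or data-mine it.
# Assumes the PyNaCl library (libsodium bindings): pip install pynacl
import base64
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair locally; only the public key is ever
# shared. How peers learn each other's keys is the hard, unsolved part.
alice_key = PrivateKey.generate()

# A friend's tool encrypts to Alice's public key before posting; the
# platform stores and relays only this opaque base64 blob.
ciphertext = SealedBox(alice_key.public_key).encrypt(b"meet at 6pm?")
posted_to_platform = base64.b64encode(ciphertext).decode()

# Alice's tool fetches the post and decrypts it on her own device.
recovered = SealedBox(alice_key).decrypt(base64.b64decode(posted_to_platform))
assert recovered == b"meet at 6pm?"
```

A design like this leaves discovery and group membership on the platform (so friends can still find each other there) while moving the content of conversations out of its reach.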

Victory! Lawsuit May Proceed Against Facebook’s Biometric Surveillance (Sat, 10 Aug 2019)
Biometric surveillance by companies against consumers is a growing menace to our privacy, freedom of expression, and civil rights. Fortunately, a federal appeals court has ruled that a lawsuit against Facebook for its face surveillance may move forward. The decision by the federal Ninth Circuit, applying an Illinois privacy law, is the first by an American appellate court to directly identify the unique hazards of face surveillance. This is an important victory for biometric privacy, access to the courts for ordinary people, and the role of state governments as guardians of our digital liberty. Illinois’ Biometric Information Privacy Act The Illinois Biometric Information Privacy Act of 2008 (BIPA) is one of our nation’s most important privacy safeguards for ordinary people against corporations that want to harvest and monetize their personal information. BIPA bars a company from collecting, using, or sharing a person’s biometric information, absent that person’s informed opt-in consent. BIPA also requires a company to destroy a person’s biometric information when its purpose for collection is satisfied, or within three years of the company’s last contact with the person, whichever is sooner. BIPA provides the strongest enforcement tool: a “private right of action,” meaning a person may file their own lawsuit against a company that violates their privacy rights. The Illinois General Assembly explained, when passing BIPA, that “biometrics are unlike other unique identifiers” because they are “biologically unique to the individual.” As a result, “once compromised, the individual has no recourse, [and] is at heightened risk for identity theft.” Lawmakers also pointed out that the ramifications of biometric technology “are not fully known.” In Rosenbach v. Six Flags (2019), the Illinois Supreme Court held that BIPA does not require a plaintiff to prove an injury beyond a violation of the statute itself. The court reasoned: When a private entity fails to adhere to the statutory procedures, as defendants are alleged to have done here, the right of the individual to maintain their biometric privacy vanishes into thin air. The precise harm the Illinois legislature sought to prevent is then realized. This is no mere “technicality.” The injury is real and significant. EFF filed an amicus brief in Rosenbach in support of this outcome, along with the American Civil Liberties Union, ACLU of Illinois, the Center for Democracy and Technology, the Chicago Alliance Against Sexual Exploitation, Illinois Public Interest Research Group, and Lucy Parsons Lab. Patel v. Facebook In 2010, Facebook launched its “Tag Suggestions” feature. It uses face recognition technology to match known faces in user profile pictures and other photos to unknown faces in newly uploaded photos. If this face surveillance system generates a match, then Facebook will notify the person who uploaded the photo and suggest a “tag.” If that person accepts the tag, then the person in the photo will be identified by name. Facebook imposed this face surveillance system on users by default. To avoid it, a user must affirmatively opt out, which most users won’t do. Facebook has migrated some of its users from its “Tag Suggestions” feature to its “Face Recognition” feature, according to the Federal Trade Commission’s recently filed consumer deception complaint against Facebook.
Face surveillance remains the default. In 2015, Illinois residents filed a class action lawsuit in federal court called Patel v. Facebook. The plaintiffs allege that Facebook’s “Tag Suggestions” feature violates BIPA. They reason that this feature collects and uses their biometric information without their informed opt-in consent, and does not satisfy the statutory destruction deadline. Facebook had the case moved from Illinois to California, where the company has its headquarters. The Patel trial court denied Facebook’s motion to dismiss, and certified a class of Facebook users. The appellate court allowed Facebook to take an immediate appeal of the class certification decision. “Standing” and Spokeo The key issue on appeal in Patel was whether the plaintiffs had sufficiently shown that Facebook’s biometric surveillance caused them a concrete injury. The U.S. Constitution limits the federal courts to deciding “cases and controversies.” That means a plaintiff cannot sue a defendant unless they can show “standing,” meaning that the defendant has injured them in a concrete manner. You’d think that when a company violates a person’s rights under a statute, and that statute provides that person a private right of action to sue that company, then that person automatically has constitutional standing. Unfortunately, you’d be wrong. In Spokeo, Inc. v. Robins (2016), the U.S. Supreme Court held that a person in such circumstances might or might not have standing. This depends, among other things, on the legal history of the particular statutory interest at issue. EFF filed an amicus brief in Spokeo (along with CDT, the Open Technology Institute, and the World Privacy Forum) arguing that standing in such cases should be automatic, but that view did not carry the day. Spokeo can sometimes be a barrier to the enforcement of consumer data privacy laws. For example, when a company’s negligent data security practices cause massive breaches of consumers’ personal information, the company may argue that the injured consumers cannot sue based solely on violations of data security statutes. Rather, the company may argue, the Constitution also requires them to show a financial or physical injury, such as identity theft. This is one of the problems in our legal system that limited the recently proposed settlement of the Equifax data breach litigation. (Don’t forget to file your settlement claim against Equifax.) The New Appellate Court Ruling in Patel On August 8, a unanimous three-judge panel of the U.S. Court of Appeals for the Ninth Circuit held that the Patel plaintiffs have constitutional standing to sue Facebook for violating their statutory privacy rights under BIPA. In doing so, the appellate court forcefully explained the hazards of face surveillance and the importance of BIPA’s privacy protections. The court presented centuries of history of U.S. legal protections for privacy, sounding in the common law and the Constitution. For example, in the context of the Fourth Amendment, the Supreme Court has repeatedly held that “advances in technology can increase the potential for unreasonable intrusions into personal privacy.” The appellate court cited the Supreme Court’s protection of the public from thermal imaging of homes in Kyllo v. United States (2001), GPS location tracking in United States v. Jones (2012), cellphone searches in Riley v. California (2014), and cell-tower location tracking in Carpenter v. United States (2018).
The court held that “an invasion of an individual’s biometric privacy rights has a close relationship to a harm that has traditionally been regarded as providing a basis for a lawsuit in English or American courts.” Quoting Carpenter, the court explained that biometric information is “detailed, encyclopedic, and effortlessly compiled.” Most importantly, the appellate court explained the grave privacy threats posed by Facebook’s face surveillance: Once a face template of an individual is created, Facebook can use it to identify that individual in any of the other hundreds of millions of photos uploaded to Facebook each day, as well as determine when the individual was present at a specific location. Facebook can also identify the individual’s Facebook friends or acquaintances who are present in the photo. Taking into account the future development of such technology as suggested in Carpenter, it seems likely that a face-mapped individual could be identified from a surveillance photo taken on the streets or in an office building. Or a biometric face template could be used to unlock the face recognition lock on that individual’s cell phone. We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. The appellate court also upheld the trial court’s certification of a class of Facebook users. Facebook reportedly plans to seek review by the full appellate court. EFF filed an amicus brief in Patel regarding the privacy menace of face surveillance, along with the ACLU, its Illinois and California affiliates, CDT, and Illinois PIRG. Lessons For Legislators Especially after the new Patel decision, Illinois’ BIPA is one of the most important data privacy laws in the country. What lessons does BIPA hold for legislators who want to better protect their constituents from corporations that place their profits before our privacy? First, a privacy law is only as strong as its enforcement tools, and the best enforcement tool is a private right of action. In many cases, government agencies can’t or won’t enforce a statute. So people must be free to protect their own rights by filing their own lawsuits. Second, Congress must not pass a weak federal data privacy law that preempts stronger state privacy laws. Many big tech companies told Congress for years that they could self-regulate. Now some of them are asking Congress for regulation. What changed? They want to dodge Illinois’ BIPA and other state consumer data privacy laws, like California’s Consumer Privacy Act and Vermont’s data broker registration statute. Thus, opposition to preemption and support for private enforcement are EFF’s two most important demands, among our many proposals for new consumer data privacy legislation. Next steps The Ninth Circuit’s new ruling in Patel is a watershed in privacy law. It allows litigation to go forward challenging Facebook’s biometric surveillance of users absent their informed opt-in consent. It explains, more forcefully than any American appellate court opinion to date, the extraordinary privacy hazards of face surveillance. It holds that a loss of statutory privacy rights under Illinois BIPA is, by itself, a sufficient injury to show constitutional standing under Spokeo. And it clearly demonstrates the necessity of private rights of action, and why Congress must not preempt stronger state laws.
Most importantly, it shows what all of us must do now: contact our federal and state legislators, and demand that they enact strong consumer data privacy laws. Illinois BIPA, as strengthened by Patel, is a model for others to follow.

Amazon’s Ring Is a Perfect Storm of Privacy Threats (Thu, 08 Aug 2019)
Doors across the United States are now fitted with Amazon’s Ring, a combination doorbell-security camera that records and transmits video straight to users’ phones, to Amazon’s cloud—and often to the local police department. By sending photos and alerts every time the camera detects motion or someone rings the doorbell, the app can create an illusion of a household under siege. It turns what seems like a perfectly safe neighborhood into a source of anxiety and fear. This raises the question: do you really need Ring, or have Amazon and the police misled you into thinking that you do? Recent reports show that Ring has partnered with police departments across the country to hawk this new surveillance system—going so far as to draft press statements and social media posts for police to promote Ring cameras. This creates a vicious cycle in which police promote the adoption of Ring, Ring terrifies people into thinking their homes are in danger, and then Amazon sells more cameras. [Embedded map: Amazon Ring partnerships with over 225 law enforcement jurisdictions, compiled by Shreyas Gandlur; served from umap.openstreetmap.fr.] How Ring Surveils and Frightens Residents Even though government statistics show that crime in the United States has been steadily decreasing for decades, people’s perception of crime and danger in their communities often conflicts with the data. Vendors prey on these fears by creating products that inflame our greatest anxieties about crime. Ring works by sending notifications to a person’s phone every time the doorbell rings or motion near the door is detected. With every update, Ring turns the delivery person or census-taker innocently standing at the door into a potential criminal. Neighborhood watch apps only increase the paranoia. Amazon promotes its free Neighbors app to accompany Ring. Other vendors offer competing apps such as Nextdoor and Citizen. All are marketed as localized social networks where people in a neighborhood can discuss local issues or share concerns. But all too often, they facilitate reporting of so-called “suspicious” behavior that really amounts to racial profiling. Take, for example, the story of an African-American real estate agent who was stopped by police because neighbors thought it was “suspicious” for him to ring a doorbell. Even law enforcement officials are noticing the social consequences of public-safety-by-push-notification. At the International Association of Chiefs of Police conference earlier this year, which EFF attended, Chandler Police Assistant Chief Jason Zdilla said that his city in Arizona embraced the Ring program, registering thousands of new Ring cameras per month. Though Chandler is experiencing a historic low for violent crime for the fourth year in a row, Ring is giving the public another impression.
“What happens is when someone opens up the social media, and every day they see maybe a potential criminal act, or every day they see a suspicious person, they start believing that this is prevalent, and that crime is really high,” Zdilla said. If getting an alert from your front door or your neighbor every time a stranger walks down the street doesn’t cause enough paranoia, Ring is trying to alert users to local 911 calls. The Ring-police partnerships would allow the company to tap into the computer-aided dispatch system, and alert users to local 911 calls as part of the “crime news” alerts on its app, Neighbors. Such push alerts based on 911 calls could be used to instill fear and sell additional Ring services. From Neutral Guardians to Scripted Hawkers Thanks to in-depth reporting from Motherboard, Gizmodo, CNET, and others, we know a lot about the symbiotic relationship between Amazon’s Ring and local police departments, and how that relationship jeopardizes privacy and circumvents regulation. At least 231 law enforcement agencies around the country have partnered with Ring, a report by Motherboard revealed. This partnership takes both a financial and a digital form. Police that partner with Ring reportedly have access to Ring’s “Law Enforcement Neighborhood Portal,” which allows police to see a map of the locations of Ring cameras. Police may then ask owners for access to their footage—and when owners give permission, police do not need to acquire a warrant. The arrangement is also financial. Amazon encourages police to urge residents to install the Ring app and purchase cameras for their homes. Per Motherboard, for every town resident that downloads Ring’s Neighbors app, the local police department gets credits toward buying cameras it can distribute to residents. This arrangement makes salespeople out of what should be impartial and trusted protectors of our civic society. This is not the first time the government has attempted to use an economic incentive to expand the reach of surveillance technology and to subsidize the vendors. In 2017, EFF spoke out against legislation that would provide tax credits for California residents who purchased home security systems. Police departments also get communications instruction from the large global corporation. Documents acquired by Gizmodo revealed that questions directed at police departments concerning Ring are often passed on to Ring’s public relations team. Thus, many statements about Ring that residents think are coming from their trusted local police are actually written by Ring. Worse, Ring instructed police departments not to reveal their connections to the company. Instead of getting an even-handed conversation with your local police about the benefits and pitfalls of installing a networked security camera, residents are fed canned lines from a corporation whose ultimate goal is to sell more cameras. Even the Monitoring Association, an international trade organization for the surveillance equipment industry, announced its concern regarding Ring's police partnerships. The organization's president, Ivan Spector, told CNET, "We are troubled by recent reports of agreements that are said to drive product-specific promotion, without alerting consumers about these marketing relationships...This lack of transparency goes against our standards as an industry, diminishes public trust, and takes advantage of these public servants."
Dissemination of Your Video Images

So, Ring and the police have an intimate relationship revolving around sharing data and money. But at least users own their own video footage and control who gets access to it, right? Not if you ask Amazon. Earlier this year, social media users pointed out that Ring was using actual security camera footage of alleged wrong-doers in sponsored ads. Amazon harvested pictures of people’s faces and posted them alongside accusations that they were guilty of a crime, without consulting the person pictured or the owners of the cameras. According to their terms of service, Ring and its licensees have “an unlimited, irrevocable, fully paid, and royalty-free, perpetual, worldwide right to re-use, distribute, store, delete, translate, copy, modify, display, sell, create derivative works” in relation to the footage taken from your front door.

Police will also seek access to residents’ video footage. Residents may deny police access when requested. However, Amazon actively coaches police on how to persuade residents to hand over the footage. A professional communications expert instructs police on how to manipulate residents into giving away their Ring footage. If convincing the resident doesn’t work, police can go straight to Amazon and ask for the footage. This process circumvents the camera’s owner.

Amazon says it will not disclose Ring video to police absent a warrant from a judge or consent from the resident. And California law generally requires police to get a warrant in this situation. But some California police say they don’t need a warrant. Tony Botti of the Fresno County Sheriff’s department told Government Technology that police can “subpoena” a Ring video. A subpoena typically does not require judicial authorization before it is sent. Botti continued: “as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation.” Amazon needs to clear up this uncertainty.

Next Steps

The rapid proliferation of this partnership between police departments and the Ring surveillance system—without any oversight, transparency, or restrictions—poses a grave threat to the privacy of all people in the community. It also may chill the First Amendment rights of political canvassers and community organizers who spread their messages door-to-door, and contribute to the unfair racial profiling of our minority neighbors and visitors. Even if you choose not to put a camera on your front door, video footage of your comings and goings might easily be accessed and used by your neighbors, the police, and Amazon itself. The growing partnerships between Amazon and police departments corrode trust in an important civic institution by turning public servants into salespeople for Amazon products.

Residents of towns whose police departments have already cut a deal with Ring should voice their concern to local officials. Users of Ring should also consider how their privacy, and the privacy of their neighbors, may be harmed by having a camera on their front door, networked into a massive police surveillance system.

Second Circuit Rules That Section 230 Bars Civil Terrorism Claims Against Facebook (Wed, 07 Aug 2019)
The U.S. Court of Appeals for the Second Circuit last week became the first federal appellate court to rule that Section 230 bars civil terrorism claims against a social media company. The plaintiffs, who were victims of Hamas terrorist attacks in Israel, argued that Facebook should be liable for hosting content posted by Hamas members, which allegedly inspired the attackers who ultimately harmed the plaintiffs.

EFF filed an amicus brief in the case, Force v. Facebook, arguing that both Section 230 and the First Amendment prevent lawsuits under the Anti-Terrorism Act that seek to hold online platforms liable for content posted by their users—even if some of those users are pro-terrorism or terrorists themselves. We’ve been concerned that without definitive rulings that these types of cases cannot stand under existing law, they would continue to threaten the availability of open online forums and Internet users’ ability to access information.

The Second Circuit’s decision is in contrast to that of the Ninth Circuit in Fields v. Twitter and the Sixth Circuit in Crosby v. Twitter, where both courts held only that the plaintiffs in those cases—victims of an ISIS attack in Jordan and the Pulse nightclub shooting in Florida, respectively—could not show a sufficient causal link between the social media companies and the harm suffered by the plaintiffs. Thus, the Ninth and Sixth Circuit rulings are concerning because they tacitly suggest that better-pleaded complaints against social media companies for hosting pro-terrorism content might survive judicial scrutiny in the future.

The facts underlying all of these cases are tragic and we have the utmost sympathy for the plight of the victims and their families. The law appropriately allows victims to seek compensation from the perpetrators of terrorism themselves. But holding online platforms liable for what terrorists and their supporters post online—and the violence they ultimately perpetrate—would have dire repercussions: if online platforms no longer have Section 230 immunity in this context, those forums and services will take aggressive action to screen their users, review and censor content, and potentially prohibit anonymous speech. The end result would be sanitized online platforms that would not permit discussion and research about terrorism, a prominent and vexing political and social issue. As we have chronicled, existing efforts by companies to filter extremist online speech have exacted collateral damage by silencing human rights defenders.

There have been several cases filed in federal courts that seek to hold social media companies such as Twitter, Facebook, and YouTube civilly liable for providing material support to terrorists or aiding and abetting terrorists by allowing terrorist content on their platforms. We hope that the Second Circuit’s ruling will inspire other courts to ensure through their rulings that all Internet users will continue to be able to discuss and access information about controversial topics.

Opening the Door for Censorship: New Trademark Enforcement Mechanisms Added for Top-Level Domains (Wed, 07 Aug 2019)
With so much dissatisfaction over how companies like Facebook and YouTube moderate user speech, you might think that the groups that run the Internet’s infrastructure would want to stay far away from the speech-policing business. Sadly, two groups that control an important piece of the Internet’s infrastructure have decided to jump right in. The organization that governs the .org top-level domain, known as Public Interest Registry (PIR), and the Internet Corporation for Assigned Names and Numbers (ICANN) are expanding their role as speech regulators through a new agreement, negotiated behind closed doors. And they’re doing it despite the nearly unanimous opposition of nonprofit and civil society groups—the people who use .org domains. EFF is asking ICANN’s board to reconsider.

ICANN makes policies for resolving disputes over domain names, which are enforced through a web of contracts. Best-known is the Uniform Domain Name Dispute Resolution Policy (UDRP), which allows trademark holders to challenge bad-faith use of their trademarks in a domain name (specifically, cybersquatting or trademark infringement). UDRP offers a cheaper, faster alternative to domain name disputes than court.

When ICANN began to add many new top-level domains beyond the traditional ones (.com, .net, .org, and a few others), major commercial brands and their trademark attorneys predicted a plague of bad-faith registrations and threatened to hold up creation of these new top-level domains, including much-needed domains in non-Latin scripts such as Chinese, Arabic, and Cyrillic. In response, the community allowed trademark interests to create more enforcement mechanisms, but solely for these new top-level domains. One of these was Uniform Rapid Suspension (URS), a faster, cheaper version of UDRP. URS is a summary procedure designed for slam-dunk cases of cybersquatting or trademark infringement. It features shorter deadlines for responding to challenges, and its decisionmakers are paid much less than the panelists who decide UDRP cases.

In a move that has drawn lots of criticism, ICANN announced that it is requiring the use of URS in the .org domain, along with other rules that were developed specifically for the newer domains. URS is a bad fit for .org, the third most-used domain and home to millions of nonprofit organizations (including, of course, eff.org). The .org domain has been around since 1985, long before ICANN was created. And with over ten million names already registered, there’s no reason to expect a “land rush” of people snatching up the names of popular brands and holding them for ransom. When nonprofit organizations use brand names and other commercial trademarks, it’s often to call out corporations for their misdeeds—a classic First Amendment-protected activity. That means challenges to domain names in .org need more careful, thorough consideration than URS can provide.

Adding URS to the .org domain puts nonprofit organizations who strive to hold powerful corporations and governments accountable at risk of losing their domain names, effectively removing those organizations from the Internet until they can register a new name and teach the public how to find it. Losing a domain name means losing search engine placement, breaking every inbound link to the website, and knocking email and other vital services offline.

Beyond URS, the new .org agreement gives Public Interest Registry carte blanche to “implement additional protections of the legal rights of third parties” whenever it chooses to.
These aren’t necessarily limited to cases where a court has found a violation of law and orders a domain name suspended. And it could reach beyond disputes over domain names to include challenges to the content of a website, effectively making PIR a censorship bureau.

This form of content regulation has already happened in some TLDs. Donuts and Radix, which operate hundreds of top-level domains, already suspend websites’ domain names based on accusations of copyright infringement from the Motion Picture Association of America, without a court order. Some registries also take down the domain names of pharmacy-related websites based on requests from private groups affiliated with U.S. pharmaceutical companies, again without a court order or due process.

PIR, the operator of .org, has previously proposed to build its own copyright enforcement system. PIR quickly walked back that proposal after EFF spotlighted it. But PIR’s new agreement with ICANN provides a legal foundation for bringing back that proposal, or other forms of content regulation. And the existence of these contract terms could make it harder for PIR and registrars to say “no” the next time an industry group like the MPAA, or a law enforcement agency from anywhere in the world, comes demanding that they act as judge, jury, and executioner of “bad” websites.

Bypassing Users’ Input

The process that led to these changes was problematic, too. The multistakeholder process, which is supposed to account for the views and needs of all groups affected by a policy change, was simply bypassed. ICANN did announce the new .org contract and provided for a period of public comment. But this seems to have been a hollow gesture. The Non-Commercial Stakeholder Group, a group that represents many hundreds of the organizations that have .org domain names, filed a comment laying out why that domain shouldn’t have the URS system and other “rights protection mechanisms” beyond the UDRP. EFF and the Domain Name Rights Coalition also filed a comment, which was joined by top academics and activists on domain name policy. An extraordinary and unprecedented 3,250 others filed comments opposing the new .org contract, mainly on the grounds that it removed price caps from .org registrations, potentially allowing Public Interest Registry to increase the fees it charges millions of nonprofit organizations. In contrast, only six commenters, including groups representing trademark holder interests and incumbent registries, filed supportive comments.

But ICANN made no meaningful changes in response to these comments from the actual users of .org domain names. The contract they concluded on July 30th was the same as the one they proposed at the start of the public comment period. The ICANN staff seem to think they can make any policies they choose by contract.

What Comes Next?

EFF has asked the ICANN board to reconsider the new contract, to submit the issue to the ICANN community for a decision, and to remove URS from the .org domain. Public Interest Registry has not yet created any new enforcement mechanisms, nor returned to the copyright enforcement proposal it made and shelved in 2016—but if the new contract stands, it will give them legal cover for doing so. It’s important that Internet users, especially nonprofits, make clear to ICANN, PIR, and PIR’s parent organization, the Internet Society, that nonprofits don’t need new, accelerated trademark enforcement or new forms of content regulation.
After all, there’s no reason to think that these organizations will regulate the speech of Internet users any better than Facebook, YouTube, Twitter, and other prominent social networks have done. It would be best if they stay out of that role entirely.

EFF Delegation Returns from Ecuador, says Ola Bini’s Case is Political, Not Criminal (Tue, 06 Aug 2019)
Globally Recognized Technologist Still Facing Charges in Drawn-Out Prosecution

San Francisco – A team from the Electronic Frontier Foundation (EFF) has returned from a fact-finding mission in Quito for the case of Ola Bini—a globally renowned Swedish programmer who is facing tenuous computer-crime charges in Ecuador.

Bini was detained in April, as he left his home in Quito to take a vacation to Japan. His detention was full of irregularities: for example, his warrant was for a “Russian hacker,” and Bini is Swedish and not a hacker. Just hours before Bini’s arrest, Ecuador’s Minister of the Interior, María Paula Romo, held a press conference to announce that the government had located a “member of Wikileaks” in the country, and claimed there was evidence that person was “collaborating to destabilize the government.” Bini was not read his rights, allowed to contact his lawyer, or offered a translator.

Bini was released from custody in June, following a successful habeas corpus plea by his lawyers. But he is still accused of “assault on the integrity of computer systems”—even though prosecutors have yet to make public any details of his alleged criminal behavior.

“If someone breaks into a house, and authorities arrest a suspect, the prosecution should at the very least be able to tell you which house was broken into,” said EFF Director of Strategy Danny O’Brien, who was part of EFF’s delegation to Quito. “The same principle applies in the digital world.”

In Ecuador, EFF’s team spoke to journalists, politicians, lawyers, and academics, as well as to Bini and his defense team. These experts have concluded that Bini's continuing prosecution is a political case, not a criminal one. “We believe that Ecuadorian authorities have grown concerned about the wider political consequences of either abandoning Bini’s case or continuing to prosecute, creating an impasse,” said O’Brien. “But Ola Bini’s innocence or guilt should be determined by a fair trial that follows due process. It should in no way be impacted by potential political ramifications.”

Bini has worked on several key open source projects, including JRuby and several Ruby libraries, as well as implementations of the secure and open communication protocol OTR. He has also contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world. Bini recently co-founded Centro de Autonomía Digital, a non-profit organization devoted to creating user-friendly security tools.

For more on Ola Bini and EFF’s delegation to Ecuador: https://www.eff.org/deeplinks/2019/08/ecuador-political-actors-must-step-away-ola-binis-case

Tags: ecuador olabini
Contact: Danny O'Brien, Director of Strategy, danny@eff.org

DEEP DIVE: CBP’s Social Media Surveillance Poses Risks to Free Speech and Privacy Rights (Tue, 06 Aug 2019)
The U.S. Department of Homeland Security (DHS) and one of its component agencies, U.S. Customs and Border Protection (CBP), released a Privacy Impact Assessment [.pdf] on CBP’s practice of monitoring social media to enhance the agency’s “situational awareness.” As we’ve argued in relation to other government social media surveillance programs, this practice endangers the free speech and privacy rights of Americans.

“Situational Awareness”

The Privacy Impact Assessment (PIA) states that CBP searches public social media posts to bolster the agency’s “situational awareness”—which includes identifying “natural disasters, threats of violence, and other harmful events and activities” that may threaten the safety of CBP personnel or facilities, including ports of entry. The PIA aims to inform the public of privacy and related free speech risks associated with CBP’s collection of personally identifiable information (PII) when monitoring social media. CBP claims it only collects PII associated with social media—including a person’s name, social media username, address or approximate location, and publicly available phone number, email address, or other contact information—when “there is an imminent threat of loss of life, serious bodily harm, or credible threats to facilities or systems.”

Why Now?

It is unclear why DHS and CBP released this PIA now, especially since both agencies have been engaging in social media surveillance, including for situational awareness, for several years. The PIA cites authorizing policies DHS Directive No. 110-01 (June 8, 2012) [.pdf] and DHS Instruction 110-01-001 (June 8, 2012) [.pdf] as governing the use of social media by DHS and its component agencies (including CBP) for various “operational uses,” including situational awareness. The PIA also cites CBP Directive 5410-003, “Operational Use of Social Media” (Jan. 2, 2015), which does not appear to be public. EFF asked for the release of this document in a coalition letter sent to the DHS acting secretary in May.

Federal law requires government agencies to publish certain documents to facilitate public transparency and accountability related to the government’s collection and use of personal information. The E-Government Act of 2002 requires a PIA “before initiating a new collection of information that will be collected, maintained, or disseminated using information technology” and when the information is “in an identifiable form.” Additionally, the Privacy Act of 1974 requires federal agencies to publish Systems of Records Notices (SORNs) in the Federal Register when they seek to create new “systems of records” to collect and store personal information, allowing for the public to comment.

This appears to be the first PIA that CBP has written related to social media monitoring. The PIA claims that the related SORN on social media monitoring for situational awareness is DHS/CBP-024 Intelligence Records System (CIRS) System of Records, 82 Fed. Reg. 44198 (Sept. 21, 2017). Given that DHS issued directives in 2012 and CBP issued a directive in 2015 around social media monitoring, this PIA comes seven years late. Moreover, there is no explanation as to why the SORN was published two years after CBP’s 2015 directive, nor why the present PIA was published two years after the SORN.

In March, CBP came under scrutiny for engaging in surveillance of activists, journalists, attorneys, and others at the U.S.-Mexico border, with evidence suggesting that their social media profiles had been reviewed by the government.
DHS and CBP released this PIA only three weeks after that scandal broke.

Chilling Effect on Free Speech

CBP’s social media surveillance poses a risk to the free expression rights of social media users. The PIA claims that CBP is only monitoring public social media posts, and thus “[i]ndividuals retain the right and ability to refrain from making information public or, in most cases, to remove previously posted information from their respective social media accounts.” While social media users retain control of their privacy settings, CBP’s policy chills free speech by causing people to self-censor—including curbing their public expression on the Internet for fear that CBP could collect their PII for discussing a topic of interest to CBP. Additionally, people running anonymous social media accounts might be afraid that collected PII could lead to their true identities being unmasked, even though the Supreme Court has long held that anonymous speech is protected by the First Amendment.

This chilling effect is exacerbated by the fact that CBP does not notify users when their PII is collected. CBP also may share information with other law enforcement agencies, which could result in immigration consequences or being added to a government watchlist. Finally, CBP’s definition of situational awareness is broad, and includes “information gathered from a variety of sources that, when communicated to emergency managers and decision makers, can form the basis for incident management decision making.”

We have seen this chilling effect play out in real life. Only three weeks before DHS and CBP released this PIA, NBC7 San Diego broke the story that CBP, along with other DHS agencies, created a secret database of 59 activists, journalists, and attorneys whom the government flagged for additional screening at the U.S. border because they were allegedly associated with the migrant caravan. Dossiers on certain individuals included pictures from social media and notations of designations such as “administrator” of a Facebook group providing support to the caravan, indicating that the government had surveilled their social media profiles. As one lawyer stated, “It has a real chilling effect on people who might go down [to the border].” A journalist who was on the list of 59 individuals said the “increased scrutiny by border officials could have a chilling effect on freelance journalists covering the border.”

EFF joined a coalition letter to the DHS acting secretary about CBP’s secret dossiers. Several senators wrote a follow-up letter [.pdf]. In May, CBP finally admitted to targeting journalists and others at the border, but justified its actions by claiming, without evidence, that journalists had “some level of participation in the violent incursion events.” In July, the DHS Inspector General [.pdf] informed the senators that her office would be launching an investigation into the circumstances surrounding the creation of the secret dossiers. She also indicated that the investigation will look into “other specific allegations of targeting and/or harassment of lawyers, journalists, and advocates, and evaluate whether CBP’s actions complied with law and policy.”

CBP’s Practices Don’t Mitigate Risks to Free Speech

The PIA claims that any negative impacts on free speech of social media surveillance are mitigated by both CBP policy and the Privacy Act’s prohibition on maintaining records of First Amendment activity. Yet these supposed safeguards ultimately provide little protection.
First Amendment

The PIA emphasizes that CBP personnel are trained to “use a balancing test” to determine whether social media information presents a “credible threat”—as opposed to First Amendment-protected speech—and thus may be collected. According to the PIA, the balancing test involves gauging “the weight of a First Amendment claim, the severity of the threat, and the credibility of the threat.” However, this balancing test has no basis in constitutional law. The Supreme Court has a long line of decisions that have established when speech rises to the level of a true threat or incitement to violence and is thus unprotected by the First Amendment.

In Watts v. United States (1969), the Supreme Court held that under the First Amendment only “true threats” may be punishable. The Court stated that alleged threats must be viewed in context, and noted that in the “political arena” in particular, language “is often vituperative, abusive, and inexact.” Thus, the Court further held that “political hyperbole” is not a true threat. In Elonis v. United States (2015), the Supreme Court held that an individual may not be criminally prosecuted for making a true threat based only on an objective test of negligence, i.e., whether a reasonable person would have understood the communication as a threat. Rather, the defendant’s subjective state of mind must be considered, including whether he intended to make a threat or knew that his statement would be viewed as a threat. (The Court left open whether a recklessness standard would also be sufficient for the speech to fall out of First Amendment protections.)

Additionally, in Brandenburg v. Ohio (1969), the Supreme Court held that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” There, the Court struck down an Ohio law that penalized individuals who advocated for violence to accomplish political reform, holding that the abstract advocacy of violence “is not the same as preparing a group for violent action and steeling it to such action.” In Hess v. Indiana (1973), the Court further clarified that speech that is mere “advocacy of illegal action at some indefinite future time,” is “not directed to any person or group of persons,” and is unsupported by evidence or rational inference that the speaker’s words were “intended to produce, and likely to produce, imminent disorder,” remains protected by the First Amendment. Similarly, the Court in NAACP v. Claiborne Hardware Co. (1982) held that “[a]n advocate must be free to stimulate his audience with spontaneous and emotional appeals for unity and action in a common cause. When such appeals do not incite lawless action, they must be regarded as protected speech.”

While the PIA states that CBP considers threatening posts to be those that “infer an intent, or incite others, to do physical harm or cause damage, injury, or destruction,” the PIA does not fully embrace the nuances of the Supreme Court’s jurisprudence—and CBP’s balancing test fails to comport with constitutional law. A seemingly threatening social media post may, in fact, be protected by the First Amendment if it is political hyperbole or other contextual facts suggest that the speaker did not intend to make a threat or did not believe that readers would view the post as a threat.
Furthermore, a social media post that advocates for violence against CBP facilities or personnel may nevertheless be protected by the First Amendment if it is not directed at any particular person or group, and evidence does not reasonably indicate that the speaker intended to incite imminent violence or illegal action, or that imminent violence or illegal action is likely to result from the speech. Thus, CBP may be collecting social media information and related PII even when the speech is protected by the First Amendment—contrary to its own policy—and further contributing to the chilling effect of CBP’s social media surveillance program.

Privacy Act

The PIA also mentions the Privacy Act, a federal law that establishes rules about what type of information the government can collect and keep about U.S. persons. In particular, the PIA points to 5 U.S.C. § 552a(e)(7), the prohibition against federal agencies maintaining records “describing how any individual exercises rights guaranteed by the First Amendment.” Unfortunately, this prohibition is followed by an exception that effectively swallows the rule—that information about First Amendment activity may be collected if it is “pertinent to and within the scope of an authorized law enforcement activity.”

In Raimondo v. FBI, a Privacy Act case currently before the Ninth Circuit, the FBI kept surveillance files for “threat assessments” on two individuals who ran an antiwar website. EFF argued in an amicus brief against an expansive interpretation of the Privacy Act’s law enforcement activity exception in light of modern technology—specifically, given the ease with which law enforcement can collect, store, and share information about First Amendment activity on the internet, such information should not be stored “in government files in perpetuity when the record is not relevant to an active investigation.” We reminded the Ninth Circuit that in MacPherson v. I.R.S. (1986), the court recognized that “even ‘incidental’ surveillance and recording of innocent people exercising their First Amendment rights may have a ‘chilling effect’ on those rights that (e)(7) [of the Privacy Act] was intended to prohibit.” Raimondo demonstrates the seemingly limitless nature of the law enforcement activity exception, including allowing for the indefinite retention of records of online activism and journalism, activity that is clearly protected by the First Amendment.

Similarly, under this PIA, because CBP follows a “credible threat” assessment not rooted in the First Amendment and the Privacy Act’s law enforcement activity exception can be interpreted broadly, CBP could very well collect and retain information that is protected by the First Amendment.

Unidentified Government Social Media Profiles Pose Risk to User Privacy

The PIA inspires little confidence not only in DHS and CBP’s interpretation of the law related to protected speech, but also in CBP personnel’s ability to follow the agencies’ own policies related to respecting social media users’ privacy. The PIA states that CBP personnel “may conceal their identity when viewing social media for operational security purposes,” effectively allowing CBP agents to create fake accounts.
However, this provision conflicts with DHS’s 2012 directive, which requires employees to “[u]se online screen names or identities that indicate an official DHS affiliation and use DHS email addresses to open accounts used when engaging in social media in the performance of their duties.” Moreover, if, as the PIA states, CBP personnel do not engage with other social media users and may only monitor “publicly available, open source social media,” it raises the question: why would a CBP agent need to create a fake account? Public posts or information are equally available to all social media users on a platform. Why would CBP personnel need to conceal their identity before viewing a publicly available post if they are not attempting to engage with a user?

This concern is backed by past practices where DHS agencies used fake profiles and interacted with users during the course of monitoring their social media activity. Earlier this year, journalists revealed that U.S. Immigration and Customs Enforcement (ICE) officers created fake Facebook and LinkedIn profiles to lend legitimacy to a sham university intended to identify individuals allegedly engaged in immigration fraud. There, ICE officers friended other users and exchanged emails with students, thereby potentially bypassing social media privacy settings and gaining access to information intended to remain private. Such practices not only violate DHS’s existing policies, but also allow law enforcement to obtain access to content that would otherwise require a probable cause warrant.

Furthermore, fake profiles violate the policies of several social media platforms. Facebook has publicly stated that law enforcement impersonator profiles violate the company’s terms of service.

Fighting Back

The CBP PIA is just one sliver of a broad federal government campaign to engage in social media surveillance. DHS, through its National Operations Center, has been monitoring social media for “situational awareness” since at least 2010. DHS also has been monitoring social media for intelligence gathering purposes. More recently, DHS and the State Department have greatly expanded social media surveillance to vet visitors and immigrants to the U.S., which EFF and other civil society groups have consistently opposed.

Several congressional committees have the responsibility and the opportunity to review CBP’s budget and provide oversight of the agency’s operations, including its social media surveillance. At a minimum, EFF urges these committees to ensure that CBP is following DHS’s own policies and is reporting, both to Congress and the public, how often officers are engaging in social media monitoring to understand the prevalence and scale of this program. Fundamentally, Congress should be asking why social media surveillance programs are necessary for public safety. Additionally, Congress has the responsibility to ensure that CBP and DHS are abiding by settled case law respecting the free speech and privacy rights of Americans and foreign travelers.

We’re also pushing social media companies to do more when they identify law enforcement impersonator profiles at the local, state, and federal level. Earlier this year, Facebook’s legal staff demanded that the Memphis Police Department “cease all activities on Facebook that involve the use of fake accounts or impersonation of others.” Additionally, Facebook updated its “Information for Law Enforcement Authorities” page to highlight how its misrepresentation policy also applies to police.
While EFF applauds these steps, we are skeptical that warnings or policy changes alone will deter the activity. Facebook says it will delete accounts brought to its attention, but too often these accounts only become publicly known—through a lawsuit or a media report—long after the damage has been done. Instead, EFF is calling on Facebook to take specific steps to provide transparency into these law enforcement impersonator accounts by notifying users who have interacted with these accounts, following the Santa Clara Principles when removing the law enforcement accounts, and adding notifications to agencies’ Facebook pages to inform the public when the agencies’ policies permit impersonator accounts in violation of Facebook’s policy.

Please contact your members of Congress and urge them to hold CBP accountable. Members of Congress depend on hearing from their constituents to know where to focus, and public pressure can ensure that social media surveillance won’t get overlooked.

'IBM PC Compatible': How Adversarial Interoperability Saved PCs From Monopolization (Mon, 05 Aug 2019)
Adversarial interoperability is what happens when someone makes a new product or service that works with a dominant product or service, against the wishes of the dominant business. Though there are examples of adversarial interoperability going back to early phonograms and even before, the computer industry has always especially relied on adversarial interoperability to keep markets competitive and innovative. This used to be especially true for personal computers.

From 1969 to 1982, IBM was locked in battle with the US Department of Justice over whether it had a monopoly over mainframe computers; but even before the DOJ dropped the suit in 1982, the computing market had moved on, with mainframes dwindling in importance and personal computers rising to take their place.

The PC revolution owes much to Intel's 8080 chip, a cheap processor that originally found a market in embedded controllers but eventually became the basis for early personal computers, often built by hobbyists. As Intel progressed to 16-bit chips like the 8086 and 8088, IBM entered the PC market with its first personal computer, which quickly became the de facto standard for PC hardware. There are many reasons that IBM came to dominate the fragmented PC market: they had the name recognition ("No one ever got fired for buying IBM," as the saying went) and the manufacturing experience to produce reliable products.

IBM's success prompted multiple manufacturers to enter the market, creating a whole ecosystem of Intel-based personal computers that competed with IBM. In theory, all of these computers could run MS-DOS, the Microsoft operating system adapted from 86-DOS, which it acquired from Seattle Computer Products. But in practice, getting MS-DOS to run on a given computer required quite a bit of tweaking, thanks to differences in controllers and other components. When a computer company created a new system and wanted to make sure it could run MS-DOS, Microsoft would refer the manufacturer to Phoenix Software (now Phoenix Technologies), Microsoft's preferred integration partner, where a young software-hardware wizard named Tom Jennings (creator of the pioneering networked BBS software FidoNet) would work with Microsoft's MS-DOS source code to create a custom build of MS-DOS that would run on the new system.

While this worked, it meant that major software packages like Visicalc and Lotus 1-2-3 would have to release different "PC-compatible" versions, one for each manufacturer's system. All of this was cumbersome, error-prone, and expensive, and it meant, for example, that retailers would have to stock multiple, slightly different versions of each major software program (this was in the days when software was sold from physical retail locations, on floppy disks packaged in plastic bags or shrink-wrapped boxes).

The PC marked a departure for IBM from its usual business practice of pursuing advantage by manufacturing entire systems, down to the subcomponents. Instead, IBM decided to go with an "open" design that incorporated the same commodity parts that the existing PC vendors were using, including MS-DOS and Intel's 8086 chip. To accompany this open hardware, IBM published exhaustive technical documentation that covered every pin on every chip, every way that programmers could interact with IBM's firmware (analogous to today's "APIs"), as well as all the non-standard specifications for its proprietary ROM chip, which included things like the addresses where IBM had stored the fonts it bundled with the system.
As IBM's PC became the standard, rival hardware manufacturers realized that they would have to create systems that were compatible with IBM's systems. The software vendors were tired of supporting a lot of idiosyncratic hardware configurations, and IT managers didn't want to have to juggle multiple versions of the software they relied on. Unless non-IBM PCs could run software optimized for IBM's systems, the market for those systems would dwindle and wither.

Phoenix had an answer. They asked Jennings to create a detailed specification that included the full suite of functions on IBM's ROMs, including the non-standard features that IBM had documented but didn’t guarantee in future versions of the ROM. Then Phoenix hired a "clean-room team" of programmers who had never written Intel code and had never interacted with an IBM PC (they were programmers who specialized in developing software for the Texas Instruments 9900 chip). These programmers turned Jennings's spec into the software for a new, IBM-PC-compatible ROM that Phoenix created and began to sell to IBM's rivals. These rivals could now configure systems with the same commodity components that IBM used, and, thanks to Phoenix's ROMs, could also support the same version of MS-DOS and the same application programs that ran on the IBM PC.
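To make the clean-room pattern concrete, here is a minimal, purely hypothetical sketch in Python. The real work involved x86 assembly and ROM firmware, not Python, and every name, address, and behavior below is invented for illustration. One person, who has studied the original system, writes down only its externally documented behavior; a second person, who has never seen the original code, implements from that description alone:

    # Hypothetical sketch of a clean-room workflow; these are not IBM's or
    # Phoenix's actual interfaces, and the address below is invented.

    # Step 1: the spec author, who HAS examined the original system's
    # documented behavior, records only that behavior (the role played
    # by Jennings's specification).
    SPEC = """
    font_glyph_offset(code): for a character code 0-255, return the offset
    of that character's 8-byte glyph. The font table begins at offset
    0x1000 (an invented address) and glyphs are stored consecutively.
    """

    # Step 2: the clean-room implementer, who has NEVER seen the original
    # code, implements from SPEC alone.
    FONT_TABLE_BASE = 0x1000  # taken from the spec, never from the original code

    def font_glyph_offset(code: int) -> int:
        """Return the glyph offset that SPEC promises for a character code."""
        if not 0 <= code <= 255:
            raise ValueError("character code must be in 0-255")
        return FONT_TABLE_BASE + 8 * code

    # Step 3: compatibility is checked against the spec's promises, not
    # against the original implementation.
    assert font_glyph_offset(0) == 0x1000
    assert font_glyph_offset(65) == 0x1000 + 8 * 65  # glyph for 'A'

The point of the split is that the finished implementation contains no copied expression from the original ROM; everything it "knows" arrived through the written specification.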
So it was that IBM, a company that had demonstrated its expertise in cornering and dominating computing markets, was not able to monopolize the PC. Instead, dozens of manufacturers competed with it, extending the basic IBM architecture in novel and innovative ways, competing to find ways to drive down prices, and, eventually, giving us the modern computing landscape. Phoenix's adversarial interoperability meant that IBM couldn't exclude competitors from the market, even though it had more capital, name recognition, and distribution than any rival. Instead, IBM was constantly challenged and disciplined by rivals who nipped at its heels, or even pulled ahead of it.

Today, computing is dominated by a handful of players, and in many classes of devices, only one vendor is able to make compatible systems. If you want to run iPhone apps, you need to buy a device from Apple, a company that is larger and more powerful than IBM was at its peak. Why have we not seen an adversarial interoperability incursion into these dominant players' markets? Why are there no iPhone-compatible devices that replicate Apple's APIs and run their code?

In the years since the PC wars, adversarial interoperability has been continuously eroded. In 1986, Congress passed the Computer Fraud and Abuse Act, a sweeping "anti-hacking" law that Facebook and other companies have abused to obtain massive damages based on nothing more than terms-of-service violations. In 1998, Congress adopted the Digital Millennium Copyright Act, whose Section 1201 threatens those who bypass "access controls" for copyrighted works (including software) with both criminal and civil sanctions; this has become a go-to legal regime for threatening anyone who expands the functionality of locked devices, from cable boxes to mobile phones. Software patents were almost unheard of in the 1980s; in recent years, the US Patent and Trademark Office's laissez-faire attitude to granting software patents has created a patent thicket around the most trivial of technological innovations.

Add to these other doctrines like "tortious interference with contract" (which lets incumbents threaten competitors whose customers use new products to get out of onerous restrictions and terms of service), and it's hard to see how a company like Phoenix could make a compatible ROM today. Such an effort would have to contend with clickthrough agreements; encrypted software that couldn't be decompiled without risking DMCA 1201 liability; bushels of low-quality (but expensive-to-litigate) software patents; and other threats that would scare off investors and partners. And things are getting worse, not better: Oracle has convinced an appeals court to ban API reimplementations, which would have stopped Phoenix's ROM project dead in its tracks.

Concentration in the tech sector is the result of many factors, including out-of-control mergers, but as we contemplate ways to decentralize our tech world, let's not forget adversarial interoperability. Historically, adversarial interoperability has been one of the most reliable tools for fighting monopoly, and there's no reason it couldn't play that role again, if only we'd enact the legal reforms needed to clear the way for tomorrow's Phoenix Computers and Tom Jenningses.

Update: We have corrected this post to remove inaccurate chronology for IBM's PC launch.

Images below: IBM PC Technical Reference, courtesy of Tom Jennings, licensed CC0.

ICE’s Rapid DNA Testing on Migrants at the Border Is Yet Another Iteration of Family Separation (Fri, 02 Aug 2019)
As the number of migrants at the southern border has surged in the past several months, the Trump administration has turned to increasingly draconian measures as a form of deterrence. While the separation of children from their parents and the housing of migrants in overcrowded and ill-equipped holding facilities have rightfully made front-page headlines, the administration’s latest effort—to conduct Rapid DNA testing on migrant families at the border—has flown under the radar. However, this new tactic presents serious privacy concerns about the collection of biometric information from one of the most vulnerable populations in the U.S. today—and raises questions of where this practice could lead.

Background

In May 2019, CNN reported that Immigration and Customs Enforcement (ICE) was launching a pilot program to conduct Rapid DNA testing on families at the U.S.-Mexico border. The purpose of the pilot program was to identify and prosecute individuals who were not related through a biological parent-child relationship. The pilot program was confirmed as a joint operation between ICE and Customs and Border Protection (CBP) at two locations at the border. The government contracted with ANDE, a Massachusetts-based Rapid DNA testing company, to conduct the Rapid DNA testing for the pilot program. Later that month, ICE released a Request for Proposal seeking a contractor to expand the Rapid DNA testing program for ten months at seven locations at the U.S.-Mexico border. In mid-June, Bode Cellmark Forensics, Inc. was awarded the Rapid DNA testing expansion contract for $5.2 million.

On June 25, 2019, the U.S. Department of Homeland Security (DHS) and ICE released a Privacy Impact Assessment (PIA) on Rapid DNA Operational Use [.pdf], stating that:

- The issue of “family unit fraud” has been increasing since the spring of 2018, and such fraud “can lead to, or stem from, other crimes, including immigration violations, identity and benefit fraud, alien smuggling, human trafficking, foreign government corruption, and child exploitation.”
- Rapid DNA testing to establish a biological parent-child relationship can be conducted in approximately 90 minutes without human review, unless there is an inconclusive result.
- Families subjected to the testing are provided with a privacy notice and consent form. The Rapid DNA test is voluntary, but “failure to submit to Rapid DNA testing may be taken into account as one factor in ICE’s assessment of the validity of the claimed parent-child relationship.”
- Rapid DNA testing will only be used to establish the biological parent-child relationship. After the testing results are returned, the vendor is required to destroy DNA samples and purge electronic data from the system.
- ICE’s initial planned use of Rapid DNA testing will involve migrant families at the border that agents suspect of family unit fraud; however, it may roll out the use of Rapid DNA more broadly in the future, including to lawful permanent residents and to situations beyond the border.

Problems with Rapid DNA Testing

In 2017, the Swedish National Forensic Centre published a report [.pdf] detailing serious problems with a Rapid DNA testing system, the RapidHit System by IntegenX, including:

[N]umerous issues with the system related to the hardware, firmware, software as well as the cartridges. The most severe issues are the retrieval of an incorrect DNA profile, PCR product or sample leakage and the low success rate.
In total 36% of the runs had problems or errors effecting two or more samples resulting in a 77% success rate for samples consisting of . . . amounts where complete DNA profiles are expected.

Although not the same system, the Swedish report’s characterization of the many problems with its Rapid DNA testing technology just two years ago raises questions about the accuracy of the Rapid DNA testing used by ICE. The PIA states that a biological parent-child match must be verified at 99.5% accuracy. But we don’t even know the baseline rate of success that these Rapid DNA testing companies have established: the government has provided no statistical information or peer-reviewed studies as to the testing’s accuracy. Nor is there any indication in the PIA of an appeals process if a test misidentifies a parent and child as biologically unrelated, implicating a lack of due process. When the stakes are so high—indefinite separation of a child from their parent—the government must have, at a minimum, a process to challenge designations of biological non-relation, as the rough arithmetic sketched below illustrates.
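To see what even a high verification threshold means at scale, here is a back-of-the-envelope sketch in Python. The 99.5% figure comes from the PIA; the test volume and the error rate assigned to it are invented assumptions, chosen only for illustration:

    # Back-of-the-envelope arithmetic, not real program data: the 99.5%
    # verification threshold is from the PIA; the volume and error rate
    # below are invented assumptions used only to show how base rates work.

    families_tested = 10_000      # hypothetical number of true parent-child pairs screened
    false_negative_rate = 0.005   # assume 0.5% of true pairs are wrongly rejected
                                  # (the flip side of a 99.5% verification rate)

    wrongly_flagged = families_tested * false_negative_rate
    print(f"True families flagged as unrelated: {wrongly_flagged:.0f}")
    # Output: True families flagged as unrelated: 50

Under those assumptions, dozens of genuine parents would be flagged as unrelated to their own children, and without an appeals process each such error could mean a separated family.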
Continued Erosion of Privacy Rights

Rapid DNA testing of migrant families also raises serious concerns about the Fourth Amendment, which protects against searches and seizures—in this case, of DNA—without a warrant. The government claims that it retains authority to conduct Rapid DNA testing from 8 U.S.C. § 1357(b), which allows ICE to “take and consider evidence concerning the privilege of any person to enter, reenter, pass through, or reside in the United States.” However, this statute generally has applied to “evidence” such as inspecting entry documents, not conducting invasive DNA testing.

In fact, recent actions taken by Congress indicate that ICE doesn’t have the statutory authority to conduct Rapid DNA testing. In 2016, Congress failed to pass H.R. 5203, which proposed language that would require DNA tests to verify family relationships for visa petitions. Similar language appeared in two rejected immigration bills put before Congress in 2018, H.R. 4760 and H.R. 6136. Just this week, Rep. Lance Gooden introduced a bill [.pdf] that would amend federal law to require DNA testing for adults entering the country with accompanying children to determine a familial relationship. Congress’s repeated attempts to amend federal law to give ICE statutory authority to conduct DNA testing indicate that ICE currently doesn’t possess such authority.

The government relies on the fact that these tests are “voluntary” and that migrants are provided with consent forms and privacy notices prior to being tested. However, the consent forms note that if families decide to opt out of the Rapid DNA testing, that could factor into a decision of whether or not to separate parent from child in immigration detention. Consent is meaningless when families are threatened with indefinite separation if they don’t provide it.

Despite the PIA’s claims that DNA profiles are not stored and that tests are immediately destroyed after the results return, there still exist serious concerns about the amount of personal information collected through Rapid DNA testing. Just last month, the Washington Post reported a massive data breach when face and license plate images held by a CBP subcontractor, Perceptics, were hacked. CBP claimed that Perceptics kept the data in its own database and was trying to match faces to license plates at the time of the breach, both practices contrary to CBP policy. The threats of government (or subcontractor) misconduct and breaches further jeopardize the vast amount of personal information at stake as a result of Rapid DNA testing.

What’s a Family?

Perhaps the most fundamental question that Rapid DNA testing at the border raises is one that technology cannot solve: what constitutes a family? One of the reasons the government is employing Rapid DNA testing is to show a lack of a biological parent-child relationship, which it then uses to justify separating the adult and child in different detention facilities. While the court-ordered Flores settlement governs how long ICE can hold minors in government custody (which immigrant rights attorneys allege the federal government is currently violating), no such settlement governs the detention of adults. The Trump administration believes that the threat of indefinite detention will act as a deterrent to migration across the southern border.

But the federal government’s narrow interpretation of a family as a biological parent-child relationship has no basis in immigration law. In fact, a person may apply for asylum and claim as a derivative a child born in or out of wedlock, a stepchild, or an adopted child. And the government generally encourages a broad definition of a parent-child relationship. For example, under common law, there is a legal presumption of paternity if the child is born in wedlock or if there is otherwise no dispute. This presumption exists even though the rate of people in the U.S. biologically unconnected to at least one of their parents—due to adoption, infidelity, or other circumstances—is fairly significant. With the introduction of Rapid DNA testing on migrant families at the border, the government is challenging familial relationships, including the presumption of paternity, thereby imposing a different standard on immigrants than on citizens.

Window to a Dystopian Future

The Trump administration is creating its very own dystopia at the border, not only by separating families and caging migrants, but also by conducting Rapid DNA testing on one of the most vulnerable groups in the country. ICE is working quietly and quickly to expand the program, having gone from a pilot to a privacy notice for an ostensibly agency-wide program in less than two months. And the PIA foreshadows what could be ahead: Rapid DNA testing of lawful permanent residents and in contexts outside of the border.

EFF encourages the public to contact their congressional representatives and voice their concerns about ICE’s Rapid DNA testing program. We cannot allow the federal government to exploit migration as an excuse to erode fundamental privacy rights through DNA testing.

Related Cases: Maryland v. King; Federal DNA Collection

In Ecuador, Political Actors Must Step Away From Ola Bini’s Case (Fri, 02 Aug 2019)
After spending nearly a week in Ecuador to learn more about the case against Swedish open source software developer Ola Bini, who was arrested here in April, EFF has found a clear consensus among the experts: the political consequences of his arrest appear to be outweighing any actual evidence the police have against him. The details of who stood to benefit from Bini's prosecution varied depending on who we spoke with, but overall we have been deeply disturbed by how intertwined the investigation is with the political effects of its outcome. Ola Bini’s innocence or guilt is a fact that should be determined only by a fair trial that follows due process; it should in no way be impacted by potential political ramifications.

[Embedded video (served from youtube.com): EFF press conference on the Ola Bini case, August 2nd.]

Since EFF was founded in 1990, we have frequently stepped in to defend security researchers from misunderstandings made by law enforcement, and raised awareness when technologists in the United States have been incarcerated. And last year, we launched a new Coders’ Rights in Latin America project, which seeks to connect the work of security research with the fundamental rights of its practitioners. While security researchers play a vital role in fixing flaws in the software and hardware that everyone uses, their actions and behaviors are often misunderstood. For example, as part of their work, they may discover and inform a company of a dangerous software flaw—a civic duty that could be confused with a hacking attack.

When we first began analyzing Ola Bini’s case, we thought this was what had happened. The so-called “evidence” presented after his arrest—which included USB sticks, security keys, and books on programming—suggested this might be the case. Of course, owning such things is not a crime, but together, they can seem suspicious to an authority who isn’t in the know. But, as the case progressed, questions arose that we could not answer from California, which is why we traveled to Ecuador to better understand what was happening. This week, three members of our team met directly with those involved and others familiar with the Ecuadorian criminal justice system, to get a clearer sense of what’s happened in the case.

Ola Bini is known globally as a computer security expert; he is someone who builds secure tools and contributes to free software projects. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and most recently, Ola co-founded a non-profit organization devoted to creating user-friendly security tools.

[Photo: Ola Bini, a young man in a suit and tie, wearing a black hat.]

What Ola is not known for, however, is conducting the kind of security research that could be mistaken for an “assault on the integrity of computer systems,” the crime for which he is being investigated.
Furthermore, the lack of details about this alleged hacking attack is a point of confusion for EFF. If someone breaks into a house, and authorities arrest a suspect, the prosecution should at the very least be able to tell you which house was broken into. The same principle should apply in the digital world. Ola Bini has been facing prosecution now for nearly four months, and we still haven’t been told what systems he is supposed to have broken into, or any other details of his alleged criminal behavior.

After being in Quito for a week and speaking to journalists, politicians, lawyers, academics, and Ola and his defense team—and extending invitations to Interior Minister María Paula Romo and Diana Salazar Mendez to meet with us—we believe we have a better picture of what is happening. In brief, based on the interviews that we have conducted this week, our conclusion is that Bini's prosecution is a political case, not a criminal one. Bini's lawyers told us that they have counted 65 violations of due process so far during the trial, and the Habeas Corpus decision confirmed the weakness of the initial detention. Journalists have told us that no one is able to provide them with concrete descriptions of what he has done. And we know that while Ola Bini’s behavior and contacts in the security world may look strange to authorities, his computer security expertise is not a crime.

We urge political actors of all sides to step away from this case, and to allow justice to be done. If they refuse, they risk damaging the reputation of Ecuador’s judicial system abroad, and violating international human rights standards as defined within the Inter-American system for the protection of human rights.

EFF at Vegas Security Week (Thu, 01 Aug 2019)
EFF is back this year at Vegas Security Week, sometimes affectionately known as Hacker Summer Camp. Stop by our booths at BSides, Black Hat, and DEF CON to find out about the latest developments in protecting digital freedom, sign up for our action alerts and mailing list, and donate to become an EFF member. We'll also have our limited-edition DEF CON 27 shirts available. These shirts have a puzzle incorporated into the design—try your hand at cracking it!

BSides Las Vegas 2019
August 6-7, 2019, Tuscany Suites and Casino
Booth Location: Chillout Room

Black Hat Briefings USA 2019
August 7-8, 2019, Mandalay Bay
Booth Location: Business Hall

DEF CON 27
August 8-11, 2019, Paris, Bally's, and Planet Hollywood Casinos
Booth Location: Vendor Hall

As in past years, EFF staff attorneys will be present to help support the community. If you have legal concerns regarding an upcoming talk or sensitive InfoSec research that you are conducting at any time, please email info@eff.org and we will do our best to assist you. In addition to visiting the booth, you can attend EFF staff talks at each of the conferences on a variety of topics related to digital security and online rights. Check out the schedule of events below.

BSides Schedule

BSidesLV 2019: Ask the EFF
August 6, 2019 - 6:00pm to 6:55pm
Location: Underground Track
"Ask the EFF" will be a panel presentation and question-and-answer session, featuring Kurt Opsahl, Deputy Executive Director and General Counsel; Eva Galperin, Director of Cybersecurity; Nathan 'nash' Sheard, Grassroots Advocacy Organizer; and India McKinney, Legislative Analyst. It's your chance to ask EFF questions about law and technology issues that are important to you.

BSidesLV 2019: Why Can't We Be Friends (Ask a Fed & the EFF)
August 7, 2019 - 5:00pm to 5:55pm
Location: Ground 1234
Do you dance madly on the lip of the volcano regarding your own research, or would you like to research a particular topic that you feel might have a non-desirable personal outcome? Do you know someone who does these things? If so, you should come to this session and learn about some new processes and relationships researchers can benefit from. Bring your questions to Kurt Opsahl from EFF and Russell Handorf of the FBI.

Black Hat Schedule

Black Hat Briefings 2019: Hacking for the Greater Good - Empowering Technologists to Strengthen Digital Society
August 7, 2019 - 11:15am to 12:05pm
Location: South Seas CDF
Track: Community
We're at a critical juncture right now where the benefits from technological advances are increasingly counterbalanced by harmful applications and perilous consequences. To address these issues, we need the critical thinking, creativity, and passion that ethical hackers and technologists use to strengthen cybersecurity applied to social causes and protecting the public interest. In this panel, security technologist Bruce Schneier, Mozilla Fellow and Graphika Chief Innovation Officer Camille Francois, and EFF Director of Cybersecurity Eva Galperin will discuss specific examples where public interest technologists are most needed to ensure an open, positive, and safe digital society, and provide suggestions for what hackers and security-forward companies can do to solve some of the biggest social problems we have and make a difference.
Black Hat Briefings 2019: Speak Tech to Power - Working with Congress on Tech Policy
August 7, 2019 - 11:15am to 12:05pm
Location: South Pacific HI, Lower Level, North Hall
Track: Community Workshops
Election security vulnerabilities, data breaches, encryption backdoors: lawmakers have plenty of ideas about where the problems are in technology policy and even more ideas about how to fix them. How will these fixes impact the infosec community, and are they even the right solutions? Come hear Deputy Executive Director and General Counsel Kurt Opsahl, Legislative Analyst India McKinney, and Technologists Jeremy Gillula and Andrés Arrieta discuss the proposals we're tracking and how we can work together to inform the legislative process.

DEF CON 27 Schedule

DEF CON 27: r00tz Asylum Opening Ceremonies
August 9, 2019 - 10:00am to 10:15am
Contest Location: Planet Hollywood, The Studio
r00tz Asylum at DEF CON is a safe and creative space for kids to learn white-hat hacking from the leading security researchers from around the world. Through hands-on workshops and contests, DEF CON's youngest attendees understand how to safely deploy the hacker mindset in today's increasingly digital, vulnerability-prone world. Only after mastering the honor code do kids learn reverse engineering, soldering, lock-picking, cryptography, and how to responsibly disclose security bugs. r00tz's mission is to empower the next generation of technologists and inventors to make the future of our digital world safer. Attend the opening ceremonies to hear from Nic0, y0rk, & EFF's General Counsel Kurt Opsahl.

DEF CON 27: EFF Tech Trivia
August 9, 2019 - 5:00pm to 7:00pm
Contest Location: Contest Stage, Planet Hollywood Mezzanine
EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and Internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Cup and EFF swag pack. The second and third place teams will also win great EFF gear.

DEF CON 27: WISP Leadership Panel
August 9, 2019 - 6:30pm to 7:00pm
Location: Caesars Palace Suite
Join Women in Security and Privacy (WISP), Women in CyberSecurity (WiCyS), and BSides Las Vegas for an informal networking mixer and leadership panel in one of the biggest suites Vegas has to offer. Attendees can expect food, drink, swag, and an opportunity to hear from some stellar women in security. RSVP is required for entry.

DEF CON 27: Meet the EFF - Meetup Panel
August 10, 2019 - 8:00pm to 10:00pm
Location: Fireside Lounge at Planet Hollywood
Join EFF staffers for a candid chat about how the law is racing to catch up with technological change. Then, meet representatives from Electronic Frontier Alliance allied community and campus organizations from across the country. These technologists and advocates are working within their communities to educate and empower their neighbors in the fight for data privacy and digital rights. This discussion will include updates on current EFF issues, cases and legislation affecting security research, and much more. Half the session will be given over to question-and-answer, so it's your chance to ask EFF questions about the law, surveillance, and technology issues that are important to you.

Want to keep up with EFF's efforts to secure a better digital future? Subscribe to Effector and check out our events calendar.
If you’re unable to visit us while we’re in Vegas and would like to support our work, consider becoming a member.

Google’s Plans for Chrome Extensions Won’t Really Help Security (Thu, 01 Aug 2019)
Note: Sam Jadali, the author of the DataSpii report referenced in this blog post, is an EFF Coders' Rights client. However, the information about DataSpii in this post is based entirely on public reports.

Last week we learned about DataSpii, a report by independent researcher Sam Jadali about the "catastrophic data leak" wrought by a collection of browser extensions that surreptitiously extracted their users' browsing history (and in some cases portions of visited web pages). Over four million users may have had sensitive information leaked to data brokers, including tax returns, travel itineraries, medical records, and corporate secrets. While DataSpii included extensions in both the Chrome and Firefox extension marketplaces, the majority of those affected used Chrome. Naturally, this led reporters to ask Google for comment. In response to questions about DataSpii from Ars Technica, Google officials pointed out that they have "announced technical changes to how extensions work that will mitigate or prevent this behavior." Here, Google is referring to its controversial set of proposed changes to curtail extension capabilities, known as Manifest V3.

As both security experts and the developers of extensions that will be greatly harmed by Manifest V3, we're here to tell you: Google's statement just isn't true. Manifest V3 is a blunt instrument that will do little to improve security while severely limiting future innovation. To understand why, we have to dive into the technical details of what Manifest V3 will and won't do, and what Google should do instead.

The Truth About Manifest V3

To start with, the Manifest V3 proposal won't do much about evil extensions extracting people's browsing histories and sending them off to questionable data aggregators. That's because Manifest V3 doesn't change the observational APIs available to extensions. (For extension developers, that means Manifest V3 isn't changing the observational parts of chrome.webRequest.) In other words, Manifest V3 will still allow extensions to observe the same data as before, including what URLs users visit and the contents of pages users visit. (Privacy Badger and other extensions rely on these observational APIs.) Additionally, Manifest V3 won't change anything about how "content scripts" work. Content scripts are pieces of JavaScript that allow extensions to interact with the contents of web pages: an important capability for delivering useful functionality, but also yet another way to extract user browsing data.

One change in Manifest V3 that may or may not help security is how extensions get permission to interact with websites. Under Manifest V3, users will be able to choose, when they're visiting a website, whether or not they want to give the extension access to the data on that website. Of course, it's not practical to have to approve an ad- or tracker-blocker or accessibility-focused extension every time you visit a new site, so Chrome will still allow users to give extensions permission to run on all sites. As a result, extensions that are designed to run on every website—like several of those involved in DataSpii—will still be able to access and leak data.

The only part of Manifest V3 that goes directly to the heart of stopping DataSpii-like abuses is banning remotely hosted code. You can't ensure extensions are what they appear to be if you give them the ability to download new instructions after they're installed.
But you don't need the rest of Google's proposed API changes to stop this narrow form of bad extension behavior.

Manifest V3 Crushes Innovation

What Manifest V3 does do is stifle innovation. Google keeps claiming that the proposed changes are not meant to "[prevent] the development of ad blockers." Perhaps not, but what they will do in their present form is effectively destroy powerful privacy and security tools such as uMatrix and NoScript. That's because a central part of Manifest V3 is the removal of a set of powerful capabilities that uMatrix, NoScript, and other extensions rely on to protect users (for developers, we're talking about request modification using chrome.webRequest). Currently, an extension with the right permissions can review each request before it goes out, examine and modify the request however it wants, and then decide to complete the request or block it altogether. This enables a whole range of creative, innovative, and highly customizable extensions that give users nearly complete control over the requests that their browser makes.

Manifest V3 replaces these capabilities with a narrowly defined API (declarativeNetRequest) that will limit developers to a preset number of ways of modifying web requests. Extensions won't be able to modify most headers or make decisions about whether to block or redirect based on contextual data. (For a concrete side-by-side comparison of the two APIs, see the sketch at the end of this post.) This new API appears to be based on a simplified version of Adblock Plus. If your extension doesn't work just like Adblock Plus, you will find yourself trying to fit a square peg into a round hole. If you think of a cool feature in the future that doesn't fit into the Adblock Plus model, you won't be able to build an extension around your idea unless you can get Google to implement the necessary support first. Good luck! Google doesn't have an encouraging track record of implementing functionality that extension developers want, and doing so isn't at the top of Google's own priority list. Legitimate use cases will never get a chance in Chrome for any number of reasons. Whether due to lack of resources or plain apathy, the end result will be the same—removing these capabilities means less security and privacy protection for Chrome's users.

For developers of ad- and tracker-blocking extensions, flexible APIs aren't just nice to have; they are a requirement. When particular privacy protections gain popularity, ads and trackers evolve to evade them. As a result, the blocking extensions need to evolve too, or risk becoming irrelevant. We've already seen trackers adapt in response to privacy features like Apple's Intelligent Tracking Prevention and Firefox's built-in content blocking; in turn, pro-privacy browsers and extensions have had to develop innovative new countermeasures. If Google decides that privacy extensions can only work in one specific way, it will be permanently tipping the scales in favor of ads and trackers.

The Real Solution? Enforce Existing Policies

In order to truly protect users, Google needs to start properly enforcing existing Chrome Web Store policies. Not only did it take an independent researcher to identify this particular set of abusive extensions, but the abusive nature of some of the extensions in the report has been publicly known for years. For example, HoverZoom was called out at least six years ago on Reddit. Unfortunately, the collection of extensions uncovered by DataSpii is just the latest example of an ongoing pattern of abuse in Chrome Web Store.
Extensions are bought out (or sometimes outright hijacked), and then updated to steal users' browsing histories and/or commit advertising fraud. Users complain, but nothing seems to happen. Often the extension is still available months later. The "Report Abuse" link doesn't seem to produce results, obfuscated code doesn't seem to trigger red flags, and no one responds to user reviews. "SHINE for reddit" stayed up for several years while widely known to be an advertising referrals hijacker that fetched and executed remote code. A study from 2015 demonstrated various real-world obfuscation and remote code execution techniques. A study from 2017 analyzed the volume of outgoing traffic to detect history leakage. The common thread here is that the Chrome Web Store does not appear to have the oversight to reject suspicious extensions.

The extensions swept up by DataSpii are not obscure by any measure. According to the DataSpii report, some of the extensions had anywhere from 800,000 to more than 1.4 million users. Is it too much to ask a company that makes billions in profit every year to prioritize reviewing all popular extensions? Had Google systematically started reviewing when the scope of Chrome Web Store abuse first became clear years ago, it would have been in a position to catch malicious extensions before they ever went live.

Ultimately, users need to have the autonomy to install the extensions of their choice to shape their browsing experience, and the ability to make informed decisions about the risks of using a particular extension. Better review of extensions in Chrome Web Store would promote informed choice far better than limiting the capabilities of powerful, legitimate extensions. Google could have banned remote code execution a long time ago. It could have started responding promptly to extension abuse reports. It could have invested in automated and manual extension review. Instead, after years of missed opportunities, Google has given us Manifest V3: a nineteen-page document with just one paragraph regarding remote code execution—the actual extension capabilities oversight that continues to allow malicious extensions to exfiltrate your browsing history.

The next time Google claims that Manifest V3 will be better for user privacy and security, don't believe the hype. Manifest V3 will do little to prevent the sort of data leaks involved in DataSpii. But it will curtail innovation and hurt the privacy and security of Chrome users.
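For readers who want to see the technical difference described above, here is a minimal, hypothetical sketch in TypeScript. The "tracker.example" domain and the rule details are invented for illustration, and a real extension would use one API or the other depending on its manifest version; the point is what kind of logic each approach allows.

```typescript
// Manifest V2 style: a blocking webRequest listener. The extension's own code
// sees every request and can apply arbitrary, contextual logic before deciding
// whether to allow or block it. (Assumes "webRequest" and "webRequestBlocking"
// permissions in a Manifest V2 manifest.)
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Any logic can live here: heuristics, per-site user settings, new
    // countermeasures against tracker evasion, and so on.
    const looksLikeTracker = details.url.includes("tracker.example"); // hypothetical domain
    return looksLikeTracker ? { cancel: true } : {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// Manifest V3 style: the extension hands Chrome a declarative rule up front,
// and Chrome evaluates it internally. The extension's code never sees the
// request, and only the rule shapes Google has predefined are available.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
      condition: {
        urlFilter: "tracker.example", // hypothetical
        resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
      },
    },
  ],
});
```

The difference matters because everything interesting in the first sketch, namely the ability to write new logic per request, simply has no home in the second.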

The T-Mobile and Sprint Merger Is Blatantly Anticompetitive (Wed, 31 Jul 2019)
There is no saving grace in the federal government's approval of what is on its face an illegal horizontal merger between T-Mobile and Sprint. The wireless market is already highly concentrated according to the Department of Justice's own guidelines, and this merger only exacerbates the problem. Mergers that bring extreme levels of concentration are supposed to be blocked. No supposed benefit to consumers is actually waiting on this merger, including any and all claims about 5G. Here's what this merger really means: people will have fewer choices for wireless services, at higher prices, while innovation suffers. It was not that long ago when the DOJ said that mergers that shrunk a highly concentrated market from four competitors to three "significantly harmed" consumers per its own antitrust guidelines. What could possibly be different about this merger?

Ignoring Its Own Guidelines

To approve this deal, the DOJ had to ignore its own guidelines. The traditional scrutiny applied to these kinds of mergers under the guidelines is to measure the market share of the relevant companies and combine them using a formula that indicates concentration levels in a market. That formula, called the Herfindahl-Hirschman Index (HHI), is calculated by squaring the market share percentages of each of the companies in a market and then adding them together. A complete monopoly has an HHI of 100², or 10,000, while a highly competitive market can have a score close to 1. For the U.S. wireless broadband market in 2018, the calculation looks like this:

AT&T (34.5)² + Verizon (34.6)² + T-Mobile (17.8)² + Sprint (13.1)² = 2875.86

This is considerably higher than 2500, the level at which the DOJ considers a market to be highly concentrated. When two companies are merging, especially in already highly concentrated markets, the guidelines say that a merger that raises the total HHI by 100 to 200 points raises "significant competitive concerns," and mergers that raise the index by more than 200 points "will be presumed to be likely to enhance market power." The T-Mobile and Sprint merger blasts past the red zone and raises the market concentration numbers by 466 points nationally (the sketch below walks through the arithmetic). By some estimates that number exceeds 1,000 points in some major metropolitan markets. This merger's anti-competitive warnings are effectively off the charts.

5G Hype Does Not Make an Illegal Merger Legal

The Sprint-T-Mobile merger has been the subject of a lot of 5G hype. EFF has called attention to the political leveraging of 5G before, and this merger is the perfect example of how it can be weaponized to blow holes in consumer protection laws. From the outset, Sprint and T-Mobile repeatedly overpromised, claiming the merger would bring 5G wireless services to all Americans. The companies' argument is that Americans must accept fewer choices at higher prices if they want to see these new services. This is just untrue. 5G services can reach U.S. Internet users without the merger. The means of delivering those services is government-regulated licenses, and those licenses can be modified with new policies to promote competition and access. In particular, instead of approving anti-competitive mergers, the government could simply change the terms of the licenses it gives companies for their use of spectrum, the radio frequencies used to transmit services. Spectrum is not the scarce resource we are told it is, so long as there are effective rules for sharing it.
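Before going further: for readers who want to verify the HHI arithmetic above, here is a minimal sketch using the market-share figures quoted in this post. It reproduces both the pre-merger index and the roughly 466-point national increase, under the standard screening assumption that the merged firm's share is simply the sum of its two parts.

```typescript
// HHI = sum of squared market-share percentages. 10,000 means pure monopoly;
// above 2,500 the DOJ guidelines treat a market as highly concentrated.
const hhi = (shares: number[]): number =>
  shares.reduce((sum, share) => sum + share * share, 0);

// 2018 U.S. wireless market shares (percent), as quoted above:
// AT&T, Verizon, T-Mobile, Sprint.
const preMerger = hhi([34.5, 34.6, 17.8, 13.1]);
const postMerger = hhi([34.5, 34.6, 17.8 + 13.1]); // T-Mobile + Sprint combined

console.log(preMerger.toFixed(2));                // "2875.86" (already highly concentrated)
console.log((postMerger - preMerger).toFixed(2)); // "466.36" (far past the 200-point presumption)
```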
We have government management of spectrum primarily to establish a logical structure for who can use the resource and how it can be used. Therefore, it is the government licensing of that resource that creates scarcity. When major wireless carriers argue that only a merger would allow them to deliver innovative new uses, they are arguing that scarcity of spectrum requires them to consolidate. This intentionally ignores the government's power to change the terms of their licenses, requiring competitors to share airwaves in order to enable 5G, without those competitors having to merge into a single company.

Innovation Will Suffer

At the end of the day, fewer wireless carriers means fewer risk-takers. And fewer risk-takers means less innovation. The smartphone is ubiquitous in today's wireless market, but it owes its success to a single carrier being willing to take a risk and try something different than its competitors. Prior to the iPhone, the wireless carriers dictated the design and functionality of cellular phones and were unwilling to push the envelope. Apple's effort to build something very different from common handsets was originally rejected by Verizon because the carrier wanted more say over the design than Apple was willing to grant. Even the negotiations with AT&T Wireless (formerly Cingular) were contentious, as the companies debated whether the phone would allow video streaming, tethering, and video calling. But imagine if AT&T had also rejected Apple's idea. Apple would still have had other major national wireless carriers to pitch, each with a potentially different set of values and more willingness to try something risky to gain market share.

That is the essence of how competition promotes innovation. The more entities there are competing with one another, the greater the possibility that one of them will try something different to win customers. If the U.S. market is allowed to consolidate into three national carriers, future Apples will have fewer parties to negotiate with, and risk-taking will decline. Manufacturers advancing areas such as cognitive radio and other dynamic ways to use spectrum will have to hope that one of three carriers engages productively with them, and that is exactly the kind of engagement mergers tend to diminish. It is well accepted in antitrust law that a smaller number of players has a greater propensity to behave alike, as they have fewer competitors to maneuver around and fewer reasons to rock the boat.

The DOJ offers no solutions to this outcome, other than to say that DISH Network, which is acquiring a handful of assets from Sprint and T-Mobile, should hopefully fill that void (despite having no wireless broadband customers and no infrastructure to serve them). Such blind trust in a non-existent competitor to do a good enough job competing with massive, entrenched incumbents is questionable at best. Ultimately, it argues in favor of simply denying the merger.

Now it is up to the courts to decide if this merger can proceed. Ten state Attorneys General have sued to block this merger, and they will make the case that what Sprint and T-Mobile are attempting is illegal. The difference this time is that, unlike the last time the federal and state governments blocked a wireless telecom merger (the DOJ's successful challenge to the proposed AT&T/T-Mobile merger in 2011), the DOJ and the FCC will be on the side of the monopolists in the courtroom. But all is not lost.
There is a burgeoning sense that many industries—not just Big Tech and telecoms (or even eyewear or professional wrestling)—have grown dangerously overconcentrated. America is losing patience with monopolistic conduct and lawmakers are waking up to public sentiment. Even though the DOJ and the FCC have changed sides in the fight against monopolistic mergers in the telecoms sector, there is a growing movement that is pushing back. This is just a skirmish in a bigger fight, and even if we lose it, the fight is just getting started.

DOJ and FBI Show No Signs of Correcting Past Untruths in Their New Attacks on Encryption (Wed, 31 Jul 2019)
Last week, Attorney General William Barr and FBI Director Christopher Wray chose to spend some of their time giving speeches demonizing encryption and calling for the creation of backdoors to allow the government access to encrypted data. You should not spend any of your time listening to them.

Don't be mistaken; the threat to encryption remains high. Australia and the United Kingdom already have laws in place that can enable those governments to undermine encryption, while other countries may follow. And it's definitely dangerous when senior U.S. law enforcement officials talk about encryption the way Barr and Wray did. The reason to ignore these speeches is that DOJ and FBI have not proven themselves credible on this issue. Instead, they have a long track record of exaggeration and even false statements in support of their position. That should be a bar to convincing anyone—especially Congress—that government backdoors are a good idea.

Barr expressed confidence in the tech sector's "ingenuity" to design a backdoor for law enforcement that will stand up to any unauthorized access, paying no mind to the broad technical and academic consensus in the field that this risk is unavoidable. As the prominent cryptographer and Johns Hopkins University computer science professor Matt Green pointed out on Twitter, the Attorney General made sweeping, impossible-to-support claims that digital security would be largely unaffected by introducing new backdoors. Although Barr paid the barest lip service to the benefits of encryption—two sentences in a 4,000-word speech—he ignored the numerous ways encryption protects us all, including preserving not just digital but physical security for the most vulnerable users.

For all of Barr and Wray's insistence that encryption poses a challenge to law enforcement, you might expect that this would be the one area where they'd have hard facts and statistics to back up their claims, but you'd be wrong. Both officials asserted it's a massive problem, but they largely relied on impossible-to-fact-check stories and counterfactuals. If the problem is truly as big as they say, why can't they provide more evidence? One answer is that prior attempts at proof just haven't held up.

Some prime examples of the government's false claims about encryption arose out of the 2016 legal confrontation between Apple and the FBI following the San Bernardino attack. Then-FBI Director James Comey and others portrayed the encryption on Apple devices as an unbreakable lock that stood in the way of public safety and national security. In court and in Congress, these officials said they had no means of accessing an encrypted iPhone short of compelling Apple to reengineer its operating system to bypass key security features. But a later special inquiry by the DOJ Office of the Inspector General revealed that technical divisions within the FBI were already working with an outside vendor to unlock the phone even as the government pursued its legal battle with Apple. In other words, Comey's statements to Congress and the press about the case—as well as sworn court declarations by other FBI officials—were untrue at the time they were made.

Wray, Comey's successor as FBI Director, has also engaged in considerable overstatement about law enforcement's troubles with encryption. In congressional testimony and public speeches, Wray repeatedly pointed to almost 8,000 encrypted phones that he said were inaccessible to the FBI in 2017 alone.
Last year, the Washington Post reported that this number was inflated due to a "programming error." EFF filed a Freedom of Information Act request seeking to understand the true nature of the hindrance encryption posed in these cases, but the government refused to produce any records.

Yet in their speeches last week, neither Barr nor Wray acknowledged the government's failure of candor during the Apple case or its aftermath. They didn't mention the case at all. Instead, they ask us to turn the page and trust anew. You should refuse. Let's hope Congress does too.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

Someone Is Suing Companies for Using SMS Messages in 2019 (Wed, 31 Jul 2019)
Anuwave's Suit Against Coinbase Demonstrates a Longstanding Flaw in the Patent System

This month's Stupid Patent of the Month deals with SMS (Short Message Service), a technology that goes back to the mid-1980s. Modern-day SMS messages, typically bundled with mobile phone services, have been around since 1992, but one company believes that you should have to pay a licensing fee simply to incorporate them into your app or service. That company is Anuwave, which recently sued cryptocurrency exchange Coinbase (PDF) for infringement of US Patent 8,295,862. That's only the most recent suit: Anuwave has sued dozens of companies since 2015 for alleged infringement of the patent—Symantec, Avast, and Bitdefender, just to name a few that have faced lawsuits.

Anuwave's patent is on a software application using SMS to check for information—for example, for use on a device that can send and receive SMS messages but doesn't have an Internet connection. Anuwave alleges that Coinbase infringed the patent by letting users perform tasks like checking their balance via SMS.

[Illustration from US Patent 8,295,862]

Here's the first claim of the patent:

A method of enabling communication through SMS communication channel, comprising:
listing all services at a terminal station that are available with an SMS gateway according to meta information available at the terminal station;
upon selecting a service, a network aware application displaying associated parameters that a user needs to select or enter;
upon user selection, submitting a request to the SMS gateway; and
the SMS gateway responding back with a response, wherein the associated parameters include the parameters listed at the terminal station and the parameters desired by the user and not listed at the terminal station.

Coinbase is not the first company to use SMS messages to perform basic software commands. Unified Patents filed a complaint in 2017 with the Patent Trial and Appeal Board to invalidate Anuwave's patent (PDF), and Unified's complaint identifies three different provisional patent applications as prior art. (Unfortunately, the PTAB never made a decision: Unified reached a settlement with Anuwave and dropped the complaint.)

According to the law, a person isn't entitled to a patent if the claimed invention already existed when the application was filed or would have been obvious to someone skilled in the relevant technology area. The Supreme Court has held that a combination of existing inventions can be ruled obvious even if that particular combination didn't exist before the patent was issued. In the world of software, combining existing technologies or processes happens every day as a matter of course. As patent expert Charles Duan wrote, "Non-proprietary software developers and other innovation communities value interoperability and combinability of software. Thus, the legal assumption that new combinations are uncommon and often worthy of patents conflicts with the experiences of those software developers, for whom new combinations are routine and expected."

But let's put aside the question of whether combining SMS with other services would have been obvious before Anuwave's patent was granted. It really shouldn't have been issued for a much more basic reason: it's not an invention. The landmark Supreme Court opinion Alice v.
CLS Bank says that an abstract idea does not become a patentable invention simply by being implemented on a computer. At its core, Anuwave's patent is on the idea of using SMS messages to provide information to a device. It's clearly vulnerable to a challenge under Alice.

Anuwave v. Coinbase is one of the first patent lawsuits ever in the blockchain world, so we expect that the cryptocurrency community will be watching it closely. But it tells an all-too-common story of how low-quality software patents undermine innovation: a company that does not produce anything wields an overly broad software patent against an entire field of actual, practicing companies. This is only the most recent example. Today, some members of Congress are bent on undermining the Alice decision, destroying the most valuable tool that innovators can use against these stupid software patents. Please take a moment to write your members of Congress and urge them to reject the Tillis-Coons proposal.

Take Action: Tell Congress not to open the floodgates to stupid patents

Building Community in Brooklyn: A Grassroots Case Study (Wed, 31 Jul 2019)
Grassroots-level organizing has long been an important tool for advancing policy goals and activating a constituency. More importantly, local organizing can provide an avenue through which the skills and knowledge of some are leveraged to support the previously-unmet needs of the wider community. As a member of the Electronic Frontier Alliance—a network of independent local advocacy groups in the U.S.—The Cypurr Collective is offering down-to-earth tech guidance to their neighbors in Brooklyn, New York, and holding space for greater digital rights and privacy awareness. Cypurr utilizes engagement methods such as tea socials, digital security workshops, and cross-issue allyship, which enables the group to speak on local organizing from a people-focused perspective. We asked group members Grey Cohen, Rory Mir, and Sam DiBella to share a bit of what they've learned in their quest for digital equality.

How did Cypurr originate?

Grey: We started up The Cypurr Collective because we were in activist spaces in which folks had only recently begun to reckon with the importance of tech within modern-day activism. With these discussions came a lot of anxiety around the limits to personal and group security when using tech. We wanted a way for folks to lessen the stress around tech stuff, as well as help them create a framework to better understand these tools and devices. We started with our first workshop at a local feminist bookstore about three years ago, and we went on from there.

Rory: Something that I think makes our group unique is that it was a group of activists turned technologists, when it's often the reverse. We all came to the group with a very wide range of experience in cybersecurity, which made being accessible and beginner-friendly a must from day one. This focus helped us foster a relationship with Brooklyn Public Library, which was a major source of stability for the group.

How did that relationship with Brooklyn Public Library come about? What steps did you take that others could possibly take in their cities?

Grey: Libraries are such an important community resource. We were lucky enough to have one member of the Brooklyn Infocommons come by one of our presentations, and we began our relationship from there. Since then, we have had monthly workshops at the BPL, with a significant number of attendees each time. This showed us that 1) libraries can reach audiences that we might not be able to reach ourselves, and 2) there are enough folks out there who are concerned about their cybersecurity (or who are looking for others with whom they can discuss their concerns around cybersecurity). I think finding a way to develop a long-term relationship with your local library/librarian is a super useful effort that can help folks in tech education get in touch with folks who really need some cyber-HALP!

How is intersectional allyship factored into the group dynamic?

Grey: Seeing that our project began within an intersectional feminist space, intersectionality is at the center of the work we have done and continue to do. In short, it didn't make sense for us to spend our time sharing cybersecurity skills if this effort was not in a framework that acknowledges the socio-economic inequalities that create gaps of access to tech, education, and the general safety/security of our audience.

Rory: At every workshop, we start by having a frank discussion about space and sometimes use a progressive stack.
It's easy for conversations to be dominated by a handful of voices, and for folks to make assumptions about others, particularly when discussing tech. We try to push back on that and make space for folks to engage without being talked over. It's typically those silenced voices who have a greater need for our workshop, and everyone benefits from hearing marginalized voices and the unique concerns they bring to the table.

What do you find to be the core needs of your community? Does that differ from your original expectations?

Sam: I have a distinct memory of our first cryptoparty at the bookstore. I was so excited that I taught myself PGP email encryption, and we had a great discussion about the panopticon. When it came time for my partner and me to present at a table, we didn't get to email encryption at all. One person wanted to learn about recovering hard drive backups, and the other was worried about images of her art being stolen online. I've always kept that lesson about building on what people are already interested in for workshops. Now that we present at the library, computer literacy is one of the skills we try to build into our events. We want people to be more comfortable with computers, so they feel safe to try things out on their own too.

Rory: On that same note, I consistently need to relearn the same lesson. I sometimes go into workshops expecting the most specific and technical questions and get a little blindsided by pragmatic concerns I hadn't considered. As a result, we've tried to give participants more control over the workshop over time. Less "save your questions for the end" and more group discussions.

How are organizing duties assigned among organizers to avoid burnout and ensure continuity?

Grey: It is very important for us that we value the time of all the volunteers involved in the project. We prevent burnout by trying not to put too much stress on ourselves while doing this project. Sometimes this means we do more, sometimes it means we do less. But most importantly, it means we have the energy to continue this work and put on great workshops for folks.

Sam: I also think having the steady support of presentation spaces has made our work a lot easier. Having regularly scheduled workshops makes it easier for people to find us. It makes organizing easier, as well.

What does the future hold for Cypurr?

Sam: There's a lot of community-based tech work going on now, from local mesh nets to tech worker organizing. I hope that we, and other cryptopartiers, can help those groups build a less hierarchical digital world.

Grey: We hope to continue helping those who are continually oppressed by machines and machine learning, whether that takes the form of a workshop, a tea-time chat, or supporting those doing the work of increasing the security of marginalized communities and other activist groups.

Our thanks to The Cypurr Collective for reminding us that showing up for your community—however you define community—is an excellent way to help foster the tech future you want. Find a grassroots group near you within the Electronic Frontier Alliance network. If you are organizing where you live, please consider joining the network.

Equifax Settlement Won't Be Enough to Deter Future Breaches: The Law Must Catch Up (Tue, 30 Jul 2019)
Last week, news broke of a large financial settlement for the massive 2017 Equifax data breach affecting 147 million Americans. While the direct compensation to those harmed and the fines paid are important, it's equally important to evaluate whether this result is likely to create strong incentives to increase data security, for both Equifax and the other companies that are closely watching. We doubt it will do enough. Without stronger privacy legislation, the lawyers and regulators trying to respond to these data leaks are operating with one hand tied behind their back.

In the meantime, EFF strongly urges everyone impacted by the calamitous Equifax breach to participate in the settlement claims process. Equifax must pay for the harm it has caused to everyone. And all too often, the fact that too few people make claims in these consumer privacy cases is used in the next case to argue that consumers just don't care about privacy, making it even harder to force real security upgrades. If you do care about your privacy and want to make companies more responsible with your data, make your position known.

Overview of the Equifax Settlement

The ultimate Equifax settlement number is flexible. Equifax will initially pay $300 million into a fund that will provide breach victims with credit monitoring services, reimburse (up to 25%) for credit monitoring services purchased from Equifax, and compensate for other out-of-pocket expenses incurred as a result of the breach. If the $300 million is not enough to compensate affected consumers, then Equifax is required to pay an additional $125 million into the fund. Equifax will also pay $275 million to states and the Consumer Financial Protection Bureau.

Those are big numbers, but they don't paint the whole picture. To get some perspective, a potential total settlement amount of $700 million is less than a quarter's worth of Equifax's revenue in 2017. So, while it's a lot of money to you or me, it isn't that much to Equifax. Out of the potential $425 million available to consumers, only $31 million is initially set aside for consumers who elect to receive a $125 cash payment instead of credit monitoring services, so the per-person amount goes down once more than 248,000 people elect this remedy. If all 147 million affected people were to file a claim, each person would receive a mere 21 cents for the breach of their most sensitive personal information, although there are some contingent provisions in the settlement that might increase that amount.

If a consumer chooses to forgo the cash payment, they can enroll in credit monitoring services for 10 years, though only the first 4 years include monitoring at all 3 major credit bureaus; the remaining 6 years cover only the Equifax credit report. Moving forward, we hope policymakers will require consumer credit reporting agencies to provide free-and-easy credit freezes, in addition to any credit monitoring.

In addition, the settlement includes compensation for consumers' out-of-pocket damages. For instance, it includes hourly compensation for time spent dealing with the immediate aftermath of the breach through Equifax's horrible, slipshod processes (up to 20 hours), and damages for misuse of personal information as a result of the breach. All data breach victims will receive access to identity theft recovery assistance for a period of 7 years. This is especially helpful, since the U.S. currently has no good way for people who suffer identity theft to set the record straight.
Instead, they are forced to rectify the problem piecemeal at each individual place where they need credit or a clear identity, so help with that one-by-one negotiation could be a good thing.

The settlement also includes some ambitious notice provisions, including a multi-part plan to try to give notice to the 147 million people potentially impacted and to make sure that they are aware of and can use the identity recovery service even a few years later. Aside from the money, Equifax will have to set up better security practices—although the company should already have had these practices in place before the breach even occurred. The new security practices will include a third-party auditor who will monitor and report Equifax's compliance with the security practices in the settlement to the plaintiffs' attorneys, the FTC, and some state Attorneys General. We would have preferred a process where the public was informed of Equifax's compliance, rather than the information being kept secret. But still, this may mean that future bad decision-making around security will be avoided or caught before another breach.

We Need Better Privacy Laws

The bad news is that this result is still far from what is needed to incentivize companies like Equifax to prioritize security and, better yet, limit what they collect and keep, so that there's less to leak. The lawyers who sued Equifax—both private and governmental—had to negotiate for all of this relief with far less leverage than they should have had. Why? Because the law is still far behind in recognizing the kinds of harms that occur from these data breaches. As we explained just after the breach occurred, right now privacy law is simply insufficient to spur companies to protect us from these large data breaches. There is no comprehensive federal privacy law, much less one with the kind of teeth that could push companies to invest in information security the way they invest in, say, compliance with securities law. Worse, efforts to strengthen and protect state laws, like California's Consumer Privacy Act, have faced stiff opposition from the very companies who voraciously gather, buy, sell, and trade our data.

The truth is, while the numbers can seem large, these settlements confirm that we need stronger privacy legislation to give lawyers and regulators the leverage they need to protect us. Such legislation should include:

- Fiduciary or other high-level responsibility for those who hold the kind of data that can be used in identity theft. Anyone who holds data that, if stolen, can let someone effectively "be you" for purposes of credit, purchasing, accessing your bank accounts, travel, and otherwise should be held to a high duty of care and loyalty to you, with real accountability if they fail. This must include, at a minimum, prompt notification; simple, fast, and free credit freezes; and a specific duty to secure customers' personal information as a matter of course, not as a negotiated settlement years later.

- A race to the top by states in passing privacy laws, with the federal government raising the bar, not lowering it. One good idea comes from Vermont's new data privacy law, which requires data brokers to register annually.

- People having their day in court. A direct private cause of action for data breaches and other digital privacy harms is crucial to get us there.
- Statutory liquidated damages, like those we apply for illegal wiretapping, copyright, and similar harms, since data harms can be hard to quantify financially, especially when damage only occurs over time.

- Non-discrimination rules to ensure that companies don't just turn your desire for privacy into another strategy to make you pay more. Pay-for-privacy is unfair.

- A federal advocate for victims, with mandatory reporting on data breaches and harms.

- Authority and funding for federal regulators to write and enforce rules that dig deep into digital security for our data.

And finally, one thing to avoid: existing computer crime laws are already extremely overbroad. That causes real harm and injustice, and often creates threats to the very security researchers who are trying to keep the rest of us safe. Any new efforts to address data breaches should focus on incentives to protect data rather than further expanding criminal liability.

The Equifax settlement is a good effort, especially considering the hurdles that the lawyers and the agencies faced in trying to hold Equifax accountable. But the data breaches continue unabated, with one affecting Capital One revealed just yesterday. Going forward, we need to eliminate those hurdles, or mass data breaches will keep happening. Anyone who hasn't been a victim of a data breach so far needs to join with those who have, because without a serious change in course, we'll all be victims sooner or later. And again, don't forget to file a claim.

Special thanks to former EFFer (and current Hastings Law student) Amul Kalia for help with this blog post.

Congress Is Home for the Summer and Ready to Hear From You (Tue, 30 Jul 2019)
When it comes to politics, in-person meetings make a huge difference. Just a few questions from constituents during town halls can show a representative or senator which issues are resonating with the residents of their district or state. Even if you've never met an elected representative before, showing up IRL is actually pretty easy to do, and this is the perfect time: in August, Congress takes a break from considering legislation so members can be in their districts, giving you the opportunity to meet and talk to them without traveling to Washington, D.C.

While in D.C., representatives and senators have to rely on calls and emails to know what the people they represent think about the issues. Those calls and emails are important, but when members are back in their districts, you can make sure they hear directly from you—in person—about the issues that matter to you. Even if you have called or emailed before, putting a face to the same message can help your elected representative understand your concerns. And even if you don't get them to agree with you, those conversations will help shape their legislative priorities once they return to D.C. in September.

Where In the World Is My Congressional Representative?

The best way to meet your senator or representative is either to call the local office and simply ask for a meeting, or to fill out a meeting request form on the member's official website. While it may be more difficult to meet with your Senator—who generally covers a bigger geographic area and may be further from you—your federal Representative may be more available for a meeting. The member's staff may be able to set up a meeting over the phone, or they may direct you to a town hall or other district event where the member of Congress will provide an update on current events and take questions. Make sure to carefully follow any instructions listed about parking and security, and look to see if you need to register ahead of time to attend. Be aware that registering may mean including your name and contact information, and that failing to register may mean you can't get into a town hall with heightened security.

If you do schedule a meeting, it's important to know that you may not get a meeting with the actual senator or member of Congress, but meeting with congressional staff will still get your concerns to the member. Also, consider subscribing to the online newsletters of your House member, as well as your state's two senators, since they often email their local events directly to constituents and subscribers.

With so many issues vital to digital rights looming in the congressional calendar, this August is a perfect time for Internet users to pressure Congress in person to do the right thing. Below, you can find updates on issues that are critical to bring up with your representative, however you contact them. Tell Congress to protect free speech online, end the suspicionless collection of Americans' telephone records, and reject huge potential fines for regular Internet activity like sharing memes.

Protect Section 230, the Most Important Law for Preserving Free Speech and Innovation Online

Section 230 is the most important law protecting free speech online. The law shields online platforms, services, and users from liability for most speech created by others. Without Section 230, many of the online communities we all rely on every day would not exist in their current form.
Last year, Congress undermined Section 230 with the disastrous law SESTA-FOSTA, which has incentivized online platforms to censor their users, silencing marginalized voices in the process. Now it appears that Congress has developed a taste for undermining Section 230, and members of both parties seem eager to do it again. One attack on Section 230 has already been introduced in Congress this year—Senator Hawley's "bias" bill, which would give the government unprecedented authority to decide which online platforms are allowed to enjoy Section 230 protections. Rumor has it that there are more anti-230 bills on the way.

Please tell your members of Congress that a strong Section 230 is essential to an Internet where everyone can gather, find like-minded friends, and speak their minds. Tell them that attempts to punish large tech companies by gutting Section 230 will almost certainly backfire, making it far more difficult for competitors ever to reach the scale of a Google or Facebook. If you work in an Internet-based business that hosts other people's speech, tell your member of Congress that your business and livelihood rely on Section 230.

End the NSA's Mass Telephone Records Program

This fall, your elected officials will vote on whether to reauthorize Section 215 of the USA PATRIOT Act. This is the law that famously allows the intelligence community to demand that companies, like telephone service providers, hand over any records or any other "tangible thing" deemed "relevant" to foreign intelligence investigations. For years, the government relied on Section 215 to conduct a dragnet surveillance program that collected billions of phone records documenting who a person called and for how long—more than enough information for analysts to infer very personal details about a person, including who they have relationships with and the private nature of those relationships. That invasive dragnet collected data without an individualized basis for suspicion, violated our privacy, and suppressed dissent and democracy.

In 2015, a federal appeals court held that the mass collection of phone records is "unprecedented and unwarranted." Later that year, Congress passed the USA FREEDOM Act, which renewed Section 215 while imposing some—albeit insufficient—limitations on the government's ability to force phone companies to provide the NSA with phone records from thousands or millions of Americans at once. Now is the time to talk to your elected officials about ending the suspicionless collection of Americans' telephone records, and encouraging transparency and public hearings on the other uses of Section 215 and what materials it gathers. These public hearings should extend to public disclosures about whether and how people can become targets of surveillance because of their speech and First Amendment-related activities, as well as their race, religion, national origin, gender, or sexual orientation.

Stop the CASE Act From Subjecting Regular Internet Users to Life-Altering Copyright Lawsuits

The Copyright Alternative in Small-Claims Enforcement Act (CASE Act, H.R. 2426, S. 1273) is a bill that is supposed to help photographers and other artists who find their images taken and used whole, no fair use in sight. But the way the bill is written is catastrophically flawed. Instead of going to a court or a judge, the CASE Act creates a "Claims Board" at the Copyright Office in Washington, D.C., where "claims officers" will hear infringement claims and issue damage awards that could reach tens of thousands of dollars.
Things that regular Internet users do all the time—sharing memes, images, and so forth—could make them subject to claims under the CASE Act. The CASE Act is often described as a system people will find themselves in "voluntarily." This isn't really true. Rather than requiring both sides to agree to be subject to the judgments of the Copyright Office, the bill actually requires the person receiving the complaint to "opt out" within 60 days of getting a notice from the Copyright Office. Failing to opt out—and maybe even failing to opt out in a specific way—leaves you bound by the judgments of the Copyright Office, including judgments issued by default. The net effect would not be artists collecting against true infringers, who are the most likely to learn how to opt out, but regular people getting notices they don't understand and ending up owing enough to put them into bankruptcy. The CASE Act won't help artists, will hurt regular people, and will create a perfect breeding ground for copyright trolls, who will be able to squeeze whatever money they can out of anyone unfortunate enough to wander into their sights.

While You're At It, Tell Congress to Save Net Neutrality, Stop Face Surveillance, and Protect the Patent System

The Save the Internet Act (S. 682) would make the net neutrality protections we had under the 2015 Open Internet Order the law of the land, undoing the FCC's repeal of these popular and important rules. The House of Representatives has already passed it, and now it's the Senate's turn. Find out more about what you can do with our Net Neutrality Defense Guide: Summer 2019 Edition.

We also need to make sure we have a patent system that supports creators and users of technology, not just patent troll lawsuits. Senators Thom Tillis (R-N.C.) and Chris Coons (D-Del.) have proposed draft legislation that would allow patents on abstract ideas and laws of nature. The bill isn't yet in final form, but now is a good time to reach out to your representatives and tell them that the Tillis-Coons patent bill would be a disaster for innovation.

Lastly, remind Congress that face surveillance technology has well-documented, disproportionately high error rates in identifying women and people of color—and even if researchers are one day able to correct these shortcomings, the threat this pernicious and covert mass surveillance poses to Americans' freedom of expression, religion, and association would remain. Now is the time to stand up and say no to government use of face surveillance.

Don't Let Congress Stand Still

Town halls and meetings truly matter. When members hear repeatedly from their own constituents in person about how issues are affecting people in the district, those conversations travel with the members back to D.C. If members think that an issue could generate enough controversy and press, local stories can influence votes, legislation, and private conversations with other members. And if you're interested in getting together a group of like-minded allies to visit a town hall or event, there may even be an Electronic Frontier Alliance grassroots group in your area. The Electronic Frontier Alliance is a grassroots network of community and campus organizations across the United States working to educate others about the importance of digital rights.

However you reach out to them, this is your chance to remind Congress, face-to-face, what matters to you.
Resources:
- Contact Your Senator
- Contact Your Representative
- Section 230 Summary
- Section 215 Summary
- What’s Wrong With the CASE Act
- Net Neutrality Defense Guide: Summer 2019
- Why We Must Stop The Tillis-Coons Patent Bill
- Street-Level Surveillance: Face Recognition

Thanks for Fighting for a Better Digital Future (Tue, 30 Jul 2019)
We at EFF are deeply grateful to each one of the 2,076 people who gave a donation during our Better Digital Future membership drive. They join the over 30,000 supporters around the world who have answered our call to reclaim the fate of the Internet. Your help allows us to develop free privacy-enhancing technologies, advocate for consumers in the courts, defend free expression online, and so much more. On each anniversary of EFF's founding, we renew our commitment to fight for the rights of ordinary folks as technology becomes an ever more present part of life. We don't need to abide by tech that exploits and surveils us, because this weird and beautiful Internet still has the power to uplift free expression and privacy, rather than serve as a tool of control. A better digital future is necessary and—with your help—it's possible.

The membership drive may be over, but we still need you! Online freedom waits for no one, and I hope you'll consider joining EFF this year if you haven't already. Members can pick up great gear, including an EFF shirt celebrating the best parts of the Internet. Thanks, sincerely, for supporting EFF, and for ensuring that we retain the ability to connect, explore ideas, be expressive, and have private conversations.

Didn't Join Yet? We Still Need Your Help!

EFF is a U.S. 501(c)(3) nonprofit and contributions are tax-deductible as allowed by law. Consider making your contribution go even further with an automatic monthly or annual donation!

The Key to Safety Online Is User Empowerment, Not Censorship (Fri, 26 Jul 2019)
The Senate Judiciary Committee recently held a hearing on “Protecting Digital Innocence.” The hearing covered a range of problems facing young people on the Internet today, with a focus on harmful content and privacy-invasive data practices by tech companies. While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids.

Don’t Censor Users; Empower Them to Choose

Though tech companies weren’t represented in the hearing, senators offered plenty of suggestions about how those companies ought to make their services safer for children. Sen. John Kennedy suggested that online platforms should protect children by scanning for “any pictures of human genitalia.” Sen. Kennedy’s idea is a good example of how lawmakers sometimes misunderstand the complexity of modern-day platform moderation, and the extreme difficulty of getting it right at scale. Many online platforms do voluntarily use automated filters, human reviewers, or both to screen out nudity, pornography, or other speech that companies deem inappropriate. But those measures often bring unintended consequences that reach much further than whatever problems the rules were intended to address. Instagram deleted one painter’s profile, relenting only when the company realized the absurdity of this aggressive application of its ban on nudity. When Tumblr employed automated filters to censor nudity, it accidentally removed hundreds of completely “safe for work” images.

The problem gets worse when lawmakers attempt to legislate what they consider good content moderation. In the wake of last year’s Internet censorship law SESTA-FOSTA, online platforms were faced with an awful choice: err on the side of extreme prudishness in their moderation policies or face the risk of overwhelming liability for their users’ speech. Facebook broadened its sexual solicitation policy to the point that it could feasibly justify removing discussion of sex altogether. Craigslist removed its dating section entirely. Legislation to “protect” children from harmful material on the Internet will likely bring similar collateral damage for free speech: when lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more highly filtered Internet, calling to move the discussion “from protection to empowerment.” In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree. It’s foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use, as in the sketch below. Furthermore, tech and media companies shouldn’t abuse copyright and other laws to prevent third parties from offering customization options to people who want them.
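To make the idea of user-calibrated filtering concrete, here is a toy sketch. It is not any real platform's system; the category names, classifier scores, and thresholds are all invented for illustration:

```typescript
// Toy model of user-calibrated content filtering. Category names,
// scores, and thresholds are invented; no real platform API is implied.

type Category = "nudity" | "violence" | "profanity";

interface UserFilterPrefs {
  // Per-category tolerance: 0 hides anything flagged, 1 shows everything.
  thresholds: Record<Category, number>;
}

interface ClassifiedPost {
  text: string;
  // Hypothetical classifier confidence per category, from 0 to 1.
  scores: Record<Category, number>;
}

// A post is visible to a user only if every category score stays at or
// below that user's own tolerance for the category.
function visibleTo(post: ClassifiedPost, prefs: UserFilterPrefs): boolean {
  return (Object.keys(post.scores) as Category[]).every(
    (cat) => post.scores[cat] <= prefs.thresholds[cat],
  );
}

// Example: a user who filters nudity strictly but tolerates profanity.
const prefs: UserFilterPrefs = {
  thresholds: { nudity: 0.1, violence: 0.5, profanity: 0.9 },
};
const post: ClassifiedPost = {
  text: "example post",
  scores: { nudity: 0.0, violence: 0.2, profanity: 0.7 },
};
console.log(visibleTo(post, prefs)); // true, for this user's settings
```

The point of the sketch is where the thresholds live: in each user's hands, rather than in a single company-wide (or congressionally mandated) setting.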
Congress and Government Must Do More to Fight Unfair Data-Collection Practices

Like all Internet users, kids are often at the mercy of companies’ privacy-invasive data practices, and often have no reasonable opportunity to opt out of the collection, use, and sharing of their data. Congress should closely examine companies whose business models rely on collecting, using, and selling children’s personal information.

Some of the proposals floated during the hearing for protecting young Internet users’ privacy were well-intentioned but difficult to implement. Georgetown Law professor Angela Campbell suggested that platforms move all “child-directed” material to a separate website without behavioral data collection and related targeted advertising. Platforms must take measures to put all users in charge of how their data is collected, used, and shared—including children—but cleanly separating material directed at adults and children isn’t easy. It would be awful if a measure designed to protect young Internet users’ privacy made it harder for them to access materials on sensitive issues like sexual health and abuse. A two-tiered Internet undermines the very types of speech for which young Internet users most need privacy.

We do agree with Campbell that enforcement of existing children’s privacy laws must be a priority. As we’ve argued in the student privacy context, the Federal Trade Commission (FTC) should better enforce the Children’s Online Privacy Protection Act (COPPA), the law that requires websites and online services that are directed to children under 13, or have actual knowledge that a user is under 13, to obtain parental consent before collecting personal information from children for commercial purposes. The Department of Education should better enforce the Family Educational Rights and Privacy Act (FERPA), which generally prohibits schools that receive federal funding from sharing student information without parental consent.

EFF’s student privacy project catalogues the frustrations that students, parents, and other stakeholders have when it comes to student privacy. In particular, we’ve highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google’s student data practices, but the agency never responded. Sen. Marsha Blackburn cited our FTC complaint against Google as an example of the FTC’s failure to protect children’s privacy: “They go in, they scoop the data, they track, they follow, and they’ve got that virtual you of that child.” While Google has made some progress since 2015, Congress should still investigate whether the relevant regulatory agencies are falling down on the job when it comes to protecting student privacy. Congress should also explore ways to ensure that users can make informed decisions about how their data is collected, used, and shared. Most importantly, Congress should pass comprehensive consumer privacy legislation that empowers users and families to bring their own lawsuits against the companies that violate their privacy rights.

Undermining Section 230 Won’t Improve Companies’ Practices

At the end of the hearing, Sen. Lindsey Graham (R-SC) turned the discussion to Section 230, the law that shields online platforms, services, and users from liability for most speech created by others. Sen. Graham called Section 230 the “elephant in the room,” suggesting that Congress use the law as leverage to force tech companies to change their practices: “We come up with best business practices, and if you meet those business practices you have a safe haven from liability, and if you don’t, you’re going to get sued.” He followed his comments with a Twitter thread claiming that kneecapping liability protections is “the best way to get social media companies to do better in this area.”

Sen. Graham didn’t go into detail about what “business practices” Congress should mandate, but regardless, he ought to rethink the approach of threatening to weaken Section 230. Google and Facebook are more willing to bargain away the protections of Section 230 than their smaller competitors. Nearly every major Internet company endorsed SESTA-FOSTA, a bill that made it far more difficult for small Internet startups to unseat the big players. Sen. Josh Hawley’s bill to address supposed political bias in content moderation makes the same mistake, giving more power to the large social media companies it’s intended to punish. Don’t be surprised if the big tech companies fail to put up a fight against these proposals: the day after the hearing, IBM announced support for further weakening Section 230, just like it did last time around.

More erosion of Section 230 won’t necessarily hurt big Internet companies, but it will hurt users. Under a compromised Section 230, online platforms would be incentivized to over-censor users’ speech. When platforms choose to err on the side of censorship, marginalized voices are the first to disappear.

Congress Must Consider Unintended Consequences

The problems facing young people online are complicated, and it’s essential that lawmakers carefully consider the unintended consequences of any legislation in this area. Companies ought to help users and families customize online services for their own needs. But congressional attempts to legislate solutions to harmful Internet content by forcing companies to patrol users’ speech are fraught with the potential for collateral damage (and would likely be unconstitutional). We understand Congress’ desire to hold large Internet companies accountable, but it shouldn’t pass laws that make the Internet a more restrictive place. At the same time, Congress has a historic opportunity to help protect children and adults from invasive, unfair data-collection and advertising practices, both by passing strong consumer privacy legislation and by demanding that the government do more to enforce existing privacy laws.

This Summer, Take Some Time to Stand Up for Net Neutrality (Fri, 26 Jul 2019)
As we head into August, Congress will be on recess and most of your senators and representatives will be heading back to their home states. That means it’ll be easier for you to reach out, talk to them or their staff, and ask them to act on important legislation. Earlier this year, the Save the Internet Act—a bill that would restore the net neutrality protections of the 2015 Open Internet Order and make them the law of the land—passed the House of Representatives. The Senate needs to be pressured into following suit. To help you do that, we’re updating and relaunching our Net Neutrality Defense Guide.

Last year, the Defense Guide was focused on using a vehicle called the Congressional Review Act (the CRA) to overturn the FCC’s repeal. Since the Senate had passed the CRA with a bipartisan majority vote, last year’s guide focused on getting the House of Representatives to vote. This year, we have the opposite situation: the House has voted for the Save the Internet Act and the Senate has not, so our guide has been updated to reflect the new bill, the new target, and the new arguments we’ve heard for and against the Save the Internet Act.

Net neutrality means that ISPs like AT&T, Comcast, and Verizon don’t get to block websites, slow speeds on certain sites, or make deals that give faster speeds to some content and not others. It means that you—and not your ISP—control your experience online. A free and open Internet depends on net neutrality to maintain a level playing field, which disappears once ISPs are free to do whatever they want with your traffic. Established players—or companies under the same umbrella as the ISP (like, say, HBO and AT&T)—shouldn’t get to leverage their money and connections to reach customers more easily than competitors with better products but less money. We can prevent that by passing strong net neutrality protections, like those in the Save the Internet Act. The Senate needs to know that this is an important issue, supported by a majority of Americans, and that we want them to vote on this bill.

The Net Neutrality Defense Guide is built to empower both regular people and local organizations to make themselves heard on this issue. It includes:

- A how-to on setting up in-person meetings with senators
- Tips on how to get press coverage and place op-eds in local papers
- A sample letter to send to senators
- A sample call script for calling local and D.C. offices of senators
- Basic talking points and counters to common arguments against net neutrality
- An image pack you can use and remix for your own campaigns

The guide is located here, along with a downloadable PDF version. Get out there and make yourself heard!

Fixed? The FTC Orders Facebook to Stop Using Your 2FA Number for Ads (Thu, 25 Jul 2019)
Since academics and investigative journalists first reported last year that Facebook was using people’s two-factor authentication numbers and “shadow” contact information for targeted advertising, Facebook has shown little public interest in fixing this critical problem. Subsequent demands that Facebook stop all non-essential uses of these phone numbers, and public revelations that Facebook’s phone number abuse was even worse than initially reported, failed to move the company to action. Yesterday, rather than face a lawsuit from the FTC, Facebook agreed to stop the most egregious of these practices.

The Victory

In one of just a few concrete wins in an overall disappointing settlement, Facebook agreed not to use phone numbers provided for any security feature (like two-factor authentication, account recovery, and login alerts) for targeted advertising purposes. Until this settlement, Facebook had been using contact information that users explicitly provided for security purposes for targeted advertising, contrary to those users’ expectations and Facebook representatives’ own previous statements. Revelation of this practice seriously damaged users’ trust in a foundational security practice and undermined all the companies and platforms that get two-factor authentication right. The FTC’s order that Facebook stop using security phone numbers for targeted advertising is, hopefully, a first step toward rebuilding users’ trust in security features on Facebook in particular and on the web in general.

The Loose Ends

But the FTC didn’t go far enough here, and Facebook continues to be able to abuse your phone number in two troubling ways. First, two-factor authentication numbers are still exposed to reverse-lookup searches. By default, anyone can use the phone number that a user provides for two-factor authentication to find that user’s profile. Problems with this search functionality have been public since at least 2017. Facebook even promised to disable it over a year ago in the wake of the Cambridge Analytica scandal, but left open a loophole in the form of contact uploads. For people who need two-factor authentication to protect their account and stay safe, Facebook’s failure to close this loophole forces them into an unnecessary choice between security and privacy.

Second, the FTC’s settlement misses a whole additional category of phone numbers: “shadow” contact information, which refers to a phone number you never gave Facebook but which your friends uploaded with their contacts. In other words, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books. This shadow contact information remains available to advertisers, and inaccessible and opaque to users. You can’t find your “shadow” contact information in the “contact and basic info” section of your profile; users in Europe can’t even get their hands on it, despite explicit requirements under the GDPR that a company give users a “right to know” what information it has on them.

No Fix

Throughout this year, we have been demanding that a handful of companies fix some of their biggest privacy and security problems. For Facebook, we have taken aim at its tendency to use phone numbers for purposes contrary to what users understood or intended. While the FTC’s order may seem like a fix, it does not go far enough for us to consider it a complete victory.
Until Facebook takes the initiative to address the reverse-lookup and shadow contact information problems described above, users can expect that its reckless misuse of their phone numbers will continue. And we’ll continue watching and putting pressure on them to fix it already.
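To make the contact-upload loophole described above concrete, here is a schematic sketch of the data flow. Every endpoint, type, and field name below is hypothetical, not Facebook's actual API; it models how any platform that matches uploaded address books against accounts, including numbers supplied only for two-factor authentication, effectively offers reverse lookup:

```typescript
// Hypothetical model of reverse lookup via contact upload. The endpoint
// and response shape are invented for illustration only.

interface Profile {
  id: string;
  name: string;
}

// Uploading a list of phone numbers "as contacts" and reading back the
// matched accounts amounts to a reverse-lookup search, even if the
// platform's visible search box no longer accepts phone numbers.
async function reverseLookup(numbers: string[]): Promise<Profile[]> {
  const res = await fetch("https://platform.example/contacts/upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contacts: numbers.map((n) => ({ phone: n })) }),
  });
  const { matches } = (await res.json()) as { matches: Profile[] };
  return matches;
}
```

Closing the loophole would mean exempting numbers provided for security features from this kind of matching entirely, not just removing the search box.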

Adblocking: How About Nah? (Thu, 25 Jul 2019)
For more than a decade, consumer rights groups (including EFF) worked with technologists and companies to try to standardize Do Not Track, a flag that browsers could send to online companies signaling that their users did not want their browsing activity tracked. Despite long hours and backing from the FTC, foot-dragging from the browser vendors and outright hostility from the big online media companies mean that setting Do Not Track in your browser does virtually nothing to protect your privacy.

Do Not Track grew out of widespread public concern over invasive "behavioral advertising" that relied on tracking to target ads; despite a generation of promises from the ad industry that consumers would welcome more relevant advertising, the consistent result has been that users are freaked out by "relevant" ads, because they understand that relevancy is synonymous with privacy invasion. Nothing is so creepy as ads for a product you looked into earlier following you from site to site, then from app to app, as you are tracked and retargeted by a desperate vendor's algorithm.

Internet users didn't take this situation lying down. They wanted to use the Web, but not be tracked, and so they started to install ad-blockers. A lot of ad-blockers, and more every year. Ad-blockers don't just stop users from seeing ads and being tracked (and indeed, some ad-blockers actually track users!). They can also stop the publishers and marketers who rely on tracking and ad-clicks from earning money. Predictably, industry responded with ad-blocker-blockers, which prevented users from seeing their sites unless they turned off their ad-blocker. You'll never guess what happened next. Actually, it's obvious what happened next: users started to install ad-blocker-blocker-blockers.

The Biggest Boycott in History

The rise and rise of ad-blockers (and ad-blocker-blocker-blockers) is without parallel: 26% of Internet users are now blocking ads, and the figure is rising. It's been called the biggest boycott in human history. It's also something we've seen before, in the earliest days of the Web, when pop-up ads ruled the world (wide web), and users went to war against them.

In 1994, Hotwired (the defunct online adjunct to Wired magazine) displayed the first banner ad in Internet history. Forty-four percent of the people who saw that ad clicked on it. At the time, it felt like advertising had taken a great leap, attaining a conversion rate that bested print, TV, direct mail, or display advertising by orders of magnitude. But it turned out that the click-rate on that Hotwired ad had more to do with novelty than any enduring persuasive properties of banner ads. Even as Web companies were raising millions based on the fabulous performance of early ads, the efficacy of those ads was falling off a cliff, with clickthrough rates plummeting to low single digits. This created a desperate situation, where publishers needed to do something—anything—to goose their clickthrough rates.

Enter the Pop-Up Ad

That's when Ethan Zuckerman—then an employee at Tripod—invented the pop-up ad (he has since apologized). These ads spawned in new windows and were much harder to ignore—for a while.
Human beings' response to stimulus tends to regress to the mean (the refrigerator hum gets quieter over time because you adapt to it, not because the decibel level decreases), and so pop-up ads evolved into ever-more virulent forms—pop-under ads, pop-ups with fake "close" boxes, pop-up ads that respawned, pop-up ads that ran away from your mouse when you tried to close them... At the height of the pop-up wars, it seemed like there was no end in sight: the future of the Web would be one where humans adapted to pop-ups, then pop-ups found new, obnoxious ways to command humans' attention, which would wane, until pop-ups got even more obnoxious.

But that's not how it happened. Instead, browser vendors (beginning with Opera) started to ship on-by-default pop-up blockers. What's more, users—who hated pop-up ads—started to choose browsers that blocked pop-ups, marginalizing holdouts like Microsoft's Internet Explorer, until they, too, added pop-up blockers. Chances are, those blockers are in your browser today. But here's a funny thing: if you turn them off, you won't see a million pop-up ads that have been lurking unseen for all these years. Because once pop-up ads became invisible by default to an ever-larger swathe of Internet users, advertisers stopped demanding that publishers serve pop-up ads. The point of pop-ups was to get people's attention, but something that is never seen in the first place can't possibly do that.

How About Nah?

The Internet is full of take-it-or-leave-it offers: click-through and click-wrap agreements that you can either click "I agree" on or walk away from. As the online world has grown more concentrated, with more and more power in fewer and fewer hands, it's become increasingly difficult for Web publishers to resist advertisers' insistence on obnoxious tracking ads. But Internet users have never been willing to accept take-it-or-leave-it as the last word in technological self-determination. Ad-blockers are the new pop-up blockers, a way for users to do what publishers can't or won't do: demand a better deal from advertisers. When you visit a site, the deal on offer is, "Let us and everyone we do business with track you in every way possible, or get lost," and users who install ad-blockers push back. An ad-blocker is a way of replying to advertisers and publishers with a loud-and-clear "How about nah?"

Adversarial Interoperability

Adversarial interoperability occurs when someone figures out how to plug a new product or service into an existing product or service, against the wishes of the company behind that existing product or service. Ad-blocking is one of the most successful examples of adversarial interoperability in modern history, along with third-party printer ink. When you visit a website, the server sends your browser a bunch of material, including the code to fetch and render ads. Ad-blockers throw away the ad parts and show you the rest (see the sketch below), while ad-blocker-blocker-blockers do the same, and then engage in an elaborate technological game of cat-and-mouse in a bid to fool the server into thinking that you are seeing the ads, while still suppressing them. Browsers have always been playgrounds for adversarial interoperability, from the pop-up wars to the browser wars. Thanks to open standards and a mutual disarmament rule for software patents among browser vendors, it's very hard to use the law to punish toolsmiths who make ad-blocking technologies (not that that's stopped people from attempting it).
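As a concrete picture of that "throw away the ad parts" step, here is a minimal sketch of a blocking extension using the WebExtensions webRequest API of this era. The URL patterns are invented stand-ins; real ad-blockers ship large, regularly updated filter lists such as EasyList and far more sophisticated matching:

```typescript
// Minimal ad-blocking sketch for a 2019-era WebExtension (Manifest V2).
// Assumes the extension declares "webRequest", "webRequestBlocking",
// and host permissions in its manifest.
declare const browser: any; // WebExtensions global (e.g. via webextension-polyfill)

// Placeholder patterns; a real blocker uses a maintained filter list.
const AD_URL_PATTERNS: string[] = [
  "*://ads.example.com/*",
  "*://*.tracker.example/*",
];

// Cancel any network request whose URL matches an ad/tracker pattern;
// the rest of the page loads normally.
browser.webRequest.onBeforeRequest.addListener(
  () => ({ cancel: true }),
  { urls: AD_URL_PATTERNS },
  ["blocking"],
);
```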
Adversarial interoperability is often a way for scrappy new upstarts to challenge the established players—like the company that got sued by IBM's printer division for making its own toner cartridges and grew so big it now owns that printer division (!). But adversarial interoperability is also a way for the public to assert its rights and push back against unfair practices. Take-it-or-leave-it deals are one thing when the market is competitive and you can shop around for someone with better terms of service, but in highly concentrated markets where everyone has the same rotten deal on offer, adversarial interoperability lets users make a counteroffer: "How about nah?"

But for How Long?

Concentration in the tech industry—including the "vertical integration" of browsers, advertising networks, and video content under one corporate umbrella—has compromised the Internet's openness. In 2017, the World Wide Web Consortium published its first-ever "standard" that could not be fully implemented without permission from the giant tech and media companies (who have since refused that permission to anyone who rocks the boat). In publishing that standard, the W3C explicitly rejected a proposal to protect adversarial interoperability by extracting legally binding nonaggression promises from the companies that make up the consortium.

The standard the W3C published—Encrypted Media Extensions (EME), for restricting playback of video—comes with many dangers for would-be adversarial interoperators, notably the risk of being sued under Section 1201 of the Digital Millennium Copyright Act, which bans tampering with "access controls" on copyrighted works and holds out both criminal and civil liability for toolsmiths who traffic in programs that let you change the rules embodied by EME. One driving force behind the adoption of EME was the ever-tighter integration between major browser vendors like Google, video distributors, and advertising networks. This created a lopsided power dynamic that ultimately ended up in the standardization of a means of undoing the configurable Web—where the user is king. EME is the first crack in the wall that protected browsers from those who would thwart adversarial interoperability and take "how about nah?" off the table, leaving us with the kind of take-it-or-leave-it Web that the marketing industry has been striving for since the first pop-up ad.

Original Cult of the Dead Cow Members Keep it "Wacky, Weird, and Wild" to Celebrate Joseph Menn's Newest Book (Thu, 25 Jul 2019)
On June 18, the Internet Archive hosted a reading and panel discussion in celebration of Joseph Menn's new book Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World. As the evening's event began, an archived video of Cult of the Dead Cow (cDc) interviews from 1996 played silently on a wall-mounted TV, featuring some of the very same original members who would be a part of that evening's panel. In addition to the strong turnout at the Internet Archive itself, those unable to attend in person were able to watch the event livestreamed on the Internet Archive's YouTube channel.

Guests enjoyed light refreshments and mingled before moving into the main auditorium to be welcomed by Internet Archive founder Brewster Kahle. After Kahle shared a brief history of the Internet Archive's mission, Executive Director of the Electronic Frontier Foundation Cindy Cohn took the stage as MC for the evening. Cohn expressed the importance of remembering the "wacky, weird, and wild" history of Internet security, and acknowledged the cDc's contributions to improving the community before introducing Joseph Menn to the stage. Menn recounted the beginnings of cDc and cybersecurity by highlighting notable hackers and their contributions throughout the years, including crediting the cDc with coining the term "hacktivism" by "using it at every interview they could at DEFCON to get it into the English language." Looking forward, he went on to express how "the rank-and-file in Silicon Valley now are the most important heirs of the cDc's tradition of critical moral thinking."

Following Menn, Cohn retook the stage and introduced the panel speakers: Chris Rioux AKA DilDog, Back Orifice 2000 author and Veracode founder; Window Snyder, cDc fellow traveler and former core security staffer at Microsoft, Apple, and now Square; and Michael "Misha" Kubecka AKA Omega, cDc's editor, media list curator, and archivist. Each took turns sharing what had originally drawn them to the cDc and their individual reasons for staying.

[Photo, left to right: Josh Buchbinder (cDc), Cindy Cohn, Adam O'Donnell (cDc), Window Snyder, Mike Seery (cDc), Chris Rioux (cDc), Katie Moussouris, Misha Kubecka (cDc), Joseph Menn.]

After sharing, Cohn began to read questions from the audience, starting with a question for Window Snyder:

Question: If you could go back in time and change one thing to make the Internet more secure, where and when would you go, and what would you do?

Window Snyder: If we had taken what we knew back then and applied it to all the different systems that we were building at the time and also made it easy. Easy for the developers to take an existing API, an existing library, and use it to encrypt the security of those systems. Easy for the consumer to not have to go through a thousand steps to get full-volume encryption on their Windows device. If everyone is following the same steps, that's something we could automate. We didn't do that end of that work, and I don't think there was a lot of value given to that aspect of security, which is the part that makes it accessible to others—the democratization of security—until we had significant security problems already, had an ecosystem of malware built upon taking advantage of consumer information on these devices. There was an opportunity there and we missed it.

The panel continued to answer questions from the audience, and as the evening concluded, several excellent questions still remained.
Rather than let these questions go unanswered, the speakers were able to follow up via email with some further insight:

Question: How do we, as individuals, cope with the commercialization of our digital identities? Practically, psychologically, spiritually?

Chris Rioux: Demand a constitutional right to privacy. You don't cope, you fight it. Demand laws that allow you to withdraw your identity from databases. Assert a legal right to manage your data and that withdrawing access to your data from corporations is your right. This includes derivative works, including your connected social graph. Corporations won't hand you these rights without you punching them in the face with the law.

Question: With the rapid growth of fake photos and video technology, do you think it is possible to still protect authenticity and anonymity in the media?

Chris Rioux: Yes. Digital signing of video can make fakes harder for the average person. The chips that are recording the video in your mobile devices and cameras can be using cryptography to digitally sign the media as unaltered. While this would prevent some forms of modification of the video, a technical solution that allowed a small amount of compositing and resizing/cropping while maintaining the digital signature is possible. Media outlets should insist on using only signed media, where the origin of the video can be proved.

Question: Tell us about the name Cult of the Dead Cow?

Joseph Menn: Like many things in cDc, it was a bit of an inside joke—a reference to an abandoned slaughterhouse in Lubbock, Texas, which was where the founders lived. It was a creepy hangout for them. As teenagers on the early Internet, it seemed important to be a bit sinister. Otherwise, what would be the attraction?

Question: Do you see any contemporary groups/cons/etc. carrying on the cDc spirit?

Michael Kubecka: Germany's Chaos Computer Club (CCC) has long been a socially conscious organization using its tech skills and wry sense of humor to highlight issues of surveillance and privacy. Telecomix's technical support of ordinary Egyptians during the Arab Spring, helping them evade government censorship, was laudable. SecureDrop, developed by Aaron Swartz and Kevin Poulsen to facilitate secure communication between whistleblowers and journalists, will help bring sunshine to dark places. The good news is that hacktivism is no longer the exclusive domain of hackers and hacking groups. To name just two examples: Joshua Browder wrote a chatbot to automate the process of contesting parking tickets, saving ordinary people millions of dollars. And now that marijuana is legal in California, Los Angeles County and Code for America are using an algorithm to clear more than 50,000 pot convictions, restoring dignity and employability to countless people.

The event ended with a final question from the audience: "What ways do you recommend I spread the word and get people to think about ethics?" In keeping with the cDc's history and focus on community, the speakers stressed building interpersonal relationships, practicing empathy, and focusing on public service. As Cohn brought the event to a close, she encouraged everyone to meet with others who care about ethics, the future, and having fun with technology, starting with the people already in the room.

For those able to attend in person or watch via the livestream, the event was an insightful look back into the not-so-distant past of cybersecurity.
Much of the discussion demonstrated how the hacking community began as exclusive and inaccessible, growing to eventually encompass, and ultimately prioritize, today's average user. The need for people willing to take on the increasing challenges surrounding technology is greater than ever before, and the cDc's notoriously unconventional legacy continues to inspire us to rise up and face them, tongue firmly in cheek.

Thank Q, Next (Thu, 25 Jul 2019)
In its next release, Android plans to up its privacy game. But the operating system still caters to ad trackers at its users’ expense.

The newest release of Android, dubbed “Q,” is currently in late-stage beta testing and slated for a full release this summer. After a year defined by new privacy protections around the world and major privacy failures by Big Tech, Google is trying to convince users that it is serious about “protecting their information.” The word “privacy” was mentioned 22 times during the 2019 Google I/O keynote. Keeping up that trend, Google has made—and marketed—a number of privacy-positive changes to Android for version Q. Many of the changes in Q are significant improvements for user privacy, from giving users more granular control over location data to randomizing MAC addresses by default when connecting to WiFi networks. However, in at least one area, Q’s improvements are undermined by Android’s continued support of a feature that allows third-party advertisers, including Google itself, to track users across apps. Furthermore, Android still doesn’t let users control their apps’ access to the Internet, a basic permission that would address a wide range of privacy concerns.

One ID to rule them all

Q places new restrictions on non-resettable device identifiers like the IMEI number and serial number. Apps will need to request a new “Read privileged phone state” permission to access them. These changes are good: they will help prevent apps from tracking users based on information they can’t modify or reset, and they obey the principle of least privilege: apps that don’t absolutely need access to potentially sensitive information shouldn’t have it.

Unfortunately, Android Q will still allow unrestricted access to Android’s own, custom-made tracking identifier. Android generates and exposes a unique device identifier, called an “advertising ID,” that allows tracking advertisers to link your behavior across different apps. The ad ID can be thought of as a tracking cookie, visible by default to every app on your device, that can’t be restricted or deleted (though it can be reset). As of the latest release, Google encourages ad trackers to eschew other device identifiers, like IMEI, in favor of the ad ID. Facebook and other targeting companies allow businesses to upload lists of ad IDs that they have collected in order to target those users on other platforms.

Android includes an “opt out of ad personalization” checkbox, buried deep in the settings, that allows users to indicate that they don’t want to be tracked by their ad ID. Checking it should delete the ID entirely, or at least restrict apps’ access to it, right? Wrong. Instead, the checkbox doesn’t affect the ad ID in any way. It only encodes the user’s “preference,” so that when an app asks Android whether a user wants to be tracked, the operating system can reply “no, actually they don’t.” Google’s terms tell developers to respect this setting, but Android provides no technical safeguards to enforce this policy (see the sketch below).

You can view your advertising ID on Android by heading to Settings > Google > Ads, and you can reset it by tapping Reset advertising ID. This will cause your phone to generate a new, unique ad ID that is unrelated to the old one. While it’s nice that Google gives you some control over your ad ID, neither a preference flag nor a simple “reset” will actually prevent anyone from tracking you.
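Here is a schematic sketch of why that opt-out is only advisory. The types and function names are invented for illustration (on a real device this information comes from Google Play Services, not any web API); the point is that honoring the flag is a choice each tracking SDK makes, not something the operating system enforces:

```typescript
// Conceptual model of the ad ID "preference flag" problem. All names
// here are hypothetical; this is not the Android or Play Services API.

interface AdIdInfo {
  adId: string;             // resettable, but unique per device
  limitAdTracking: boolean; // the user's stated preference
}

function sendToAdServer(adId: string, event: string): void {
  // Stand-in for a network call to a tracking server.
  console.log(`reported "${event}" for device ${adId}`);
}

// A policy-abiding SDK checks the flag before tracking...
function trackIfPermitted(info: AdIdInfo, event: string): void {
  if (info.limitAdTracking) return; // honors the opt-out
  sendToAdServer(info.adId, event);
}

// ...but nothing technical stops a misbehaving SDK from doing this:
function trackRegardless(info: AdIdInfo, event: string): void {
  sendToAdServer(info.adId, event); // flag silently ignored
}
```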
Apps on your device can access more than enough information to allow them to link your old ID to your new one if they so choose. Once again, Google politely instructs trackers to “respect the user's intention in resetting the advertising ID,” but does not indicate how this is enforced.

Apple’s iOS has a nearly identical “Identifier for Advertisers” (IDFA), which is also available to developers without any special permissions. As with Google, Apple’s decision to allow this kind of tracking by default conflicts with its privacy-focused marketing campaign. Unlike Google, Apple does give users the ability to turn off tracking completely by setting the IDFA to a string of zeros. On Android, there is no way for the user to control which apps can access the ID, and no way to turn it off. While we support Google taking steps to protect other hardware identifiers from unnecessary access, its continued support of the advertising ID—a “feature” designed solely to support tracking—undercuts the company’s public commitment to privacy.

Internet access: the permission that isn’t

The advertising ID should not be enabled by default, and users should have a way to turn it off for good. But apps can’t collect your advertising ID, or any other kind of personal information, without access to the Internet. Much of the most egregious tracking in the Play Store is performed by apps that have no business on the Internet at all, like single-player games, stopwatches, and “flashlights.”

This should be simple. If an app doesn’t need access to the Internet, it shouldn’t have it. And users should be able to decide which apps can and can’t share data over the network. But neither iOS nor Android has an “Internet” permission that users can grant or revoke. Every developer of every app has access to as much data as it can gather whenever the device is online. It’s time for Google to fix it already.

EFF Extensions Recommended by Firefox (Thu, 25 Jul 2019)
Earlier this month, Mozilla announced the release of Firefox 68, which includes a curated "list of recommended extensions that have been thoroughly reviewed for security, usability and usefulness." We are pleased to announce that both of our popular browser extensions, HTTPS Everywhere and Privacy Badger, have been included as part of the program. Now, when you navigate to the built-in Firefox add-ons page (URL: about:addons), you'll see a new tab: "Recommendations," which includes HTTPS Everywhere and Privacy Badger among a list of other recommended extensions. In addition, they will be highlighted in Add-ons for Firefox and in add-on searches. What does this mean for users who already have our extensions installed? If you initially installed them from addons.mozilla.org or the recommendation list, it means that there will be a slight delay after we update the extensions while Mozilla reviews the new versions for security, utility, and user experience. If you installed the self-hosted extensions directly from eff.org without going through Mozilla, you'll get the updates right away after a routine automated check. Either way, you can rest assured that EFF has audited every piece of software we release for security and performance problems. We're thrilled that Mozilla is highlighting privacy and security-focused extensions, and grateful that HTTPS Everywhere and Privacy Badger are included in that list.

The FTC-Facebook Settlement Does Too Little to Protect Your Privacy (Wed, 24 Jul 2019)
EFF is disappointed by the terms of the settlement agreement announced today between the Federal Trade Commission (FTC) and Facebook. It is grossly inadequate to the task of protecting the privacy of technology users from Facebook’s surveillance-based system of social networking and targeted advertising.

This settlement arises from the FTC’s 2012 settlement order against Facebook, concerning the company’s deceptive statements about user privacy. Facebook violated the 2012 FTC order through its role in the Cambridge Analytica scandal, which violated the privacy rights of millions of Facebook users. Today’s FTC-Facebook settlement does not sufficiently protect user privacy. For example:

- The agreement does not limit how Facebook collects, uses, and shares the personal information of its users. It is not enough for the agreement to require Facebook to conduct its own privacy review of new products; that just empowers Facebook to decide its own collection, use, and sharing practices.
- The agreement does not provide public transparency regarding how Facebook collects, uses, and shares personal information, or how Facebook implements the FTC settlement. It is not enough for only Facebook and the government to have this information.
- The agreement does nothing to address Facebook’s market power in social networks and Internet advertising, and may risk cementing Facebook’s market power.

These deficiencies are not cured by the $5 billion fine against Facebook. For a company the size of Facebook, this is not an effective deterrent against future violations of user privacy. If the FTC were serious about putting a dent in the privacy problems created by Facebook’s targeted advertising business model, it could have taken aim at two of Facebook’s biggest sources of information: data brokers and third-party tracking.

Some provisions of the settlement agreement are positive. For example, it requires Facebook to delete existing face recognition templates, and bars Facebook from creating new ones, absent the user’s informed opt-in consent. Also, the settlement bars Facebook from using phone numbers provided by users to enhance their security (i.e., for two-factor authentication) for advertising purposes. Unfortunately, the settlement does not address Facebook’s other egregious abuses of user phone numbers, including exposing two-factor authentication numbers to public reverse lookup, and vacuuming up “shadow” contact information that users never gave to Facebook in the first place.

Taken as a whole, this settlement is bad news for consumer privacy. But this is bigger than Facebook. Its surveillance-driven targeted ad business model is common across the web. To protect users’ privacy rights, we need solid consumer data privacy legislation.

When Will We Get the Full Truth About How and Why the Government Is Using Face Recognition? (Tue, 23 Jul 2019)
Earlier this month, the House Committee on Homeland Security held a hearing to discuss the role of face recognition and other invasive biometric technologies in use by the Department of Homeland Security (DHS). Despite pushback from some lawmakers on the committee, John Wagner of U.S. Customs and Border Protection (CBP), Austin Gould of the Transportation Security Administration (TSA), Joseph DiPietro of the Secret Service, and Charles Romine of the National Institute of Standards and Technology (NIST) argued that face recognition and biometric surveillance are safe, regulated, and essential for the purposes of keeping airports and U.S. borders secure. This hearing made clear: this technology is not well-regulated, it does impact the privacy of travelers, and its effectiveness has yet to be proven. Oddly enough, the agency most in need of a check on how it uses these technologies, Immigration and Customs Enforcement (ICE), was not in attendance at this hearing.

By far, the most questions from the committee were directed toward CBP, which recently announced that data, including photographs taken of license plates at checkpoints, had been accessed in a hack of the third-party contractor that provided the cameras. Although Wagner said CBP was unaware that the camera provider could extract data, he offered little assurance—outside of saying that CBP would review protocol—that cameras feeding traveler photos into face recognition software could avoid similar vulnerabilities. What this exchange makes clear is that the best way to avoid the risk of having photographs of travelers’ faces hacked and leaked to the world is not to put up the cameras in the first place.

Chairman Thompson also expressed concern over face recognition software’s well-documented tendency to have a higher error rate when analyzing the faces of people of color. On the mind of Chairman Thompson was the recent test of Amazon’s Rekognition software, which falsely matched 28 members of Congress to mugshots in a database. As he stated in the hearing, while not all of the members of Congress misidentified were people of color, a disproportionate 40% were. Although false positives continue to be a grave concern as face recognition becomes more ubiquitous, improving the software’s accuracy would not negate the more overwhelming dangers posed by face recognition. The use of face recognition and other biometric surveillance threatens to chill free speech and the freedom to travel. This is particularly true for people of color, religious minorities, and other groups who have been stereotyped, and whose presence at protests, in airports, or in public has been met with unfair suspicion and sometimes violence by authorities.

Another one of our concerns is the slow expansion of how and why CBP is using face recognition and Rapid DNA identification at the border. Wagner said, “U.S. citizens are clearly outside the scope of the biometric entry/exit tracking.” However, he went on to say, “The technology we’re using for the entry/exit program, we’re also using to validate the identity of a U.S. citizen. Someone has to do that. Someone has to determine who is in scope or out of scope.” Determining that involves scanning, but allegedly not storing images for a prolonged period of time, or sending those images to DHS for additional screening. This is exactly what face recognition on U.S. citizens sounds like.
Wagner also had no specific time frame for when CBP would release a long-awaited report documenting how its security measures at the border have helped to keep the United States safe. These invasive technologies continue to be deployed under the promise that they are deterring countless criminals and terrorists. It is past time for CBP to prove that these averted threats actually exist. It’s also not convincing when a representative of the TSA, Austin Gould, boasts that 99% of all people traveling through its face recognition airport security trial were happy to let the government scan their faces. After all, we know it can be quite tricky and unclear for a person to assert their right to opt out of such invasive procedures.

People across the country are slowly recognizing the threat that face recognition poses to privacy, and are pushing to ban its use. In spite of this changing public perception, however, the U.S. government continues to push its expanding use of face recognition and biometric surveillance. It’s up to us to stop it.

Thank Laws Supported By AT&T and Comcast for California’s Broadband Monopoly Problem (Tue, 23 Jul 2019)
If you, like a great many Californians, have shopped for high-speed broadband options (in excess of 100 Mbps) and found that you always ended up with Comcast, it is because the state’s legislature has failed to promote broadband competition for more than ten years. That failure has resulted in the death of competitive access in many parts of the state, with a disproportionate impact on low-income residents and rural Californians. With the exception of last year's S.B. 822 (the state’s net neutrality bill) and A.B. 1999 (legislation that made it legal for local governments to build their own ISPs), the big ISPs have gotten exactly what they want out of Sacramento—which is for the state to abandon its residents to broadband monopolies so they can charge monopoly rents.

Take, for example, the debate this year regarding an AT&T and Comcast bill being moved by Assembly Member Lorena Gonzalez (A.B. 1366). Very few lawmakers in the state’s legislature have willingly opposed this bill, which will hurt consumers. The legislation’s premise is in lockstep with the Trump Administration’s FCC agenda to abandon all means of using the law to promote competition policy. The bill maintains a restraint on state and local authority to promote broadband access competition that was originally instituted in 2012 after heavy lobbying by the major ISPs.

Take Action: Don’t Let California’s Legislature Extend Broadband Monopolies for Comcast and AT&T

Commissioner Martha Guzman Aceves, a California regulator from the California Public Utilities Commission (CPUC), pleaded with the Senate Utilities Committee earlier this month to block the bill at its most recent hearing (see video below). She cited the fact that millions of Californians face a monopoly market that lacks any semblance of competition. The Commissioner further stated that the lack of access and lack of investment in the broadband infrastructure of California carries serious risks to public safety, as these are essential means of communication during emergencies.

[Embedded video (YouTube): Commissioner Guzman Aceves’ testimony before the Senate Utilities Committee.]

But despite all of these facts and the realities on the ground, only three state Senators voted against the bill in committee—Sens. Hill, McGuire, and Wiener. The legislation is now heading for the Senate floor when session resumes in a month.

Just How Bad is the High-Speed Market in California? Really Bad!

Thanks to help from the Institute for Local Self-Reliance, we have the latest data to show what the future holds if the state decides to do nothing—the outcome of passing A.B. 1366—broken down by Senate district. Literally every California Senate district faces a monopoly broadband market in high-speed access—with the exception of San Francisco, which has enjoyed the advent of gigabit broadband competition. That is mostly thanks to one small regional ISP called Sonic, which was recently found to be the country’s fastest ISP.

[Chart: every California Senate district except San Francisco faces a broadband monopoly. Competition maps and charts based on FCC Form 477 December 2017 v.2 and FCC Population and Household Estimates 2017, assembled by the Institute for Local Self-Reliance for EFF.]
At the end of 2017, the most recent government data showed that a vast majority of Californians did not have access to gigabit networks, which can be delivered by fiber or another high-speed telecommunications standard, DOCSIS 3.1. 2018’s data is still being compiled by the government, but we know two things that have happened between 2018 and today that would inform the data. First, the cable industry has generally converted its systems across the board to DOCSIS 3.1, allowing it to sell broadband download speeds (not uploads) in the gigabit range. That means the percentage of Californians with no access to gigabit networks will drop, but the "one choice" monopoly percentage will grow for 2018—not the green bar indicating one competitor.

Second, large competitors are leaving the market, not entering it. AT&T, the only major national ISP in California that can rival Comcast, has abandoned its plans to build fiber to the home and has started laying off the workers who build those fiber networks. If the major national competitor to Comcast is not building infrastructure that will rival DOCSIS 3.1, it means it does not intend to compete with Comcast. Rather, what we have seen from AT&T is an intent to invest in its wireless products and promote (albeit falsely at times) 5G wireless access. Despite AT&T’s efforts to argue otherwise, wireless has never been, and never will be, competitive with wireline services in the broadband market when it comes to capacity, reliability, and speeds.

Choosing not to compete would normally prompt a regulatory and policy response. But if ISPs can strip the state regulator and local governments of their authority to promote competition, as envisioned under A.B. 1366, then they do not have to worry about any response, since the FCC also abandoned its authority in this area in 2017.

Our future does not have to look like this. Right now, each Californian has a chance not only to tell their state Senator to vote NO on A.B. 1366, but also to demand that lawmakers start doing their jobs and promote universal, competitive, and affordable access to 21st century broadband infrastructure. It is long past time the California legislature realized it has been too deferential to the incumbent ISPs, and that their constituents are suffering monopoly rents today due to its unwillingness to act. Rather than renew a law crafted by AT&T and Comcast—and handed to a willing legislator in Assembly Member Lorena Gonzalez—it is about time they started looking at states that are skyrocketing past California when it comes to broadband access. In Utah, people have a dozen choices in gigabit fiber broadband. North Dakota now has a staggering 60 percent of its homes connected to fiber networks, despite being a very rural state. New York retained its authority and expert state regulator over broadband, and was prepared to literally kick its cable company out of the state for failing to deliver for its residents—forcing the ISP to invest in the state and upgrade its facilities as part of a settlement. California leaders can also learn from the EU, which adopted a gigabit-for-all plan years ago, or the advanced Asian markets that long ago surpassed the United States. The point is, doing nothing and passing a law that makes doing nothing the mandate of the state only favors the incumbents. That is why they wrote the legislation. The only result of renewing this law, via A.B.
1366, is that a vast majority of Californians will remain stuck with Comcast as their only choice for a very long time.

New Chilean ¿Quién Defiende Tus Datos? Report Shows Greater ISPs Commitment to User Privacy (Tue, 23 Jul 2019)
Derechos Digitales, the leading digital rights organization in Chile, published its third annual Who Defends Your Data report today, in collaboration with EFF. The report assesses whether the country’s top ISPs enforce privacy policies and practices that put their users first. Kurt Opsahl, EFF’s Deputy Executive Director and General Counsel, joined the launch in Santiago de Chile, which highlighted the main findings and achievements of the report.

ISPs have made considerable strides forward in this year's edition. Five of the six ISPs now publish transparency reports; four have released public guidelines on how and when they hand over users' data to government officials. Claro leads the pack in protecting its customers’ data, with WOM close behind. Both have policies that are public and privacy-protective, publish clear and detailed law enforcement guidelines, and have made significant progress toward notifying users about authorities’ requests for personal information—a real breakthrough for users' rights throughout Latin America. VTR, Movistar, and GTD Manquehue still have a long way to go to catch up. A summary of the latest Who Defends Your Data? report is below. The full report, including details about each company, is available in Spanish.

Evaluation Criteria

Data Protection: Does the company publish its internet service contract and its data protection policy on its website? ISPs were judged not only on whether they published their policies, but also on the policies’ privacy-protective contents.
- Full star: Policies prominently published, clear, reflecting key user-centric data protection principles, in line with current national legislation, and identifying a point of contact to address user grievances.
- Partial star: Partial compliance.

Transparency: Does the company publish a transparency report?
- Full star: Published a transparency report on users’ data management and handling of government data requests. The report must include the specific number of data requests the ISP has approved or rejected; a summary of the requests by investigating authority, type, and purpose; whether the report disaggregates the requests by geographic region; and whether third parties managing user data do so in a privacy-protective manner and report on the government data requests they receive.
- Partial star: Published transparency reports, but did not specifically refer to data protection and the monitoring of communications.

User Notification: Does the company notify users about government requests for their information?
- Full star: Notifies users about authorities' requests for access to their personal information at the earliest possible moment allowed under the law.
- Partial star: Making progress toward implementing a notification system.

Law Enforcement Guidelines: Does the company publish the procedure, requirements, and legal obligations that the government must comply with when requesting personal information about its users?
- Full star: Specifically outlines, on its website, the requirements authorities must comply with when requesting user data. The description must be easy to understand; it must specify the procedures the company uses to respond to data requests from authorities; and it must indicate how long the company retains user data.
- Partial star: Publishes information on how it handles user data, but does not fully specify the requirements that authorities must comply with.
Commitment to Privacy: Has the company defended privacy and actively protected users' data, either in court or as part of a legislative discussion in Congress? Full star: challenged government requests in court as unlawful or disproportionate. Partial star: publicly defended users outside of court, whether by opposing bills or administrative procedures that threaten user privacy or by joining a multi-stakeholder coalition in favor of users' rights.

Main Findings

[Chart: Who Defends Your Data? Chile 2019 results]

Compared to last year's edition, the new report gave lower overall scores for the companies' data protection policies, with Movistar, GTD, and VTR lagging behind. That's because the 2019 report raised the standards, requiring that ISPs not only publish clear data protection policies but also go a step further and commit to the privacy-protective principles considered in the research. WOM and Claro were the only two to do so, maintaining their perfect scores. WOM, VTR, and Claro stood out for supporting users' right to notification. For years, similar reports across Latin America have underscored ISPs' reluctance to lay out a proper procedure for alerting users to government data requests, in contrast to notification practices now common in the United States. In previous installments of Derechos Digitales' report, this was a serious shortfall for the country's ISPs. This year's edition, however, shows Chile making significant improvements. WOM, VTR, and Claro laid out users' right to be notified within their policies. Claro went above and beyond in making it easy for its users, even crafting a formal letter users can submit to gather more information in the event of a notification. This is crucial for ensuring users' ability to challenge a request and to seek remedies when it is unlawful or disproportionate. ISPs have also been hesitant to challenge illegal and excessive requests. Chile's report indicates that many ISPs are still failing to confront such requests in the courts on behalf of their users—except one. This year, Entel got top marks because, out of the several ISPs contacted for the same information, it was the only one to refuse the government's request for an individual's data. Claro and WOM made strides as well, the former for supporting legislative initiatives favoring users' rights and the latter for declining to hand over, in administrative procedures, personal information it considered confidential. Finally, this year's edition shows more stars shining in transparency reports and public law enforcement guidelines for access to users' data. VTR and Entel have now joined WOM and Claro in publicly sharing their law enforcement guidelines. All four received full stars, indicating not only that their guidelines were published but also that their contents met the standards. And except for GTD Manquehue, all of the ISPs published transparency reports—a huge improvement over the almost three full stars given last year. It signals a larger trend within Chile that will hopefully make transparency reports an industry norm. All five of the reports covered in this category meet the baseline standards laid out in the research.

Conclusions

There's a clear gap between Chilean companies when it comes to defending users' privacy. Claro and WOM are comfortably in the lead in protecting their customers, with Entel not far behind. As for Movistar and GTD Manquehue, there's a great deal they need to improve on.
Derechos Digitales’ work is part of a series of reports throughout Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies. Fundación Karisma in Colombia published its report in late 2018, Hiperderecho in Peru has launched its second edition this year, and IPANDETEC in Panamá is about to start its own series.  

Department of Commerce: Address Privacy Before Licensing Satellites to Watch Over Us (Fri, 19 Jul 2019)
EFF legal intern Roger Li co-wrote this blog post. Satellites could soon track our movements from space, allowing for surveillance on a mass scale that most people haven't ever contemplated. Yet U.S. rules governing commercial satellite licenses require satellite companies to disclose the unenhanced data they collect to governments around the world. This week, EFF filed comments with the Department of Commerce and the National Oceanic and Atmospheric Administration (NOAA) urging the agencies to take privacy into consideration when they issue satellite licenses. U.S. companies and research institutions that want to launch private satellites must first obtain a license from the federal government. As satellites have become smaller and less expensive to launch, and as the market for the images and other data collected by satellites has grown, satellite companies have been pushing the government to streamline the licensing process. This spring, the Department of Commerce issued a proposed rule designed to do just that, and requested comments from interested parties. Noticeably absent from the proposed rule are any new protections to address the clear privacy risks raised by satellite images and recordings and by the existing rule's data-sharing requirement. EFF's comment urges the Department of Commerce to address these concerns.

Private Satellites Pose Substantial Risks to Privacy and Civil Liberties

Satellites are capable of highly advanced and continuous surveillance through high-resolution imaging, thermal imaging, and near-real-time video, and their capabilities are increasing every day. Private satellites are currently allowed to create images at resolutions as fine as 25 centimeters, enough to discern something the size of a mailbox. Satellites can also conduct thermal imaging, hyperspectral imaging (which captures electromagnetic wavelengths outside the visible spectrum), and atmospheric monitoring. Technologies like these can be used to cut through cloud cover, to determine the height of objects, and even to "identify underground bunkers or nuclear materials." A single one of these satellites can orbit the Earth, revisiting and reimaging the same area every 90 minutes, and several satellite operators advertise archives of images dating back ten to nearly twenty years. These vast archived datasets—which include data on private citizens—give anyone with access the ability to enter a virtual time machine and view and monitor past actions for as long, and as far back, as a satellite operator retains data. As a condition of a license, private satellite operators are required to, upon request, provide unenhanced data to "the government of any country (including the United States)" if the data covers that country's territory, "unless doing so would be prohibited by law or license conditions." Governments can use these satellite images and video to surveil our activities, and searchable archives can help them track that activity over time. Governments could also combine satellite data with information gathered from other law enforcement surveillance technologies, like automated license plate readers, street-level surveillance cameras, and face and object recognition, to deduce an even more detailed log of a person's movements. With the advent of real-time video, private satellites could subject the entire world to continuous 24/7 surveillance. Government access to sensitive information like this raises stark Fourth Amendment concerns.
As the Supreme Court recently recognized in Carpenter v. United States, a case addressing the collection of historical location information, "time-stamped data provides an intimate window into a person's life, revealing not only his particular movements, but through them his 'familial, political, professional, religious, and sexual associations.'" The Supreme Court has also expressed concerns about surveillance that reaches inside the home (including via thermal imaging), surveillance that logs travel in public, surveillance over time that allows law enforcement to look back in time, and the ability to surveil everyone—all concerns implicated by private satellites.

These Risks Can Be Mitigated Through a Few Key Changes

To address these concerns, EFF suggests several changes to Commerce's proposed rule that would increase transparency and diminish satellites' privacy risks. These changes include: expanding the disclosure of privacy risks in licensing applications, conducting regular audits of government data requests, incorporating privacy considerations into the criteria for high-risk applicants, and conducting a further rulemaking on these privacy concerns. We hope the Department of Commerce will consider these changes in writing its final rule. By doing so, it could begin to address the current and growing risks that private satellites pose to our privacy and our civil liberties.

Don’t Let Encrypted Messaging Become a Hollow Promise (Fri, 19 Jul 2019)
Why do we care about encryption? Why was it a big deal, at least in theory, when Mark Zuckerberg announced earlier this year that Facebook would move to end-to-end encryption on all three of its messaging platforms? We don't just support encryption for its own sake. We fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world. And although encryption may be the backbone, protecting digital security and privacy encompasses much more: it's also about the additional technical features and policy choices that support the privacy and security goals encryption enables. But as we careen from one attack on encryption to another by governments from Australia to India to Singapore to Kazakhstan, we risk losing sight of this bigger picture. Even if encryption advocates could "win" this seemingly forever crypto war, it would be a hollow victory if it came at the expense of broader security. Some efforts—a recent proposal from Germany comes to mind—are as hamfisted as ever, attempting to give government the power to demand the plaintext of any encrypted message. But others, like the GCHQ's "Ghost" proposal, purport to give governments the ability to listen in on end-to-end encrypted communications without "weakening encryption or defeating the end-to-end nature of the service." And, relevant to Facebook's announcement, we've seen suggestions that providers could still find ways of filtering or blocking certain content, even when it is encrypted with a key the provider doesn't hold. So, as governments and others try to find ways to surveil and moderate private messages, we have to ask: What policy choices are incompatible with secure messaging? We know that the answer has to be more than "don't break encryption," because, well, GCHQ already has a comeback to that one. Even when a policy choice technically maintains the mathematical components of end-to-end encryption, it can still violate the expectations users associate with secure communication. So our answer, in short, is: a secure messenger should guarantee that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about. Any time a messaging app has to add "unless..." to that guarantee, whether in response to legislation or internal policy decisions, it's a sign that the messenger is delivering compromised security to its users. EFF considers the following to be signs that a messenger is not delivering end-to-end encryption: client-side scanning, law enforcement "ghosts," and unencrypted backups. In each of these cases, your messages remain between you and your intended recipient, unless...

Client-side scanning

Your messages stay between you and your recipient...unless you send something that matches a database of problematic content. End-to-end encryption is meant to protect your messages from any outside party, including network eavesdroppers, law enforcement, and the messaging company itself. But the company could determine the contents of certain end-to-end encrypted messages if it implemented a technique called client-side scanning. Sometimes called "endpoint filtering" or "local processing," this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a database of "hashes," or unique digital fingerprints, usually of images or videos.
If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge. Hash-matching is already a common practice among email services, hosting providers, social networks, and other large services that allow users to upload and share their own content. One widely used tool is PhotoDNA, created by Microsoft to detect child exploitation images. It allows providers to automatically detect and prevent this content from being uploaded to their networks and to report it to law enforcement. But because services like PhotoDNA run on company servers, they cannot be used with an end-to-end encrypted messaging service, leading to the proposal that providers of these services should do this scanning "client-side," on the device itself. The prevention of child exploitation imagery might seem to be a uniquely strong case for client-side scanning on end-to-end encrypted services. But it's safe to predict that once messaging platforms introduce this capability, it will be used to filter a wide range of other content. Indeed, we've already seen a proposal that WhatsApp create "an updatable list of rumors and fact-checks" that would be downloaded to each phone and compared to messages to "warn users before they share known misinformation." We can expect to see similar attempts to screen end-to-end messaging for "extremist" content and copyright infringement. There are good reasons to be wary of this sort of filtering of speech when it is done on public social media sites, but using it in the context of encrypted messaging is a much more extreme step, one that fully undermines users' ability to carry out a private conversation. Because all of the scanning and comparison takes place on your device, rather than in the cloud, advocates of this technique argue that it does not break end-to-end encryption: your message still travels between its two "ends"—you and your recipient—fully encrypted. But it's simply not end-to-end encryption if a company's software is sitting on one of the "ends" silently looking over your shoulder and pre-filtering all the messages you send. Messengers can choose to implement client-side scanning. However, if they do, they violate the user expectations associated with end-to-end encryption and cannot claim to be offering it. The sketch below illustrates the basic mechanic.
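To make the "software sitting on one of the ends" point concrete, here is a minimal, hypothetical sketch of client-side scanning in Python. It is our illustration, not any messenger's actual code, and it substitutes SHA-256 for the perceptual hashes (such as PhotoDNA's) that real systems use:

    import hashlib

    # Hypothetical fingerprint database pushed to the device by the provider.
    # Real deployments would use perceptual hashes rather than a cryptographic
    # hash like SHA-256, which only matches byte-for-byte identical files.
    BLOCKED_FINGERPRINTS = {
        hashlib.sha256(b"example blocked image bytes").hexdigest(),
    }

    def passes_client_side_scan(attachment: bytes) -> bool:
        """Check an outgoing attachment against the local fingerprint list.

        This runs on the sender's device, before any end-to-end encryption,
        so the message can still be "end-to-end encrypted" in transit even
        though the provider's software has already inspected its contents.
        """
        return hashlib.sha256(attachment).hexdigest() not in BLOCKED_FINGERPRINTS

    outgoing = b"example blocked image bytes"
    if passes_client_side_scan(outgoing):
        print("ok: encrypt and send")
    else:
        # A real client might silently drop the message, warn the user,
        # or report the match to a third party.
        print("match: refuse to send, warn, or report")

The decisive design choice is the final branch: once the scanner exists on the device, what happens on a match is a policy decision, not a property of the cryptography.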
Law enforcement "ghosts"

Your messages stay between you and your recipient...unless law enforcement compels a company to add a silent onlooker to your conversation. Another proposed tweak to encrypted messaging is the GCHQ's "Ghost" proposal, which its authors describe like this:

It's relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who's who and which devices are involved—they're usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there's an extra 'end' on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorize today in traditional voice intercept solutions and certainly doesn't give any government power they shouldn't have.

But as EFF has written before, this requires the provider to lie to its customers, actively suppressing any notification or UX features that allow users to verify who is participating in a conversation. Encryption without this kind of notification simply does not meet the bar for security.

Unencrypted backups by default

Your messages stay between you and your recipient...unless you back up your messages. Messaging apps often give users the option to back up their messages, so that conversations can be recovered if a phone is lost or destroyed. The iOS and Android mobile operating systems offer similar options to back up one's entire phone. If conversation history from a "secure" messenger is backed up to the cloud unencrypted (or encrypted in a way that allows the company running the backup to access message contents), then the messenger might as well not have been end-to-end encrypted to begin with. Instead, a messenger can choose to encrypt the backups under a key kept on the user's device or a password that only the user knows, or it can choose not to encrypt the backups. If a messenger chooses not to encrypt backups, then they should be off by default, and users should have an opportunity to understand the implications of turning them on. For example, WhatsApp provides a mechanism to back messages up to the cloud. In order to back messages up in a way that makes them restorable without a passphrase in the future, these backups need to be stored unencrypted at rest. Upon first install, WhatsApp prompts you to choose how often you wish to back up your messages: daily, weekly, monthly, or never. In EFF's Surveillance Self-Defense, we advise users never to back up their WhatsApp messages to the cloud, since that would deliver unencrypted copies of your message log to the cloud provider. In order for your communications to be truly secure, any contact you chat with must do the same. The sketch below shows what the passphrase-protected alternative looks like.
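To illustrate that alternative, a backup key derived from a password only the user knows, here is a minimal, hypothetical sketch using the third-party Python "cryptography" package. The function names are ours, not any messenger's:

    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        """Stretch the passphrase into a 32-byte key with PBKDF2."""
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            iterations=600_000,  # deliberately slow, to resist guessing
        )
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    def encrypt_backup(message_log: bytes, passphrase: str) -> bytes:
        salt = os.urandom(16)
        token = Fernet(derive_key(passphrase, salt)).encrypt(message_log)
        return salt + token  # store the salt alongside the ciphertext

    def decrypt_backup(blob: bytes, passphrase: str) -> bytes:
        salt, token = blob[:16], blob[16:]
        return Fernet(derive_key(passphrase, salt)).decrypt(token)

    backup = encrypt_backup(b"chat history", "correct horse battery staple")
    assert decrypt_backup(backup, "correct horse battery staple") == b"chat history"

Because the key is derived on the device, the cloud provider stores only ciphertext it cannot read. The trade-off is that a forgotten passphrase means an unrecoverable backup, which is exactly why some messengers offer unencrypted, restorable backups instead.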
Continuing the fight

In the 1990s, we had to fight hard in the courts, and in software, to defend the right to use encryption strong enough to protect online communications; in the 2000s, we watched mass government and corporate surveillance undermine everything online that was not defended by that encryption, deployed end-to-end. But there will always be attempts to find a weakness in those protections. And right now, that weakness lies in our acceptance of surveillance in our devices. We see that in attempts to implement client-side scanning, mandate deceptive user interfaces, or leak plaintext from our devices and apps. Keeping everyone's communications safe means making sure we don't hand over control of our devices to companies, governments, or other third parties.

Victory: Oakland City Council Votes to Ban Government Use of Face Surveillance (Fri, 19 Jul 2019)
Earlier this week, Oakland's City Council voted unanimously to ban local government use of face surveillance. The amendment to Oakland's Surveillance and Community Safety Ordinance will make Oakland the third U.S. city to take this critical step toward protecting the safety, privacy, and civil liberties of its residents. Local governments like those in San Francisco, CA; Somerville, MA; and now Oakland, CA are leading the way in proactively heading off the threat of this particularly pernicious form of surveillance. Following a series of hearings by the House Oversight Committee, however, national and international policymakers have also begun to look closely at the technology's threat to human rights and civil liberties. On the same day that Oakland's City Council voted to ban government use of the technology, the House of Representatives passed a bipartisan amendment to the Intelligence Authorization Act (H.R. 3494) that would require the Director of National Intelligence to report on the use of face surveillance by intelligence agencies. David Kaye, the United Nations Special Rapporteur on freedom of opinion and expression, has also called for a moratorium on face surveillance, saying, "Surveillance tools can interfere with human rights, from the right to privacy and freedom of expression to rights of association and assembly." Over the last several years, EFF has continuously voiced concerns over the First and Fourth Amendment implications of government use of face surveillance. These concerns are exacerbated by research conducted by MIT's Media Lab on the technology's high error rates for women and people of color. But even if manufacturers succeed in addressing the technology's substantially higher error rates for already marginalized communities, government use of face recognition will still threaten safety and privacy, chill free speech, and amplify historical and ongoing discrimination in our criminal system. Even as Oakland's face surveillance ban awaits a procedural second reading, lawmakers and community members across the country are considering prohibitions and moratoriums on their own local governments' use. This week, the Public Safety Committee in the neighboring city of Berkeley, CA held a hearing on that city's own proposed ban, and lawmakers across the country took to Twitter to share news of similar intentions. Massachusetts residents beyond Somerville hoping to protect their communities from face surveillance should contact their state lawmakers in support of S.1385 and H.1538, the proposed bills calling for a moratorium throughout the Commonwealth. Outside of Massachusetts, as governing bodies across the country adjourn for their summer recess, now is an opportune time to call on your own representatives to take a stand for the rights of their constituents by banning government use of face surveillance in your community.

SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects (Fri, 19 Jul 2019)
Before there was Big Tech, there was "adversarial interoperability": when someone decides to compete with a dominant company by creating a product or service that "interoperates" (works with) its offerings. In tech, "network effects" can be a powerful force for maintaining market dominance: if everyone is using Facebook, then your Facebook replacement doesn't just have to be better than Facebook, it has to be so much better that it's worth using even though all the people you want to talk to are still on Facebook. That's a tall order. Adversarial interoperability is judo for network effects, using incumbents' dominance against them. To see how that works, let's look at a historical example of adversarial interoperability's role in helping to unseat a monopolist's dominance. The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn't open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn't play nice with Macs. But as office networking spread, the battle moved to a new front: networking compatibility. AppleTalk, Apple's proprietary protocol for connecting Macs and networked devices like printers, pretty much Just Worked, provided you were using a Mac. If you were using a Windows PC, you had to install special, buggy, unreliable software. And for Apple users hoping to fit in at Windows shops, the problems were even worse: Windows machines used the SMB protocol for file-sharing and printers, and Microsoft's support for MacOS was patchy at best, nonexistent at worst, and costly besides. Businesses sorted themselves into Mac-only and PC-only silos, and if a Mac shop needed a PC (for the accounting software, say), it was often cheaper and easier just to get the accountant their own printer and backup tape-drive rather than try to get that PC to talk to the network. Likewise for all-PC shops with a single graphic designer on a Mac: that person would often live offline, disconnected from the office network, tethered to their own printer, with their own stack of Mac-formatted ZIP cartridges or CD-ROMs. All that started to change in 1993: that was the year an Australian PhD candidate named Andrew Tridgell licensed his SAMBA package as free/open source software and exposed it to the wide community of developers looking to connect their non-Microsoft computers—Unix and GNU/Linux servers, MacOS workstations—to the dominant Microsoft LANs. SAMBA was created by using a "packet sniffer" to ingest raw SMB packets as they traversed a local network; these intercepted packets gave Tridgell the insight he needed to reverse-engineer Microsoft's proprietary networking protocol. Tridgell prioritized compatibility with LAN Manager, a proprietary Network Operating System that enterprise networks made heavy use of. If SAMBA could be made to work on LAN Manager networks, then you could connect a Mac to a PC network—or vice-versa—add some Unix servers, and use a mix of SAMBA and SMB to get them all to play nice with one another.
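For the curious, here is what the first step of that reverse-engineering process looks like in modern terms: a minimal sketch of passively capturing SMB traffic for study, assuming the third-party scapy packet-manipulation library and a network you are authorized to monitor. It illustrates the technique; it is not Tridgell's actual tooling:

    # Requires scapy (pip install scapy) and root privileges to sniff.
    from scapy.all import Raw, sniff

    def log_smb_payload(pkt):
        """Print the raw bytes of packets that look like classic SMB messages."""
        if pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            # SMB1 messages carry the magic bytes 0xFF followed by "SMB".
            if b"\xffSMB" in payload:
                print(payload.hex())

    # Classic SMB rode on the NetBIOS session service (TCP 139);
    # modern SMB uses TCP 445. Capture a handful of packets to study.
    sniff(filter="tcp port 139 or tcp port 445", prn=log_smb_payload, count=10)

Staring at hex dumps like these, and correlating them with what the clients and servers were doing at the time, is how a wire protocol gets reconstructed without access to the vendor's source code or documentation.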
The timing of Tridgell's invention was crucial: in 1993, Microsoft had just weathered the Federal Trade Commission's antitrust investigation of its monopoly tactics, squeaking through thanks to a 2-2 deadlock among the commissioners, and was facing down a monopoly investigation by the Department of Justice. The growth of local-area networks greatly accelerated Microsoft's dominance. It's one thing to dominate the desktop; it's another entirely to leverage that dominance so that no one else can make an operating system that connects to networks that include computers running that dominant system. Network administrators of the day were ready to throw in the towel and go all-Microsoft for everything from design workstations to servers. SAMBA changed all that. What's more, as Microsoft updated SMB, SAMBA matched it, drawing on a growing cadre of software authors who relied on SAMBA to keep their own networks running. The emergence of SAMBA in the period when Microsoft's dominance was at its peak, the same year that the US government tried and failed to address that dominance, was one of the most salutary bits of timing in computing history, carving out a new niche for Microsoft's operating system rivals that gave them space to breathe and grow. It's certainly possible that without SAMBA, Microsoft could have leveraged its operating system, LAN, and application dominance to crush all rivals.

So What Happened?

We don't see a lot of SAMBA-style stories anymore, despite increased concentration in various sectors of the tech market and a world crying out for adversarial interoperability judo throws. Indeed, investors seem to have lost their appetite for funding companies that might disrupt the spectacularly profitable Internet monopolists of 2019, ceding them those margins and deeming their territory a "kill zone." VCs have not lost their appetite for making money, and toolsmiths have not lost the urge to puncture the supposedly airtight bubbles around the Big Tech incumbents, so why is it so hard to find a modern David with the stomach to face off against 2019's Goliaths? To find the answer, look to the law. As monopolists have conquered more and more of the digital realm, they have invested some of those supernormal profits in law and policy that lets them fend off adversarial interoperators. One legal weapon is "Terms of Service": both Facebook and Blizzard have secured judgments giving their fine print the force of law, and now tech giants use clickthrough agreements that amount to, "By clicking here, you promise that you won't try to adversarially interoperate with us." A modern SAMBA project would have to contend with this liability, and Microsoft would argue that anyone who took the step of installing SMB had already agreed not to reverse-engineer it to make a compatible product. Then there's "anti-circumvention," a feature of 1998's Digital Millennium Copyright Act (DMCA). Under Section 1201 of the DMCA, bypassing a "copyright access control" can put you in both criminal and civil jeopardy, regardless of whether there's any copyright infringement. DMCA 1201 was originally used to stop companies from making region-free DVD players or modding game consoles to play unofficial games (neither of which is a copyright violation!). But today, DMCA 1201 is used to control competitors, critics, and customers.
Any device with software in it contains a "copyrighted work," so manufacturers need only set up an "access control" to exert legal control over all kinds of uses of the product. Their customers can only use the product in ways that don't involve bypassing the "access control," and that can be used to force you to buy only one brand of ink or use apps from only one app store. Their critics—security researchers auditing their cybersecurity—can't publish proof-of-concept code to back up their claims about vulnerabilities in those systems. And competitors can't bypass access controls to make compatible products: third-party app stores, compatible inks, or a feature-for-feature duplicate of a dominant company's networking protocol. Someone attempting to replicate the SAMBA creation feat in 2019 would likely come up against an access control that had to be bypassed to peer inside the protocol's encrypted outer layer and create a feature-compatible tool for use in competing products. Another thing that's changed (for the worse) since 1993 is the proliferation of software patents. Software patenting went into high gear around 1994 and consistently gained speed until 2014, when Alice v. CLS Bank put on the brakes (today, Alice is under threat). After decades of low-quality patents issuing from the US Patent and Trademark Office, there are so many trivial, obvious, and overlapping software patents in play that anyone trying to make a SAMBA-like product would run a real risk of being threatened with expensive patent infringement litigation. This thicket of legal dangers for adversarial interoperability has been a driver of market concentration, and the beneficiaries of market concentration have spent lavishly to expand and strengthen the thicket. It's gotten so bad that even some "open standards organizations" have standardized easy-to-use ways of legally prohibiting adversarial interoperability, locking in the dominance of the largest browser vendors. The idea that wildly profitable businesses would be viewed by investors and entrepreneurs as unassailable threats (rather than as irresistible targets) tells you everything you need to know about the state of competition today. As we look to cut the Big Tech giants down to size, let's not forget that tech once thronged with Davids eager to do battle with Goliaths, and that this throng would be ours to command again, if only we would re-arm it.

A Bad Copyright Bill Moves Forward With No Serious Understanding of Its Dangers (Thu, 18 Jul 2019)
The Senate Judiciary Committee has voted on the Copyright Alternative in Small-Claims Enforcement Act, aka the CASE Act, without holding any hearings where experts could explain the huge flaws in the bill as it's currently written. And flaws there are. We've seen some version of the CASE Act pop up for years now, and the problems with the bill have never been satisfactorily addressed. This is still a bill that puts people in danger of huge, unappealable money judgments from a quasi-judicial system—not an actual court—for the kind of Internet behavior most people engage in without thinking. During the vote in the Senate Judiciary Committee, it was once again stressed that the CASE Act—which would turn the Copyright Office into a copyright traffic court—creates a "voluntary" system. "Voluntary" does not accurately describe the regime of the CASE Act. The CASE Act does allow people who receive notices from the Copyright Office to "opt out" of the system. But the average person is not really going to understand what is going on, other than that they've received what looks like a legal summons.

Take Action: Tell the Senate Not to Enable Copyright Trolls

Furthermore, the CASE Act gives people just 60 days from receiving the notice to opt out, so long as they do so in writing "in accordance with regulations established by the Register of Copyrights," which in no way promises that opting out will be a simple process, understandable to everyone. And because the system is opt-out, and the goal of the system is presumably to move as many cases through it as possible, the Copyright Office has little incentive to make opting out fair to respondents and easy to do. That leaves opting out as something most easily taken advantage of by companies and people who have lawyers to advise them of the law, and leaves the average Internet user at risk of having a huge judgment handed down by the Copyright Office. At first, those judgments can be up to $30,000, enough to bankrupt many people in the U.S., and that cap can grow even higher without any further action by Congress. And the "Copyright Claims Board" created by the CASE Act can issue those judgments against those who don't show up. A system that can award default judgments like this is not "voluntary." We know how this will go because we've seen this kind of confusion and fear with the DMCA: people receive DMCA notices and, unaware of their rights or intimidated by the requirements of a counter-notice, let their content disappear even when it's fair use. The CASE Act makes it extremely easy to collect against people using the Internet the way everyone does: sharing memes, photos, and video. If the CASE Act were not opt-out but instead required respondents to give affirmative consent, or "opt in," the Copyright Office would at least have greater incentive to design proceedings that safeguard respondents' interests and to set clear standards everyone can understand. With both sides choosing to litigate in the Copyright Office, it would be that much harder for copyright trolls to use the system to get huge awards in a forum friendly to copyright holders. We said this the last time the CASE Act was proposed and we'll say it again: creating a quasi-court focused exclusively on copyright, with the power to pass judgment on parties in private disputes, invites abuse.
It encourages copyright trolling by inviting claimants to file as many copyright claims as they can against whoever is least likely to opt out—ordinary Internet users who can be coerced into paying thousands of dollars to escape the process, whether they infringed copyright or not. Copyright law fundamentally impacts freedom of expression. People shouldn't be funneled into a system that hands out huge damage awards with less care than a traffic ticket gets.

Take Action: Tell the Senate Not to Enable Copyright Trolls