Deeplinks

Tell FCC Commissioner Ajit Pai: Startups Depend on Net Neutrality (Tue, 25 Apr 2017)
Startups, entrepreneurs, investors, accelerators, and incubators are signing on to a letter urging Trump’s FCC Commissioner Ajit Pai not to undermine the FCC’s net neutrality rules. The letter affirms the need for net neutrality rules to protect entrepreneurs and innovators, and responds to recent reports that Pai plans to roll back the Commission’s net neutrality rules and replace them with empty promises from broadband providers:

Without net neutrality, the incumbents who provide access to the Internet would be able to pick winners or losers in the market. They could impede traffic from our services in order to favor their own services or established competitors. Or they could impose new fees on us, inhibiting consumer choice. Those actions directly impede an entrepreneur’s ability to “start a business, immediately reach a worldwide customer base, and disrupt an entire industry.” Our companies should be able to compete with incumbents on the quality of our products and services, not our capacity to pay tolls to Internet access providers. Fortunately, in 2015 the Federal Communications Commission put in place light touch net neutrality rules that not only prohibit certain harmful practices, but also allow the Commission to develop and enforce rules to address new forms of discrimination. We are concerned by reports that you would replace this system with a set of minimum voluntary commitments, which would give a green light for Internet access providers to discriminate in unforeseen ways.

It’s not too late to add your voice to theirs. Engine Advocacy, Y Combinator, and Techstars are calling for members of the startup community to sign on to the letter by 5pm ET on April 28th.

EFF Asks Appeals Court to Break Through Five-Year Logjam in Megaupload Case (Mon, 24 Apr 2017)
Lawful Users Still Waiting for Return of Files After Government Seizure

San Francisco - The Electronic Frontier Foundation (EFF), on behalf of its client Kyle Goodwin, is asking a federal appeals court to break through the five-year logjam in the Megaupload.com case and help lawful users who are still waiting for the return of their photos, videos, and other personal files after the government seized Megaupload’s servers.

Megaupload was a popular cloud-storage site when the FBI shut it down in January 2012 looking for evidence of copyright infringement. Agents seized all of Megaupload’s assets during their search, locking customers out of their accounts. Goodwin, a sports videographer, lost access to video files containing months of his professional work. For five years, the U.S. government has continued pursuing a criminal case against Megaupload and its owners. But the data stored by millions of customers—including obviously lawful material like Goodwin’s sports videos—has languished on servers that sit disconnected in a warehouse.

“Mr. Goodwin, and many others, used Megaupload to store legal files, and we’ve been asking the court for help since 2012. It’s deeply unfair for him to still be in limbo after all this time,” said EFF Senior Staff Attorney Mitch Stoltz. “The legal system must step in and create a pathway for law-abiding users to get their data back.”

In a petition filed today with the United States Court of Appeals for the Fourth Circuit, EFF, along with the firm of Williams Mullen and attorney Abraham D. Sofaer, argues that the court should issue a writ of mandamus to the trial court, ordering it to act on Goodwin’s request and create a process for other users to retrieve their data.

“We’re likely to see even more cases like this as cloud computing becomes increasingly popular,” said EFF Legal Director Corynne McSherry. “If the government takes over your bank, it doesn’t get to keep the family jewels you stored in the vault. There’s a process for you to get your stuff back, and you have a right to the same protection for your data.”

For the full brief filed today: https://www.eff.org/document/petition-writ-mandamus
For more on this case: https://www.eff.org/cases/megaupload-data-seizure

Contact:
Mitch Stoltz, Senior Staff Attorney, mitch@eff.org
Corynne McSherry, Legal Director, corynne@eff.org

Access Now and EFF Condemn the Arrest of Tor Node Operator Dmitry Bogatov in Russia (Mon, 24 Apr 2017)
This post was written in collaboration with Amie Stepanovich at Access Now.

On April 6, Russian math instructor Dmitry Bogatov was arrested in Moscow and charged with “preparing to organize mass disorder” and making “public calls for terrorist activity” due to a gross misunderstanding about the operation of the Tor internet anonymization service.

Bogatov is accused of authoring a series of online posts published to the sysadmins.ru discussion platform on March 29 under the username “Ayrat Bashirov.” One post called for protesters to attend an unsanctioned, anonymously organized demonstration on April 2 with “rags, bottles, gas, turpentine, styrofoam, and acetone.” Another post linked to the music video for Kanye West’s “No Church in the Wild,” described by investigators as “a video recording with insubordination to the legal demands of the police, and mass disorder.” The posts appear to have come from the IP address of a server located in Bogatov’s home, but this server is part of the Tor network—an exit node that routes anonymous traffic from all over the world and makes it appear to have originated from that computer.

There is considerable evidence that Bogatov did not post the content at issue. According to a Global Voices report, “Surveillance footage shows Bogatov and his wife leaving a supermarket four minutes before one of the posts was made on March 29. Given that the supermarket is half a kilometer from their home, it is unlikely that Bogatov could have made it home and posted online within four minutes.” Additionally, “Ayrat Bashirov” has continued posting on the forum and has even exchanged messages with an Open Russia journalist explicitly denying that he is Bogatov.

Tor exit node operators being mistakenly accused of crimes committed from their exit nodes is nothing new. This is one of the reasons that EFF cautions against running an exit node in your home in its Legal FAQ for Tor Relay Operators. In the past, law enforcement has always backed down once it became clear that they had the wrong party. But rather than acknowledge its mistake, the Investigative Committee (the main federal investigative body in the Russian Federation) appears to be doubling down. When a judge initially ruled that the charges against Bogatov were not serious enough to justify his continued detention, the Investigative Committee added the second, more serious charge of inciting terrorism. Days later, the court upheld the additional charges, formally arrested Bogatov, and ordered that he be held until his trial date on June 8.

The arrest comes in the midst of an online crackdown related to anti-corruption protests in cities across Russia on April 2. The protests have resulted in the arrest of hundreds of individuals, including Leonid Volkov, who was arrested for having livestreamed the protests. Volkov was detained for ten days, and as a result was unable to attend RightsCon, where he was scheduled to speak about Russian surveillance systems.

As global organizations working to defend human rights, Access Now and EFF condemn Dmitry Bogatov’s continued detention and the detention of others by Russia or other governments for exercising their human rights or facilitating increased internet security. Put simply: running a Tor exit node is not a crime, and Tor exit node operators should not be treated like criminals.
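For readers unfamiliar with how this kind of mix-up happens: an exit relay is simply a Tor relay whose configuration allows traffic to leave the Tor network through it, so other people's anonymous traffic exits onto the open internet from the operator's IP address. As a rough illustration only (the nickname, contact address, and port choices below are hypothetical placeholders, not EFF guidance), a typical exit relay's torrc configuration might look something like this:

  ORPort 9001                        # listen for connections from Tor clients and relays
  Nickname ExampleExitRelay          # placeholder name published in the public relay list
  ContactInfo operator@example.com   # placeholder contact address for abuse complaints
  SocksPort 0                        # act only as a relay, not as a local client proxy
  ExitRelay 1                        # allow traffic to exit the Tor network through this relay
  ExitPolicy accept *:80             # permit exits to common web ports...
  ExitPolicy accept *:443
  ExitPolicy reject *:*              # ...and refuse everything else

Every connection permitted by those ExitPolicy lines appears, to the destination site, to originate from the relay's own IP address. That is why server logs from a forum like sysadmins.ru point investigators at the exit operator's home connection rather than at the anonymous user on the other end of the Tor circuit, and why EFF's Legal FAQ recommends running exit relays somewhere other than a home network.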

Adobe Puts an End to Indefinite Gag Order (Mon, 24 Apr 2017)
In a newly unsealed case [.pdf], a Los Angeles federal court ruled that Adobe could not be indefinitely gagged about a search warrant ordering it to turn over the contents of a customer account. This is important work by Adobe. Gag orders almost always violate the First Amendment; they prevent service providers from notifying users that the government is requesting their sensitive data and from being transparent about surveillance in general. And yet, providers receive indefinite gags with frustrating frequency. In most contexts, the government must do little to justify these gags and instead relies on rote invocations of national security and the sanctity of investigations.

The Adobe gag was issued under 18 U.S.C. § 2705(b), the same law Microsoft is challenging as facially unconstitutional because it allows for indefinite gags.1 These arguments are also at the heart of EFF’s long-running national security letter (NSL) lawsuit, which was argued in the Ninth Circuit Court of Appeals last month.

Thankfully, the court in Adobe’s case recognized the serious harm to free speech these gags represent. It held that orders barring companies from notifying their users about government data requests are both prior restraints and content-based restrictions on speech subject to strict scrutiny. That’s a very high bar. The court found that the indefinite gag order imposed on Adobe fails strict scrutiny because the government could make “no showing[] that Adobe’s speech will threaten the investigation in perpetuity.”

The government’s attempts to save the Adobe gag order were nearly identical to arguments it made in our NSL litigation. It claimed gags don’t even implicate Adobe’s First Amendment rights because the company only wants to speak about information learned from the government, and that an indefinite gag was OK because Adobe could simply come to court when the need for a gag had passed. But on point after point, the court rejected these arguments. The First Amendment requires gag orders to be narrowly tailored, and Section 2705(b) orders and NSL gags come nowhere close to meeting that standard. As the court put it, “the fact that the speaker cannot know when the restriction's ‘raison d'etre fades’ effectively equates to no tailoring at all.”

While the appeals court in our NSL case doesn’t have to follow this court’s lead, we think any First Amendment arguments that can be deployed against 2705(b) orders are doubly effective for NSLs. That’s because the FBI can issue indefinite NSL gags without even going before a court, as Section 2705(b) requires.

Adobe’s fight should demolish another of the government’s arguments in our NSL case: that providers don’t want to speak out about gags. Adobe promises to notify its customers about government data requests in all cases unless “legally prohibited from doing so.” And it goes one step further, stating upfront that indefinite gags “are not constitutionally valid and we challenge them in court.” Following through on this promise gives the lie to the unsupportable claim that providers don’t care to speak out on these issues.

Here’s hoping the days of indefinite gag orders are numbered.

1. Section 2705(b) allows a court to issue a gag “for such period as the court deems appropriate.” There’s an interesting split of opinion on whether that language allows for indefinite gags, or whether the word “period” implies a finite limit. The court in Adobe’s case determined that periods can in fact be indefinite, which led to its First Amendment ruling.
Related Cases: Microsoft v. Department of Justice; In re National Security Letter 2011 (11-2173); In re National Security Letter 2013 (13-80089); In re National Security Letter 2013 (13-1165)

Tell the DHS: Social Media Passwords Should Not Be a Condition of Entry to the U.S. (Fri, 21 Apr 2017)
New proposals to make U.S. entry screening even more invasive will threaten our privacy, freedom of expression, and digital account security—and you can raise your voice against them.

The Department of Homeland Security (DHS) is currently considering new procedures to screen certain foreign travelers. Specifically, Secretary of Homeland Security John F. Kelly said in a congressional hearing that the DHS is considering requiring certain foreign travelers to hand over their social media passwords in order to apply for a visa and enter the United States. EFF is joining with Access Now and other digital rights organizations to push back against this dangerous proposal. Sign the Fly Don’t Spy petition to tell Secretary Kelly to reject any proposal requiring passwords as a condition of entry to the United States.

Take Action: Sign the Fly Don't Spy petition.

While you’re at it, email your representatives directly and demand that border agents get a warrant before conducting digital searches.

Take Action: Email Congress and demand a warrant at the border.

We have written before about the serious privacy risks and constitutional concerns of border searches, particularly when agents demand social media information. Social media profiles expose not only one’s social network and contacts, but can also provide a detailed map of one’s digital life if that social media account is used to log into other sites. Requiring passwords and log-in access to social media, whether as part of screening procedures before arrival at the border or at the border itself, expands border agents’ access to particularly sensitive information like direct messages, and invades the privacy of a traveler’s friends and connections. Such a requirement will chill online speech and association, and undermine the digital security and account protections otherwise available to users.

Want more information about your rights at the border? Check out our in-depth “Privacy at the U.S. Border” report, as well as two shorter guides on your constitutional rights at the border and digital security tips for before, during, and after your border crossing.

A Municipal Vote in Providence for Police Reform Carries National Implications (Fri, 21 Apr 2017)
After three years of sustained community mobilization and advocacy, the Providence City Council in Rhode Island voted this Thursday to unanimously approve one of the most visionary sets of policing reforms proposed anywhere in the country to protect civil rights and civil liberties, including digital liberties. EFF supported the proposed Community Safety Act (CSA), and its adoption represents a milestone that should prompt similar measures in other jurisdictions.

Reflecting an understanding of how many different communities endure parallel—but seemingly separate—violations of civil rights and civil liberties, the CSA aims to address surveillance alongside racial and other dimensions of discriminatory profiling. The ordinance imposes crucial limits on police powers at a time when local police have become the leading edge of mass surveillance, as well as of longstanding abuses of civil rights and digital liberties rooted in the war on drugs.

The most notable facet of the CSA is its sheer breadth. It addresses a wide-ranging set of issues in a single reform measure. For instance, the Act requires that targeted electronic surveillance be supported by reasonable suspicion of criminal activity. On the one hand, that requirement should be implicit given the history of politicized domestic surveillance within the United States. On the other hand, relative to the prevailing practice of ubiquitous intelligence collection, the Act’s requirements represent a monumental legal shift.

In addition, the CSA protects the right of residents to observe and record police activities. That right has been vital to sparking a sustained debate across the country about police accountability, but has come under fire. Just this month, a federal appellate court heard oral argument in an appeal seeking to vindicate the right to record police, in the wake of trial court decisions in multiple cases perversely holding that residents gain a right to record only after announcing their hostility to police, effectively inviting retaliation or even violence.

The bill also protects due process rights threatened by the otherwise arbitrary and secretive inclusion of individuals in government gang databases. In California, for instance, state auditors discovered that the state’s gang database program received “no state oversight” and operated “without transparency or meaningful opportunities for public input,” prompting the state legislature to intervene by passing a new law providing notice of inclusion and an opportunity to contest it.

At the same time, responding to controversy about traffic stops and pedestrian stop-and-frisks rooted in bias rather than observed behavior, the Act requires that police change their processes for searching subjects. In particular, when seeking to search subjects without either a judicial warrant or probable cause to suspect criminal activity, the Act requires police to inform the subjects that they have the right to decline consent to the requested search. That represents a sea change in policing, given the practice among some police departments of training officers to use deception to induce a subject's consent, ensuring that it is neither informed nor voluntary.

Similarly, the Act's restrictions on racial profiling and on intelligence collection absent reasonable suspicion of criminal activity offer important bulwarks to reinforce our Fourth Amendment rights to be free from unreasonable searches and seizures, as well as 14th Amendment protections against racial and other forms of discrimination.
Beyond the CSA’s substantive breadth lies a novel theory of change informing its construction. Rather than a discrete reform proposed by advocates, the CSA represents a concerted attempt to address the intersectional concerns of several communities responding to a common challenge: discriminatory or otherwise unconstitutional police practices. While Providence has distinguished itself in the remarkably diverse coalition of community groups that have come together to pursue common cause, the issues to which Providence activists are responding are hardly unique to their city. Ultimately, grassroots groups in every major city across the country might learn something from the coalition that passed the Providence Community Safety Act.

Las Empresas de Internet de Paraguay defienden la información, pero mantienen a sus clientes en la oscuridad (Thu, 20 Apr 2017)
Es el turno de Paraguay de examinar de cerca las prácticas de sus proveedores locales de Internet y la manera en que tratan la información privada de sus clientes. La edición paraguaya de ¿Quién Defiende Tus Datos? es un proyecto de TEDIC, la principal organización de derechos digitales del país. Es parte de una iniciativa a nivel continental de los principales grupos de derechos digitales de América del Sur para arrojar luz sobre las prácticas de privacidad en Internet en la región, y está basada en el informe anual de EFF, Who Has Your Back?. (El informe de Derechos Digitales de Chile fue publicado el lunes, y grupos de derechos digitales en Colombia, México, Brasil y Argentina pronto publicarán estudios similares.)

La encuesta de TEDIC llega en un momento tenso de la política paraguaya. Después de 24 años de democracia relativamente estable, el país ha pasado los últimos meses atrapado en una batalla política de alto nivel. El actual presidente, Horacio Cartes, impulsó una enmienda constitucional para permitir la reelección. La oposición ve ecos del incremento del poder presidencial que condujo a la última dictadura. Después de los disturbios de marzo, que llevaron al asesinato de un militante opositor a manos de la policía, Cartes ha declarado que no se presentará a la reelección. Sin embargo, la mención de la "sombra de la dictadura" sigue presente en Asunción. Los usuarios paraguayos de Internet quieren saber cómo sus ISP defenderán sus datos frente a un Estado represivo.

Las seis empresas encuestadas por TEDIC —Tigo, Telecom Personal, Claro, Vox, Copaco y Chaco Communications— conforman la gran mayoría del mercado fijo, móvil y de banda ancha en Paraguay. Sus registros históricos contienen información privada sobre los movimientos y relaciones de casi todos los ciudadanos del país. TEDIC, en la tradición de Who Has Your Back? (¿Quién Defiende Tus Datos?), evaluó a las compañías por su compromiso con la privacidad y la libre expresión, y otorgó estrellas según sus prácticas actuales y su comportamiento público. Se evaluaron siete categorías, entre ellas: sus políticas públicas de privacidad, la exigencia de órdenes judiciales para las solicitudes de datos, si notifican a los clientes sobre las solicitudes gubernamentales de datos, si se oponen públicamente a la vigilancia masiva, si publican informes de transparencia y sus políticas de bloqueo de contenido.

La buena noticia del informe de TEDIC es que todas las compañías de telecomunicaciones declararon explícitamente que solo entregan datos a las autoridades (tanto los metadatos como el contenido de las comunicaciones) en respuesta a una orden judicial legítima. Eso puede parecer un mínimo básico para la protección de datos, pero un compromiso público con el estado de derecho puede ser una declaración importante en tiempos inquietantes. Cada empresa revisada obtuvo una estrella completa en esta categoría.

La noticia más desalentadora es que los consumidores paraguayos todavía no tienen una manera confiable de verificar que las compañías estén cumpliendo realmente con sus promesas públicas. Ninguna de las compañías tenía políticas para notificar a sus usuarios si son objeto de vigilancia, por ejemplo, incluso si esa orden es anulada o si la investigación ya finalizó. El equipo de investigación de TEDIC señala que notificar al usuario sería un verdadero signo de compromiso con la privacidad del cliente, más allá de los requisitos financieros o legales.
La ley paraguaya no exige la notificación y, en algunos casos, los ISP podrían tener que solicitar permiso legal explícito para transmitir el aviso de vigilancia a sus usuarios. Sin embargo, sin notificación es difícil conocer el alcance de la vigilancia, o que cualquier persona pueda impugnar una vigilancia que considere innecesaria o desproporcionada.

La transparencia es importante para la supervisión, tanto para mostrar a los clientes con qué frecuencia sus gobiernos solicitan datos como para saber si determinadas empresas son más propensas a poner al cliente en primer lugar al responder. Muchas compañías de Internet y de telecomunicaciones publican ahora informes de transparencia, documentando el número total de solicitudes de vigilancia o de retiro de contenido que reciben de agencias gubernamentales o por orden judicial. Estos informes anuales proporcionan información valiosa sobre los niveles de vigilancia y censura del gobierno, y sobre cómo cambia esa vigilancia con el tiempo. Paraguay tiene su propia entrada en muchos informes globales: las actividades de Tigo están documentadas en los reportes regionales de su casa matriz multinacional, Millicom. Por desgracia, las filiales locales de telecomunicaciones de Millicom no siguen el ejemplo de la empresa matriz de publicar informes específicos por país. Esto niega a los ciudadanos paraguayos la oportunidad de conocer el nivel de espionaje de su propio gobierno, y significa que ninguna compañía en el reporte de TEDIC recibió una estrella completa en esta categoría.

Tampoco tenemos mucha información sobre el bloqueo o filtrado de Internet en Paraguay por parte de las compañías de telecomunicaciones. A pesar de incidentes preocupantes en el pasado, como cuando un ISP bloqueó una mordaz sátira en línea de un periódico, parece que hay poca comprensión pública de cómo o por qué un ISP podría censurar a los usuarios de Internet. Ninguna de las compañías describe cómo manejaría una orden de bloqueo si la recibiera, ni da idea alguna de si litigaría en su contra o notificaría a alguien que no fuera el tribunal o el departamento gubernamental correspondiente. Solo una empresa hizo una declaración pública sobre cómo podría bloquear contenido: Chaco Communications, cuya declaración amenazaba con prohibir el tráfico P2P, lo que la dejó como el único ISP sin estrellas en un mar de medias estrellas en esta categoría.

Las dos últimas categorías muestran algo de los incentivos para las empresas de telecomunicaciones en un mercado competitivo. Tres de las seis empresas ganaron media estrella por participar en el debate legislativo sobre vigilancia y neutralidad de la red, asumir un compromiso explícito con los derechos humanos o contribuir a foros internacionales de políticas de Internet como el Foro de Gobernanza de Internet. Esto demuestra que al menos algunas empresas reconocen que la política puede tener un impacto en sus clientes y, quizás, en sus ganancias. Pero, al igual que las compañías telefónicas de todo el mundo, las empresas de telefonía paraguayas son reticentes a descartar nuevos usos para los datos personales de sus clientes. Ninguna empresa de la encuesta publicó cómo planea utilizar los datos de los consumidores, ni ofreció una política de privacidad detallada que sus clientes pudieran consultar al contratar un proveedor de Internet.

Este es el primer informe ¿Quién Defiende Tus Datos? en Paraguay, y TEDIC planea publicarlo anualmente. El informe de este año muestra a Tigo a la cabeza, pero con muchas oportunidades para que sus competidores se pongan al día.
Tigo también tiene mucho espacio para mejorar su propio historial. Cualquier empresa que decida comenzar a notificar a sus usuarios en caso de vigilancia, publicar un informe de transparencia o adoptar públicamente principios sólidos de protección de datos podría fácilmente tomar la delantera en 2018, y hacer que sus clientes se sientan más seguros frente al uso indebido, tanto comercial como estatal, de los detalles más privados de sus vidas.

Paraguay's Internet Companies Defend Data, But Keep Customers in the Dark (Thu, 20 Apr 2017)
It's Paraguay's turn to take a closer look at the practices of its local Internet companies, and how they treat their customers' private information. Paraguay's ¿Quién Defiende Tus Datos? (Who Defends Your Data?) is a project of TEDIC, the country's leading digital rights organization. It's part of a continent-wide initiative by South America's leading digital rights groups to shine a light on Internet privacy practices in the region, based on EFF's annual Who Has Your Back? report. (Derechos Digitales' Chile report was published on Monday, and digital rights groups in Colombia, Mexico, Brazil, and Argentina will be releasing similar studies soon.)

TEDIC's survey comes at a tense moment in Paraguayan politics. After 24 years of relatively stable democracy, the country has spent the last few months caught in a high-stakes political battle. The current president, Horacio Cartes, pushed for an amendment to end his office's constitutional term limits. The opposition sees echoes of the presidential power-grab that led to Paraguay's last dictatorship. After riots in March that led to the Congress building being set on fire and the shooting of an opposition party member by police, Cartes has now declared he will not run for re-election. Still, talk of the "shadow of dictatorship" continues to hover over Asunción. Paraguayan Internet users want to know how their ISPs will defend their data in the event of a repressive or suspicious state.

The six companies surveyed by TEDIC—Tigo, Telecom Personal, Claro, Vox, Copaco, and Chaco Communications—together make up the vast majority of the fixed, mobile, and broadband market in Paraguay. Their logs hold intimate records of the movements and relationships of almost every citizen of the country. TEDIC, in the tradition of Who Has Your Back?, evaluated the companies for their commitment to privacy and free expression, and awarded stars based on their current practices and public behavior. TEDIC reviewed Paraguay's top ISPs in seven categories, including their public privacy policies, whether they require court orders for data demands, whether they notify customers of government data demands, whether they publicly stood against mass surveillance, whether they published transparency reports, and their policies on blocking content.

The good news from TEDIC's report is that every telco explicitly stated that it only hands over data to the authorities (both metadata and the content of communications) in response to a legitimate court order. That may seem like a basic minimum for data protection, but a public commitment to the rule of law can be an important statement in unsettling times. Every company reviewed got a full star for this.

The less positive news is that individual consumers in Paraguay don't yet have a way to reliably check that the companies are truly complying with their public promises. None of the companies had policies in place to notify users if they were the target of surveillance, for instance, even if that order was overturned or the investigation was complete. The TEDIC research team notes that user notification would truly be a sign of a commitment to customer privacy over and above financial or legal requirements. Paraguayan law does not require notification, and in some cases the ISPs might have to seek explicit legal permission to pass on notice of surveillance to their users. But without notification, it is difficult to know the extent of surveillance, or for anyone to challenge surveillance they believe to be unnecessary or disproportionate.
Transparency is important for oversight, both to show customers how often their governments request data and to show whether particular companies are more likely to put the customer first when responding. Many Internet and telecommunication companies now publish transparency reports, documenting the total number of requests for surveillance or content takedowns they receive from government agencies or by court order. These annual reports provide valuable insight into the levels of government surveillance and censorship, and into how that surveillance changes over time. Paraguay has its own entry in many global reports: Tigo's activities are documented in regional reporting by its multinational parent corporation, Millicom. Unfortunately, Millicom's local telecommunication subsidiaries do not follow the parent company's lead and publish country-specific reports. That denies technology users in Paraguay a chance to track their own government's level of spying, and it means not one company in TEDIC's report received a full star in this category.

We don't get much insight into Paraguayan Internet blocking or filtering from the telecommunication companies either. Despite worrying incidents in the past, such as when ISPs blocked an online satire of a newspaper, there seems to be very little public understanding of how or why ISPs might censor their users' Internet feeds. None of the companies describes how it would handle a blocking order if it received one, or gives any insight as to whether it would challenge the order or notify anyone other than the court or government department involved. Only one company made any public statement about how it might block at all: Chaco Communications, whose statement threatened to ban P2P traffic, leaving it as the only no-star ISP in a sea of half-stars for this category.

The final two categories show something of the incentives for telecommunication companies in a competitive market. Three of the six companies gained half stars by participating in the legislative debate over surveillance and net neutrality, making an explicit commitment to human rights, or contributing to international Internet policy fora like the Internet Governance Forum. This shows that at least some companies recognize that politics can have an impact on their customers, and perhaps on their profits. But, like telephone companies around the world, Paraguayan phone companies are reticent to rule out new uses for customers' personal data. No company in the survey published how it plans to use consumer data, or gave a detailed privacy policy that its customers could use when shopping for an Internet provider.

This is the first ¿Quién Defiende Tus Datos? report in Paraguay, and TEDIC plans to release one annually. This year's report shows Tigo in the lead, but with plenty of opportunity for its competitors to catch up. Tigo has plenty of room to improve on its own track record too. Any company that decided to pioneer user notification of surveillance, publish a transparency report, or publicly adopt strong data protection principles could easily seize the lead for 2018—and make its customers feel safer against both state and commercial misuse of the most private details of their lives.

The Bill of Rights at the Border: Fifth Amendment Protections for Account Passwords and Device Passcodes (Wed, 19 Apr 2017)
This is the third and final installment in our series on the Constitution at the border. Today, we’ll focus on the Fifth Amendment and passwords. See Part 1 on the First Amendment and Part 2 on the Fourth Amendment.

Lately, a big question on everyone's mind has been: Do I have to give my password to customs agents?

As anyone who’s ever watched any cop show knows, the Fifth Amendment gives you the right to remain silent and to refuse to provide evidence against yourself – even at the border. If a CBP agent asks you a question, you can tell them you choose to remain silent and want to speak to an attorney, even if you don’t have one retained yet. That choice may not stop CBP agents from pressuring you to “voluntarily” talk to them, but they are supposed to stop questioning you once you ask for a lawyer. Also, beware that government agents are permitted to lie to you in order to convince you to waive your right to remain silent, but you can be criminally prosecuted if you lie to them.

CBP agents are unlikely to advise you that you have this choice because the government generally argues that such warnings are only required if you are taken into “custody” and subjected to a criminal prosecution. And at least one federal court of appeals has determined that secondary inspection – the separate interview area you get referred to if the CBP officer can’t readily verify your information at the initial port of entry – doesn’t qualify as “custody.”

But you don’t have to be in custody or subject to a criminal prosecution before you choose to invoke your Fifth Amendment rights to remain silent or to object to being deprived of your property without due process of law. For example, the Second Circuit Court of Appeals has held that a person’s request for an attorney is enough to invoke the privilege against self-incrimination, even at the border. And that privilege includes refusing to provide the password to your device. For example, in 2015, a Pennsylvania court held that you may properly invoke the Fifth Amendment privilege to avoid giving up your cell phone passcode – even to an employer’s phone – because your passcode is personal in nature and producing it requires you to speak or testify against yourself.

Some courts have been less protective, overriding Fifth Amendment protections where the information sought is a so-called “foregone conclusion.” In 2012, a Colorado court ordered a defendant to provide the password to her laptop, but only after the government had obtained a search warrant based on the defendant’s admission that there was specific content on her laptop and that the laptop belonged to her. In a separate case, the Eleventh Circuit clarified that the government "must [first] show with some reasonable particularity that it seeks a certain file and is aware, based on other information, that . . . the file exists in some specified location" and that the individual has access to the desired file or is capable of decrypting it.

So, Fifth Amendment protections do apply at the border, and they protect your right to refuse to reveal your password in most circumstances. That said, individuals passing through the border sometimes choose to surrender their account information and passwords anyway, in order to avoid consequences like missing their flight, being subjected to more restrictive or prolonged detention, or being denied entry to the U.S.
As we have noted in our Digital Border Search Whitepaper, the consequences for refusing to provide your password(s) are different for different classes of individuals. If you are a U.S. citizen, CBP cannot detain you indefinitely, as you have a right to re-enter the country. However, agents may escalate the encounter (for example, by detaining you for more time) or flag you for heightened screening during future border crossings. If you are a lawful permanent resident, agents may also raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents might deny you entry to the country entirely.

But whatever your status, and whether you choose to provide your passwords or not, border agents may decide to seize your digital devices. While CBP guidelines set a five-day deadline for agents to return detained devices unless a CBP supervisor approves a lengthier detention, in practice device detentions commonly last many months.

As always, we want to hear from you if you experience harm or harassment from CBP for choosing to protect your digital data. We’re still collecting stories of border search abuses at borders@eff.org.

We recommend that you review our pocket guides for Knowing Your Rights and Protecting Your Digital Data Privacy at the border for a general overview, or take a look at our Border Search Whitepaper for a deeper dive into the potential issues and questions you may face. And join EFF in calling for stronger constitutional protection for your digital information by contacting Congress on this issue today.

Related Cases: United States v. Saboonchi

Dissent Made Meaningful (Wed, 19 Apr 2017)
Over the last year, large numbers of Americans have grown politically active for the first time. Reflecting the depth of our constitutional crisis, however, many seem not to know how to meaningfully raise their voices or participate in the political process.

Civic Participation Beyond Elections

Turnout in American elections has remained abysmally low for decades, suggesting some degree of apathy, suppression, or both. Even Americans who do vote often overlook a litany of further opportunities available to those who pursue them. One source of guidance for many nascent activists has been the Indivisible guide, which emphasizes constituent communications to Members of Congress. It was compiled by congressional staffers whose suggestions aim to replicate the direct engagement of Congress successfully promoted by Tea Party networks that have shared EFF’s transpartisan concerns about, for instance, mass surveillance and the threat it poses to democracy. To their credit, the Indivisible guide's authors acknowledge that their guide “is not a panacea, and it is not intended to stand alone.” While important, letters from individual constituents are most effective when combined with other strategies.

How to Make a Letter Matter

Contacting an elected member of Congress represents an important act of political expression. Even when taking the time to write letters, however, individual constituents can be disregarded, or engaged in passing without commanding attention. Many who do gain the attention of their elected representatives’ offices receive only a form response. Letters can, however, carry influence, particularly when they include:

- An explicit request or demand for a particular vote on a specific piece of proposed legislation,
- A request for a meeting in person, and
- Support from at least three (and ideally half a dozen to a dozen) neighbors who co-sign the letter, identify themselves as constituents living in that office's legislative district, and attend the meeting together.

Are you part of a community group that gathers to examine the issues and write letters together? Letter-writing events can become infinitely more influential when participants simply sign each other's letters, so that they reflect—and are received as indicating—dissent not just by an individual, but rather by an organized group of constituents. To expand its reach, a grassroots group can easily direct letters not only to its Member of the House of Representatives, but also to its two U.S. senators, as well as to members of the state legislature. It takes only five people writing one letter each to meaningfully raise a shared concern across those layers of federal and state representation. Groups of more than five can also reach elected officials at the municipal and county level, where policy opportunities are most fluid and potentially transformative.

Dissent in Public

Even letters written on behalf of groups remain generally private communications. Escalating pressure on elected representatives requires taking one's concerns to the public sphere. One way to express public dissent is to write and submit an op-ed for publication in a local newspaper. Concise, persuasive, forceful writing of 700 words or fewer can often interest editors seeking commentary to share with a broad audience. Whether or not an op-ed submission is published by a newspaper, social media or outlets like Medium.com can offer an alternative platform for publication.
Finally, groups of constituents can sometimes meet with a newspaper's editorial board to educate editors who write their own columns.

Beyond press-based public dissent are any number of event-based alternatives, from expressive events like rallies, marches, and protests, to educational ones like teach-ins, public discussions, or debates. Even seemingly recreational events like concerts or parties can prompt a public discourse if organized to emphasize substantive themes. Finally, creative visual stunts, like flash mobs, light brigades, and banner drops—especially when amplified through social media—can offer groups with relatively few participants the chance to reach large audiences. Events educating a public audience can shift the ground beneath an elected official and ultimately offer more influence than requests or demands made directly to their offices.

Opportunities

Training is available for any of these tactics through the Electronic Frontier Alliance, a network of local grassroots groups across the U.S. that remotely convenes each month. Any network of neighbors who share concerns about digital rights is welcome to explore and apply to join the EFA. The Alliance offers groups that join access to EFF supporters in their own areas, to other grassroots organizers elsewhere, and to EFF staff available to provide policy or organizing guidance on request (including a sample letter seeking a meeting with a congressional office). Materials are currently under development offering detailed guidance on various campaign models, from hosting digital security workshops to seeking legal restrictions on mass surveillance by local police.

Throughout the year, Congress takes occasional recesses, when lawmakers return to their states and districts. During these periods, congressional delegations are most accessible to constituents—and most vulnerable to their criticism. The Senate and House calendars include information about in-district work periods, one of which concludes this week. During this week’s recess, we urge concerned readers to:

- Voice your concerns about Congress’ recent decision to side with corporate ISPs over their users and your privacy,
- Explain your support for net neutrality and encourage opposition to potential proposals that would further limit the FCC’s jurisdiction, and
- Share your reasons for wanting transparency, oversight, and meaningful limits on NSA mass surveillance.

Hollow Privacy Promises from Major Internet Service Providers (Wed, 19 Apr 2017)
It’s no surprise that Americans were unhappy to lose online privacy protections earlier this month. Across party lines, voters overwhelmingly oppose the measure to repeal the FCC’s privacy rules for Internet providers that Congress passed and President Donald Trump signed into law.

But it should come as a surprise that Republicans—including the Republican leaders of the Federal Communications Commission and the Federal Trade Commission—are ardently defending the move and dismissing the tens of thousands who spoke up and told policymakers that they want protections against privacy invasions by their Internet providers. Since the measure was signed into law, Internet providers and the Republicans who helped them accomplish this lobbying feat have decried the “hysteria,” “hyperbole,” and “hyperventilating” of constituents who want to be protected from the likes of Comcast, Verizon, and AT&T. Instead they’ve claimed that the repeal doesn’t change the online privacy landscape and that we should feel confident that Internet providers remain committed to protecting their customers’ privacy because they’ve promised to, even though the law no longer requires it.

We’ve repeatedly debunked the tired talking points of the cable and telephone lobby:

- There is a unique, intimate relationship and power imbalance between Internet providers and their customers.
- The FTC likely cannot currently police Internet providers (unless Congress steps in, which the White House said it isn’t pushing for at this time).
- Congress’ repeal of the FCC’s privacy rules does throw the FCC’s authority over Internet providers into doubt.
- The now-repealed rules—which were set to go into effect later this year—were a valuable expansion and necessary codification of existing privacy rights granted under the law.
- Internet providers have already shown us the creepy things they’re willing to do to increase their profits.

The massive backlash shows that consumers saw through those industry talking points, even if Republicans in Congress and the White House fell for them. Now that policymakers have effectively handed off online privacy enforcement to the Internet providers themselves, advocates for the repeal are pointing to the Internet providers’ privacy policies. “Internet service providers have never planned to sell your individual browsing history to third parties,” FCC Chairman Ajit Pai and FTC acting Chairwoman Maureen Ohlhausen wrote in a recent op-ed. “That’s simply not how online advertising works. And doing so would violate ISPs’ privacy promises.”

Aside from pushing back on this oversimplification of the problem at hand, we should be asking: What exactly are the “privacy promises” that ISPs are making to their customers? In blog posts and public statements since the rules were repealed, the major Internet providers and the trade groups that represent them have all pledged to continue protecting customers’ sensitive data and not to sell customers’ individual Internet browsing records. But how they go about defining those terms and using our private information is still going to leave people upset. These statements should also be read with the understanding that existing law already allows the collection of individual browsing history.
Comcast said it won’t sell individual browsing histories and won’t share customers’ “sensitive information (such as banking, children’s, and health information), unless we first obtain their affirmative, opt-in consent.” It also said it will offer an opt-out “if a customer does not want us to use other, non-sensitive data to send them targeted ads.” We think leaving browsing history out of the list of information Comcast considers sensitive was no accident. In other words, we don’t think Comcast considers your browsing history sensitive, and it will only offer you an opt-out from the use of your browsing history to send you targeted ads. There’s no mention of any opt-out of any other sharing of your browsing history, such as on an aggregated basis with third parties. While we applaud Comcast’s clever use of language to make it seem like it’s protecting its customers’ privacy, reading between the lines shows that Comcast is giving itself leeway to do the opposite.

Verizon similarly pledged not to sell customers’ “personal web browsing history” (emphasis ours) and described its advertising programs that give advertisers access to customers based on aggregated and de-identified information about what customers do online. By our reading, this means Verizon still plans to collect your browsing history and store it—it just won’t sell it individually.

AT&T pointed to its privacy policies, which carve out specific protections for “personal information … such as your name, address, phone number and e-mail address” but explicitly state that it does deliver ads “based on the websites visited by people who are not personally identified.” So just like Verizon, we think this means AT&T is collecting your browsing history and storing it—it’s just not attaching your name to it and selling it to third parties on an individualized basis.

In a filing to the FCC earlier this year, CTIA—which represents the major wireless ISPs—argued that “web browsing and app usage history are not ‘sensitive information’” and said that ISPs should be able to share those records by default, unless a customer asks them not to.

The common thread here is that Internet providers don’t consider records of what you do online to be worthy of the heightened privacy protections they afford to things like your Social Security number. Internet providers think that our web browsing histories are theirs to profit from—not ours to protect as we see fit. And because Congress changed the law, they are now free to change their minds about the promises they make, without the same legal ramifications.

These “privacy promises” are in no way a replacement for robust privacy protections enforced by a federal agency. If Internet providers want to get serious about proving their commitment to their customers’ privacy in the absence of federal rules, they should pledge not to collect, sell, share, or otherwise use information about the websites we visit and the apps we use, except for what they need to collect and share in order to provide the service their customers are actually paying for: Internet access. That would be a real privacy promise.

¿Quién defiende tus datos en Chile? Primer informe anual busca saber qué ISPs chilenos están del lado de sus usuarios (Tue, 18 Apr 2017)
Derechos Digitales, la organización líder en derechos digitales en Chile, ha lanzado un nuevo informe, en colaboración con EFF, que evalúa las prácticas de privacidad de los proveedores de servicios de Internet (ISP) chilenos. Este proyecto forma parte de una serie en toda América Latina y está adaptado del informe anual de EFF, Who Has Your Back?. Los informes tienen por objeto evaluar a los proveedores de servicios de telefonía móvil y fija para ver cuáles se ponen del lado de sus usuarios al responder a las solicitudes gubernamentales de información personal. Si bien hay margen de mejora, la primera edición chilena del informe ¿Quién Defiende Tus Datos? tiene algunos indicadores esperanzadores.

Los chilenos entran a la red más que cualquier otra nacionalidad en América Latina. Cuando los chilenos utilizan Internet, revelan sus datos más privados, incluyendo sus relaciones en línea, sus discusiones políticas, artísticas y personales, e incluso sus movimientos minuto a minuto. Y todos esos datos tienen que pasar necesariamente por un puñado de ISP. Eso significa que los chilenos, más que nadie en América Central o del Sur, dependen de que sus proveedores defiendan sus datos.

El informe de Derechos Digitales se propuso examinar qué proveedores de Internet y compañías telefónicas chilenas defienden mejor a sus clientes. ¿Cuáles son transparentes acerca de sus políticas con respecto a las solicitudes de datos? ¿Cuáles exigen una orden judicial antes de entregar información personal? ¿Alguna de ellas impugna las leyes de vigilancia o las solicitudes individuales de datos de sus usuarios? ¿Alguna de las compañías notifica a sus usuarios cuando cumple con solicitudes judiciales?

Derechos Digitales examinó la información publicada públicamente, incluyendo las políticas de privacidad y los códigos de prácticas, de cinco de los mayores proveedores chilenos de acceso a telecomunicaciones: Movistar, VTR, Claro, Entel y GTD Manquehue. Entre estos proveedores cubren la gran mayoría de los mercados móviles, fijos y de banda ancha. A cada empresa se le dio la oportunidad de responder a un cuestionario, participar en una entrevista privada y enviar cualquier información adicional que considerara apropiada, información que se incorporó al informe final. Este enfoque se basa en el trabajo anterior de EFF con Who Has Your Back? en los Estados Unidos, aunque las preguntas específicas del estudio de Derechos Digitales fueron adaptadas para ajustarse al marco legal de Chile.

Grupos de derechos digitales de toda América Latina están trabajando en investigaciones similares con metodologías adaptadas a cada país. La Fundación Karisma en Colombia está a punto de publicar, por tercera vez, el informe ¿Dónde Están Mis Datos?, mientras que InternetLab en Brasil está por publicar su segundo informe anual. ADC en Argentina, R3D en México y TEDIC en Paraguay también están trabajando en estudios similares.

Abajo encontrará los rankings de Derechos Digitales para los ISP chilenos y las compañías telefónicas; el informe completo, que incluye detalles sobre cada empresa, está disponible en: https://www.derechosdigitales.org/qdtd/

Criterios de evaluación para ¿Quién Defiende tus Datos?
Protección de datos: Un ISP gana una estrella completa en esta categoría si publica su contrato de servicios de Internet para todos los tipos de planes y sus políticas de protección de datos en su sitio web, de manera clara y accesible para los usuarios. Las políticas de protección de datos deben ajustarse a las normas nacionales. El cumplimiento parcial fue recompensado con media estrella.

Transparencia: Para ganar una estrella, los ISP deben publicar un informe de transparencia sobre cómo manejan la información de los usuarios y los requerimientos del gobierno sobre esa información. Los informes de transparencia deben incluir información útil sobre el número específico de peticiones de información que los ISP han aprobado y rechazado; un resumen de las peticiones ordenado por autoridad investigadora, tipo y propósito; el número específico de individuos afectados por cada solicitud durante el último año; y si los terceros que administran datos de usuarios lo hacen de una manera que protege la privacidad. Se concedió media estrella a los ISP que publicaron informes de transparencia pero no se refirieron específicamente a la protección de datos y al monitoreo de las comunicaciones. Si el proveedor no ha publicado un informe de transparencia, no se otorga ninguna estrella.

Notificación al usuario: Para obtener una estrella en esta categoría, los ISP deben, si están autorizados legalmente a hacerlo, notificar a sus usuarios de manera oportuna cuando las autoridades soliciten acceso a su información personal, para que los usuarios puedan buscar un recurso o apelación según sea necesario. Se otorgó media estrella a los ISP que notifican a sus clientes cuando las autoridades hacen una solicitud de datos de usuario, pero no lo hacen de manera oportuna, lo que dificulta que los usuarios busquen una solución. Si no hubo evidencia de que un ISP notifica a sus usuarios cuando una autoridad solicita datos de usuario, la compañía no recibió ninguna estrella.

Pautas de privacidad de datos: Un ISP obtuvo una estrella en esta categoría si, en su sitio web, explica cómo maneja los datos del usuario y detalla los requisitos y las obligaciones legales que las autoridades solicitantes deben cumplir al pedir datos a la empresa. La explicación debe ser fácil de entender; debe especificar los procedimientos que la empresa usa para responder a las solicitudes de datos de las autoridades; y debe indicar durante cuánto tiempo retiene los datos de usuario. Un ISP ganó media estrella si publicó información sobre cómo maneja los datos del usuario, pero no especificó las obligaciones y los procedimientos que exige a las autoridades que solicitan datos del usuario.

Compromiso con la privacidad: Para ganar una estrella, un ISP debe haber defendido activamente la privacidad de sus usuarios en los tribunales o ante el Congreso, impugnando legislación invasiva y perjudicial para la privacidad de sus usuarios. Un ISP podía ganar media estrella si ha defendido a sus usuarios en una de las dos áreas antes mencionadas (en los tribunales o frente al Congreso).

Resultados

Conclusión

Las compañías en Chile han comenzado bien, pero todavía tienen un camino por recorrer para proteger totalmente los datos personales de sus clientes y ser transparentes sobre quién tiene acceso a ellos. Derechos Digitales y EFF esperan publicar este informe anualmente para incentivar a las empresas a mejorar la transparencia y proteger los datos de los usuarios.
De esta manera, todos los chilenos tendrán acceso a información sobre cómo se usan sus datos personales y cómo los ISP los controlan para que puedan tomar decisiones más inteligentes del consumidor. Esperamos que el informe brille con más estrellas el próximo año. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Who Has Your Back in Chile? First-Annual Report Seeks to Find Out Which Chilean ISPs Stand With Their Users (Di, 18 Apr 2017)
Derechos Digitales, the leading digital rights organization in Chile, has launched a new report in collaboration with EFF that evaluates the privacy practices of Chilean Internet Service Providers (ISPs). This project is part of a series across Latin America, adapted from EFF’s annual Who Has Your Back? report. The reports are intended to evaluate mobile and fixed ISPs to see which stand with their users when responding to government requests for personal information. While there’s definitely room for improvement, the first edition of the Chilean ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report has some hopeful indicators. Chileans go online more than any other nationality in Latin America. When Chileans use the Internet, they put their most private data, including their online relationships, political, artistic and personal discussions, and even their minute-by-minute movements online. And all of that data necessarily has to go through one of a handful of ISPs. That means that Chileans are more likely to be putting their trust in their providers to defend their data than anyone else in Central or South America. Derechos Digitales’ report set out to examine which Chilean ISPs and telephone companies best defend their customers. Which are transparent about their policies regarding requests for data? Which require a judicial warrant before handing over personal information? Do any challenge surveillance laws or individual demands for their users’ data? Do any of the companies notify their users when complying with judicial requests? Derechos Digitales examined publicly posted information, including the privacy policies and codes of practice, from five of the biggest Chilean telecommunications access providers: Movistar, VTR, Claro, Entel, and GTD Manquehue. Between them, these providers cover the vast majority of mobile, fixed line and broadband markets. Each company was given the opportunity to answer a questionnaire, to take part in a private interview and to send any additional information they felt appropriate, all of which was incorporated into the final report. This approach is based on EFF’s earlier work with Who Has Your Back? in the United States, although the specific questions in Derechos Digitales’ study were adapted to match Chile’s legal environment. Customized investigations using similar methodologies are being worked on by digital rights groups across Latin America. The Karisma Foundation in Colombia and R3D in Mexico are about to publish their third-annual reports. InternetLab in Brazil is about to publish its second-annual report, and ADC in Argentina and TEDIC in Paraguay are all working on similar studies. Derechos Digitales’ rankings for Chilean ISPs and phone companies are below; the full report, which includes details about each company, is available at: https://www.derechosdigitales.org/qdtd/ Evaluation Criteria for ¿Quién Defiende tus Datos? Data Protection: An ISP earned a complete star in this category if they published their Internet service agreement—for all types of plans—and their data protection policies on their website in a clear and accessible way to users. The data protection policies must be aligned with national regulations. Partial compliance was rewarded with half a star. Transparency: To earn a star, ISPs must have published a transparency report on how they manage their users’ data and handle government requests for data. 
The transparency report must have included useful information about the specific number of data requests the ISP has approved and rejected; a summary of the requests by investigation authority, type, and purpose; the specific number of individuals over the last year who have been affected by each request; and whether third-parties managing user data do so in a privacy-protective manner. A half star was awarded to ISPs that published transparency reports, but did not specifically refer to data protection and the monitoring of communications. If the provider has not published a transparency report, no star was awarded. User Notification: To earn a star in this category, ISPs must, if legally permitted, notify their users in a timely manner when authorities request access to their personal information so users may seek remedy or appeal as necessary. A half star was awarded to ISPs that notify their customers when authorities make a request for user data, but do not do so in a timely manner, making it difficult for the users to seek remedy. If there was no evidence that an ISP notifies its users when an authority requests user data, the company was not awarded a star. Data Privacy Guidelines: An ISP earned a star in this category if, on their website, it explains how it handles user data—and specifically outlines the requirements and legal obligations requesting authorities must comply with when requesting user data from the company. The explanation must be easy to understand; it must specify the procedures the company uses to respond to data requests from authorities; and it must indicate how long it retains user data. An ISP earned a half star if it published information about how it handles user data, but did not specify the obligations and procedures it requires of authorities who request user data. Commitment to Privacy: To earn a star, an ISP must have actively defended the privacy of their users in the courts, or in front of Congress to challenge broad legislation that is detrimental to the privacy of their users. An ISP could earn a half star if it has defended its users in one of the two areas listed above (in the courts, or in front of Congress). Results Conclusion Companies in Chile are off to a good start but still have a ways to go to fully protect their customers’ personal data and be transparent about who has access to it. Derechos Digitales and EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Chileans will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope the report will shine with more stars next year. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
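The rubric described above reduces to simple arithmetic: each of the five categories is worth a full star, a half star, or no star, and a provider's ranking is the sum. The short Python sketch below is purely illustrative and is not code from the report; the category names follow the evaluation criteria, but the example provider and its scores are hypothetical.

```python
# Illustrative only: a minimal tally of the star rubric described above.
# The example scores are hypothetical and do not reflect any real provider.
from enum import Enum


class Score(Enum):
    NONE = 0.0   # no star: no evidence the criterion is met
    HALF = 0.5   # half star: partial compliance
    FULL = 1.0   # full star: criterion fully met


CATEGORIES = [
    "Data Protection",
    "Transparency",
    "User Notification",
    "Data Privacy Guidelines",
    "Commitment to Privacy",
]


def total_stars(scores: dict) -> float:
    """Sum a provider's stars across the five evaluation categories."""
    return sum(scores[category].value for category in CATEGORIES)


if __name__ == "__main__":
    # Hypothetical provider: full marks on data protection, partial
    # transparency and guidelines, no notification or public advocacy.
    example = {
        "Data Protection": Score.FULL,
        "Transparency": Score.HALF,
        "User Notification": Score.NONE,
        "Data Privacy Guidelines": Score.HALF,
        "Commitment to Privacy": Score.NONE,
    }
    print(f"Total: {total_stars(example)} of {len(CATEGORIES)} stars")
```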
>> mehr lesen

EFF to California Supreme Court: Website Owners Have a First Amendment Right to Defend Content on Their Platform (Di, 18 Apr 2017)
A bad review on Yelp is an anathema to a business. No one wants to get trashed online. But the First Amendment protects both the reviewer’s opinion and Yelp’s right to publish it. A California appeals court ran roughshod over the First Amendment when it ordered Yelp to comply with an injunction to take down speech without giving the website any opportunity to challenge the injunction’s factual basis. The case is on appeal to the California Supreme Court, and EFF filed an amicus brief asking the court to overturn the lower court’s dangerous holding. The case, Hassell v. Bird, is procedurally complicated. A lawyer, Dawn Hassell, sued a former client, Ava Bird, for defamation in California state court over a negative Yelp review. Bird never responded to the lawsuit, so the trial court entered a default judgment against her. The court—at Hassell’s request—not only ordered Bird to remove her own reviews, but also ordered Yelp to remove them—even though Yelp was never named as a party to the suit. (If this kind of abuse of a default judgment sounds familiar, that’s not a coincidence; it seems to be increasingly common—and it’s a real threat to online speech.) Yelp challenged the order, asserting that Hassell failed to prove that the post at issue was actually defamatory, that Yelp could not be held liable for the speech pursuant to the Communication Decency Act, 47 U.S.C. § 230 (“Section 230”), and that Yelp could not be compelled to take down the post as a non-party to the suit. The trial court rejected Yelp’s arguments and refused to recognize Yelp’s free speech rights as a content provider. The California Court of Appeal affirmed the trial court’s decision, holding that Yelp could be forced to remove the supposedly defamatory speech from its website without any opportunity to argue that the reviews were accurate or otherwise constitutionally protected. This decision is frankly just wrong—and for multiple reasons. Neither court seemed to understand that the First Amendment protects not only authors and speakers, but also those who publish or distribute their words. Both courts completely precluded Yelp, a publisher of online content, from challenging whether the speech it was being ordered to take down was defamatory—i.e., whether the injunction to take down the speech could be justified. And the court of appeals ignored its special obligation, pursuant to California law, to conduct an “independent examination of the record” in First Amendment cases. Both courts also seemed to completely ignore the U.S. Supreme Court’s clear holding that issuing an injunction against a non-party is a constitutionally-prohibited violation of due process. EFF—along with the ACLU of Northern California and the Public Participation Project—urged the California Supreme Court to accept the case for review back in August 2016. The court agreed to review the case in September, and we just joined an amicus brief urging the court to overrule the problematic holding below.  Our brief—drafted by Jeremy Rosen of Horvitz & Levy and joined by a host of other organizations dedicated to free speech—explains to the California Supreme Court that the First Amendment places a very high bar on speech-restricting injunctions. A default judgment simply cannot provide a sufficient factual basis for meeting that bar, and the injunction issued against Yelp in this case was improper. 
We also explained that the injunction violated clear Supreme Court case law and Yelp’s due process rights, and that the injunction violates Section 230, which prohibits courts from holding websites liable for the speech of third parties. As Santa Clara University law school professor Eric Goldman noted in a blog post about the case, the appeals court’s decision opens up a host of opportunities for misuse and threatens to rip a “hole” in Section 230’s protections for online speech—protections that already seem to be weakening. If not overturned, as the already pervasive misuse of default judgments teaches, this case will surely lead to similar injunctions that infringe on publishers’ free speech rights without giving them any notice or opportunity to be heard. The California Supreme Court cannot allow this. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Victory for Now: California Hits Pause on A.B. 165, Bill that Sought to Undermine Student Privacy (Sa, 15 Apr 2017)
It's a great day for digital privacy in California. Confronted with opposition from a powerful and diverse coalition, Assemblymember Jim Cooper has pulled his legislation, A.B. 165, from consideration by the Assembly Privacy and Consumer Protection Committee. EFF joined over 60 civil rights organizations, technology companies, and school community groups in fighting A.B. 165, and we thank all the EFF members and friends who joined us in speaking out. The unrelenting, principled opposition to this anti-privacy bill stopped it from reaching its first committee hearing. A.B. 165 attempted to create a carve-out in the California Electronic Communications Privacy Act (CalECPA), one of the strongest digital privacy bills in the nation. If A.B. 165 had passed, it would have left millions of Californians who attend our schools without strong protections against invasive digital searches. California students need privacy on their digital devices in order to research sensitive topics, explore political issues, and connect with friends and family members. That’s especially true in this political moment when many students who come from immigrant families, are exploring their sexuality, or who are engaging in political protest may feel heightened concern around government access to their digital data. The students of today will be the voters, creators, and policymakers of tomorrow. By teaching students that our laws respect and uphold their digital privacy from a young age, we can help create a future generation of engaged citizens who understand the value of digital privacy. We thank the California Assemblymembers who responded to the privacy concerns with AB 165 and halted this bill in response to the public outcry, especially Assemblymember Ed Chau, Chair of the Committee. While we are celebrating today, this fight isn’t over. A.B. 165 could be revived at some point during this two-year legislative cycle. If you haven’t already, please tell your California representative you stand for privacy. The price of freedom is vigilance, and EFF relies on individual donations to vigilantly defend digital privacy. Please support our work. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

EFF Urges Court to Roll Back Ruling Allowing Remote-Control Spying (Do, 13 Apr 2017)
Recent Decision Would Allow Foreign Governments to Wiretap Americans on U.S. Soil Washington, D.C. – The Electronic Frontier Foundation (EFF) urged an appeals court today to review a dangerous decision by a three-judge panel that would allow foreign governments to spy on Americans on U.S. soil—just as long as they use technology instead of human agents. In Kidane v. Ethiopia, an American living in Maryland had his family computer infiltrated by the Ethiopian government. Agents sent an infected email that made its way to Mr. Kidane, and the attached Microsoft Word document carried a malicious computer program called FinSpy that’s sold only to governments. The spyware took control of the machine, making copies of every keystroke and Skype call, and sending them back to Ethiopia as part of its crackdown on critics. But last month, a panel of judges on the U.S. Court of Appeals for the District of Columbia Circuit ruled that Mr. Kidane could not seek justice for this surveillance in an American court because the spying was carried out without a human agent of the Ethiopian government setting foot in the U.S. In essence, this would mean governments around the world have immunity for spying, attacking, and even murdering Americans on American soil, as long as the activity is performed with software, robots, drones, or other digital tools. “We already know about technology that will let attackers drive your car off the road, turn off your pacemaker, or watch every communication from your computer or your phone. As our lives become even more digital, the risks will only grow,” said EFF Senior Staff Attorney Nate Cardozo. “The law must make it clear to governments around the world that any illegal attack in the United States will be answered in court in the United States.” In a petition filed today, EFF and our co-counsel Scott Gilmore plus attorneys at the law firms of Jones Day and Robins Kaplan asked the appeals court to rehear this case en banc, arguing that last month’s panel decision puts the U.S. in the absurd situation where the American government must follow strict requirements for wiretapping and surveillance, but foreign governments don’t have the same legal obligations. “American citizens deserve to feel safe and secure in their own homes using their own computers,” said EFF Executive Director Cindy Cohn. “The appeals court should vacate this decision, and ensure that the use of robots or remote controlled tools doesn’t prevent people who have been harmed by foreign government attacks from seeking justice.” For the full petition for rehearing: https://www.eff.org/document/petition-rehearing-1 For more on this case: https://www.eff.org/cases/kidane-v-ethiopia Contact:  Nate Cardozo Senior Staff Attorney nate@eff.org Cindy Cohn Executive Director cindy@eff.org Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Patent Owner Can’t Use Foreign Court Order To Block EFF From Speaking Out (Do, 13 Apr 2017)
EFF Sues Company To Assert Constitutional Right to Criticize a Patent and Litigation Over It San Francisco—The Electronic Frontier Foundation (EFF) filed a lawsuit yesterday against a company that’s using foreign laws to stymie EFF’s free speech rights to publish information about and criticize its litigation over a patent featured in EFF’s “Stupid Patent of the Month” blog series. The company, Global Equity Management (SA) Pty Ltd (GEMSA), owns a patent claiming the idea of using “virtual cabinets” to graphically represent different operating systems and storage partitions. GEMSA has filed dozens of patent infringement cases in the U.S. Since 2014, EFF’s stupid patent blog series has called attention to questionable patents that stifle innovation, harm the public, or can be employed to shake down users of commonplace processes or technologies. After EFF wrote about the patent, GEMSA accused EFF of slander. The company went to court in Australia to obtain an order to take down the article and prohibit EFF from publishing anything about any of GEMSA’s patents. This order, which purports to silence expression of an opinion, would never survive scrutiny under the First Amendment in the United States. In a complaint filed in San Francisco yesterday, EFF asked a federal district court to rule that the order is unenforceable. Under the 2010 Securing the Protection of Our Enduring and Established Constitutional Heritage Act (SPEECH Act), foreign orders aren’t enforceable in the United States unless they are consistent with the free speech protections provided by the U.S. and state constitutions, as well as state law.     The injunction issued by the South Australian court purports to order EFF to remove the blog post and forbid EFF from speaking in the future about any of GEMSA’s intellectual property. It states that failure to comply could result in the seizure of EFF’s assets and prison time for its officers. “We are going to court to ensure that EFF is not silenced by foreign laws that forbid speech our Constitution protects,” said EFF Deputy Executive Director and General Counsel Kurt Opsahl. “GEMSA may not like what we’ve said about its patent, but we will defend our right to express our constitutionally protected opinion." EFF is represented by law firms Levine Sullivan Koch & Schulz, LLP and Jassy Vick Carolan. For the brief: https://www.eff.org/document/eff-v-gemsa-complaint For EFF’s Stupid Patent of the Month series:https://www.eff.org/issues/stupid-patent-month Contact:  Kurt Opsahl Deputy Executive Director and General Counsel kurt@eff.org Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

EFF Releases Spying on Students Ed Tech Report (Do, 13 Apr 2017)
EFF Survey Reveals Gaps in Protecting the Privacy of K-12 Students Using School-Issued Devices and Cloud Apps “They are collecting and storing data to be used against my child in the future, creating a profile before he can intellectually understand the consequences of his searches and digital behavior." This was the response of one parent to an online survey EFF conducted to learn more about the use of mobile devices and cloud services in K-12 classrooms across the country—so called education technology or “ed tech.” Today, EFF released a report entitled “Spying on Students: School-Issued Devices and Student Privacy” that summarizes the results of this survey. While there are educational advantages to incorporating technology into the classroom experience, the survey results reflect an overarching concern that children as young as kindergartners are being conditioned to accept a culture of surveillance. EFF maintains that children should not be taught that using the Internet or technology requires sacrificing personal privacy. The survey, launched in December 2015, elicited responses from over 1000 students, parents, teachers, librarians, school administrators, system administrators, and community members. We organized the survey results into eight themes: Lack of transparency: Schools and districts do not provide adequate notice and disclosures to parents about what technology their children use in the classroom, including devices and online applications that require transferring student information to private companies. Investigative burden: Parents and even students themselves put in significant effort, sometimes over many months, to get information from both schools/districts and ed tech companies, about technology use in the classroom and its implications for student privacy.  Data collection and use: Parents are concerned about the specific data about their children that ed tech companies collect, and what companies do with that data, particularly for non-educational, commercial purposes and without written notice to and consent from parents. Lack of standard privacy precautions: Survey participants reported 152 apps, software programs, and digital services being used in classrooms. Only 118 of these have published privacy policies online. And far fewer address important privacy issues such as data retention, encryption, and data de-identification and aggregation. Barriers to opt-out: Many schools and districts do not provide the ability for parents to opt their children out of using certain technologies. Or if administrators are open to providing an opt-out option, many parents and students have found it difficult to make alternative technologies and teaching methods a reality. Shortcomings of “Privacy by Policy”: Survey participants expressed doubt that the privacy policies of both schools/districts and ed tech companies actually protect student privacy in practice. Inadequate technology and privacy training for teachers: Survey participants emphatically reported that teachers, those who interface most directly with ed tech and students, lack adequate training to move from “privacy by policy” to “privacy by practice.” Digital literacy for students: Survey results revealed that there is a ripe opportunity and need to educate students about how to protect their privacy online, operate safely online, and generally be savvy users of technology, which are skills that they should carry into adulthood. 
A goal of the “Spying on Students” survey was to highlight the struggles of average people trying to navigate the student privacy issue. So throughout the discussion of the survey results, we present the case studies of a parent, technology director, system administrator, and school librarian. In addition to summarizing the survey results, the “Spying on Students” report includes an overview of relevant student privacy laws, including the federal laws FERPA and COPPA, and a sampling of state laws from California, Colorado, and Connecticut. The report also discusses the inadequacy of the leading ed tech industry self-regulatory effort, the Student Privacy Pledge. Finally, the report includes privacy recommendations and best practices for school/district administrators, teachers, librarians, system administrators, parents, students, and—of course—ed tech companies. Today’s report is part of our larger student privacy campaign, which aims to educate students, parents, and school officials about digital privacy—and to encourage ed tech companies to institute better privacy policies and practices that actually protect the privacy of minor students, while enabling them to benefit from technology in the classroom. With the right awareness and will—particularly from an $8 billion industry—technology can be both educationally beneficial and privacy protective. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

EFF’s "Spying on Students" Report Highlights Tech Companies’ Data Collection, Parents’ Frustrations (Do, 13 Apr 2017)
Surveillance Culture Starts in Grade School, Schools Fail To Protect Kids’ Privacy San Francisco—School children are being spied on by tech companies through devices and software used in classrooms that often collect and store kids’ names, birth dates, browsing histories, location data, and much more—often without adequate privacy protections or the awareness and consent of parents, according to a new report from the Electronic Frontier Foundation (EFF). EFF’s “Spying on Students: School-Issued Devices and Student Privacy” shows that state and federal law, as well as industry self-regulation, has failed to keep up with a growing educational technology industry. At the same time, schools are eager to incorporate technology in the classroom to engage students and assist teachers, but may unwittingly help tech companies surveil and track students. Ultimately, students and their data are caught in the middle without sufficient privacy protections. One-third of all K-12 students in the U.S. use school-issued devices running software and apps that collect far more information on kids than is necessary, the report says. Resource-strapped school districts can receive these tools at steeply reduced prices or for free as tech companies seek a slice of the $8 billion education technology, or ed tech, industry. But there’s a real, devastating cost—the tracking, cataloguing, and exploitation of data about children as young as five years old. Ed tech providers know privacy is important to parents, students, and schools. Of the 152 ed tech services reported to us, 118 had published privacy policies. But far fewer addressed such important privacy issues as data retention, encryption, de-identification, and aggregation. And privacy pledges don’t stop companies from mining students’ browsing data and other information and using it for their own purposes. “Our report shows that the surveillance culture begins in grade school, which threatens to normalize the next generation to a digital world in which users hand over data without question in return for free services—a world that is less private not just by default, but by design,” said EFF Researcher Gennie Gebhart, an author of the report. EFF surveyed over 1,000 stakeholders across the country, including students, parents, teachers, and school administrators, and reviewed 152 ed tech privacy policies in a year-long effort to determine whether and how ed tech companies are protecting students’ privacy and their data. “Parents, teachers, and other stakeholders feel helpless in dealing with student privacy issues in their community. In some cases students are required to use the tools and can’t opt out, but they and their families are given little to no information about if or how their kids’ data is being protected and collected,” said EFF Analyst Amul Kalia, a co-author of the report. “With this whitepaper, we lay out specific strategies that they can employ to gather allies, and push their schools and districts in the right direction.” “Spying on Students” provides comprehensive recommendations for parents, teachers, school administrators, and tech companies to improve the protection of student privacy. Asking the right questions, negotiating for contracts that limit or ban data collection, offering families the right to opt out, and making digital literacy and digital privacy part of the school curriculum are just a few of the more than 70 recommendations for protecting student privacy contained in the report. 
“The data we collected on the experiences, perceptions, and concerns of stakeholders across the country sends a loud and clear message to ed tech companies and lawmakers: families are concerned about student privacy and want an end to spying on students,” said Gebhart. For the report: https://www.eff.org/wp/school-issued-devices-and-student-privacy For more on EFF's student privacy campaign: https://www.eff.org/issues/student-privacy Contact:  Gennie Gebhart Researcher gennie@eff.org Sophia Cope Staff Attorney sophia@eff.org Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

States Introduce Dubious Anti-Pornography Legislation to Ransom the Internet (Mi, 12 Apr 2017)
More than a dozen state legislatures are considering a bill called the “Human Trafficking Prevention Act,” which has nothing to do with human trafficking and everything to do with one man’s crusade against pornography at the expense of free speech. At its heart, the model bill would require device manufacturers to pre-install “obscenity” filters on devices like cell phones, tablets, and computers. Consumers would be forced to pony up $20 per device in order to surf the Internet without state censorship. The legislation is not only technologically unworkable; it also violates the First Amendment and significantly burdens consumers and businesses. Perhaps more shocking is the bill’s provenance. The driving force behind the legislation is a man named Mark Sevier, who has been using the alias “Chris Severe” to contact legislators. According to the Daily Beast, Sevier is a disbarred attorney who has sued major tech companies, blaming them for his pornography addiction, and sued states for the right to marry his laptop. Reporters Ben Collins and Brandy Zadrozny uncovered a lengthy legal history for Sevier, including an open arrest warrant and stalking convictions, as well as evidence that Sevier misrepresented his own experience working with anti-trafficking non-profits. The bill has been introduced in some form in Alabama, Florida, Georgia, Indiana, Louisiana, New Jersey, North Dakota, Oklahoma, South Carolina, Texas, West Virginia, and Wyoming (list here). We recommend that any legislator who has to consider this bill read the Daily Beast’s investigation. But that’s not why they should vote against the Human Trafficking Prevention Act. They should kill this legislation because it’s just plain awful policy. Obviously, each version of the legislation varies, but here is the general gist. Read EFF's opposition letter against H.3003, South Carolina's iteration of the Human Trafficking Prevention Act. Pre-installed Filters Manufacturers of Internet-connected devices would have to pre-install filters to block pornography, including “revenge porn.” Companies would also have to ensure that all child pornography, “revenge pornography,” and “any hub that facilitates prostitution” are rendered inaccessible. Most iterations of the bill require this filtering technology to be turned on and locked in the on position by default. This is terrible for consumer choice because it forces people to purchase a software product they don’t necessarily want. It’s also terrible for free speech because it restrains what you can see. Because of the risk of legal liability, companies are more likely to over-censor, blocking content by default rather than giving websites the benefit of the doubt. The proscriptions are also technologically unworkable: for example, an algorithm can hardly determine whether an item of pornography is “revenge” or consensual, or whether a site is a hub for prostitution. To be clear, unlocking such filters would not just be about accessing pornography. A user could be seeking to improve the performance of their computer by deleting unnecessary software. A parent may want to install premium child safety software, which may not play well with the default software. And, of course, many users will simply want to freely surf the Internet without repeatedly being denied access to sites mistakenly swept up in the censorship net. A Censorship Tax The model bills would require consumers to pay a $20 fee to unlock each of their devices to exercise their First Amendment rights to look at legal content. 
Consumers could end up paying a small fortune to unlock their routers, smartphones, tablets, and desktop computers.  Data Collection Anyone who wants to unlock the filters on their devices would have to put their request in writing. Then they’d be required to show ID, be subjected to a “written warning regarding the potential dangers” of removing the obscenity filter, and then would have to sign a form acknowledging they were shown that warning. That means stores would be maintaining private records on everyone who wanted their “Human Trafficking” filters removed.   The Censorship Machine The bill would force the companies we rely upon to ensure open access to the Internet to create a massive censorship apparatus that is easily abused. Under the bill, tech companies would be required to operate call centers or online reporting centers to monitor complaints that a particular site isn’t included in the filter or complaints that a site isn’t being properly filtered. Not only that, but the bill specifically says they must “ensure that all child pornography and revenge pornography is inaccessible on the product” putting immense pressure on companies to aggressively and preemptively block websites to avoid legal liability out of fear of just one illegal or forbidden image making it past their filters. Social media sites would only be immune if they also create a reporting center and “remain reasonably proactive in removing reported obscene content.”  It’s unfortunate that the Human Trafficking Prevention Act has gained traction in so many states, but we're pleased to see that some, such as Wyoming and North Dakota, have already rejected it. Legislators should do the right thing: uphold the Constitution, protect consumers, and not use the problem of human trafficking as an excuse to promote this individual’s agenda against pornography.  Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

When Did You First Realize the Importance of Online Privacy? (Mi, 12 Apr 2017)
[Embedded video: https://www.youtube-nocookie.com/embed/ovx53x6ZA-c (served from youtube-nocookie.com)] Was there a moment in your life when you had an awakening about the importance of digital privacy? Maybe your parents snooped around an email account when you forgot to log out. Maybe photos you thought were private ended up online. Maybe you didn’t land your dream job, and you suspect an old LiveJournal account still visible in search results for your name may be the culprit. Maybe you got hacked. We’re collecting stories from people about the moment digital privacy first started mattering in their lives. Through this collection, we’re hoping to illustrate the varied, often deeply personal reasons that people care about digital privacy. This isn’t a dry policy issue; corporate data practices have lasting ramifications on people’s everyday lives. And the recent vote by Congress to allow companies like Comcast and Time Warner to have unfettered access to our browsing habits puts our privacy even more at risk. We launched the project by sending reporter David Spark to the Security BSides conference in San Francisco, where many fans of digital liberty often come to see EFF and other speakers discuss topics like security, privacy, and online freedom. In the video above, we collected some of those stories. Want to add to the conversation? Post a blog post, article, tweet, or short video, and then share it on Twitter using the hashtag #privacystory. We’ll be collecting these, blogging about them, and retweeting them to help spur a broader public conversation about the value of privacy in our digital world. Special thanks to David Spark (@dspark) and Spark Media Solutions, with the support of Remediant, for the production of this video. Creative Commons music attribution to Ben Rama for the song “Binary Iteration.” Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Congress Is Home These Next Two Weeks. Will Members Hear from You? (Di, 11 Apr 2017)
Starting today, Congress is closed for the next two weeks so members of Congress can be home. That means if you want to tell your member of Congress how you feel on any specific topic, such as your thoughts on the repeal of your broadband privacy rights or the upcoming debate on network neutrality, you have enormous opportunities that will not last long. Hearing directly from constituents is the most direct way to influence a member of Congress to change his or her vote as well as highlight issues that are critical to you. These next two weeks present a special opportunity to attend a town hall or district event near you. If you can't make the time to talk to your legislator or his staff in the next two weeks then make a plan to voice your opinion in the coming months. We've written this guide on how best to communicate your voice to Congress. How to Find Out Where to Meet Your Member of Congress The best way to meet your federal Representative is to contact the local office by phone and ask where you can meet your elected official. The staff is in a position to inform you where you can meet them because they are given the schedule for their time back home. Most times, this will be at a town hall or at a district event where the member of Congress will provide an update on current events and take questions. Other times, it will be at an event in the district where they deliver a keynote address and stay afterwards to talk to constituents. You should also consider subscribing to the online newsletters of your House member, as well as your state’s two senators, since they often email their local events directly to constituents and subscribers. Tell Them You Opposed the Broadband Privacy Repeal Congress was nearly evenly split on whether or not to keep your broadband privacy rights, with 50 votes in favor of repeal against 48 in the Senate and 215 in favor with 205 against in the House. That means the final vote ultimately came down to just eight votes in total (three in the Senate and five in the House). Ultimately, forcing Congress to correct its course would require flipping a very small number of those who sided with the cable and telephone industry over the Internet and Americans who use it. Some in Congress are staunchly defending their vote by relying on myths or persuasive sounding—but ultimately superficial—claims that their votes to strip the FCC of jurisdiction perversely supported your right to privacy. For example, expect to hear proponents of repeal to argue that the FTC is better situated than the FCC to handle privacy. It may be confusing to hear that the cable and telephone industry also prefer the FTC, until you find out that the Federal Trade Commission may not have the legal power to do anything: in 2016 AT&T's lawyers were successful in prompting the 9th Circuit Court of Appeals to declare that the FTC cannot enforce privacy law on cable and telephone companies because they are common carriers. When lawyers working for the cable and telephone industry attempt to replicate that legal victory across the country in the coming years it will render any contemporary push for FTC oversight really a play for no oversight. So make no mistake: when you hear from your elected official about how great the FTC is and how they voted to help the FCC become the primary agency in charge of privacy, the fact of the matter is they voted to hamstring the only federal agency that has direct and explicit legal authority to oversee the activities of the cable and telephone industry—the FCC. 
The other common refrain from proponents was that they voted to “level the playing field,” because some members of Congress entertain the notion that your social media and email providers are on the same footing as your cable or telephone company. This ignores the fact that a majority of Americans have only one choice for high-speed broadband access, in comparison to many choices among social media and email platforms. This false equivalence between dramatically different industry sectors was contrived by a narrative created by the cable and telephone lobby years ago. Moreover, the FCC is legally restricted to applying privacy protections only to cable and telephone companies, because Congress wrote the law to apply to them due to their special position in the market. It has long been a legal tradition that communications platforms (in the past telephone, now broadband) are required to keep confidential the private information you disclose when you use the service, unless you grant them permission otherwise. Cable and telephone companies have long wanted to remove legal restrictions that, for decades, had prevented them from selling your personal information to third parties. In addition, the Internet services you use typically do not charge you money, whereas you more than compensate your cable and telephone company with monthly subscription fees. Tell Them to Protect Internet Freedom Just last week, it was reported that the new FCC Chairman met with the cable and telephone industry to discuss how best to hand over the Internet to them. According to media accounts, FCC Chairman Ajit Pai intends to surrender the Internet to the cable and telephone industry by no longer enforcing net neutrality protections under Title II of the Telecommunications Act. The worst part about this plan is that the FCC intends to do it in exchange for promises from the cable and telephone industry that they will not actively harm the free and open Internet for their corporate gain. Do you trust your cable and telephone company not to prioritize their own interests over yours? Not only are these “promises” dubious as a legal matter, but there would be no way to hold the companies to them once the FCC surrenders. Worse yet, no federal agency will be able to do anything about their activities. It is not surprising that this is the kind of plan that would be the product of a meeting with only cable and telephone executives. Since the FCC Chairman appears to be heading down a very destructive course of action for the free and open Internet, we need to mobilize to pressure Congress to push back on the FCC. Here is what you need to do to pressure your elected representatives to push back on the FCC plan: Call your Member of Congress, plus your state’s two Senators. Explain that you are a constituent, that you want to hear your Representative (or Senator) speak while home from Washington, and that you’d like the time and place for any town halls scheduled this week. Also ask to whom you should address a letter seeking a meeting with a staffer after this week’s recess is over. Recruit at least two neighbors, friends, or colleagues who live in your congressional district to join you. Attend a town hall, and ask a question of the speaker (ideally while an ally records video). Ask why they want ISPs to have the power to sell your browsing history, and whether they want to force the FCC to hand the Internet over to Comcast, Verizon, and AT&T. Publish your video online, share it through social media, and encourage your friends to participate. 
If your group of local allies remains fired up and wants to do more, explore the Electronic Frontier Alliance. Congress as a whole shares responsibility for overseeing and funding the activities of federal agencies. That power offers frequent opportunities to assert influence over the agencies’ plans. That means in the coming months as we fight back on terrible ideas from the FCC, it becomes imperative that you enlist your elected officials as defenders of a free and open Internet and make it explicit that it would be unacceptable for the FCC to stand down in the face of self-serving demands from cable and telephone companies. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Ninth Circuit Sends a Message to Platforms: Use a Moderator, Go to Trial (Sa, 08 Apr 2017)
After almost two decades of litigation, you’d think the contours of the Digital Millennium Copyright Act (DMCA) safe harbors would be settled. But the cases just keep coming, and while the overall trend is pretty favorable, the latest ruling takes an unfortunate turn (PDF). The case involves LiveJournal, a social media platform that allows users to create “communities” based on a common theme or subject. The communities are partly managed by moderators, who review posts (including photos) that users submit to make sure they follow the rules for posting and commenting created by the community. A community focused on celebrity news, called “Oh No They Didn’t” (ONTD), became particularly popular, garnering millions of views every month. Enter Mavrix Photography, a photo agency that specialized in celebrities. Mavrix discovered that several of its celebrity photos had been posted on ONTD between 2010 and 2014. Rather than sending a DMCA takedown notice, Mavrix went straight to court to sue for copyright infringement. LiveJournal took the posts down immediately, and invoked the DMCA safe harbors, asserting that it was simply “hosting content at the direction of a user.” The district court agreed. The Ninth Circuit took another view, based in large part on LiveJournal’s reliance on moderators to review and delete content. Those moderators, the court said, (1) might be LiveJournal’s agents; and, as such, (2) might have played such an active role in shaping the content of the ONTD community that content hosted on LiveJournal was not “at the direction of the user” (as required by the DMCA) but rather “at the direction of LiveJournal;” and (3) might have acquired actual or “red flag” knowledge of infringement that could be attributed to LiveJournal. So the court sent the case back to district court to let a jury figure it out—a very expensive proposition. The court’s approach was surprising as a matter of law and policy. There is no dispute that LiveJournal users initially submitted the allegedly infringing content. As the district court held (PDF), “[U]sers of the LiveJournal service, not LiveJournal, select the content to be posted, put that content together into a post, and upload the post to LiveJournal’s service. LiveJournal does not solicit any specific infringing material from its users or edit the content of its users’ posts.” The fact that moderators reviewed those submissions shouldn’t change the analysis. The DMCA does not forbid service providers from using moderators. Indeed, as we explained in the amicus brief (PDF) we filed with CCIA and several library associations, many online services have employees (or volunteers) who review content posted on their services, to determine (for example) whether the content violates community guidelines or terms of service. Others lack the technical or human resources to do so. Access to DMCA protections does not and should not turn on this choice. The irony here is that copyright owners are constantly pressuring service providers to monitor and moderate the content on their services more actively. This decision just gave them a powerful incentive to refuse. Related Cases:  Mavrix Photographs v. Live Journal Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Dream Job Alert: Defend Digital Freedom as an EFF Activist (Fr, 07 Apr 2017)
Want to spend your days fighting for digital rights and building a grassroots movement across the U.S.? You’re in luck! EFF is hiring. We’re expanding the grassroots advocacy team at EFF. Part of our larger activism team dedicated to defending digital liberty in the public sphere, the grassroots team focuses on reaching out to campus and community groups across the country and connecting them to advocacy opportunities, training resources, community organizing best practices and guidance, and allies both nearby and across the country. The team’s signature project is the Electronic Frontier Alliance. Launched in 2016, the Alliance includes 52 autonomous local groups across the country, from small nonprofits dedicated to civil rights to campus student groups and hacker spaces. Every group in the Alliance embraces a shared set of digital liberty principles including privacy, security, access to knowledge, creativity, and freedom of expression. Groups in the Alliance each set their own agendas and organize their own programs. EFF's grassroots team coaches them in pursuing various forms of public education (including discussion events, teach-ins, movie screenings, and interactive workshops), as well as advocacy opportunities (such as engaging policymakers at both the federal and local level, writing op-eds, and organizing the occasional protest). The team at EFF strives to inspire, coordinate, and amplify their work. The Alliance is the grassroots wing of EFF’s traditional digital advocacy strategy. We’re building these connections in offline spaces to strengthen the digital rights movement beyond EFF and defend the rights of all Internet users. EFF's grassroots team, and our work building and coordinating the Alliance, are also diversifying our community, ensuring that the digital rights movement of tomorrow engages technology users across gender, orientation, race, socio-economic background, age, political affiliation, and location. The Activist role focuses on building local communities and supporting their independent efforts to defend digital rights. Every day includes opportunities to connect, encourage, inspire, and support people passionately concerned about free speech, privacy, and technology. Sometimes those opportunities entail acting as a mentor to a student who wants to make a difference on their college campus. At other times, they involve connecting supporters seeking digital security training to others in their respective areas poised to address their needs. Others include speaking to public audiences about why free speech is vital to a functional democracy, why both values require privacy, and how individuals can meaningfully defend those values in their respective communities. If you appreciate freedom, share our concerns about how freedom is threatened online, and enjoy facilitating workshops, hosting conference calls, speaking in public, writing articles, connecting allies to each other, and meeting with local digital rights activists to coach and guide their advocacy, you’ll love this job. This position offers a chance for frequent travel and speaking engagements, so it is ideal for someone who is curious about seeing new places and eager to connect with new people. 
When you’re in town, you’ll work from the funky, fun, and fabulous EFF headquarters in San Francisco, a dog-friendly environment with flexible working hours, people from all walks of life, and staff-organized communities united around everything from weekend bike rides and board games to learning Spanish and baking pies. EFF offers unparalleled benefits, including dental & vision coverage, competitive pay, and retirement savings. We also offer further assistance with housing to ensure that employees (both renters and home buyers) can afford to live in the beautiful Bay Area, as well as relocation expenses for candidates moving from elsewhere. What are you waiting for? Apply today and help us build the future of the digital rights movement. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Twitter Fights Effort by Customs and Border Protection to Identify Administration Critic (Fr, 07 Apr 2017)
Update (April 7, 2017): According to a new filing [.pdf], the Justice Department told Twitter that CBP's request had been withdrawn, and the government no longer seeks to identify the Twitter user. In response, Twitter dropped its suit. Twitter is fighting an attempt by the Customs and Border Protection (CBP) agency to obtain identifying information about an “alternative agency” Twitter account, @ALT_uscis. EFF applauds Twitter for standing up for users’ free speech and swiftly pushing back on the government's attempts to identify a prominent critic. The government must not be able to use its formidable investigatory powers to intimidate and silence its critics, and CBP made almost no effort to justify its request. As Twitter’s complaint [.pdf] explains, the request should be barred by the First Amendment. Since January, accounts like @ALT_uscis have sprung up to criticize Trump administration policy on several fronts, including climate change, foreign relations, and immigration. Although some of these accounts purport to be operated by employees of government agencies, they usually tweet without identifying themselves. Anonymous political speech has a proud place in American history, going back to the country’s founding. It includes the pseudonymous publication of the Federalist Papers, and the first editions of Common Sense, which did not identify Thomas Paine. It is not surprising, therefore, that the First Amendment places a high bar on any attempt to unmask anonymous or pseudonymous speakers to protect them from “the tyranny of the majority,” as the Supreme Court put it in 1995. EFF has long defended the right to anonymous speech, including representing users who have sought to protect themselves against unmasking. As this case demonstrates, though, tech companies have a crucial role to play in this fight. We’ve called on companies to fight back against illegitimate or overbroad requests for user data, and to notify users of requests in all cases, unless specifically barred by law. Here, Twitter did both. We stand ready to help both Twitter and the user, and we hope other companies will follow their lead. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Border Search Bill Would Rein in CBP (Do, 06 Apr 2017)
Border Searches of U.S. Persons’ Digital Devices Would Require a Warrant As promised by Sen. Wyden in February, a bill was introduced this week in Congress that would require U.S. Customs and Border Protection or other government agents to obtain a probable cause warrant before searching the digital devices of U.S. citizens and legal permanent residents at the border. Sen. Wyden (D-OR) and Sen. Paul (R-KY) are original sponsors of the Protecting Data at the Border Act in the Senate (S. 823), while Rep. Polis (D-CO), Rep. Smith (D-WA), and Rep. Farenthold (R-TX) are taking the lead on this issue in the House (H.R. 1899). This bill is timely. As NBC News recently reported: Data provided by the Department of Homeland Security shows that searches of cellphones by border agents has exploded, growing fivefold in just one year, from fewer than 5,000 in 2015 to nearly 25,000 in 2016. According to DHS officials, 2017 will be a blockbuster year. Five-thousand devices were searched in February alone, more than in all of 2015. We have been arguing for a while that the Fourth Amendment requires a warrant based on probable cause for border searches of cell phones, laptops, and other digital devices that contain gigabytes of highly personal information. We most recently made these arguments in an amicus brief before the U.S. Court of Appeals for the Fourth Circuit in the case U.S. v. Kolsuz. We have not distinguished between U.S. persons and foreign nationals—for example, Mr. Kolsuz, whose iPhone was searched twice by CBP and Department of Homeland Security officials without a warrant, is a Turkish citizen. We nevertheless support the Protecting Data at the Border Act, even though it more narrowly focuses on the rights of U.S. citizens and green card holders. CBP unreasonably argues that the privacy interest travelers have in digital devices is no different than that of luggage or other physical items travelers may bring with them across the border, thus CBP applies to digital devices the traditional “border search exception” to the Fourth Amendment, which permits warrantless and suspicionless “routine” border searches. However, there is nothing “routine” about unregulated government intrusion into a device that contains, as the Supreme Court has said, “the sum of an individual’s private life.” As the bill’s findings state, the privacy interest in digital data “differs in both degree and kind from [the] privacy interest in closed containers.” In addition to the warrant requirement, the Protecting Data at the Border Act would prohibit the government from delaying or denying entry or exit to a U.S. person based on that person’s refusal to hand over a device passcode, online account login credentials, or social media handles to a border agent. During an April 5 hearing in the Senate Homeland Security & Governmental Affairs Committee, Sen. Paul grilled DHS Secretary John Kelly (starting at 2:15) on CBP agents denying entry to Americans, or threatening to do so, for refusing to provide access to their cell phones. Sen. Paul said, “That’s obscene.” Secretary Kelly appeared woefully ignorant about what is happening with privacy at the border and even incorrectly asserted that border searches of digital devices have not “significantly” increased since President Trump took office. He promised to look into the issue and get back to Sen. Paul. Please contact your members of Congress and urge them to co-sponsor the Protecting Data at the Border Act (S. 823/H.R. 1899)! 
Also urge your representatives to put pressure on Secretary Kelly to answer Sen. Wyden’s questions from February, which he has yet to do. If you want more information about how to protect your data when crossing the U.S. border, please read our new comprehensive technical and legal guide, as well as our pocket guide. Related Cases:  United States v. Saboonchi Riley v. California and United States v. Wurie Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

The Four Flavors of Automated License Plate Reader Technology (Do, 06 Apr 2017)
Automated License Plate Readers (ALPRs) may be the most common mass surveillance technology in use by local law enforcement around the country—but they're not always used in the same way. Typically, ALPR systems consist of high-speed cameras connected to computers that photograph every license plate that passes. The photo is converted to letters and numbers, which are attached to a time and location stamp, then uploaded to a central server. This allows police to identify and record the locations of vehicles in real time and also identify where those vehicles have been in the past. Using this information, police could establish driving patterns for individual cars. The type of data ALPRs collect, analyze, and access often depends on what kind of systems they use and how they combine the data. Whether you’re a policymaker, journalist, or a citizen watchdog, it is important to note the specifics about how these technologies are used. 1) Stationary ALPR Cameras Many law enforcement agencies install ALPR cameras in a fixed location, such as permanently affixing the cameras to traffic lights, telephone poles, or the entrances of facilities. The city of Paradise Valley, Ariz. even disguises ALPRs as cacti. Often a city or county will install these on freeway exit ramps to capture the plates of every vehicle entering or leaving. With stationary cameras, law enforcement is able to capture only vehicles passing through that specific location. If cameras are pointed opposite each other, or can be repositioned remotely, law enforcement can know which direction a driver is traveling. 2) Semi-Stationary ALPR Cameras Some law enforcement agencies acquire truck trailers or special surveillance vans outfitted with ALPR systems that they will tow and place at strategic locations. When parked, they function much like stationary cameras, capturing the plates of moving vehicles that pass within view. For example, law enforcement agencies have placed these vehicles at fairgrounds during high-traffic events like gun shows and political rallies to capture information on attendees and to screen them against existing databases. The big difference is that semi-stationary ALPR cameras can easily be moved to different locations as police feel their surveillance needs change. 3) Mobile ALPR Cameras Police patrol cars can also be outfitted with ALPR cameras, allowing law enforcement officers to capture and screen plates as they drive along their normal beat or from crime scene to crime scene. Mobile ALPR cameras are also more effective at capturing the license plates of parked cars than stationary or semi-stationary cameras. With mobile ALPRs, officers can drive around a mall parking lot and pick up the plates of everyone shopping at that moment. Of more concern to civil libertarians is the ability of law enforcement to target sensitive places, such as centers of religious worship, health facilities, immigration clinics, union halls, political headquarters, and gun shops. Only two patrol cars in Oakland, for example, were able to cover most of the city in a week of driving around, with a disproportionate amount of coverage in Black and Hispanic neighborhoods. 4) ALPR Databases A law enforcement agency does not even need to acquire its own ALPR cameras to access ALPR data. Private companies, such as Vigilant Solutions, deploy their own fleet of vehicles equipped with ALPR cameras. The companies then make this data available to law enforcement on a subscription basis. 
Unlike the other three types of ALPR, this private collection does not include many of the safeguards sometimes found in the government sector, such as transparency requirements, retention limits, and policies approved by an elected body. These four configurations aren't the end of the story. Overlapping Technologies It is not unusual for a law enforcement agency to deploy multiple flavors of ALPR, such as a combination of mobile and stationary cameras. A large number of agencies that use ALPR also feed data into privately hosted data systems, such as those offered by Vigilant Solutions. This allows agencies to share data with one another, but also to draw on information collected by the private company itself. Many agencies also share data through a central government system, such as those operated by fusion centers. Hot Lists One common practice is for law enforcement to create targeted “hot lists” of vehicles, such as plate numbers of stolen cars or cars suspected of being involved in crimes or gang activity. In some cases, especially in Texas, law enforcement will create a list of individuals with overdue court fees. That way, police receive real-time updates when particular vehicles are spotted by an ALPR camera.
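To make the mechanics concrete, here is a minimal sketch in Python of how a single plate read might be checked against a hot list. The field names, plate numbers, and camera ID are hypothetical, and real ALPR systems are far more elaborate, but the core matching step is about this simple. Note that the read is typically stored whether or not it matches, which is what makes retrospective tracking of driving patterns possible.

```python
from datetime import datetime, timezone

# A hypothetical "plate read" as an ALPR camera might produce it:
# the recognized plate text plus a time and location stamp.
read = {
    "plate": "7ABC123",
    "time": datetime.now(timezone.utc),
    "lat": 37.8044,
    "lon": -122.2712,
    "camera_id": "unit-42",
}

# A hot list is, at its core, just a set of plate numbers of interest.
hot_list = {"7ABC123", "4XYZ987"}

def check_hot_list(read, hot_list):
    """Return True (and print an alert) if the plate appears on the hot list."""
    if read["plate"] in hot_list:
        print(f"ALERT: {read['plate']} seen by {read['camera_id']} "
              f"at ({read['lat']}, {read['lon']}) on {read['time']:%Y-%m-%d %H:%M}")
        return True
    return False

check_hot_list(read, hot_list)
```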
Related Technologies While the above illustrates the four main ways ALPR is used by police, it is important to recognize some of the adjacent technologies. For example, red light cameras and automated speed traps often use ALPR technology. However, they are usually designed to only collect data on suspected violators, not the public at large. Toll roads and bridges also deploy ALPR technologies to make it more convenient to send bills to drivers. In addition, agencies are combining biometric technology with ALPRs, such as facial recognition or the ability to determine whether someone in the carpool lane actually has a passenger. Cities are also installing motion-sensor cameras that capture plates, but do not digitize them, allowing law enforcement to go back and search only after a crime has occurred. However, it is not difficult to apply software to extract license plate data from the images after the fact. The Devil Is in the Details When policymakers are considering whether to adopt ALPR technology, it is not a simple yes-no question. Constituents must pressure their representatives on the specifics: will these cameras be mobile or stationary, and does the purchase include access to a third-party database? Policies should specify how long data will be retained, who outside the agency can access the data, and the specific circumstances that allow an officer to search the data or add a vehicle to a hot list. No police chief or elected official should sign off on an ALPR purchase without first answering these questions and balancing them against their constituents’ rights to privacy. ALPR tech poses a unique threat to privacy because it collects information on everyone, not just those connected to crimes. These systems wouldn’t work at all if the government did not require drivers to post identifying numbers in public view. But unlike an officer writing down plate numbers by hand, the collection and storage on a massive, automated scale can reveal intimate details of our travel patterns that should be none of the government’s business. For more on ALPR, check out EFF’s Street-Level Surveillance FAQ. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Security Professionals Scoff at Trump’s Position on Privacy (Do, 06 Apr 2017)
Attendees of this year’s RSA Conference—an event drawing thousands of digital security professionals, cryptographers, engineers, as well as tech companies and intelligence agencies looking to recruit—expressed skepticism of President Trump’s commitment to privacy. Silicon Valley’s response to President Donald Trump has been famously chilly. Tech leaders are organizing political opposition and tech workers are pledging to resist surveillance efforts and even taking to the streets to protest the new administration. But less has been written about the digital security community’s reaction to Trump’s election. In his role as president, Donald Trump is at the helm of the world’s most powerful digital surveillance apparatus, which can reach into the private lives of millions of people with the touch of a button (and, depending on the circumstances, a nod from the FISA Court). Can we trust President Trump to wield his authority ethically and thoughtfully, given his disturbing statements on security issues during the election? We wanted to give security professionals—who dedicate their lives to rooting out digital security threats—a chance to weigh in on the issue. To gather some of their perspectives, we sent reporter David Spark to this year’s RSA Conference to ask attendees what they think the Trump administration will do to protect their digital privacy. Watch and see for yourself how some members of the security world feel about the new president’s position on protecting private data: [Embedded video: https://www.youtube.com/embed/c2Jngeb0UvA] Privacy info. This embed will serve content from youtube.com Are you an engineer or security professional who would like to add to the conversation? We’d love to hear how you respond to this question. Create a quick video on your phone, write a blog post, or post on social media, then share it on Twitter with the hashtag #TrumponPrivacy. Special thanks to David Spark (@dspark) and Spark Media Solutions for their support and production of this video. The background music heard at the end—the song Hydrated—is licensed CC BY-NC-SA 3.0 by Kronstudios. EFF original work (i.e., everything but the background music heard at the end) is licensed CC BY 4.0. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

London Police Ink Shadowy Deal with Industry on Website Takedowns (Mi, 05 Apr 2017)
We've previously highlighted how payment service providers like Visa, Mastercard, PayPal, and others go beyond the law to isolate and effectively censor websites that infringe their sometimes arbitrary standards. This has resulted in websites that provide information on sexuality, pharmaceuticals, or whistleblowing suddenly finding themselves cut off from their sources of funding, and left with no clear recourse. How RogueBlock Works One of the other reasons why websites can find themselves losing payment services is if they are accused of being associated with the sale of goods that infringe copyright, patents, or trademarks. One program used to accomplish this is a shadowy agreement between the payment processors and the private International AntiCounterfeiting Coalition (IACC) called RogueBlock. In what the IACC euphemistically describes as a "streamlined, simplified" process, it notifies the payment companies that a website allegedly offers goods that infringe a trademark, patent, or copyright, and encourages them to suspend their payment services to that website, usually without any court judgment verifying the allegation.1 The RogueBlock program is self-described as having been "highly encouraged and supported" by the U.S. Intellectual Property Enforcement Coordinator. Encouragement (and tacit pressure) from government is typical of these private enforcement arrangements, deals that EFF describes as Shadow Regulation. Domain Takedowns Now Part of RogueBlock This week the IACC announced the extension of its RogueBlock program so that it can also be used to take down Internet domains. This extension had been foreshadowed for rollout in 2015, but evidently it has taken a little longer to get partners on board. The first partner announced in this expansion of the project is the Police Intellectual Property Crime Unit (PIPCU) of the City of London Police. PIPCU, which is funded by the UK's Intellectual Property Office (IPO), is dedicated to eradicating digital copyright infringement and online sales of physical goods that are alleged to infringe copyrights, patents, or trademarks. The Steering Group that manages PIPCU consists of representatives from the Intellectual Property Office, the City of London Police, and various rights holders and industry bodies. It contains no representatives of users of copyrighted material such as artists, libraries and archives, educators, or digital rights groups. PIPCU already takes down Internet domains on its own initiative, through its own program called Operation Ashiko. (Since PIPCU's authority does not extend over the whole Internet, these seem to be limited to domains hosted in the United Kingdom, or hosted under the .uk domain.) What's different now is that PIPCU will also respond to reports submitted from the IACC through the RogueBlock program, potentially resulting in a greater volume of takedowns. Why is This a Problem? Traditionally, enforcement of trademark, patent, and copyright laws against sellers of physical goods takes place at the national level; infringing goods that enter the country are detained or seized at the border, or at marketplaces where they are sold. Sellers or importers have the opportunity to contest the seizure, and the dispute is settled by a court or administrative authority, through a process defined by law. 
The advent of the Internet has increased the practical difficulty of this approach for enforcement authorities, because even if we leave aside products that are delivered digitally, many goods are now mailed directly to the consumer in small shipments which are likely to slip through the customs net. Therefore it's understandable why the IACC would like to side-step this process altogether, by taking down the websites which appear to offer infringing goods for sale, so that consumers can't order them in the first place. However, while making things easier for the trademark, patent, or copyright holder, it also substantially increases the potential for enforcement overreach and abuse. Firstly, the websites flagged by RogueBlock are based on private reporting by trademark, patent, and copyright holders. For now, some public accountability still exists in that it is a public body, the City of London Police, that is required to act on these reports, and we can hope that they will engage in some independent investigation before doing so. The program will become significantly more of a concern if the authority to take down domains becomes exercisable directly by private actors, as in the case of the MPAA and Donuts trusted notifier program. Another problem is that trademarks and patents, in particular, are claims that exist in national law only, and even copyright varies from one country to another. The RogueBlock program does not accommodate these national differences, allowing a website that offers goods that are infringing in only one country of the world to be made inaccessible across the entire Internet. For example, the United States has restrictive laws on the circumvention of DRM; laws that don't exist (or exist with broader exceptions) in other countries such as Israel and India. Thus if a DRM-circumvention device is taken down from an online market, it becomes unavailable even to those for whom its use is perfectly legal. That's also what happened to the Spanish sports streaming site Rojadirecta, which had its domain name seized by the U.S. government for over a year, despite the site being lawful in its native Spain. Finally, common to other Shadow Regulation deals, the RogueBlock program is lacking in transparency and accountability. If a website is wrongly listed by the IACC in its RogueBlock program, thereby becoming a target for blocking by the City of London Police and the payment processors, there is no readily accessible pathway to have its inclusion reviewed and, if necessary, reversed. This opens up much scope for websites to be wrongly listed for anti-competitive or political reasons, or simply by mistake. We would have less of a problem with RogueBlock if the program was simply a channel to inform enforcement authorities of claims of infringement, which could be investigated and enforced through legal channels. But the program steps into Shadow Regulation territory to the extent that it encourages payment processors to take action against claimed infringers directly, without due process or means of review. The latest expansion of the program to facilitate the takedown of domains threatens to compound these problems, particularly if the City of London Police apply it against websites that are not globally infringing, or if private domain registries or registrars join the program and begin to act on claims of infringement directly. We'll be keeping an eye on the program and taking action if our fears prove to be well founded in practice. 1. 
A similar IACC program specific to the Alibaba online marketplace is called MarketSafe. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

An Update on Verizon's AppFlash: Pre-Installed Spyware Is Still Spyware (Di, 04 Apr 2017)
This post is an UPDATE to a piece we originally published last week. Verizon recently rolled out a new pilot project to pre-install on customers’ devices an app launcher/search tool that, we believe, is really just spyware. This software, called AppFlash, is preloaded on a new model of LG device—the LG K20 V—rather than on their entire Android line as we previously reported. The software allows Verizon and its partners to track the apps you have downloaded and then sell ads to you across the Internet based on what those apps say about you, like which bank you use and whether you’ve downloaded a fertility app. Verizon is touting “AppFlash” as a customer benefit. In reality, it is just the latest display of wireless carriers’ stunning willingness to compromise the security and privacy of their customers by preloading unwanted apps on users’ devices. To see how AppFlash is dangerous, just look at the Privacy Policy. It states that the app can be used to: “collect information about your device and your use of the AppFlash services. This information includes your mobile number, device identifiers, device type and operating system, and information about the AppFlash features and services you use and your interactions with them. We also access information about the list of apps you have on your device.” Troubling as it may be to collect intimate details about what apps you have installed, the policy also illustrates Verizon’s intent to gather location and contact information: “With your permission, AppFlash also collects information about your device’s precise location from your device operating system as well as contact information you store on your device.” And what will Verizon use all of this information for? Why, targeted advertising on third-party websites across the Internet, of course: “AppFlash information may be shared within the Verizon family of companies, including companies like AOL who may use it to help provide more relevant advertising within the AppFlash experiences and in other places, including non-Verizon sites, services and devices.” With the announcement of AppFlash, Verizon has made clear that it intends to start monetizing our private data as soon as possible, if not sooner. In other words, our prediction that mobile Internet providers would start pre-installing spyware on their customers’ phones has come true, even before Congress changed the rule to let ISPs like Verizon, AT&T, and Comcast sell your personal data to advertisers. In our view, the FCC's privacy rules that Congress has voted to roll back would have prohibited Verizon from pre-installing the AppFlash spyware on its phones in this manner—and we can expect Congress' privacy rollback to embolden further privacy-invasive practices by ISPs. Last week, Verizon sent us a statement about its rollout of AppFlash, asserting that “you have to opt-in to use the app.” While it’s true that the user is presented with a click-through license the first time they launch AppFlash, it’s entirely unclear from that screen what information is being collected or shared. Instead, those crucial details are buried within the fine print of the Terms of Service. That’s hardly a meaningful mechanism for obtaining informed opt-in consent. What are the ramifications? For one thing, this is yet another entity that will be collecting sensitive information about your mobile activity on your Android phone. 
It’s bad enough that Google collects much of this information already and blocks privacy-enhancing tools from being distributed through the Play Store. Adding another company that automatically tracks its customers doesn’t help matters any. But our bigger concern is the increased attack surface an app like AppFlash creates. It is likely that with Verizon rolling this app out on certain new phones, hackers will be probing it for vulnerabilities, to see if they can use it as a backdoor into users’ devices. We sincerely hope Verizon has invested significant resources in ensuring that AppFlash is secure, because if it’s not, the damage to users’ cybersecurity could be disastrous, especially if Verizon expands its “test” to additional devices. Verizon should immediately abandon its plans to monitor its customers’ behaviors, and do what it’s paid to do: deliver quality Internet service without spying on users. And in no case should Verizon expand its test of this spyware to additional models of mobile devices. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Trump Signs Bill to Roll Back Privacy Rules into Law (Di, 04 Apr 2017)
A measure to roll back crucial privacy protections has crossed the finish line, and Internet users are worse off for it. Despite massive backlash from the American people, Congress passed and President Donald Trump signed into law a resolution that repeals the Federal Communications Commission (FCC) rules to protect consumers from privacy invasions by their Internet service providers (ISPs) like Comcast, AT&T, Verizon, and Time Warner Cable. The rules—which codified and expanded on existing online privacy protections—were passed by the FCC in October of last year and set to go into effect later this year.  They would have kept ISPs from selling customers’ data and using new invasive ways to track and deliver targeted ads to customers. Additionally, the rules would have required those companies to protect customers’ data against hackers. Tens of thousands of people called on lawmakers to protect those rules, but Republicans in Congress repealed them by narrowly passing a Congressional Review Act resolution. That measure not only repeals the rules, it also prevents the FCC from writing similar rules in the future, throwing into question how much the FCC can do to police ISPs looking to trade off their customers’ privacy for higher profits. Because of the current legal landscape, the FTC can’t police ISPs either, leaving customers without a federal agency that can clearly protect them in this space. We’ll continue pushing for these specific privacy protections where we can. We urge state lawmakers and technology providers to look for ways to shore up individual privacy until Congress is ready to listen to the consumers who don't want to trade away their basic privacy rights in order to access the Internet. We’ve spoken up, and many lawmakers got the message that privacy is important to their constituents. Thanks to your actions, we’ve together laid the groundwork to keep fighting for privacy protections. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Here’s How to Protect Your Privacy From Your Internet Service Provider (Di, 04 Apr 2017)
We pay our monthly Internet bill to be able to access the Internet. We don’t pay it to give our Internet service provider (ISP) a chance to collect and sell our private data to make more money. This was apparently lost on congressional Republicans as they voted to strip their constituents of their privacy. Even though our elected representatives have failed us, there are technical measures we can take to protect our privacy from ISPs. Bear in mind that these measures are neither a replacement for the privacy rules that were repealed nor a complete shield for your privacy, but they will certainly help. Pick an ISP that respects your privacy It goes without saying: if privacy is a concern of yours, vote with your wallet and pick an ISP that respects your privacy. Here is a list of them. Given the dismal state of ISP competition in the US, you may not have this luxury, so read on for other steps you can take. Opt out of supercookies and other ISP tracking In 2014, Verizon was caught injecting cookie-like trackers into their users’ traffic, allowing websites and third-party ad networks to build profiles without users’ consent. Following criticism from US senators and FCC action, Verizon stopped auto-enrolling users and instead made it opt-in. Users now have a choice of whether to participate in this privacy-intrusive service. You should check your account settings to see if your ISP allows you to opt out of any tracking. It is generally found under the privacy, marketing, or ads settings. Your ISP doesn’t have to provide this opt-out, especially in light of the repeal of the privacy rules, but it can never hurt to check. HTTPS Everywhere EFF makes this browser extension so that users connect to a service securely using encryption. If a website or service offers a secure connection, then the ISP is generally not able to see what exactly you’re doing on the service. However, the ISP is still able to see that you’re connecting to a certain website. For example, if you were to visit https://www.eff.org/https-everywhere, your ISP wouldn’t be able to tell that you’re on the HTTPS Everywhere page, but would still be able to see that you’re connecting to EFF’s website at https://www.eff.org. While HTTPS Everywhere has its limits (your ISP can still see which sites you connect to), it’s still a valuable tool. If you use a site that doesn't have HTTPS by default, email them and ask them to join the movement to encrypt the web. VPNs In the wake of the privacy rules repeal, the advice to use a Virtual Private Network (VPN) to protect your privacy has dominated the conversation. However, while VPNs can be useful, they carry their own unique privacy risk. When using a VPN, you’re making your Internet traffic pass through the VPN provider’s servers before reaching your destination on the Internet. Your ISP will see that you’re connecting to a VPN provider, but won’t be able to see what you’re ultimately connecting to. This is important to understand because you’re exposing your entire Internet activity to the VPN provider and shifting your trust from the ISP to the VPN. In other words, you should be damn sure you trust your VPN provider to not do the shady things that you don’t want your ISP to do. VPNs can see, modify, and log your Internet traffic. Many VPN providers make promises to not log your traffic and to take other privacy-protective measures, but it can be hard to verify this independently since these services are built on closed platforms. 
For example, a recent study found that up to 38% of VPN apps available for Android contained some form of malware or spyware. Below, we detail some factors that should be considered when selecting a VPN provider. Keep in mind that these are considerations for someone who is interested in preventing their ISP from snooping on their Internet traffic, and not meant for someone who is interested in protecting their information from the government—a whistleblower, for instance. As with all things security and privacy-related, it’s important to consider your threat model. Is your VPN service dirt-cheap or free? Does the service cost $20 for a lifetime subscription? There’s probably a reason for that, and your browsing history may be the actual product that the company is selling to others. How long has your VPN provider been around? If it is relatively new and without a reliable history, you’d have to trust the provider a great deal in order to use such a service. Does the VPN provider log your traffic? If yes, what kind of information is logged? You should look for one that explicitly promises not to log your Internet traffic, and consider how active the VPN provider is in advocating for user privacy. Does the VPN provider use encryption in providing the service? It’s generally recommended to use services that support a well-vetted open source protocol like OpenVPN or IPSec. Utilizing these protocols ensures the best security available. If your VPN provider uses encryption but has a single shared password for all of the users, that’s not sufficient encryption. Do you need to use the VPN provider’s proprietary client to use the service? You should avoid these and look for services that you can use with an open source client. There are many clients that support the above-mentioned OpenVPN or IPSec protocols. Would using the VPN service still leak your DNS queries to your ISP? Does the VPN support IPv6? As the Internet transitions from IPv4 to the IPv6 protocol, some VPN providers may not support it. Consequently, if your digital device is trying to reach a destination that has an IPv6 address using a VPN connection that only supports IPv4, the old protocol, it may attempt to do so outside of the VPN connection. This can enable the ISP to see what you’re connecting to, since the traffic would be outside of the encrypted VPN traffic. Now that you know what to look for in a VPN provider, you can use these two guides as your starting point for research. Though keep in mind that a lot of the information in the guides is derived from or given by the provider, so again, it requires us to trust their assertions.
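Whichever provider you settle on, one quick sanity check is to compare the public IP address that a remote service sees with the VPN disconnected and then connected. The sketch below, written in Python, uses the api.ipify.org lookup service (any "what is my IP" service would work the same way). It only confirms that your web traffic is exiting through the VPN; it will not, on its own, detect DNS or IPv6 leaks, so treat it as a first check rather than a full audit.

```python
# A minimal sketch: fetch the public IP address a remote service sees.
# Run it once with the VPN disconnected and once with it connected;
# if the address does not change, your traffic is not exiting via the VPN.
# (This does not detect DNS or IPv6 leaks on its own.)
import urllib.request

def public_ip():
    # api.ipify.org returns your current public IP address as plain text.
    with urllib.request.urlopen("https://api.ipify.org") as response:
        return response.read().decode().strip()

if __name__ == "__main__":
    print("Public IP as seen by the outside world:", public_ip())
```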
Tor If you are trying to protect your privacy from your Internet company, Tor Browser perhaps offers the most robust protection. Your ISP will only see that you are connecting to the Tor network, and not your ultimate destination, similar to VPNs. Keep in mind that with Tor, exit node operators can spy on your ultimate destination in the same way a VPN can, but Tor does attempt to hide your real IP address, which can improve anonymity relative to a VPN. Users should be aware that some websites may not work in the Tor browser because of the protections built in. Additionally, maintaining privacy on Tor does require users to alter their browsing habits a little. See this for more information. It’s a shame that our elected representatives decided to prioritize corporate interests over our privacy rights. We shouldn’t have to take extraordinary steps to limit how our personal information can be used, but that is clearly something that we are all forced to do now. EFF will continue to advocate for Internet users’ privacy and will work to fix this in the future. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

The Bill of Rights at the Border: Fourth Amendment Limits on Searching Your Data and Devices (Mo, 03 Apr 2017)
More than 325,000 people enter the United States via airports every day, with hundreds of thousands more crossing by land at the borders. Not only is that a lot of people, it’s also a lot of computers, smartphones, and tablets riding along in our pockets, bags, and trunks.  Unfortunately, the Fourth Amendment protections we enjoy inside the U.S. for our devices aren’t always as strong when we’re crossing borders—and the Department of Homeland Security takes advantage of it. On the other hand, the border is not a Constitution-free zone. What are the limits to how and how much customs and immigrations officials can access our data? To help answer those questions, we’re offering the second in our series of posts on the Constitution at the border, focusing this time on the Fourth Amendment. Click here for Part 1 on the First Amendment or for Part 3 on the Fifth Amendment. The Default Privacy Rule The Fourth Amendment forbids “unreasonable” searches and seizures by the government. In most circumstances, the Fourth Amendment requires that government agents obtain a warrant from a judge by presenting preliminary evidence establishing “probable cause” to believe that the thing to be searched or seized likely contains evidence of illegal activity before the officer is authorized to search. The Border Search Exception Unfortunately, the Supreme Court has sanctioned a “border search exception” to the probable cause warrant requirement on the theory that the government has an interest in protecting the “integrity of the border” by enforcing the immigration and customs laws. As a result, “routine” searches at the border do not require a warrant or any individualized suspicion that the thing to be searched contains evidence of illegal activity. The Exception to the Exception: “Non-Routine” Searches But the border search exception is not without limits. As noted, this exception only applies to “routine” searches, such as those of luggage or bags presented at the border.  “Non-routine” searches – such as searches that are “highly intrusive” and impact the “dignity and privacy interests” of individuals, or are carried out in a “particularly offensive manner” – must meet a higher standard: individualized “reasonable suspicion.” In a nutshell, that means border agents must have specific and articulable facts suggesting that a particular person may be involved in criminal activity. For example, the Supreme Court held that disassembling a gas tank is “routine” and so a warrantless and suspicionless search is permitted. However, border agents cannot detain a traveler until they have defecated to see if they are smuggling drugs in their digestive tract unless the agents have a “reasonable suspicion” that the traveler is a drug mule. Border Searches of Digital Devices How does this general framework apply to digital devices and data at the border? Border agents argue that the border search exception applies to digital searches.  We think they are wrong.  Given that digital devices like smartphones and laptops contain highly personal information and provide access to even more private information stored in the cloud, the border search exception should not apply. As Chief Justice Roberts recognized in a 2014 case, Riley v. California: Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans the privacies of life. 
Snooping into such privacies is extraordinarily intrusive, not “routine.” Thus, when the government asserted the so-called “incident to arrest” exception to justify searching a cell phone without a warrant during or immediately after an arrest, the Supreme Court called foul. Why is the Riley decision important at the border? For one thing, the “incident to arrest” exception that the government tried to invoke is directly comparable to the border search exception, because both are considered “categorical” exemptions. Given that the intrusion is identical in both instances, the same privacy protections should apply. Moreover, with the ubiquity of cloud computing, a digital device serves as a portal to highly sensitive data, where the privacy interests are even more significant. Following Riley, we believe that any border search of a digital device or data in the cloud is unlawful unless border agents first obtain a warrant by showing, to a judge, in advance, that they have probable cause to believe the device (or cloud account) likely contains evidence of illegal activity. However, lower courts haven’t quite caught up with Riley.  For example, the Ninth Circuit held that border agents only need reasonable suspicion of illegal activity before they could conduct a non-routine forensic search of a traveler’s laptop, aided by sophisticated software. Even worse, the Ninth Circuit also held that a manual search of a digital device is “routine” and so a warrantless and suspicionless search is still “reasonable” under the Fourth Amendment. Some courts have been even less protective. Last year a court in the Eastern District of Michigan upheld a computer-aided border search of a traveler’s electronic devices that lasted several hours without reasonable suspicion. EFF is working hard to persuade courts (and border agents) to adopt the limits set forth in the Riley decision for border searches of cellphones and other digital devices. In the meantime, what should you do to protect your digital privacy? Much turns on your individual circumstances and personal risk assessment. The consequences for non-compliance with a command from a CBP agent to unlock a device will be different, for example, for a U.S. citizen versus a non-citizen. If you are a U.S. citizen, agents must let you enter the country eventually; they cannot detain you indefinitely. If you are a lawful permanent resident, agents might raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry entirely. We recommend that everyone conduct their own threat model to determine what course of action to take at the border. Our in depth Border Search Whitepaper offers you a spectrum of tools and practices that you may choose to use to protect your personal data from government intrusion. For a more general outline of potential practices, see our pocket guides to Knowing Your Rights and Protecting Your Data at the Border. We’re also collecting stories of border search abuses at: borders@eff.org And join EFF in calling for stronger Constitutional protection for your digital information by contacting Congress on this issue today. Related Cases:  United States v. Saboonchi Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

One Million Badgers (Mo, 03 Apr 2017)
This week—for the first time ever—Privacy Badger has surpassed one million users. Privacy Badger is a browser extension for Chrome, Firefox, and Opera that automatically blocks hidden third-party trackers that would otherwise follow you around the web and spy on your browsing habits. Third-party tracking—that is, when advertisers and websites track your browsing activity across the web without your knowledge, control, or consent—is an alarmingly widespread practice in online advertising. Privacy Badger spots and then blocks third-party domains that seem to be tracking your browsing habits (e.g., by setting cookies that could be used for tracking, or by fingerprinting your browser). If the same third-party domain appears to be tracking you on three or more different websites, Privacy Badger will conclude that the third-party domain is a tracker and block future connections to it. Privacy Badger always tells you how many third-party domains it has detected and whether or not they seem to be trackers. Further, users have control over how Privacy Badger treats these domains, with options to block a domain entirely, block just cookies, or allow a domain.
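That three-site heuristic is easy to illustrate. The sketch below is not Privacy Badger's actual code (the real extension is open source, written in JavaScript, and considerably more careful about what counts as tracking), but it captures the core idea: count the distinct first-party sites on which a third-party domain appears to track you, and block it once that count reaches three. The domain names are placeholders.

```python
from collections import defaultdict

TRACKING_THRESHOLD = 3  # block a third party once it tracks you on 3+ sites

# For each third-party domain, the set of first-party sites where apparent
# tracking behavior (e.g., a tracking cookie or fingerprinting) was observed.
sites_seen = defaultdict(set)
blocked = set()

def observe(third_party, first_party):
    """Record one apparent tracking observation and update the block list."""
    sites_seen[third_party].add(first_party)
    if len(sites_seen[third_party]) >= TRACKING_THRESHOLD:
        blocked.add(third_party)

# Example: tracker.example is seen tracking on three unrelated sites.
observe("tracker.example", "news.example")
observe("tracker.example", "shop.example")
observe("tracker.example", "blog.example")

print(blocked)  # {'tracker.example'}
```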
Ending non-consensual browser tracking With this milestone, the Privacy Badger team remains as committed as ever to ending non-consensual browser tracking and promoting responsible advertising. Although Privacy Badger blocks many ads in practice, it is more a privacy tool than a strict ad blocker. Privacy Badger encourages advertisers to treat users respectfully and anonymously rather than follow the industry status quo of online tracking. It does this by unblocking content from domains that respect our Do Not Track policy, which states that the participating site will not retain any information about users who have expressed that they do not want to be tracked. Do Not Track and Privacy Badger are here to help you block stealthy online tracking and the exploitation of your browsing history. Download Privacy Badger now to take a stand against tracking and join the movement to build a more privacy-friendly web. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Stupid Patent of the Month: Storing Files in Folders (Fr, 31 Mär 2017)
Our ongoing Reclaim Invention campaign urges universities not to sell patents to trolls. This month’s stupid patent provides a good example of why. US Patent No. 8,473,532 (the ’532 patent), “Method and apparatus for automatic organization for computer files,” began its life at publicly funded Louisiana Tech University. But in September last year, it was sold to a patent troll. A flurry of lawsuits quickly followed. Louisiana Tech sold the ’532 patent to Micoba LLC, a company that has all the indicia of a classic patent troll. Micoba was formed on September 8, 2016, just a few days before it purchased the patent. The patent assignment agreement lists Micoba’s address as an office building located in the Eastern District of Texas where virtual office services are provided. As far as we can tell, Micoba has no purpose other than to sue with this patent. So what does Micoba’s newly acquired patent cover? Claim 13 reads:
A computer system comprising a processor, memory, and software for automatically organizing computer files into folders, said software causing said computer system to execute the steps comprising:
a. providing a directory of folders, wherein substantially each of said folders is represented by a description;
b. providing a new computer file not having a location in said directory, said computer file being represented by a description;
c. comparing said description of said computer file to descriptions of a plurality of said folders along a single path from a root folder to a leaf folder; and
d. assigning said computer file to a folder having the most similar description.
In other words, put files into folders that contain similar files. Do it on a "computer system" (in case you were worried office workers from the 1930s might have infringed this patent). For a software patent, the ’532 patent is unusually free of patent jargon and pseudo-technical babble. Its specification (this is the description of the invention that comes before the claims) does describe a method for determining when the contents of a file match a folder description. The patent proposes representing folders and files as vectors (which should reflect the frequency of particular words found within). The patent suggests assessing similarity by calculating the dot product of these vectors. But, even assuming this was a new idea when the application was filed in 2003, many of the patent’s claims are not limited to this method. The patent effectively captures almost any technique for automatically sorting digital files into folders.
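To see how little the claim actually adds, here is a rough sketch of the kind of method the specification describes: represent each folder description and each new file as a word-frequency vector, then assign the file to the folder whose description gives the highest dot product. This is our illustration rather than code from the patent, the folder names and descriptions are invented, and it is the sort of textbook similarity calculation that was well known long before the 2003 filing.

```python
from collections import Counter

def vector(text):
    """Word-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def dot(a, b):
    """Dot product of two sparse word-frequency vectors."""
    return sum(count * b[word] for word, count in a.items())

folders = {
    "Taxes": "tax return irs receipts deductions",
    "Vacation": "beach photos travel itinerary flights",
    "Recipes": "dinner recipe ingredients oven baking",
}

new_file = "flight itinerary and beach hotel booking for summer travel"

file_vec = vector(new_file)
scores = {name: dot(file_vec, vector(desc)) for name, desc in folders.items()}
print(max(scores, key=scores.get), scores)  # assigns the file to "Vacation"
```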
The ’532 patent issued in June 2013, about a year before the Supreme Court’s decision in Alice v. CLS Bank. In that case, the Supreme Court held that an abstract idea (like sorting files into folders) does not become patentable simply because it is implemented on a computer. The ’532 patent should be found invalid under this standard. In our view, this patent has no value after Alice except as a litigation weapon. Louisiana Tech represents that it “seeks industrial partners to commercialize the technology developed at Louisiana Tech for the benefit of society.” But it completely failed to consider this public interest mission when it sold the ’532 patent to Micoba. Within two months of the sale, Micoba had filed nearly a dozen cases in the Eastern District of Texas, suing companies like SpiderOak and Dropbox, alleging they infringed at least claim 13 of the ’532 patent. Instead of benefiting society, Louisiana Tech unleashed a torrent of wasteful litigation. According to RPX, Micoba is associated with IP Edge, which itself is associated with eDekka (the biggest patent troll of 2014) and Bartonfalls (the winner of our October 2016 Stupid Patent of the Month for its patent on changing the channel). Bartonfalls’ trolling campaign recently collapsed when a judge ruled that its patent infringement contentions were “implausible on their face.” If RPX is correct that these companies are connected, Louisiana Tech has hitched its wagon to one of the biggest trolling operations in the nation. EFF’s Reclaim Invention project was launched to stop universities from feeding patent trolls like this. The project includes a Public Interest Patent Pledge for universities to sign stating that they will not sell their patents to trolls. We also drafted a model state law to help ensure that state-funded universities don't sell their inventions to patent trolls. You can ask your university to sign the pledge. Take Action Tell your university: Don’t sell patents to trolls. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

NAFTA Renegotiation Will Resurrect Failed TPP Proposals (Fr, 31 Mär 2017)
Yesterday a draft letter was leaked from acting USTR Stephen Vaughn to Congress on the Trump administration's intentions towards NAFTA. The letter describes the administration's intention to "update" NAFTA to include provisions on topics such as copyright and e-commerce that had been contained in the TPP: Most chapters are clearly outdated and do not reflect the most recent standards in U.S. trade agreements. For example, digital trade was in its infancy in 1994. ... Rules for intellectual property rights, state-owned enterprises, rules of origin, customs procedures, and ensuring the benefits of trade benefit small and medium businesses have all been improved in newer trade agreements. On copyright, the letter promises to "seek commitments from the NAFTA countries to strengthen their laws and procedures on enforcement of intellectual property rights, such as by ensuring that their authorities have authority to seize and destroy pirated and counterfeit goods, equipment used to make such goods, and documentary evidence." On e-commerce, it commits to tackling "measures that impede digital trade in goods and services, restrict cross-border data flows, or require local storage or processing of data, including with respect to financial services". Both of these are consistent with the wholesale transfer of TPP obligations into NAFTA, although they are annoyingly vague about what specific rules the U.S. will be including, other than the examples given. However, it is worth noting that in at least one respect—the extension of the data localization ban to the financial industry—the letter proposes going beyond what was contained in the TPP. Exclusion of the financial services industry from those rules was one of the main sticking points with the TPP for Republicans while Obama was promoting it. The USTR letter also indicates that the administration intends to maintain the controversial Investor-State Dispute Settlement (ISDS) provisions of NAFTA, which allowed pharmaceutical company Eli Lilly to sue Canada for the country's decision not to grant two drug patents. Although Canada recently won that case, the ability for foreign companies to challenge legislation and court decisions that go against their financial interests was one of the TPP's most controversial provisions, and will remain divisive as the NAFTA renegotiation goes forward. Concern on E-Commerce Rules in Trade The news about Trump's plans for NAFTA coincides with EFF's workshop on electronic commerce rules in trade agreements at RightsCon in Brussels today. (An introductory slide presentation from that workshop is attached to this post.) In one of several workshops on trade at the event, the panelists were united in their concern about the risks of new digital issues being addressed in trade agreements that are closed, opaque, and lobbyist-dominated. Panelist Michael Geist pointed out that many e-commerce rules had formerly been dealt with in more open fora, and attempting to address them in trade agreements may result in rules that are flawed, weak, and inconsistent in their enforceability. While accepting that there are some digital rules that are relevant to global trade, panelist Meghan Sali from OpenMedia noted that "The devil is in the details, and the details are kept secret", suggesting that a more open process would better reflect users' priorities. 
Marília Maciel from DiploFoundation stressed the need to separate out the issues that are best dealt with in a trade context from those that may be better dealt with by other, more specialist global institutions. Burcu Kilic from Public Citizen agreed, pointing out how viewing digital issues through a trade lens results in them being treated in a way that doesn't benefit users and developing countries. The final panelist, Maryant Fernández from European Digital Rights (EDRi), gave the example of data flows and data localization: trade negotiators tend to see data protection rules as a trade restriction, rather than as legitimate measures to preserve the human right to privacy. This discussion has a direct bearing on the future of NAFTA, presenting the U.S. with a choice either to focus its reforms on traditional trade issues that would directly impact the manufacturing sector, or to load the deal up with a grab bag of rules on unrelated digital policy issues, which is the approach that led to the implosion of the TPP. Although the administration has since downplayed the significance of the leaked draft letter, it does give us an insight into at least some of the thinking that is going into the NAFTA renegotiation process. Since President Trump has been so scathing of the TPP and abandoned the deal with such fanfare, it would be disappointing if the new NAFTA were little more than a regurgitation of that failed agreement. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

EFF Says No to So-Called “Moral Rights” Copyright Expansion (Fr, 31 Mär 2017)
Thanks to the First Amendment and longstanding copyright limitations, copyright holders don’t have the legal right to prevent others from using their works to express messages that they disagree with or find offensive, nor do they have a right to prevent someone who lawfully purchases a copy of their work from reselling it, repurposing it, or destroying it entirely.  That’s because copyright law in the United States doesn’t provide authors the ability to launch lawsuits over their “moral rights” (except for some works of visual art covered by the Visual Artists Rights’ Act). And that’s a good thing – by limiting authors’ abilities to control how their works are used, U.S. copyright law creates space for downstream creators and users to adapt and remix existing works to create new interpretations and meanings, without facing a veto from the original author. It also allows those who own physical copies of copyrighted works to use those copies in the ways that make most sense for them – they can annotate them, take them apart and reassemble them into new creations, give them away, or even destroy them. We have fought for decades to improve copyright law to create more space for downstream uses, but the Copyright Office sought comments [PDF] on proposals that would do the exact opposite: creating a new right of integrity “to prevent prejudicial distortions of the work” and an unnecessary and potentially damaging attribution right (to be credited as the author). The fight over moral rights, particularly the right of Integrity, is ultimately one about who gets to control the meaning of a particular work. If an author can prevent a use they perceive as a “prejudicial distortion” of their work, that author has the power to veto others’ attempts to contest, reinterpret, criticize, or draw new meanings from those works. These sorts of uses are paradigmatic fair uses, protected under traditional copyright law. But in countries that have adopted moral rights frameworks, authors (and their heirs) have the power to restrict certain uses or interpretations of their works that they disagree with. For example, as Peter Baldwin notes in his book The Copyright Wars, in France, George Bizet’s heirs succeeded in having Otto Preminger’s reinterpretation of the opera Carmen, Carmen Jones, banned, because they objected to the filmmakers’ setting of the opera among African Americans. Further, U.S. defamation law already provides remedies in appropriate cases when false, harmful statements are made about a person. If a work is used or falsely attributed in a way that causes real reputational harm to the author, defamation law provides the appropriate remedy. And, unlike a new right of integrity, defamation law contains safeguards designed to prevent the law from suppressing or punishing speech protected under the First Amendment. The proposed attribution requirement presents lesser, but still significant harms, without adding much benefit. An additional right of attribution is likely to be redundant to existing rights under copyright law, which already provide copyright owners with broad powers to control dissemination of their works. With a right of attribution, copyright holders would have yet another tool to police otherwise non-infringing uses of a work, like fair uses. A fixed, statutory attribution right is also inconsistent with the rapid and diverse participatory cultural practices that prevail online. 
Cultural symbols are often rapidly reworked and shared, and norms around attribution vary dramatically across contexts. Many creative communities have established norms regarding when and to what extent attribution is needed [PDF]. A rigid attribution requirement could disrupt these practices and impede valuable downstream creativity, while creating further opportunities for copyright trolling. A statutory right of attribution could also interfere with privacy-protective measures employed by online platforms. Many platforms strip identifying metadata from works on their platforms to protect their users' privacy. If doing so were to trigger liability for violating an author’s right of attribution, platforms would likely be chilled from protecting their users’ privacy in this way. For centuries, American courts have grappled with how to address harm to reputation without impinging on the freedom of speech guaranteed by the First Amendment. And as copyright’s scope has expanded in recent decades, the courts have provided the safeguards that partially mitigate the harm of overly broad speech regulation. As we told the Copyright Office, introducing new rights to control the use and meaning of copyrighted works would be a step in the wrong direction. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

New Report Aims to Help Criminal Defense Attorneys Challenge Secretive Government Hacking (Fr, 31 Mär 2017)
Lawyers at EFF, the ACLU, and the National Association of Criminal Defense Lawyers released a report today outlining strategies for challenging law enforcement hacking, a technique of secretly and remotely spying on computer users to gather evidence. Federal agents are increasingly using this surveillance technique, and the report will help those targeted by government malware—and importantly their attorneys—fight to keep illegally obtained evidence out of court. A recent change in little-known federal criminal court procedures, which was quietly pushed by the Justice Department, has enabled federal agents to use a single warrant to remotely search hundreds or thousands of computers without having to specify whose information is being captured or where they are. We expect these changes to result in much greater use of the technique, and the guide will arm attorneys with information necessary to defend their clients and ensure that law enforcement hacking complies with the Constitution and other laws. In the largest known government hacking campaign to date, the FBI seized servers running a website accused of hosting child pornography and, instead of shutting down the site, continued to operate it. Relying on a single warrant, the FBI then hacked into the computers of users who accessed the site, totaling nearly 9,000 devices located in 120 countries around the world. The FBI charged hundreds of suspects who visited the website, several of whom are challenging the validity of the warrant. In briefs filed in these cases, EFF says that the warrant that enabled this massive hacking exercise is unconstitutional and evidence gathered using it should be suppressed. As with every new surveillance power obtained by the government, it’s just a matter of time before these secret malware attacks are used in other cases. That’s why it’s important for criminal defense attorneys to get educated about how these attacks work and how they can vigorously defend their clients’ rights when the technique is used. The report, “Challenging Government Hacking in Criminal Cases,” explains how to recognize the use of government malware in a criminal case, and it outlines the most important and potentially effective procedural and constitutional arguments to raise when hacking was used to gather evidence. Our hope is that the guide will help attorneys fight back against illegal surveillance, and ultimately place important and needed checks on the government’s ability to hack into our personal electronic devices. Related Cases:  The Playpen Cases: Mass Hacking by U.S. Law Enforcement Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

The Most Powerful Single Click in Your Facebook Privacy Settings (Mi, 29 Mär 2017)
Getting a new job, recovering from an abusive relationship, engaging in new kinds of activism, moving to a different country—these are all examples of reasons one might decide to start using Facebook in a more private way. While it is relatively straightforward to change your social media use moving forward, it can be more complicated to adjust all the posts, photos, and videos you may have accumulated on your profile in the past. Individually changing the privacy settings for everything you have posted in the past can be impractical, particularly for very active users or those who have been using Facebook for a long time. The good news is that Facebook offers a one-click privacy setting to retroactively change all your past posts to be visible to your friends only. With this tool, content on your timeline that you’ve shared to be visible to Friends of Friends or Public will change to be visible to Friends only. And the change will be “sticky”—it cannot be reversed in one click, and would be very difficult to accidentally undo. Watch this video for a step-by-step tutorial to change this setting and make your posts more private. [Embedded video tutorial; the embed serves content from youtube.com.] Keep in mind that, if you tagged someone else in a past post, that post will still be visible to them and to whatever audience they allow for posts they are tagged in. And, if you shared a past post with a “custom” audience (like “Friends Except Acquaintances” or “Close Friends”), this setting won’t apply. Finally, this setting can only change the audience for posts that you have shared. When others tag you in their posts, they control the audience. So share this blog post and video with your friends and encourage them to change their settings, because privacy works best when we work together. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Repealing Broadband Privacy Rules, Congress Sides with the Cable and Telephone Industry (Di, 28 Mär 2017)
Putting the interests of Internet providers over those of Internet users, Congress today voted to erase landmark broadband privacy protections. If the bill is signed into law, companies like Cox, Comcast, Time Warner, AT&T, and Verizon will have free rein to hijack your searches, sell your data, and hammer you with unwanted advertisements. Worse yet, consumers will now have to pay a privacy tax by relying on VPNs to safeguard their information. That is a poor substitute for legal protections. Make no mistake: by a vote of 215 to 205, a slim majority of the House of Representatives has decided to give our personal information to an already highly profitable cable and telephone industry so that they can increase their profits with our data. The vote broke along party lines, with Republicans voting yes, although 15 Republicans broke ranks to vote against the repeal with the Democrats. Should President Donald Trump sign S.J. Res. 34 into law, big Internet providers will be given new powers to harvest your personal information in extraordinarily creepy ways. They will watch your every action online and create highly personalized and sensitive profiles for the highest bidder. All without your consent. This breaks with the decades-long legal tradition that your communications provider is never allowed to monetize your personal information without asking for your permission first. This will harm our cybersecurity, as these companies become giant repositories of personal data. It won't be long before the government begins demanding access to the treasure trove of private information Internet providers will collect and store. While today is extremely disappointing, there is still tomorrow. Without a doubt, Internet providers (with the exception of the small providers who stood with us) will engage in egregious practices, and we are committed to mobilizing the public to push back. EFF will continue the fight to restore our privacy rights on all fronts. We will fight to restore your privacy rights in the courts, in the states, in Washington, D.C., and with technology. We are prepared for the long haul of pushing a future Congress to reverse course and once again side with the public. Join the fight for privacy and the open Internet. donate to EFF Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Privacy By Practice, Not Just By Policy: A System Administrator Advocating for Student Privacy (Di, 28 Mär 2017)
When Matt L. started to raise the alarm about educational technology in his school district, he knew it would ruffle some feathers. As a system administrator (or sysadmin), Matt has had a front-row seat to the increasing use of technology in his rural, public school district. At first, the district only issued Chromebooks to students in guest “kiosk” mode for test-taking. Over time, though, each of the district’s 10,000 students got individual access to school-issued devices, from iPads for younger students who cannot yet type to Chromebooks and G Suite for Education logins for students as young as third grade. Matt and his sysadmin colleagues are at the center of deploying, configuring, and maintaining Google devices and software for the entire district. This gives Matt opportunities to identify privacy problems with ed tech implementation, and to propose solutions. “All our eggs in one basket” “I don’t want to say that Google or Chromebooks or any of this stuff is inherently bad,” Matt said. “Getting these tools into the hands of kids is hard to argue with. That’s why I got into technology.” As the district has continued to expand its technology use, however, Matt has started to have concerns about consolidating students’ educational and personal information in one company. “We’re putting all our eggs in one basket that we’re not in control of,” he said. “We don’t know where this student data is going.” On top of his privacy concerns, Matt observed students learning about only certain software without broader awareness of their technology choices. Having grown up experimenting with Linux and other open source software, he was dismayed to see students being steered toward only Google services and away from other options. “The beauty of technology is that it is so vast and deep, with so many choices. But we’re funneling people into one situation, which is not our job,” he said. “We should be teaching concepts of computing, not specific software. We should be giving parents and kids a choice.” Privacy by policy After frustrating initial conversations with colleagues, it became clear to Matt that student privacy advocacy in his district could “get touchy pretty quick.” Even higher-up colleagues who might have been in a position to make district-level changes were hard to effectively approach. “They like Chrome because it’s easy to use and they don’t have to worry much about the mechanics behind it,” he said. “So, I was constantly ridiculed when I brought up concerns about privacy.” Colleagues also pointed out the cost-effectiveness of free Google services in response to Matt’s concerns. But Matt was not convinced. “Nobody's asking why it's free,” Matt said. “I thought it was common sense that, generally, if you're not paying for the app, you're the product.” After repeated requests to talk more about student privacy issues, Matt’s boss and members of the administration pointed him to the district’s as well as Google’s privacy policies. But this approach of ensuring “privacy by policy” did not lessen Matt’s concerns. “We have privacy policies for our website, and for our student academic records, but not so much for students’ information in regards to what Google is collecting,” he said. “We can’t guarantee what Google is or is not doing with this information. 
It’s all pretty vague, and it’s not the kind of thing you want to be vague about.” One of the biggest problems with such “privacy by policy” is that it relies on all staff members being up-to-date on complex, sometimes vague policies, and having the time and resources to comply with them consistently. Matt observed that many in his district—including his colleagues in system administration—see student privacy as a long-term issue rather than an active, ongoing project. “Stuff like student privacy gets back-burnered,” Matt said. “It’s hard to look down the road at long-term projects when teachers’ day-to-day is consuming all of our department’s time and energy.” Privacy by practice Unsatisfied by the “privacy by policy” that his district usually practices, Matt is investigating how he can implement “privacy by practice”—that is, prioritizing student privacy with active safeguards, like technical settings and opt-out options, that augment and enforce existing policy. His first step has been to “crank down the lid” on privacy settings so that students use Google products as anonymously as possible by default, without associating their online profiles with identifying information. Ideally, technical controls like these will make it harder for teachers or third-party companies to collect student data, making privacy the default in students’ and teachers’ work. He is also advocating for an opt-out policy. EFF helped Matt locate relevant examples of opt-out policies from other school districts to get conversations started. However, this advocacy process has brought up more questions than answers. Coworkers were concerned that giving students the option to opt out of Chromebooks and/or Google services would create more work for teachers and administrators, and it has been hard to build consensus around what classroom alternatives would be available when students choose to opt out. Continuing to advocate Matt’s conversations with colleagues have moved forward in fits and starts, and are constantly changing as the district’s technology situation changes. For example, a system-wide update gave Matt an opportunity to propose concurrent changes in ed tech implementation. But, soon after, discussions about abandoning local storage and migrating completely to Google Drive ran counter to Matt’s efforts to locally control student data and ensure their privacy. In the meantime, Matt is thinking about stepping up student digital literacy education with more student-staff interactions on the topic. He has also brought up his concerns at professional conferences to learn from sysadmins in different schools and districts. Matt remains persistent and committed to advocating for more secure, more private student systems. “It’s a really hard problem, but we need to come up with an answer,” Matt said. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
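As an illustration of the kind of "crank down the lid" configuration a sysadmin might apply, here is a rough sketch that writes a managed policy file for Chromium on a Linux machine. This is an assumption-laden example: the policy names are drawn from Chrome's published enterprise policy list and should be checked against current documentation, nothing here comes from Matt's actual setup, and districts managing Chromebooks through the Google Admin console would set equivalent options there instead.

    import json
    from pathlib import Path

    # Illustrative Chromium policies that reduce data sent to Google by default.
    PRIVACY_POLICIES = {
        "MetricsReportingEnabled": False,   # no usage or crash reporting
        "SearchSuggestEnabled": False,      # don't send keystrokes for search suggestions
        "SyncDisabled": True,               # keep browsing data out of Google Sync
        "PasswordManagerEnabled": False,    # don't store credentials in the browser
    }

    def write_managed_policy(policy_dir="/etc/chromium/policies/managed"):
        # Chromium on Linux reads JSON policy files from this directory at startup.
        path = Path(policy_dir)
        path.mkdir(parents=True, exist_ok=True)
        (path / "student-privacy.json").write_text(json.dumps(PRIVACY_POLICIES, indent=2))

    if __name__ == "__main__":
        write_managed_policy()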
>> mehr lesen

Let’s Make The Copyright Office Less Political, Not More (Di, 28 Mär 2017)
After three years of discussing changes to copyright law, Congress’s first bill is a strange one. House and Senate Judiciary Committee leaders have introduced a bill that would radically change the way the Register of Copyrights is picked – taking the process out of the hands of the Librarian of Congress and putting it into the hands of Congress and the President. That sounds like a pretty technical move, but it could have real consequences for future innovation and creativity.  Let’s break it down. As it stands now, the Register is appointed by the Librarian of Congress, and serves under her direction and oversight.  The “Register of Copyrights Selection and Accountability Act of 2017” would require that the head of the Copyright Office be appointed by the President and confirmed by the Senate, and would authorize the President to remove the Register. This would make the Register’s appointment process more democratic – but also more captive to special interests. The Copyright Office is supposed to focus on a pretty mundane but important job: registering copyrightable works. Like entities such as the Congressional Research Service and the GAO, the Copyright Office is also charged with providing advice to Congress, and “information and assistance” to other federal government entities.  It is not, however, responsible for making or officially applying copyright law except in very narrow circumstances (like deciding whether a work qualifies for registration). Instead, the responsibility for setting the nation’s copyright policy rests with Congress. In the past decade, however, the Copyright Office has played an increasingly central role in policymaking – and it has not been a neutral advisor. The Copyright Office has repeatedly put forward policy proposals and legal analyses that have tended to favor the interests of a particular segment of copyright owners (particularly major media and entertainment companies) over other constituencies. For example, one former Register famously stated, “[c]opyright is for the author first and the nation second.”  Under her leadership, the Office supported the disastrous Stop Online Piracy Act (SOPA). And last year, the Office worked closely and quietly with major entertainment companies to derail the FCC’s effort to improve competition and consumer choice in cable set-top boxes. The Office also pushed through an unpopular rule change that puts many small website owners at risk of losing access to copyright law’s safe harbors for intermediaries. More and more people feel the consequences of this bias at the Copyright Office, as some appellate courts have looked to the Office to decide close and critical legal questions.  And thanks to the Digital Millennium Copyright Act, the Copyright Office also plays a central role in shaping our technological future. The Register has gone from being a neutral expert to a political player. In theory, the bill would help mitigate this effect by making the Register more accountable to the public – after all, under the current regime the Register answers only to the Librarian of Congress. In practice, though, we fear it’s designed to do something else: allow powerful incumbent interests to use their lobbying power to control this increasingly politicized office.  No president is going to select an appointee who will be shot down by special interests. 
And while the Librarian of Congress would still nominally oversee the Copyright Office, the Librarian would not be able to remove the Register, no matter how poorly they performed. In sum, we’ll have a Register, and a Copyright Office, that is accountable only to the President and the special interests that helped get them approved in the first place.  That will inevitably accelerate the politicization of the Office. Under the current system, the official in charge of selecting the Register is a member of the one community that can usually be trusted to think about all of the interests copyright law affects: librarians.  As we’ve said before, libraries have an institutional obligation to serve the public, and to support access to knowledge and culture. Given copyright’s constitutional mandate to promote progress, we think the Office’s mission is best served when it is subject to the oversight and guidance of the library community. It’s bad enough that Congress and the public can no longer look to the Register as a neutral arbiter of copyright policy.  We shouldn’t make the problem worse by effectively making the Copyright Office into an independent regulator and policymaker. Instead, the Register should remain an advisor to Congress and an administrator of the registration system. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

California Bill To Ban “Fake News” Would Be Disastrous for Political Speech (Di, 28 Mär 2017)
Update (12:00 p.m., March 28, 2017): A.B. 1104 has been pulled and will not be heard in committee today.  Memo to California Assemblymember Ed Chau: you can’t fight fake news with a bad law. On Tuesday, the California Assembly’s Committee on Privacy and Consumer Affairs, which Chau chairs, will consider A.B. 1104—a censorship bill so obviously unconstitutional, we had to double-check that it was real.  It’s real.  The proposed law reads:  18320.5. It is unlawful for a person to knowingly and willingly make, publish or circulate on an Internet Web site, or cause to be made, published, or circulated in any writing posted on an Internet Web site, a false or deceptive statement designed to influence the vote on either of the following:  (a) Any issue submitted to voters at an election.  (b) Any candidate for election to public office. Take ActionTweet at the Assembly Privacy Committee now! In other words, it would be illegal to be wrong on the Internet if it could impact an election. The bill is unconstitutional under U.S. Supreme Court case law (see our opposition letter for more information on that), and likely to draw immediate and costly lawsuits if it is signed into law. For Chau, A.B. 1104 is an attempt to address the issue of “fake news” that many believe plagued the 2016 election: websites publishing false stories and promoting them over social media. No law, and certainly not A.B. 1104, will remedy this problem. American political speech dating back as far as the John Adams-Thomas Jefferson rivalry has involved unfair smears, half-truths, stretched truths, and even outright lies. During the 2016 campaign alone, PolitiFact rated 202 statements made by President Donald Trump as mostly false or false, and another 63 as “Pants on Fire.” Hillary Clinton had 69 statements rated mostly false or false, and seven rated “Pants on Fire.” This bill will fuel a chaotic free-for-all of mudslinging, with candidates and others being accused of crimes at the slightest hint of hyperbole, exaggeration, poetic license, or common error. While those accusations may not ultimately hold up, politically motivated prosecutions—or the threat of such—may harm democracy more than if the issue had just been left alone. Furthermore, A.B. 1104 makes no exception for satire and parody, leaving The Onion and Saturday Night Live open to accusations of illegal content. Nor does it exempt news organizations that quote deceptive statements made by politicians in their online reporting—even if their reporting is meant to debunk those claims. And what of everyday citizens who are duped by misleading materials? If 1,000 Californians retweet an incorrect statement by a presidential candidate, have they all broken the law?  At a time when political leaders are promoting “alternative facts” and branding unflattering reporting as “fake news,” we don’t think it’s a good idea to give the government more power to punish speech. In the fight against lies, the government must not create the tools to suppress the truth.  Join us today in filling the committee's Twitter streams with our opposition to this bill. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Small ISPs Oppose Congress's Move to Abolish Privacy Protections (Mo, 27 Mär 2017)
Take ActionCall your Representative now! The Internet is up in arms over Congress's plan to drastically reduce your privacy online, and that includes small Internet providers and networking companies. Many of them agree that we need the Federal Communications Commission's rules to protect our privacy online, and seventeen of them have written to Congress today to express their concerns. The situation before the FCC’s intervention was succinctly described in the fine print of Verizon’s privacy policy:  “If you do not want information collected for marketing purposes from services such as the Verizon Wireless Mobile Internet services, you should not use those particular services.” That was refreshingly honest. Other ISPs including AT&T, Charter, and Sprint also monitored their customers in intrusive ways, but were less frank in admitting it, even in their privacy policies. Below is a letter signed by several small Internet providers who share our concerns. Add your voices to theirs: call your Representative today and tell them not to repeal the broadband privacy rules! Dear U.S. Representatives, Re: Oppose S.J. Res 34 - Repeal of FCC Privacy Rules We, the undersigned founders, executives, and employees of ISPs and networking companies, spend our working lives ensuring that Americans have high-quality, fast, reliable, and locally provided choices available when they need to connect to the Internet. One of the cornerstones of our businesses is respecting the privacy of our customers, and it is for that primary reason that we are writing to you today. We urge Congress to preserve the FCC’s Broadband Privacy Rules and vote down plans to abolish them. If the rules are repealed, large ISPs across America will resume spying on their customers, selling their data, and denying them a practical and informed choice in the matter. Perhaps if there were a healthy, free, transparent, and competitive market for Internet services in this country, consumers could choose not to use those companies’ products. But small ISPs like ours face many structural obstacles, and many Americans have very limited choices: a monopoly or duopoly on the wireline side, and a highly consolidated cellular market dominated by the same wireline firms. Under those circumstances, the FCC’s Broadband Privacy Rules are the only way that most Americans will retain the free market choice to browse the Web without being surveilled by the company they pay for an Internet connection. Signed, Sonic Monkeybrains.net Cruzio Internet Etheric Networks University of Nebraska CREDO Mobile Aeneas Communications Digital Service Consultants Inc. Om Networks Hoyos Consulting LLC Mother Lode Internet Gold Rush Internet Ting Internet Tekify Fiber & Wireless Davis Community Network Andrew Buker (Director of Infrastructure Services & Research Computing, University of Nebraska at Omaha) Tim Pozar (co-founder, TwoP LLC) Andrew Gallo (Senior Network Architect for a regional research and education network) Jim Deleskie (co-founder, Mimir networks) Randy Carpenter (VP, First Network Group) Kraig Beahn (CTO, Enguity Technology Corp) Chris Owen (President, Hubris Communications) James Persky (CEO, Pacific Internet) Brian Worthen (CEO, Visionary Communications)   If you run a small ISP and would like to join our letter, send an email to isp-letter@eff.org. Take part in the action! Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Republicans in Congress Are Disregarding Their Own Privacy Policies (Mo, 27 Mär 2017)
Visit Sen. Jeff Flake’s official website, scroll to the bottom, click “Privacy Policy,” and you’ll find a page where the junior senator from Arizona makes this fine promise: I am committed to protecting the personal privacy of individuals who use the Internet, including website visitors like you. Take ActionCall your Congressmember now to save online privacy! He goes on to say that “your privacy is important to me” and that you should “rest assured” that your data is safe with him. And yet—last week Sen. Flake rushed a resolution through the Senate to repeal landmark privacy protections enacted by the Federal Communications Commission. The legislation would also bar the FCC from ever again acting to protect users’ data from Internet providers. Under the repeal, the companies that provide your broadband service—be it Comcast, Cox, Time Warner, AT&T, or Verizon—will be able to engage in all sorts of underhanded ways to monetize your personal information. They’ll be allowed to collect your browsing history, hijack your search results, insert unwanted advertisements, and sell your data to marketers. In other words, if this repeal passes, no user should rest assured again. Sen. Flake isn’t the only senator to act in disregard of his stated commitment to privacy. Forty-nine other Republicans joined him in the vote. Many of these senators make similar statements in their privacy policies. For example, Sens. John Thune, Dean Heller, and Lamar Alexander’s privacy policies start:  Protecting the personal privacy of individuals who use the Internet is a priority, and we appreciate the opportunity to describe to you the policies we have put in place to safeguard the privacy of individuals who visit our Web site. Sen. John Cornyn’s privacy policy begins: Senator John Cornyn respects your right to privacy and is committed to protecting the privacy and security of visitors to cornyn.senate.gov, and those who correspond with our offices via email. Sen. John Boozman’s says:  Your privacy concerns are very important, so please know that we have safeguards in place to protect the privacy of visitors to my site. Here’s the thing: if you’re a U.S. lawmaker, protecting privacy doesn’t just mean avoiding collecting visitors’ data when they visit your website. It means standing up for users’ rights every day on Capitol Hill—the exact opposite of which is to roll back the strong privacy protections already on the books. Now the issue is before the House of Representatives, which could vote on the resolution as early as Tuesday.  It’s important you call your lawmaker today to demand they vote down the repeal of the FCC privacy rules. Like their Senate colleagues, House Republicans also claim to respect user privacy. Speaker Paul Ryan’s campaign site says that he knows “your right to privacy online is important.” Reps. Trent Franks, Tom McClintock, Mimi Walters, and many others use this boilerplate language: We respect the privacy of our visitors and all those who come in contact with our office—be it in-person, through our Web site, or by mail, phone, or email. We therefore try to collect only such personal information as is needed to provide the information, service, or assistance that you request. Is this just lip service? To truly respect the privacy of their constituents, these members need to not only limit what they collect but actively resist the telecommunications lobby’s push to collect and exploit our data. 
Otherwise, when you visit their sites, your Internet provider will know it and be able to sell that information. Don’t let your member of Congress get away with a personal data giveaway. Call them today and demand they vote down the repeal of the FCC’s privacy regulations. Take part in the action! Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Urban Homesteaders Win Cancellation of Bogus Trademarks (Mo, 27 Mär 2017)
Global Community Had Faced Baseless Legal Claims and Content Removal Threats San Francisco – Urban homesteaders can speak freely about their global movement for sustainable living, after convincing the U.S. Patent and Trademark Office (USPTO) to cancel bogus trademarks for the terms “urban homesteading” and “urban homestead.” The authors and activists were represented by the Electronic Frontier Foundation (EFF) and law firm of Winston & Strawn. “This is a victory for free speech and common sense. Threats over this trademark harmed us and the whole urban homesteading community—a group of people who are dedicated to sharing information about sustainable living online and elsewhere,” said Kelly Coyne, co-author with Erik Knutzen of The Urban Homestead: Your Guide to Self-Sufficient Living in the Heart of the City. “We are so pleased to have this issue settled at last, so we can concentrate on making urban life healthier and happier for anyone who wants to participate in this global effort.” “Urban homesteading” has been used as a generic term for decades, describing activities like growing food, raising livestock, and producing simple food products at home. But a group called the Dervaes Institute managed to register “urban homesteading” and “urban homestead” as trademarks with the USPTO for “educational services” like blogging. Citing the trademarks, Dervaes got Facebook to take down content about urban homesteading, including pages that helped publicize Coyne and Knutzen’s book, as well as the Facebook page of a Denver farmer’s market. In 2011, EFF and Winston & Strawn petitioned the USPTO on behalf of Coyne, Knutzen, and book publisher Process Media, asking for the trademarks’ cancellation. “The words and phrases we use every day to describe basic activities should never be the exclusive property of a single person or business,” said EFF Legal Director Corynne McSherry. “It took six years, but we’re proud that this terrible trademark is off the books.” “You can’t trademark generic terms and force ordinary conversations off the Internet,” said Winston & Strawn attorney Jennifer Golinveaux.  “We’re relieved that the urban homesteading community can continue sharing information about their important work without worrying about silly legal threats.” For the full opinion from the U.S. Patent and Trademark Office: https://www.eff.org/document/opinion-cancelling-trademark For more on this case: https://www.eff.org/cases/petition-cancel-urban-homestead-trademark Contact:  Corynne McSherry Legal Director corynne@eff.org Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Five Ways Cybersecurity Will Suffer If Congress Repeals the FCC Privacy Rules (Mo, 27 Mär 2017)
Take ActionCall your Congressmember now to save online privacy! Back in October of 2016, the Federal Communications Commission passed some pretty awesome rules that would bar your Internet provider from invading your privacy. The rules would keep Internet providers like Comcast and Time Warner Cable from doing things like selling your personal information to marketers, inserting undetectable tracking headers into your traffic, or recording your browsing history to build up a behavioral advertising profile on you—unless they got your permission first. The rules were a huge victory for U.S. Internet users who value their privacy. But last Thursday, Republicans in the Senate voted to repeal those rules. If the House of Representatives votes the same way and the rules are repealed, it’s pretty obvious that the results for Americans' privacy will be disastrous. But what many people don’t realize is that Americans’ cybersecurity is also at risk. That’s because privacy and security are two sides of the same coin: privacy is about controlling who has access to information about you, and security is how you maintain that control. You usually can’t break one without breaking the other, and that’s especially true in this context. To show how, here are five ways repealing the FCC’s privacy rules will weaken Americans’ cybersecurity.   Risk #1: Snooping On Traffic (And Creating New Targets for Hackers) In order for Internet providers to make money off your browsing history, they first have to collect that information—what sort of websites you’re browsing, metadata about whom you’re talking to, and maybe even what search terms you’re using. Internet providers will also need to store that information somewhere, in order to build up a targeted advertising profile of you. So where’s the cybersecurity risk? The first risk is that Internet providers haven’t exactly been bastions of security when it comes to keeping information about their customers safe. Back in 2015, Comcast had to pay $33 million for unintentionally releasing information about customers who had paid Comcast to keep their phone numbers unlisted. “These customers ranged from domestic violence victims to law enforcement personnel,” many of whom had paid for their numbers to be unlisted to protect their safety. But Comcast screwed up, and their phone numbers were published anyway. And that was just a mistake on Comcast’s part, with a simple piece of data like phone numbers. Imagine what could happen if hackers decided to target the treasure trove of personal information Internet providers start collecting. People’s personal browsing history and records of their location could easily become the target of foreign hackers who want to embarrass or blackmail politicians or celebrities. To make matters worse, FCC Chairman (and former Verizon lawyer) Ajit Pai recently halted the enforcement of a rule that would require Internet providers to “take reasonable measures to protect customer [personal information] from unauthorized use, disclosure, or access”—so Internet providers won’t be on the hook if their lax security exposes your data. This would just be the fallout from passive data collection—where your Internet provider simply spies on your data as it goes by. An even scarier risk is that Internet providers want to be able to do much more than that.   
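Before getting to that, a quick illustration of why the data described in Risk #1 is such an attractive target: it takes almost no processing to turn raw connection metadata into a behavioral profile. The sketch below is purely hypothetical; the log entries and category labels are invented for the example and do not describe any provider's actual systems.

    from collections import Counter

    # Hypothetical connection log: (timestamp, domain) pairs an ISP could
    # record even when the pages themselves are encrypted.
    CONNECTION_LOG = [
        ("2017-03-27T08:02", "mayoclinic.org"),
        ("2017-03-27T08:05", "webmd.com"),
        ("2017-03-27T21:14", "dating-example.com"),
        ("2017-03-28T07:58", "mayoclinic.org"),
    ]

    # Invented category map, standing in for the classification an ad network might use.
    CATEGORIES = {
        "mayoclinic.org": "health",
        "webmd.com": "health",
        "dating-example.com": "dating",
    }

    def build_profile(log):
        # Count visits per interest category: a crude advertising profile.
        return dict(Counter(CATEGORIES.get(domain, "other") for _, domain in log))

    print(build_profile(CONNECTION_LOG))  # {'health': 3, 'dating': 1}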
Risk #2: Erasing Encryption (And Making it Easier for Hackers to Spy On You) Right now, your Internet provider can only spy on the portion of your traffic that isn’t encrypted—in other words, whenever you visit a site that starts with https (instead of just http), your Internet provider can’t see the contents of what you’re browsing. They can still see what domain you’re visiting, but they can’t see what specific page, or what’s on that page. That frustrates a lot of Internet providers, because they want to be able to build advertising profiles on the contents of your encrypted data as well. In order to accomplish that, Internet providers have proposed a standard (called Explicit Trusted Proxies) that would allow them to intercept your data, remove the encryption, read the data (and maybe even modify it), and then encrypt it again and send it on its way. At first blush this doesn’t sound so bad. After all, the data is only decrypted within the Internet provider’s servers, so hackers listening in on the outside still wouldn’t be able to read it, right? Unfortunately not. According to a recent alert by US-CERT, an organization dedicated to computer security within the Department of Homeland Security: “Many HTTPS inspection products do not properly verify the certificate chain of the server before re-encrypting and forwarding client data, allowing the possibility of a MiTM [Man-in-The-Middle] attack. Furthermore, certificate-chain verification errors are infrequently forwarded to the client, leading a client to believe that operations were performed as intended with the correct server.” Further, a recent study found that 54% of connections that were intercepted (i.e. decrypted and re-encrypted) ended up with weaker encryption. Translating from engineer-speak, that means many of the systems designed to decrypt and then re-encrypt data actually end up weakening the security of the encryption, which exposes users to increased risk of cyberattack. Simply put, if Internet providers think they can profit from looking at your encrypted data and start deploying these systems widely, we’ll no longer be able to trust the security of our web browsing—and that could end up exposing everything from your email to your banking information to hackers.   Risk #3: Inserting Ads Into Your Browsing (And Opening Holes In Your Browsing Security) One of the major threats to cybersecurity if the FCC’s privacy rules are repealed comes from Internet providers inserting ads into your web browsing. Here we’re talking about your Internet provider placing additional ads in the webpages you view (beyond the ones that already exist). Why is this dangerous? Because inserting new code into a webpage in an automated fashion could break the security of the existing code in that page. As security expert Dan Kaminsky put it, inserting ads could break “all sorts of stuff, in that you no longer know as a website developer precisely what code is running in browsers out there. 
You didn't send it, but your customers received it.” In other words, security features in sites and apps you use could be broken and hackers could take advantage of that—causing you to do anything from sending your username and password to them (while thinking it was going to the genuine website) to installing malware on your computer.1   Risk #4: Zombie Supercookies (Allowing Hackers to Track You Wherever You Go) Internet providers haven’t been content with just inserting ads into our traffic—they’ve also tried inserting unique tracking tags as well (the way Verizon did two years ago). For Internet providers, the motivation is to make you trackable, by inserting a unique ID number into every unencrypted connection your browser makes with a website. Then, a website that wants to know more about you (so they can decide what price to charge you for a product) can pay your Internet provider a little money and tell them what ID number they want to know about, and your Internet provider will share the desired info associated with that ID number. At first you might be tempted to file this one away as purely a privacy problem. But this is a great example of how privacy and security really are two sides of the same coin. If your Internet provider is sending these tracking tags to every website you visit (as Verizon did originally), then every website you visit, and every third party embedded in websites you visit, can track you—even if you’ve deleted your browser’s cookies or enabled Incognito mode. This means that more people will be able to track you as you surf the Web, you’ll see more creepy and disconcerting ads based on things you’ve done in the past, and many of the tools you might use to protect yourself won’t work because the tracking is being added after the data leaves your machine.    Risk #5: Spyware (Which Opens the Door for Malware) The last risk comes from Internet providers pre-installing spyware on our devices—particularly on mobile phones, which most of us purchase directly from the company that provides our cell service, i.e. our Internet provider. In the past, Internet providers have installed spyware like Carrier IQ on phones, claiming it was only to “improve wireless network and service performance.” After a huge blowback, many Internet providers backed down on using Carrier IQ. But given that software like Carrier IQ could record what websites you visit and what search terms you enter, it would be pretty tempting for Internet providers to resurrect that spyware and use it for advertising purposes. So where’s the cybersecurity risk? As we’ve explained before, part of the problem with Carrier IQ was that it could be configured to record sensitive information into your phone’s system logs. But some apps transmit those logs off of your phone as part of standard debugging procedures, assuming there’s nothing sensitive in them. As a result, “keystrokes, text message content and other very sensitive information [was] in fact being transmitted from some phones on which Carrier IQ is installed to third parties.” Depending on how that information was transmitted, eavesdroppers could also intercept it—meaning hackers might be able to see your username or password, without having to do any real hacking. 
But the even bigger concern is that for spyware like Carrier IQ to function effectively, it has to have fairly low-level access to your phone’s systems—which is engineer-speak for saying it needs to be able to see and access all the parts of your phone’s operating system that would usually be secure. Thus, if hackers can find a vulnerability in the spyware, then they can use it as a sort of tunnel to get access to almost anything in your phone.   In the end, the cybersecurity implications of repealing the FCC’s privacy rules come from simple logic. If the privacy rules are repealed, Internet providers will resume and accelerate these dangerous practices with the aim of monetizing their customers’ browsing history and app usage. But in order to do that, Internet providers will need to record and store even more sensitive data on their customers, which will become a target for hackers. Internet providers will also be incentivized to break their customers’ security, so they can see all the valuable encrypted data their customers send. And when Internet providers break their customers’ security, you can be sure malicious hackers will be right on their heels. The net result is simple: repealing the FCC’s privacy rules won’t just be a disaster for Americans’ privacy. It will be a disaster for America’s cybersecurity, too. Take part in the action! 1. The mechanisms that can be broken by ad injection include: the Same Origin Policy; the correctness of non-browser applications that use HTTP as a transport mechanism (including many Internet of Things protocols, and software update mechanisms that rely on signatures but not TLS for security); certain uses of Content Security Policy headers. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
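One concrete way to see what Risk #2 puts at stake: the US-CERT alert faults interception products for skipping certificate-chain and hostname verification. The sketch below, using Python's standard ssl module, shows what strict client-side verification looks like; it is a generic illustration, not code from any of the proxy products or providers discussed above.

    import socket
    import ssl

    def inspect_tls(hostname, port=443):
        # Refuse to proceed unless the server's certificate chain validates
        # against the system trust store and matches the requested hostname,
        # exactly the checks US-CERT found many interception proxies skip.
        context = ssl.create_default_context()
        context.check_hostname = True
        context.verify_mode = ssl.CERT_REQUIRED
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.version(), tls.getpeercert().get("issuer")

    # Example: a mis-verifying middlebox in the path would typically surface
    # here as a certificate error or an unexpected issuer.
    print(inspect_tls("www.eff.org"))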
>> mehr lesen

We Have 24 Hours to Save Online Privacy Rules (Mo, 27 Mär 2017)
This is our last chance to save critical online privacy protections. Take part in the action! We are one vote away from a world where your ISP can track your every move online and sell that information to the highest bidder. Call your lawmakers now and tell them to protect federal online privacy rules. Last year the FCC passed a set of rules for how ISPs deal with their customers’ data. The commonsense rules updated longstanding federal protections for Internet users. Under the rules, ISPs would be required to protect your data and wouldn’t be allowed to do a host of creepy things, including selling your Internet browsing records without your consent. Those rules were a huge victory for consumers. Of course, the ISPs that stand to make money off of violating your privacy have been lobbying Congress to repeal those rules. Unfortunately, their anti-consumer push has been working. The Senate voted last week 50-48 on a Congressional Review Act (CRA) resolution to repeal the FCC’s privacy rules. Now the resolution heads over to the House, where it’s scheduled to get a vote on Tuesday. If the House passes it, you’ll be even more at the mercy of your ISP. Because Congress is using a CRA resolution, the FCC will be prohibited from writing similar rules in the future. And thanks to the current legal landscape, no other federal agency has the authority to protect you against privacy invasions by your ISP. With a House vote scheduled for Tuesday, we have 24 hours to speak up and tell our representatives that they can’t put ISPs’ profits over our privacy. Call your lawmakers today and tell them to oppose S.J. Res. 34, which would repeal the FCC’s broadband privacy rules. Take ActionCall Congress now! Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

EFF Launches Community Security Training Series (Sa, 25 Mär 2017)
EFF is pleased to announce a series of community security trainings in partnership with the San Francisco Public Library. High-profile data breaches and hard-fought battles against unlawful mass surveillance programs underscore that the public needs practical information about online security. We know more about potential threats each day, but we also know that encryption works and can help thwart digital spying. Lack of knowledge about best practices puts individuals at risk, so EFF will bring lessons from its comprehensive Surveillance Self-Defense guide to the SFPL. EFF has tailored this series for technology beginners who may be unaware of potential privacy dangers, but who already use smartphones or computers. Library patrons are invited to bring their devices to EFF's introductory classes, which include discussions of basic online security concepts and privacy tools. Lisa Wright and Willie Theaker, members of EFF's TechOps Team, will facilitate Digital Privacy and Security: A Beginner-to-Intermediate Workshop followed by Encryption Apps for your Phone: An Intermediate Workshop. There will be two opportunities to attend each class. Digital Privacy and Security: A Beginner-to-Intermediate Workshop Tuesday, March 28, 2017 6:00 pm to 7:30 pm Encryption Apps for your Phone: An Intermediate Workshop Tuesday, April 4, 2017 6:00 pm to 7:30 pm Digital Privacy and Security: A Beginner-to-Intermediate Workshop Tuesday, April 11, 2017 6:00 pm to 7:30 pm Encryption Apps for your Phone: An Intermediate Workshop Tuesday, April 18, 2017 6:00 pm to 7:30 pm Event details are included in each link to the EFF calendar above. Space is limited and attendance is on a first-come, first-served basis, so attendees should prepare to arrive early. We encourage all EFF supporters to help people in their circles learn more about online rights issues and how to keep themselves—and each other—safer. At the end of April, EFF's spring Bay Area Members' Speakeasy will feature a more advanced workshop on email encryption and key generation open to EFF members and their guests—we encourage you to bring a friend! Following the workshop, all EFF members will be invited to join our PGP keysigning party to help bring the community together and further expand the web of trust. If you are a current Bay Area member accepting email, you will receive a personal invitation including event details. Not a member yet? Join today! With the Surveillance Self-Defense project and these local events, EFF strives to help make information about online security accessible to beginners as well as seasoned techno-activists and journalists. We hope you will consider our tips on how to protect your digital privacy, but we also hope you will encourage those around you to learn more and make better choices with technology. After all, privacy is a team sport and everyone wins. Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen

Another Loss For Broadcast TV Streaming, And A Dangerous Shift Of Decision-Making Power (Fr, 24 Mär 2017)
Another court has ruled that streaming local broadcast TV channels to mobile devices is something that only traditional pay-TV companies can do—startups need not apply. The Ninth Circuit appeals court has ruled that FilmOn, an Internet video service, cannot use the license created by Congress for “secondary transmissions” of over-the-air TV broadcasts. That likely means that FilmOn and other Internet-based services won’t be able to stream broadcast TV at all. That’s a setback for local TV and the news, weather, local advertising, and community programming it carries. The court’s harmful ruling is bad enough on its own, and the way the court arrived at that decision makes it worse. Instead of interpreting the Copyright Act according to its own independent judgment, the court deferred to the opinion of the Register of Copyrights, an official who has no authority to make or interpret laws on her own. And the Register has often acted as more of an advocate for the media and entertainment industries than a neutral authority. Maria Pallante, the former Register, famously said that “copyright is for the author first and the nation second,” and has gone on to become the head of a trade association for publishers. Can startups take advantage of the law that allows incumbent pay-TV services to carry broadcast TV? The fight to send broadcast TV over the Internet has been a long one. For most people in the U.S., it’s hard if not impossible to watch local TV stations live over the Internet. Unlike other forms of video programming that are available in many different ways, local broadcasts usually require a TV set and a finicky antenna or an expensive cable subscription. Of course, the technology to send local broadcast TV to Internet-connected devices has been around for a while. Copyright law, not technology, has been the barrier. Copyright applies when shows are transmitted “to the public.” That means cable operators need licenses from copyright holders. And since the Supreme Court’s Aereo decision, Internet-based services that “look like cable” to the customer also need licenses. The major difficulty is that the programs, commercials, and other material shown on TV channels have many different copyright holders. A service that wants to help viewers see those channels in more places and on more devices is faced with the difficult (in fact, often impossible) task of negotiating a license with each and every one of those owners before their material goes on the air. Fail to license even a single program or commercial and the would-be cable competitor risks lawsuits and ruinous copyright penalties. But copyright law also includes a way for pay-TV systems to get the permissions they need by paying a set fee. That mechanism, known as Section 111, applies to any “facility” that “receives signals” from broadcast TV stations and “makes secondary transmissions” of those signals to paying subscribers. The law was passed long before Internet video streaming, but its core definition of a “cable system” is written broadly enough to include an Internet-based system like FilmOn’s. Nope, because that law is unclear and the Register of Copyrights said it shouldn’t apply. Major TV and movie studios have long opposed letting Internet-based services use the Section 111 license, and so did Pallante, who was the Register of Copyrights (the head of the Copyright Office) until 2016. She wrote several letters and papers arguing that only traditional cable systems should be able to use the license. 
In the studios’ case against FilmOn, one of several they filed around the country, the federal district court in Los Angeles ruled that Congress wrote Section 111 broadly enough to include Internet-based services. This week, the Ninth Circuit reversed that decision. The court recognized that applying a complex 41-year-old law to today’s technology is not straightforward: “FilmOn and other Internet-based retransmission services are neither clearly eligible nor clearly ineligible for the compulsory license [Section] 111 makes available to ‘cable systems.’” At this point, the court could have grappled with the purposes of the law, its legislative history, and its effects on the TV market to reach a result. But it didn’t do this in any significant way. Instead, it “deferred” to the Register of Copyrights and treated her opinions on this question as the final word. The judges wrote that the Copyright Office “has a much more intimate relationship with Congress and is institutionally better equipped than we are to sift through and to make sense of the vast and heterogeneous expanse that is the [Copyright] Act’s legislative history.” That’s a troubling conclusion. While the Copyright Office staff might be more familiar with this area of law than a federal judge, the Office doesn’t have the authority to make or interpret laws. Treating the Register of Copyrights’ opinions about the law as binding invades both Congress’s power to make laws and the courts’ role as interpreters of the law. While the Copyright Office serves important functions, including registering copyrights, keeping records of them, and growing the Library of Congress’s collection, it shouldn’t be given the powers of a court to issue binding interpretations of the law. This decision leaves streaming services for broadcast TV in a double bind: they need to get permission from rightsholders, but they can’t get that permission using the streamlined method that Congress created. In practical terms, that means traditional pay-TV systems can retransmit broadcast TV to paying subscribers, but newer competitors that use streaming can’t. With incumbents protected against competition from streaming technology, cable subscription prices continue to climb, and broadcast TV continues to diminish as a source of local information and opinion. Related Cases:  WNET v. Aereo Fox v. Aereokiller Share this: Share on Twitter Share on Facebook Share on Google+ Share on Diaspora Join EFF
>> mehr lesen