Deeplinks

How Political Campaigns Use Your Data to Target You (Tue, 16 Apr 2024)
Data about potential voters—who they are, where they are, and how to reach them—is an extremely valuable commodity during an election year. And while the right to a secret ballot is a cornerstone of the democratic process, your personal information is gathered, used, and sold along the way. It's not possible to fully shield yourself from all this data processing, but you can take steps to at least minimize and understand it.

Political campaigns use the same invasive tricks that behavioral ads do—pulling in data from a variety of sources online to create a profile—so they can target you. Your digital trail is a critical tool for campaigns, but the process starts in the real world, where longstanding techniques to collect data about you can be useful indicators of how you'll vote. This starts with voter records.

Your IRL Voting Trail Is Still Valuable

Politicians have long had access to public data, like voter registration, party registration, address, and participation information (whether or not a voter voted, not who they voted for). Online access to such records has made them easier to get in some states, with unintended consequences, like doxing. Campaigns can purchase this voter information from most states. These records provide a rough idea of whether that person will vote or not, and—if they're registered to a particular party—who they might lean toward voting for. Campaigns use this to put every voter into broad categories, like "supporter," "non-supporter," or "undecided."

Campaigns gather such information at in-person events, too, like door-knocking and rallies, where you might sign up for emails or phone calls. Campaigns also share information about you with other campaigns, so if you register with a candidate one year, it's likely that information goes to another in the future. For example, the website for Adam Schiff's campaign to serve as U.S. Senator from California has a privacy policy with this line under "Sharing of Information":

"With organizations, candidates, campaigns, groups, or causes that we believe have similar political viewpoints, principles, or objectives or share similar goals and with organizations that facilitate communications and information sharing among such groups"

Similar language can be found on other campaign sites, including those for Elizabeth Warren and Ted Cruz. These candidate lists are valuable, and are often shared within the national party. In 2017, the Hillary Clinton campaign gave its email list to the Democratic National Committee, a contribution valued at $3.5 million.

If you live in a state with citizen initiative ballot measures, data collected from signature sheets might be shared or used as well. Signing a petition doesn't necessarily mean you support the proposed ballot measure—it's just saying you think it deserves to be put on the ballot. But in most states, these signature pages will remain a part of the public record, and the information you provide may get used for mailings or other targeted political ads.

How Those Voter Records, and Much More, Lead to Targeted Digital Ads

All that real world information is just one part of the puzzle these days. Political campaigns tap into the same intrusive adtech tracking systems used to deliver online behavioral ads. We saw a glimpse into how this worked after the Cambridge Analytica scandal, and the system has only grown since then.
Specific details are often a mystery, as a political advertising profile may be created by combining disparate information—from consumer scoring data brokers like Acxiom or Experian, smartphone data, and publicly available voter information—into a jumble of data points that's often hard to trace in any meaningful way. A simplified version of the whole process might go something like this:

1. A campaign starts with its voter list, which includes names, addresses, and party affiliation. It may have purchased this from the state or its own national committee, or collected some of it for itself through a website or app.
2. The campaign then turns to a data broker to enhance this list with consumer information. The data broker combines the voter list with its own data, then creates a behavioral profile using inferences based on your shopping, hobbies, demographics, and more.
3. The campaign looks this all over, then chooses some categories of people it thinks will be receptive to its messages in its various targeted ads.
4. Finally, the campaign turns to an ad targeting company to get the ad on your device. Some ad companies might use an IP address to target the ad to you. As The Markup revealed, other companies might target you based on your phone's location, which is particularly useful in reaching voters not in the campaign's files.

In 2020, Open Secrets found political groups paid 37 different data brokers at least $23 million for access to services or data. These data brokers collect information from browser cookies, web beacons, mobile phones, social media platforms, and more. They found that some companies specialize in more general data, while others, like i360, TargetSmart, and Grassroots Analytics, focus on data useful to campaigns or advocacy.

[Screenshot of a spreadsheet with categories including "Qanon, Rightwing Militias, Right to Repair, Inflation Fault, Electric Vehicle Buyer, Climate Change, and Amazon Worker Treatment."] A sample of some categories and inferences in a political data broker file that we received through a CCPA request shows the wide variety of assumptions these companies may make.

These political data brokers make a lot of promises to campaigns. TargetSmart claims to have 171 million highly accurate cell phone numbers, and i360 claims to have data on 220 million voters. They also tend to offer specialized campaign categories that go beyond the offerings of consumer-focused data brokers. Check out data broker L2's "National Models & Predictive Analytics" page, which breaks down interests, demographics, and political ideology—including details like "Voter Fraud Belief," and "Ukraine Continue." The New York Times demonstrated a particularly novel approach to these sorts of profiles where a voter analytics firm created a "Covid concern score" by analyzing cell phone location, then ranked people based on travel patterns during the pandemic.

Some of these companies target based on location data. For example, El Toro claims to have once "identified over 130,000 IP-matched voter homes that met the client's targeting criteria. El Toro served banner and video advertisements up to 3 times per day, per voter household – across all devices within the home." That "all devices within the home" claim may prove important in the coming elections: as streaming video services integrate more ad-based subscription tiers, that likely means more political ads this year. One company, AdImpact, projects $1.3 billion in political ad spending on "connected television" ads in 2024.
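To make that simplified pipeline concrete, here is a minimal, purely illustrative Python sketch of the "enhance and segment" steps. Every identifier, field name, category, and matching rule below is hypothetical; real voter files and broker datasets are proprietary, far larger, and matched with much more sophisticated (and opaque) techniques.

```python
# Purely illustrative: a toy version of the "enhance and segment" steps described above.
# All field names, parties, categories, and rules are hypothetical, not any vendor's
# actual data or process.

voter_file = [
    {"voter_id": "V1", "name": "Alice", "party": "purple", "voted_2022": True},
    {"voter_id": "V2", "name": "Bob", "party": "none", "voted_2022": True},
    {"voter_id": "V3", "name": "Carol", "party": "none", "voted_2022": False},
]

broker_data = {
    # Inferred consumer attributes keyed by the same hypothetical voter ID.
    "V1": {"interests": ["hiking", "streaming_tv"], "homeowner": True},
    "V2": {"interests": ["gaming"], "homeowner": False},
}

def build_profiles(voters, broker):
    """Merge public voter records with purchased consumer inferences."""
    profiles = []
    for v in voters:
        profile = {**v, **broker.get(v["voter_id"], {})}
        # Crude segmentation of the kind a campaign might request.
        if v["party"] == "purple":
            profile["segment"] = "supporter"
        elif v["voted_2022"]:
            profile["segment"] = "persuadable likely voter"
        else:
            profile["segment"] = "needs turnout push"
        profiles.append(profile)
    return profiles

def pick_audience(profiles, segment):
    """Select the profiles that get handed off to an ad-targeting company."""
    return [p for p in profiles if p["segment"] == segment]

if __name__ == "__main__":
    profiles = build_profiles(voter_file, broker_data)
    for p in pick_audience(profiles, "persuadable likely voter"):
        print(p["name"], "->", p.get("interests", []))
```

Even in this toy form, the sketch shows why these profiles are so hard to trace: the inferences bolted onto the public voter record live in broker files the voter never sees.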
The growth in connected TV ad spending may be driven in part by the move away from tracking cookies, which makes web browsing data less appealing. In the case of connected televisions, ads can also integrate data based on what you've watched, using information collected through automated content recognition (ACR). Streaming device maker and service provider Roku's pitch to potential political advertisers is straightforward: "there's an opportunity for campaigns to use their own data like never before, for instance to reach households in a particular district where they need to get out the vote." Roku claims to have at least 80 million users. As a platform for televisions and "streaming sticks," and especially if you opted into ACR (we'll detail how to check below), Roku can collect and use a lot of your viewing data, ranging from apps to broadcast TV to even video games. This is vastly different from traditional broadcast TV ads, which might be targeted broadly based on a city or state, and the show being aired. Now, a campaign can target an ad at one household, but not their neighbor, even if they're watching the same show. Of the main streaming companies, only Amazon and Netflix don't accept political ads.

Finally, there are Facebook and Google, two companies that have amassed a mountain of data points about all their users, and which allow campaigns to target based on some of those factors. According to at least one report, political ad spending on Google (mostly through YouTube) is projected to be $552 million, while Facebook is projected at $568 million. Unlike the data brokers discussed above, most of what you see on Facebook and Google is derived from the data collected by the company from its users. This may make it easier to understand why you're seeing a political ad, for example, if you follow or view content from a specific politician or party, or about a specific political topic.

What You Can Do to Protect Your Privacy

Managing the flow of all this data might feel impossible, but you can take a few important steps to minimize what's out there. The chances you'll catch everything are low, but minimizing what is accessible is still a privacy win.

Install Privacy Badger

Considering how much data is collected just from your day-to-day web browsing, it's a good idea to protect that first. The simplest way to do so is with our own tracking blocker extension, Privacy Badger.

Disable Your Phone Advertising ID and Audit Your Location Settings

Your phone has an ad identifier that makes it simple for advertisers to track and collate everything you do. Thankfully, you can make this much harder for those advertisers by disabling it:

On iPhone: Head into Settings > Privacy & Security > Tracking, and make sure "Allow Apps to Request to Track" is disabled.

On Android: Open Settings > Security & Privacy > Privacy > Ads, and select "Delete advertising ID."

Similarly, as noted above, your location is a valuable asset for campaigns. They can collect your location through data brokers, which usually get it from otherwise unaffiliated apps. This is why it's a good idea to limit what sorts of apps have access to your location:

On iPhone: Open Settings > Privacy & Security > Location Services, and disable access for any apps that do not need it. You can also set location access to "While Using" only for certain apps where it's helpful, but unnecessary to track you all the time. Also, consider disabling "Precise Location" for any apps that don't need your exact location (for example, your GPS navigation app needs precise location, but no weather app does).

On Android: Open Settings > Location > App location permissions, and confirm that no apps are accessing your location that you don't want to. As with iOS, you can set it to "Allow only while using the app" for apps that don't need it all the time, and disable "Use precise location" for any apps that don't need exact location access.

Opt Out of Tracking on Your TV or Streaming Device, and Any Video Streaming Service

Nearly every brand of TV is connected to the internet these days. Consumer Reports has a guide for disabling what you can on most popular TVs and software platforms. If you use an Apple TV, you can disable the ad identifier following the exact same directions as on your phone.

Since the passage of a number of state privacy laws, streaming services, like other sites, have offered a way for users to opt out of the sale of their info. Many have extended this right outside of states that require it. You'll need to be logged into your streaming service account to take action on most of these, but TechHive has a list of opt-out links for popular streaming services to get you started. Select the "Right to Opt Out" option, when offered.

Don't Click on Links in (or Respond to) Political Text Messages

You've likely been receiving political texts for much of the past year, and that's not going to let up until election day. It is increasingly difficult to decipher whether they're legitimate or spam, and with links that often use a URL shortener or odd-looking domains, it's best not to click them. If there's a campaign you want to donate to, head directly to the site of the candidate or ballot sponsor.

Create an Alternate Email and Phone Number for Campaign Stuff

If you want to keep updated on campaign or ballot initiatives, consider setting up an email specifically for that, and nothing else. Since a phone number is also often required, it's a good idea to set up a secondary phone number for these same purposes (you can do so for free through services like Google Voice).

Keep an Eye Out for Deceptive Check Boxes

Speaking of signing up for updates, be mindful of when you don't intend to sign up for emails. Campaigns might use pre-selected options for everything from donation amounts to signing up for a newsletter. So, when you sign up with any campaign, keep an eye on any options you might not intend to opt into.

Mind Your Social Media

Now's a great time to take any sort of "privacy checkup" available on whatever social media platforms you use to help minimize any accidental data sharing. Even though you can't completely opt out of behavioral advertising on Facebook, review your ad preferences and opt out of whatever you can. Also be sure to disable access to off-site activity. You should also opt out of personalized ads on Google's services. You cannot disable behavioral ads on TikTok, but the company doesn't allow political ads.

If you're curious to learn more about why you're seeing an ad to begin with, on Facebook you can always click the three-dot icon on an ad, then click "Why am I seeing this ad?" to learn more. For ads on YouTube, you can click the "More" button and then "About this advertiser" to see some information about who placed the ad. Anywhere else you see a Google ad, you can click the "AdChoices" button and then "Why this ad?"
You shouldn't need to spend an afternoon jumping through opt-out hoops and tweaking privacy settings on every device you own just so you're not bombarded with highly targeted ads. That's why EFF supports comprehensive consumer data privacy legislation, including a ban on online behavioral ads. Democracy works because we participate, and you should be able to do so without sacrificing your privacy.

Speaking Freely: Lynn Hamadallah (Tue, 16 Apr 2024)
Lynn Hamadallah is a Syrian-Palestinian-French Psychologist based in London. An outspoken voice for the Palestinian cause, Lynn is interested in the ways in which narratives, spoken and unspoken, shape identity. Having lived in five countries and spent a lot of time traveling, she takes a global perspective on freedom of expression. Her current research project investigates how second-generation British-Arabs negotiate their cultural identity. Lynn works in a community mental health service supporting some of London's most disadvantaged residents, many of whom are migrants who have suffered extensive psychological trauma.

York: What does free speech or free expression mean to you?

Being Arab and coming from a place where there is much more speech policing in the traditional sense, I suppose there is a bit of an idealization of Western values of free speech and democracy. There is this sense of freedom we grow up associating with the West. Yet recently, we've come to realize that the way it works in practice is quite different to the way it is described, and this has led to a lot of disappointment and disillusionment in the West and its ideals amongst Arabs. There's been a lot of censorship for example on social media, which I've experienced myself when posting content in support of Palestine. At a national level, we have witnessed the dehumanization going on around protesters in the UK, which undermines the idea of free speech. For example, the pro-Palestine protests where we saw the then-Home Secretary Suella Braverman referring to protesters as "hate marchers." So we've come to realize there's this kind of veneer of free speech in the West which does not really match up to the more idealistic view of freedom we were taught about. With the increased awareness we have gained as a result of the latest aggression going on in Palestine, actually what we're learning is that free speech is just another arm of the West to support political and racist agendas. It's one of those things that the West has come up with which only applies to one group of people and oppresses another. It's the same as with human rights you know - human rights for who? Where are Palestinians' human rights?

We've seen free speech being weaponized to spread hate and desecrate Islam, for example, in the case of Charlie Hebdo and the Quran burning in Denmark and in Sweden. The argument put forward was that those cases represented instances of free speech rather than hate speech. But actually to millions of Muslims around the world those incidents were very, very hateful. They were acts of violence not just against their religious beliefs but right down to their sense of self. It's humiliating to have a part of your identity targeted in that way with full support from the West, politicians and citizens alike.

And then, when we—we meaning Palestinians and Palestine allies—want to leverage this idea of free speech to speak up against the oppression happening by the state of Israel, we see time and time again accusations flying around: hate speech, anti-semitism, and censorship. Heavy, heavy censorship everywhere. So that's what I mean when I say that free speech in the West is a racist concept, actually. And I don't know that true free speech exists anywhere in the world really. In the Middle East we don't have democracies but at least there's no veneer of democracy—the messaging and understanding is clear. Here, we have a supposed democracy, but in practice it looks very different.
And that's why, for me, I don't really believe that free speech exists. I've never seen a real example of it. I think as long as people are power hungry there's going to be violence, and as long as there's violence, people are going to want to hide their crimes. And as long as people are trying to hide their crimes there's not going to be free speech. Sorry for the pessimistic view!

York: It's okay, I understand where you're coming from. And I think that a lot of those things are absolutely true. Yet, from my perspective, I still think it's a worthy goal even though governments—and organizationally we've seen this as well—a lot of times governments do try to abuse this concept. So I guess then I would just ask as a follow-up, do you feel that despite these issues that some form of universalized free expression is still a worthy ideal?

Of course, I think it's a worthy ideal. You know, even with social media – there is censorship. I've experienced it and it's not just my word or an isolated incident. It's been documented by Human Rights Watch—even Meta themselves! They did an internal investigation in 2021—Meta had a nonprofit called Business for Social Responsibility do an investigation and produce a report—and they've shown there was systemic censorship of Palestine-related content. And they're doing it again now. That being said, I do think social media is making free speech more accessible, despite the censorship.

And I think—to your question—free speech is absolutely worth pursuing. Because we see that despite these attempts at censorship, the truth is starting to come out. Palestine support is stronger than it's ever been. To the point where we've now had South Africa take Israel to trial at the International Court of Justice for genocide, using evidence from social media videos that went viral. So what I'm saying is, free speech has the power to democratize demanding accountability from countries and creating social change, so yes, absolutely something we should try to pursue.

York: You just mentioned two issues close to my heart. One is the issues around speech on social media platforms, and I've of course followed and worked on the Palestinian campaigns quite closely and I'm very aware of the BSR report. But also, video content, specifically, that's found on social media being used in tribunals. So let me shift this question a bit. You have such a varied background around the world. I'm curious about your perspective over the past decade or decade and a half since social media has become so popular—how do you feel social media has shaped people's views or their ability to advocate for themselves globally?

So when we think about stories and narratives, something I'm personally interested in, we have to think about which stories get told and which stories remain untold. These stories and their telling are very much controlled by the mass media—BBC, CNN, and the like. They control the narrative. And I guess what social media is doing is it's giving a voice to those who are often voiceless. In the past, the issue was that there was such a monopoly over mouthpieces. Mass media were so trusted, to the point where no one would have paid attention to these alternative viewpoints. But what social media has done… I think it's made people become more aware or more critical of mass media and how it shapes public opinion.
There's been a lot of exposure of their failures, for example, like that video that went viral of Egyptian podcaster and activist Rahma Zain confronting CNN's Clarissa Ward at the Rafah border about their biased reporting of the genocide in Palestine. I think that confrontation spoke to a lot of people. She was shouting "You own the narrative, this is our problem. You own the narrative, you own the United Nations, you own Hollywood, you own all these mouthpieces—where are our voices?! Our voices need to be heard!" It was SO powerful and that video really spoke to the sentiment of many Arabs who have felt angry, betrayed and abandoned by the West's ideals and their media reporting.

Social media is providing a voice to more diverse people, elevating them and giving the public more control around narratives. Another example we've seen recently is around what's currently happening in Sudan and the Democratic Republic of Congo. These horrific events and stories would never have had much of a voice or exposure before on the global stage. And now people all over the world are paying more attention and advocating for Sudanese and Congolese rights, thanks to social media.

I personally was raised with quite a critical view of mass media, I think in my family there was a general distrust of the West, their policies and their media, so I never really relied personally on the media as this beacon of truth, but I do think that's an exception. I think the majority of people rely on mass media as their source of truth. So social media plays an important role in keeping them accountable and diversifying narratives.

York: What are some of the biggest challenges you see right now anywhere in the world in terms of the climate for free expression for Palestinian and other activism?

I think there's two strands to it. There's the social media strand. And there's the governmental policies and actions. So I think on social media, again, it's very documented, but it's this kind of constant censorship. People want to be able to share content that matters to them, to make people more aware of global issues and we see time and time again viewership going down, content being deleted or reports from Meta of alleged hate speech or antisemitism. And that's really hard. There've been random strategies that have popped up to increase social media engagement, like posting random content unrelated to Palestine or creating Instagram polls for example. I used to do that, I interspersed Palestine content with random polls like, "What's your favorite color?" just to kind of break up the Palestine content and boost my engagement. And it was honestly so exhausting. It was like… I'm watching a genocide in real time, this is an attack on my people and now I'm having to come up with silly polls? Eventually I just gave up and accepted my viewership as it was, which was significantly lower.

At a government level, which is the other part of it, there's this challenge of constant intimidation that we're witnessing. I just saw recently there was a 17-year-old boy who was interviewed by the counterterrorism police at an airport because he was wearing a Palestinian flag. He was interrogated about his involvement in a Palestinian protest. When has protesting become a crime and what does that say about democratic rights and free speech here in the UK? And this is one example, but there are so many examples of policing, there was even talk of banning protests altogether at one point.
The last strand I'd include, actually, that I already touched on, is the mass media. Just recently we've seen the BBC reporting on the ICJ hearing: they showed the Israeli defense part, but they didn't even show the South African side. So this censorship is literally in plain sight and poses a real challenge to the climate of free expression for Palestine activism.

York: Who is your free speech hero?

Off the top of my head I'd probably say Mohammed El-Kurd. I think he's just been so unapologetic in his stance. Not only that but I think he's also made us think critically about this idea of narrative and what stories get told. I think it was really powerful when he was arguing the need to stop giving the West and mass media this power, and that we need to disempower them by ceasing to rely on them as beacons of truth, rather than working on changing them. Because, as he argues, oppressors who have monopolized and institutionalized violence will never ever tell the truth or hold themselves to account. Instead, we need to turn to Palestinians, and to brave cultural workers, knowledge producers, academics, journalists, activists, and social media commentators who understand the meaning of oppression and view them as the passionate, angry and, most importantly, reliable narrators that they are.

Americans Deserve More Than the Current American Privacy Rights Act (Tue, 16 Apr 2024)
EFF is concerned that a new federal bill would freeze consumer data privacy protections in place by preempting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.

We also urge the authors of the American Privacy Rights Act (APRA) to strengthen other portions of the bill. It should be easier to sue companies that violate our rights. The bill should limit sharing with the government and expand the definition of sensitive data. And it should narrow exceptions that allow companies to exploit our biometric information, our so-called "de-identified" data, and our data obtained in corporate "loyalty" schemes.

Despite our concerns with the APRA bill, we are glad Congress is pivoting the debate to a privacy-first approach to online regulation. Reining in companies' massive collection, misuse, and transfer of everyone's personal data should be the unifying goal of those who care about the internet. This debate has been absent at the federal level in the past year, giving breathing room to flawed bills that focus on censorship and content blocking, rather than privacy.

In general, the APRA would require companies to minimize their processing of personal data to what is necessary, proportionate, and limited to certain enumerated purposes. It would specifically require opt-in consent for the transfer of sensitive data, and most processing of biometric and genetic data. It would also give consumers the right to access, correct, delete, and export their data. And it would allow consumers to universally opt out of the collection of their personal data from brokers, using a registry maintained by the Federal Trade Commission. We welcome many of these privacy protections. Below are a few of our top priorities to correct and strengthen the APRA bill.

Allow States to Pass Stronger Privacy Laws

The APRA should not preempt existing and future state data privacy laws that are stronger than the current bill. The ability to pass stronger bills at the state and local level is an important tool in the fight for data privacy. We ask that Congress not compromise our privacy rights by undercutting the very state-level action that spurred this compromise federal data privacy bill in the first place.

Subject to exceptions, the APRA says that no state may "adopt, maintain, enforce, or continue in effect" any state-level privacy requirement addressed by the new bill. APRA would allow many state sectoral privacy laws to remain, but it would still preempt protections for biometric data, location data, online ad tracking signals, and maybe even privacy protections in state constitutions or some other limits on what private companies can share with the government. At the federal level, the APRA would also wrongly preempt many parts of the federal Communications Act, including provisions that limit a telephone company's use, disclosure, and access to customer proprietary network information, including location information.

Just as important, it would prevent states from creating stronger privacy laws in the future. States are more nimble at passing laws to address new privacy harms as they arise, compared to Congress, which has failed for decades to update important protections.
For example, if lawmakers in Washington state wanted to follow EFF's advice to ban online behavioral advertising or to allow its citizens to sue companies for not minimizing their collection of personal data (provisions where APRA falls short), state legislators would have no power to do so under the new federal bill.

Make It Easier for Individuals to Enforce Their Privacy Rights

The APRA should prevent coercive forced arbitration agreements and class action waivers, allow people to sue for statutory damages, and allow them to bring their case in state court. These rights would allow for rigorous enforcement and help force companies to prioritize consumer privacy. The APRA has a private right of action, but it is a half-measure that still lets companies side-step many legitimate lawsuits. And the private right of action does not apply to some of the most important parts of the law, including the central data minimization requirement.

The favorite tool of companies looking to get rid of privacy lawsuits is to bury a provision in their terms of service that forces individuals into private arbitration and prevents class action lawsuits. The APRA does not address class action waivers and only prevents forced arbitration for children and people who allege "substantial" privacy harm. In addition, statutory damages and enforcement in state courts are essential, because many times federal courts still struggle to acknowledge privacy harm as real—relying instead on a cramped view that does not recognize privacy as a human right. In addition, the bill would allow companies to cure violations rather than face a lawsuit, incentivizing companies to skirt the law until they are caught.

Limit Exceptions for Sharing with the Government

APRA should close a loophole that may allow data brokers to sell data to the government and should require the government to obtain a court order before compelling disclosure of user data. This is important because corporate surveillance and government surveillance are often the same. Under the APRA, government contractors do not have to follow the bill's privacy protections. Those include any "entity that is collecting, processing, retaining, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, to the extent that such entity is acting as a service provider to the government entity." Read broadly, this provision could protect data brokers who sell biometric information and location information to the government. In fact, Clearview AI previously argued it was exempt from Illinois' strict biometric law using a similar contractor exception. This is a point that needs revision because other parts of the bill rightly prevent covered entities (government contractors excluded) from selling data to the government for the purpose of fraud detection, public safety, and criminal activity detection.

The APRA also allows entities to transfer personal data to the government pursuant to a "lawful warrant, administrative subpoena, or other form of lawful process." EFF urges that the requirement be strengthened to at least a court order or warrant with prompt notice to the consumer. Protections like this are not unique, and it is especially important in the wake of the Dobbs decision.
Strengthen the Definition of Sensitive Data

The APRA has heightened protections for sensitive data, and it includes a long list of 18 categories of sensitive data, like biometrics, precise geolocation, private communications, and an individual's online activity over time and across websites. This is a good list that can be added to. We ask Congress to add other categories, like immigration status, union membership, employment history, familial and social relationships, and any covered data processed in a way that would violate a person's reasonable expectation of privacy. The sensitivity of data is context specific—meaning any data can be sensitive depending on how it is used. The bill should be amended to reflect that.

Limit Other Exceptions for Biometrics, De-identified Data, and Loyalty Programs

An important part of any bill is to make sure the exceptions do not swallow the rule. The APRA's exceptions on biometric information, de-identified data, and loyalty programs should be narrowed.

In APRA, biometric information means data "generated from the measurement or processing of the individual's unique biological, physical, or physiological characteristics that is linked or reasonably linkable to the individual" and excludes "metadata associated with a digital or physical photograph or an audio or video recording that cannot be used to identify an individual." EFF is concerned this definition will not protect biometric information used for analysis of sentiment, demographics, and emotion, and could be used to argue hashed biometric identifiers are not covered.

De-identified data is excluded from the definition of personal data covered by the APRA, and companies and service providers can turn personal data into de-identified data to process it however they want. The problem with de-identified data is that many times it is not. Moreover, many people do not want their private data that they store in confidence with a company to then be used to improve that company's product or train its algorithm—even if the data has purportedly been de-identified.

Many companies under the APRA can host loyalty programs and can sell that data with opt-in consent. Loyalty programs are a type of pay-for-privacy scheme that pressures people to surrender their privacy rights as if they were a commodity. Worse, because of our society's glaring economic inequalities, these schemes will unjustly lead to a society of privacy "haves" and "have-nots." At the very least, the bill should be amended to prevent companies from selling data that they obtain from a loyalty program.

We welcome Congress' privacy-first approach in the APRA and encourage the authors to improve the bill to ensure privacy is protected for generations to come.

Tell the FCC It Must Clarify Its Rules to Prevent Loopholes That Will Swallow Net Neutrality Whole (Tue, 16 Apr 2024)
The Federal Communications Commission (FCC) has released draft rules to reinstate net neutrality, with a vote on adopting the rules to come on the 25th of April. The FCC needs to close some loopholes in the draft rules before then.

Proposed Rules on Throttling and Prioritization Allow for the Circumvention of Net Neutrality

Net neutrality is the principle that all ISPs should treat all traffic coming over their networks without discrimination. The effect of this principle is that customers decide for themselves how they'd like to experience the internet. Violations of this principle include, but are not limited to, attempts to block, speed up, or slow down certain content as a means of controlling traffic. Net neutrality is critical to ensuring that the internet remains a vibrant place to learn, organize, speak, and innovate, and the FCC recognizes this.

The draft mostly reinstates the bright-line rules of the landmark 2015 net neutrality protections to ban blocking, throttling, and paid prioritization. It falls short, though, in a critical way: the FCC seems to think that it's not okay to favor certain sites or services by slowing down other traffic, but it might be okay to favor them by giving them access to so-called fast lanes such as 5G network slices. First of all, in a world with a certain amount of finite bandwidth, favoring some traffic necessarily impairs other traffic. Secondly, the harms to speech and competition would be the same even if an ISP could conjure more bandwidth from thin air to speed up traffic from its business partners. Whether your access to Spotify is faster than your access to Bandcamp because Spotify is sped up or because Bandcamp is slowed down doesn't matter, because the end result is the same: Spotify is faster than Bandcamp and so you are incentivized to use Spotify over Bandcamp.

The loophole is especially bizarre because the 2015 FCC already got this right, and there has been bipartisan support for net neutrality proposals that explicitly encompass both favoring and disfavoring certain traffic. It's a distinction that doesn't make logical sense, doesn't seem to have partisan significance, and could potentially undermine the rules in the event of a court challenge by drawing a nonsensical distinction between what's forbidden under the bright-line rules versus what goes through the multi-factor test for other potentially discriminatory conduct by ISPs.

The FCC needs to close this loophole for unpaid prioritization of certain applications or classes of traffic. Customers should be in charge of what they do online, rather than ISPs deciding that, say, it's more important to consume streaming entertainment products than to participate in video calls or that one political party's websites should be served faster than another's.

The FCC Should Clearly Rule Preemption to be a Floor, Not a Ceiling

When the FCC under the previous administration abandoned net neutrality protections in 2017 with the so-called "Restoring Internet Freedom" order, many states—chief among them California—stepped in to pass state net neutrality laws. Laws more protective than federal net neutrality protections—like California's—should be explicitly protected by the new rule. The FCC currently finds that California's law "generally tracks [with] the federal rules [being] restored" (269).
It goes on to find that state laws are fine so long as they do not "interfere with or frustrate…federal rules," are not "inconsistent," or are not "incompatible." It then reserves the right to revisit any state law if evidence arises that a state policy is found to "interfere or [be] incompatible." States should be able to build on federal laws to be more protective of rights, not run into limits to available protections.

California's net neutrality law is in some places stronger than the draft rules. Where the FCC means to evaluate zero-rating, the practice of exempting certain data from a user's data cap, on a case-by-case basis, California outright bans the practice of zero-rating select apps. There is no guarantee that a Commission which finds California to "generally track" today will do the same in two years' time. The language as written unnecessarily sets a low bar for a future Commission to find California's, and other states', net neutrality laws to be preempted. It also leaves open unnecessary room for the large internet service providers (ISPs) to challenge California's law once again. After all, when California's law was first passed, it was immediately taken to court by these same ISPs, and only after years of litigation did the courts reject the industry's arguments and allow enforcement of this gold standard law to begin.

We urge the Commission to clearly state that not only is California consistent with the FCC's rules, but that on the issue of preemption the FCC considers its rules to be the floor to build on, and that further state protections are not inconsistent simply because they may go further than the FCC chooses to.

Overall, the order is a great step for net neutrality. Its rules go a distance in protecting internet users. But we need clear rules recognizing that the creation of fast lanes via positive discrimination and unpaid prioritization are violations of net neutrality just the same, and assurance that states will continue to be free to protect their residents even when the FCC won't.

Tell the FCC to Fix the Net Neutrality Rules:

1. Go to this link.
2. For "Proceeding," put 23-320.
3. Fill out the form.
4. In "brief comments," register your thoughts on net neutrality. We recommend this, which you can copy and paste or edit for yourself:

Net neutrality is the principle that all internet service providers treat all traffic coming through their networks without discrimination. The effect of this principle is that customers decide for themselves how they'd like to experience the internet. The Commission's rules as currently written leave open the door for positive discrimination of content, that is, the supposed creation of fast lanes where some content is sped up relative to others. This isn't how the internet works, but in any case, whether an ISP is speeding up or slowing down content, the end result is the same: the ISP picks the winners and losers on the internet. Further, while the Commission currently finds state net neutrality rules, like California's, to not be preempted because they "generally track" its own rules, it makes it easy to rule otherwise at a future date. But just as we received net neutrality in 2015 only to have it taken away in 2017, there is no guarantee that the Commission will continue to find state net neutrality laws passed post-2017 to be consistent with the rules.
To safeguard net neutrality, the Commission must find that California's law is wholly consistent with its rules and that preemption is taken as a floor, not a ceiling, so that states can go above and beyond the federal standard without it being considered inconsistent with the federal rule.

Take Action: Tell the FCC to Fix the Net Neutrality Rules

S.T.O.P. is Working to ‘Ban The Scan’ in New York (Sat, 13 Apr 2024)
Facial recognition is a threat to privacy, racial justice, free expression, and information security. EFF supports strict restrictions on face recognition use by private companies, and total bans on government use of the technology. Face recognition in all of its forms, including face scanning and real-time tracking, poses threats to civil liberties and individual privacy. "False positive" error rates are significantly higher for women, children, and people of color, meaning face recognition has an unfair discriminatory impact. Coupled with the fact that cameras are over-deployed in neighborhoods with immigrants and people of color, spying technologies like face surveillance serve to amplify existing disparities in the criminal justice system.

Across the nation, local communities from San Francisco to Boston have moved to ban government use of facial recognition. In New York, Electronic Frontier Alliance member Surveillance Technology Oversight Project (S.T.O.P.) is at the forefront of this movement. Recently we got the chance to speak with them about their efforts and what people can do to help advance the cause. S.T.O.P. is a New York-based civil rights and privacy organization that does research, advocacy, and litigation around issues of surveillance technology abuse.

What does "Ban The Scan" mean?

When we say scan, we are referring to the "face scan" component of facial recognition technology. Surveillance, and more specifically facial recognition, disproportionately targets Black, Brown, Indigenous, and immigrant communities, amplifying the discrimination that has defined New York's policing for as long as our state has had police. Facial recognition is notoriously biased and often abused by law enforcement. It is a threat to free speech, freedom of association, and other civil liberties. Ban the Scan is a campaign and coalition built around passing two packages of bills that would ban facial recognition in a variety of contexts in New York City and New York State.

Are there any differences with the State vs City version?

The City and State packages are largely similar. The main differences are that the State package contains a bill banning law enforcement use of facial recognition, whereas the City package has a bill that bans all government use of the technology (although this bill has yet to be introduced). The State package also contains an additional bill banning facial recognition use in schools, which would codify an existing regulatory ban that currently applies to schools.

What hurdles exist to its passage?

For the New York State package, the coalition is newly coming together, so we are still gathering support from legislators and the public. For the City package, we are lucky to have a lot of support already, and we are waiting to have a hearing conducted on the residential ban bills and move them into the next phase of legislation. We are also working to get the bill banning government use introduced at the City level.

What can people do to help this good legislation? How to get involved?

We recently launched a campaign website for both City and State packages (banthescan.org). If you're a New York City or State resident, you can look up your legislators (links below!) and contact them to ask them to support these bills or thank them for their support if they are already signed on. We also have social media toolkits with graphics and guidance on how to help spread the word!
Find your NYS Assemblymember: https://nyassembly.gov/mem/search/
Find your NYS Senator: https://www.nysenate.gov/find-my-senator
Find your NYC Councilmember: https://council.nyc.gov/map-widget/

EFF Submits Comments on FRT to Commission on Civil Rights (Sat, 13 Apr 2024)
Our faces are often exposed and, unlike passwords or PINs, cannot be remade. Governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. This is why EFF recently submitted comments to the U.S. Commission on Civil Rights, which is preparing a report on face recognition technology (FRT).

In our submission, we reiterated our stance that there should be a ban on governmental use of FRT and strict regulations on private use because it: (1) is not reliable enough to be used in determinations affecting constitutional and statutory rights or social benefits; (2) is a menace to social justice as its errors are far more pronounced when applied to people of color, members of the LGBTQ+ community, and other marginalized groups; (3) threatens privacy rights; (4) chills and deters expression; and (5) creates information security risks.

Despite these grave concerns, FRT is being used by the government and law enforcement agencies with increasing frequency, and sometimes with devastating effects. At least one Black woman and five Black men have been wrongfully arrested due to misidentification by FRT: Porcha Woodruff, Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, and Robert Williams. And Harvey Murphy Jr., a white man, was wrongfully arrested due to FRT misidentification, and then sexually assaulted while in jail.

Even if FRT were accurate, or at least equally inaccurate across demographics, it would still severely impact our privacy and security. We cannot change our face, and we expose it to the mass surveillance networks already in place every day we go out in public. But doing that should not be license for the government or private entities to make imprints of our face and retain that data, especially when that data may be breached by hostile actors. The government should ban its own use of FRT, and strictly limit private use, to protect us from the threats posed by FRT.

What Does EFF Mean to You? (Fri, 12 Apr 2024)
We could go on for days talking about all the work EFF does to ensure that technology supports freedom, justice, and innovation for all people of the world. In fact, we DO go on for days talking about it — but we’d rather hear from you.  What does EFF mean to you? We’d love to know why you support us, how you see our mission, or what issue or area we address that affects your life the most. It’ll help us make sure we keep on being the EFF you want us to be. So if you’re willing to go on the record, please send us a few sentences, along with your first name and current city of residence, to testimonials@eff.org; we’ll pick some every now and then to share with the world here on our blog, in our emails, and on our social media.

Bad Amendments to Section 702 Have Failed (For Now)—What Happens Next? (Thu, 11 Apr 2024)
Yesterday, the House of Representatives voted against considering a largely bad bill that would have unacceptably expanded the tentacles of Section 702 of the Foreign Intelligence Surveillance Act, along with reauthorizing it and introducing some minor fixes. Section 702 is Big Brother's favorite mass surveillance law that EFF has been fighting since it was first passed in 2008. The law is currently set to expire on April 19.

Yesterday's decision not to decide is good news, at least temporarily. Once again, a bipartisan coalition of lawmakers—led by Rep. Jim Jordan and Rep. Jerrold Nadler—has staved off the worst outcome of expanding 702 mass surveillance in the guise of "reforming" it. But the fight continues and we need all Americans to make their voices heard. Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform.

Take action: Tell Congress that 702 needs serious reforms.

Yesterday's vote means the House also will not consider amendments to Section 702 surveillance introduced by members of the House Judiciary Committee (HJC) and House Permanent Select Committee on Intelligence (HPSCI). As we discuss below, while the HJC amendments would contain necessary, minimum protections against Section 702's warrantless surveillance, the HPSCI amendments would impose no meaningful safeguards upon Section 702 and would instead increase the threats Section 702 poses to Americans' civil liberties.

Section 702 expressly authorizes the government to collect foreign communications inside the U.S. for a wide range of purposes, under the umbrellas of national security and intelligence gathering. While that may sound benign for Americans, foreign communications include a massive amount of Americans' communications with people (or services) outside the United States. Under the government's view, intelligence agencies and even domestic law enforcement should have backdoor, warrantless access to these "incidentally collected" communications, instead of having to show a judge there is a reason to query Section 702 databases for a specific American's communications.

Many amendments to Section 702 have recently been introduced. In general, amendments from members of the HJC aim at actual reform (although we would go further in many instances). In contrast, members of HPSCI have proposed bad amendments that would expand Section 702 and undermine necessary oversight. Here is our analysis of both HJC's decent reform amendments and HPSCI's bad amendments, as well as the problems the latter might create if they return.

House Judiciary Committee's Amendments Would Impose Needed Reforms

The most important amendment HJC members have introduced would require the government to obtain court approval before querying Section 702 databases for Americans' communications, with exceptions for exigency, consent, and certain queries involving malware. As we recently wrote regarding a different Section 702 bill, because Section 702's warrantless surveillance lacks the safeguards of probable cause and particularity, it is essential to require the government to convince a judge that there is a justification before the "separate Fourth Amendment event" of querying for Americans' communications. This is a necessary, minimum protection and any attempts to renew Section 702 going forward should contain this provision.

Another important amendment would prohibit the NSA from resuming "abouts" collection.
Through abouts collection, the NSA collected communications that were neither to nor from a specific surveillance target but merely mentioned the target. While the NSA voluntarily ceased abouts collection following Foreign Intelligence Surveillance Court (FISC) rulings that called into question the surveillance's lawfulness, the NSA left the door open to resume abouts collection if it felt it could "work that technical solution in a way that generates greater reliability." Under current law, the NSA need only notify Congress when it resumes collection. This amendment would instead require the NSA to obtain Congress's express approval before it can resume abouts collection, which—given this surveillance's past abuses—would be notable.

The other HJC amendment Congress should accept would require the FBI to give a quarterly report to Congress of the number of queries it has conducted of Americans' communications in its Section 702 databases and would also allow high-ranking members of Congress to attend proceedings of the notoriously secretive FISC. More congressional oversight of FBI queries of Americans' communications and FISC proceedings would be good. That said, even if Congress passes this amendment (which it should), both Congress and the American public deserve much greater transparency about Section 702 surveillance.

House Permanent Select Committee on Intelligence's Amendments Would Expand Section 702

Instead of much-needed reforms, the HPSCI amendments expand Section 702 surveillance. One HPSCI amendment would add "counternarcotics" to FISA's definition of "foreign intelligence information," expanding the scope of mass surveillance even further from the antiterrorism goals that most Americans associate with FISA. In truth, FISA's definition of "foreign intelligence information" already goes beyond terrorism. But this counternarcotics amendment would further expand "foreign intelligence information" to allow FISA to be used to collect information relating to not only the "international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths" but also to any of their precursors. Given the massive amount of Americans' communications the government already collects under Section 702 and the government's history of abusing Americans' civil liberties through searching these communications, the expanded collection this amendment would permit is unacceptable.

Another amendment would authorize using Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released last year, the government has sought some version of this authority for years, and the FISC repeatedly denied it—finally approving it for the first time in 2023. The FISC opinion is very redacted, which makes it impossible to know either the current scope of immigration and visa-related surveillance under Section 702 or what the intelligence agencies have sought in the past. But regardless, it's deeply concerning that HPSCI is trying to formally lower Section 702 protections for immigrants and asylum seekers. We've already seen the government revoke people's visas based upon their political opinions—this amendment would put this kind of thing on steroids.

The last HPSCI amendment tries to make more companies subject to Section 702's required turnover of customer information in more instances. In 2023, the FISC Court of Review rejected the government's argument that an unknown company was subject to Section 702 for some circumstances.
While we don't know the details of the secret proceedings because the FISC Court of Review opinion is heavily redacted, this is an ominous attempt to increase the scope of providers subject to 702. With this amendment, HPSCI is attempting to legislatively overrule a court already famously friendly to the government. HPSCI Chair Mike Turner acknowledged as much in a House Rules Committee hearing earlier this week, stating that this amendment "responds" to the FISC Court of Review's decision.

What's Next

This hearing was unlikely to be the last time Congress considers Section 702 before April 19—we expect another attempt to renew this surveillance authority in the coming days. We've been very clear: Section 702 must not be renewed without essential reforms that protect privacy, improve transparency, and keep the program within the confines of the law.

Take action: Tell Congress that 702 needs serious reforms.

Virtual Reality and the 'Virtual Wall' (Thu, 11 Apr 2024)
When EFF set out to map surveillance technology along the U.S.-Mexico border, we weren't exactly sure how to do it. We started with public records—procurement documents, environmental assessments, and the like—which allowed us to find the GPS coordinates of scores of towers. During a series of in-person trips, we were able to find even more. Yet virtual reality ended up being one of the key tools in not only discovering surveillance at the border, but also in educating people about Customs & Border Protection's so-called "virtual wall" through VR tours. EFF Director of Investigations Dave Maass recently gave a lightning talk at University of Nevada, Reno's annual XR Meetup explaining how virtual reality, perhaps ironically, has allowed us to better understand the reality of border surveillance. [Embedded YouTube video of the talk.] Privacy info. This embed will serve content from youtube.com

The Motion Picture Association Doesn’t Get to Decide Who the First Amendment Protects (Wed, 10 Apr 2024)
Twelve years ago, internet users spoke up with one voice to reject a law that would build censorship into the internet at a fundamental level. This week, the Motion Picture Association (MPA), a group that represents six giant movie and TV studios, announced that it hoped we’d all forgotten how dangerous this idea was. The MPA is wrong. We remember, and the internet remembers. What the MPA wants is the power to block entire websites, everywhere in the U.S., using the same tools as repressive regimes like China and Russia. To it, instances of possible copyright infringement should be played like a trump card to shut off our access to entire websites, regardless of the other legal speech hosted there. It is not simply calling for the ability to take down instances of infringement—a power they already have, without even having to ask a judge—but for the keys to the internet. Building new architectures of censorship would hurt everyone, and doesn’t help artists. The bills known as SOPA/PIPA would have created a new, rapid path for copyright holders like the major studios to use court orders against sites they accuse of infringing copyright. Internet service providers (ISPs) receiving one of those orders would have to block all of their customers from accessing the identified websites. The orders would also apply to domain name registries and registrars, and potentially other companies and organizations that make up the internet’s basic infrastructure. To comply, all of those would have to build new infrastructure dedicated to site-blocking, inviting over-blocking and all kinds of abuse that would censor lawful and important speech. In other words, the right to choose what websites you visit would be taken away from you and given to giant media companies and ISPs. And the very shape of the internet would have to be changed to allow it. In 2012, it seemed like SOPA/PIPA, backed by major corporations used to getting what they want from Congress, was on the fast track to becoming law. But a grassroots movement of diverse Internet communities came together to fight it. Digital rights groups like EFF, Public Knowledge, and many more joined with editor communities from sites like Reddit and Wikipedia to speak up. Newly formed grassroots groups like Demand Progress and Fight for the Future added their voices to those calling out the dangers of this new form of censorship. In the final days of the campaign, giant tech companies like Google and Facebook (now Meta) joined in opposition as well. What resulted was one of the biggest protests ever seen against a piece of legislation. Congress was flooded with calls and emails from ordinary people concerned about this steamroller of censorship. Members of Congress raced one another to withdraw their support for the bills. The bills died, and so did site blocking legislation in the US. It was, all told, a success story for the public interest. Even the MPA, one of the biggest forces behind SOPA/PIPA, claimed to have moved on. But we never believed it, and they proved us right time and time again. The MPA backed site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. Even the lobbying of Congress for a new law never really went away. It’s just that today, with MPA president Charles Rivkin openly calling on Congress “to enact judicial site-blocking legislation here in the United States,” the MPA is taking its mask off. Things have changed since 2012. 
Tech platforms that were once seen as innovators have become behemoths, part of the establishment rather than underdogs. The Silicon Valley-based video streamer Netflix illustrated this when it joined MPA in 2019. And the entertainment companies have also tried to pivot into being tech companies. Somehow, they are adopting each other's worst aspects. But it's important not to let those changes hide the fact that those hurt by this proposal are not Big Tech but regular internet users. Internet platforms big and small are still where ordinary users and creators find their voice, connect with audiences, and participate in politics and culture, mostly in legal—and legally protected—ways. Filmmakers who can't get a distribution deal from a giant movie house still reach audiences on YouTube. Culture critics still reach audiences through zines and newsletters. The typical users of these platforms don't have the giant megaphones of major studios, record labels, or publishers. Site-blocking legislation, whether called SOPA/PIPA, "no fault injunctions," or by any other name, still threatens the free expression of all of these citizens and creators. No matter what the MPA wants to claim, this does not help artists. Artists want their work seen, not locked away for a tax write-off. They wanted a fair deal, not nearly five months of strikes. They want studios to make more small and midsize films and to take a chance on new voices. They have been incredibly clear about what they want, and this is not it. Even if Rivkin's claim of an "unflinching commitment to the First Amendment" were credible from a group that seems to think it has a monopoly on free expression—and which just tried to consign the future of its own artists to the gig economy—a site-blocking law would not be used only by Hollywood studios. Anyone with a copyright and the means to hire a lawyer could wield the hammer of site-blocking. And here's the thing: we already know that copyright claims are used as tools of censorship. The notice-and-takedown system created by the Digital Millennium Copyright Act, for example, is abused time and again by people who claim to be enforcing their copyrights, and also by folks who simply want to make speech they don't like disappear from the Internet. Even without a site-blocking law, major record labels and US Immigration and Customs Enforcement shut down a popular hip hop music blog and kept it off the internet for over a year without ever showing that it infringed copyright. And unscrupulous characters use accusations of infringement to extort money from website owners, or even force them into carrying spam links. This censorious abuse, whether intentional or accidental, is far more damaging when it targets the internet's infrastructure. Blocking entire websites or groups of websites is imprecise, inevitably bringing down lawful speech along with whatever was targeted. For example, suits by Microsoft intended to shut down malicious botnets caused thousands of legitimate users to lose access to the domain names they depended on. There is, in short, no effective safeguard on a new censorship power that would be the internet's version of police seizing printing presses. Even if this didn't endanger free expression on its own, once new tools exist, they can be used for more than copyright. Just as malfunctioning copyright filters were adapted into the malfunctioning filters used for "adult content" on Tumblr, so too could site-blocking tools be repurposed.
The major companies of a single industry should not get to dictate the future of free speech online. Why the MPA is announcing this now is anyone’s guess. They might think no one cares anymore. They’re wrong. Internet users rejected site blocking in 2012 and they reject it today.

Speaking Freely: Mary Aileen Diez-Bacalso (Tue, 09 Apr 2024)
This interview has been edited for length and clarity.* Mary Aileen Diez-Bacalso is the executive director of FORUM-Asia. She has worked for many years in human rights organizations in the Philippines and internationally, and is best known for her work on enforced disappearances. She has received several human rights awards at home and abroad, including the Emilio F. Mignone International Human Rights Prize conferred by the Government of Argentina and the Franco-German Ministerial Prize for Human Rights and Rule of Law. In addition to her work at FORUM-Asia, she currently serves as the president of the International Coalition Against Enforced Disappearances (ICAED) and is a senior lecturer at the Asian Center of the University of the Philippines. York: What does free expression mean to you? And can you tell me about an experience, or experiences, that shaped your views on free expression? To me, free speech or free expression means the exercise of the right to express oneself and to seek and receive information as an individual or an organization. I’m an individual, but I’m also representing an organization, so it means the ability to express thoughts, ideas, or opinions without threats or intimidation or fear of reprisals.  Free speech is expressed in various avenues, such as in a community where one lives or in an organization where one belongs at the national, regional, or international levels. It is the right to express these ideas, opinions, and thoughts for different purposes, for instance; influencing behaviors, opinions, and policy decisions; giving education; addressing, for example, historical revisionism—which is historically common in my country, the Philippines. Without freedom of speech people will be kept in the dark in terms of access to information, in understanding and analyzing information, and deciding which information to believe and which information is incorrect or inaccurate or is meant to misinform people. So without freedom of speech people cannot exercise their other basic human rights, like the right of suffrage and, for example, religious organizations who are preaching will not be able to fulfill their mission of preaching if freedom of speech is curtailed.  I have worked for years with families of the disappeared—victims of enforced disappearance—in many countries. And this forced disappearance is a consequence of the absence of free speech. These disappeared people are forcibly disappeared because of their political beliefs, because of their political affiliations, and because of their human rights work, among other things. And they were deprived of the right to speech. Additionally, in the Philippines and many other Asian countries, rallies, for example, and demonstrations on various legitimate issues of the people are being dispersed by security forces in the name of peace. That’s depriving legitimate protesters from the rights to speech and to peaceful assembly. So these people are named as enemies of the state, as subversives, as troublemakers, and in the process they’re tear-gassed, arrested, detained, etcetera. So allowing these people to exercise their constitutional rights is a manifestation of free speech. But in many Asian countries—and many other countries in other regions also—such rights, although provided for by the Constitution, are not respected. Free speech in whatever country you are in, wherever you go, is freedom to study the situation of that country to give your opinion of that situation and share your ideas with others.  
York: Can you share some experiences that helped shape your views on freedom of expression? During my childhood years, when martial law was imposed, I heard a lot of news about the arrest and detention of journalists because of their protest against martial law that was imposed by the dictator Ferdinand Marcos, Sr., who was the father of the present President of the Philippines. So I read a lot about violations of human rights of activists from different sectors of society. I read about farmers, workers, students, church people, who were arrested, detained, tortured, disappeared, and killed because of martial law. Because they spoke against the Marcos administration. So during those years when I was so young, this actually formed my mind and also my commitment to freedom of expression, freedom of assembly, freedom of association. Once, I was arrested during the first Marcos administration, and that was a very long time ago. That is a manifestation of the curtailment of the right of free speech. I was together with other human rights defenders—I was very young at the time. We were rallying because there was a priest who was made to disappear forcibly. So we were arrested and detained. Also, I was deported by the government of India on my way to Kashmir. I was there three times, but on my third time I was not allowed to go to Kashmir because of our human rights work there. So even now, I am banned in India and I cannot go back there. It was because of those reports we made on enforced disappearances and mass graves in Kashmir. So free speech means freedom without threat, intimidation, or retaliation. And it means being able to use all avenues in various contexts to speak in whatever forms—verbal speeches, written speeches, videos, and all forms of communication. Also, the enforced disappearance of my husband informed my views on free expression. Two weeks after we got married he was briefly forcibly disappeared. He was tortured, he was not fed, and he was forced to confess that he was a member of the Communist Party of the Philippines. He was together with one other person he did not know and did not see, and they were forced to dig a grave for themselves to be buried alive inside. Another person who was disappeared then escaped and informed us of where my husband was. So we told the military that we knew where my husband was. They were afraid that the other person might testify so they released my husband in a cemetery near his parents' house. And that made an impact on me, that's why I work a lot with families of enforced disappearances both in the Philippines and in many other countries. I believe that the experience of enforced disappearance of my husband, and other family members of the disappeared and their experience of having family members disappeared until now, is a consequence of the violation of freedom of expression, freedom of assembly, freedom of speech. And also my integration or immersion with families of the disappeared has contributed a lot to my commitment to human rights and free speech. I'm just lucky to have my husband back. And he's lucky. But the way of giving back, of being grateful for the experience we had—because they are very rare cases where victims of enforced disappearances surfaced alive—so I dedicate my whole life to the cause of human rights. York: What do you feel are some of the qualities that make you passionate about protecting free expression for others?
Being brought up by my family, my parents, we were taught about the importance of speaking for the truth, and the importance of uprightness. It was also because of our religious background. We were taught it is very important to tell the truth. So this passion for truth and uprightness is one of the qualities that make me passionate about free expression. And the sense of moral responsibility to rectify wrongs that are being committed. My love of writing, also. I love writing whenever I have the opportunity to do it, the time to do it. And the sense of duty to make human rights a lifetime commitment.  York: What should we know about the role of social media in modern Philippine society?  I believe social media contributed a lot to what we are now. The current oppressive administration invested a lot in misinformation, in revising history, and that’s why a lot of young people think of martial law as the years of glory and prosperity. I believe one of the biggest factors of the administration getting the votes was their investment in social media for at least a decade.  York: What are your feelings on how online speech should be regulated?  I’m not very sure it should be regulated. For me, as long as the individuals or the organizations have a sense of responsibility for what they say online, there should be no regulation. But when we look at free speech on online platforms these online platforms have the responsibility to ensure that there are clear guidelines for content moderation and must be held accountable for content posted on their platforms. So fact-checking—which is so important in this world of misinformation and “fake news”—and complaints mechanisms have to be in place to ensure that harmful online speech is identified and addressed. So while freedom of expression is a fundamental right, it is important to recognize that this can be exploited to spread hate speech and harmful content all in the guise of online freedom of speech—so this could be abused. This is being abused. Those responsible for online platforms must be accountable for their content. For example, from March 2020 to July 2020 our organization, FORUM-Asia and its partners, including freedom of expression group AFAD, documented around 40 cases of hate speech and dangerous speech on Facebook. And the study scope is limited as it only covered posts and comments in Burmese. The researchers involved also reported that many other posts were reported and subsequently removed prior to being documented. So the actual amount of hate speech is likely to be significantly higher. I recommend taking a look at the report. So while FORUM-Asia acknowledges the efforts of Facebook to promote policies to curb hate speech on the platform, it still needs to update and constantly review all these things, like the community guidelines, including those on political advertisements and paid or sponsored content, with the participation of the Facebook Oversight Board.  York: Can you tell me about a personal experience you’ve had with censorship, or perhaps the opposite, an experience you have of using freedom of expression for the greater good? In terms of censorship, I don’t have personal experience with censorship. I wrote some opinion pieces in the Union of Catholic Asian News and other online platforms, but I haven’t had any experience of censorship. Although I did experience negative comments because of the content of what I wrote. 
There are a lot of trolls in the Philippines and they were and are very supportive of the previous administration of Duterte, so there was negative feedback when I wrote a lot on the war on drugs and the killings and impunity. But that’s also part of freedom of speech! I just had to ignore it, but, to be honest, I felt bad.  York: Thank you for sharing that. Do you have a free expression hero?  I believe we have so many unsung heroes in terms of free speech and these are the unknown persecuted human rights defenders. But I also answer that during this week we are commemorating the Holy Week [editor’s note: this interview took place on March 28, 2024] so I would like to say that I would like to remember Jesus Christ. Whose passion, death, and resurrection Christians are commemorating this week. So, during his time, Jesus spoke about the ills of society, he was enraged when he witnessed how defenseless poor were violated of their rights and he was angry when authority took advantage of them. And he spoke very openly about his anger, about his defense for the poor. So I believe that he is my hero. Also, in contemporary times, Óscar Arnulfo Romero y Galdámez, who was canonized as a Saint in 2018, I consider him as my free speech hero also. I visited the chapel where he was assassinated, the Cathedral of San Salvador, where his mortal remains were buried. And the international community, especially the Salvadoran people, celebrated the 44th anniversary of his assassination last Sunday the 24th of March, 2024. Seeing the ills of society, the consequent persecution of the progressive segment of the Catholic church and the churches in El Salvador, and the indiscriminate killings of the Salvadoran people in his communities San Romero courageously spoke on the eve of his assassination. I’d like to quote what he said. He said: “I would like to make a special appeal to the men of the army, and specifically to the ranks of the National Guard, the police and the military. Brothers, you come from our own people. You are killing your own brother peasants when any human order to kill must be subordinate to the law of God which says, ‘Thou shalt not kill.’ No soldier is obliged to obey an order contrary to the law of God. No one has to obey an immoral law. It is high time you recovered your consciences and obeyed your consciences rather than a sinful order. The church, the defender of the rights of God, of the law of God, of human dignity, of the person, cannot remain silent before such an abomination. We want the government to face the fact that reforms are valueless if they are to be carried out at the cost of so much blood. In the name of God, in the name of this suffering people whose cries rise to heaven more loudly each day, I implore you, I beg you, I order you in the name of God: stop the repression.” So as a fitting tribute to Saint Romero of the Americas the United Nations has dedicated the 24th of March as the International Day for Truth, Justice, Reparation, and Guarantees of Non-repetition. So he is my hero. Of course, Jesus Christ being the most courageous human rights defender during these times, continues to be my hero. Which I’m sure was the model of Monsignor Romero. 

Podcast Episode: Antitrust/Pro-Internet (Tue, 09 Apr 2024)
Imagine an internet in which economic power is more broadly distributed, so that more people can build and maintain small businesses online to make good livings. In this world, the behavioral advertising that has made the internet into a giant surveillance tool would be banned, so people could share more equally in the riches without surrendering their privacy. [Embedded audio player for this episode.] Privacy info. This embed will serve content from simplecast.com. Listen on Spotify or Apple Podcasts, or subscribe via RSS. (You can also find this episode on the Internet Archive and on YouTube.) That's the world Tim Wu envisions as he teaches and shapes policy on the revitalization of American antitrust law and the growing power of big tech platforms. He joins EFF's Cindy Cohn and Jason Kelley to discuss using the law to counterbalance the market's worst instincts, in order to create an internet focused more on improving people's lives than on meaningless revenue generation. In this episode you'll learn about: Getting a better "deal" in trading some of your data for connectedness. Building corporate structures that do a better job of balancing the public good with private profits. Creating a healthier online ecosystem with corporate "quarantines" to prevent a handful of gigantic companies from dominating the entire internet. Nurturing actual innovation of products and services online, not just new pricing models. Timothy Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School, where he has served on the faculty since 2006. First known for coining the term "net neutrality" in 2002, he served in President Joe Biden's White House as special assistant to the President for technology and competition policy from 2021 to 2023; he also worked on competition policy for the National Economic Council during the last year of President Barack Obama's administration. Earlier, he worked in antitrust enforcement at the Federal Trade Commission and served as enforcement counsel in the New York Attorney General's Office. His books include "The Curse of Bigness: Antitrust in the New Gilded Age" (2018), "The Attention Merchants: The Epic Scramble to Get Inside Our Heads" (2016), "The Master Switch: The Rise and Fall of Information Empires" (2010), and "Who Controls the Internet? Illusions of a Borderless World" (2006). Resources: Columbia Law School: "How to Change 40 Years of Policy in 22 Months: Professor Wu in Washington" (Feb. 8, 2023) New York Times: "In Regulating A.I., We May Be Doing Too Much. And Too Little." (Nov. 7, 2023) New York Times: "The Google Trial Is Going to Rewrite Our Future" (Sept. 18, 2023) CNBC: "Tech companies have exploited human tendencies to maintain their monopolies, says Columbia's Tim Wu" (Oct. 3, 2023) What do you think of "How to Fix the Internet?" Share your feedback here. Transcript TIM WU I think with advertising we need a better deal. So advertising is always a deal. You trade your attention and you trade probably some data, in exchange you get exposed to advertising and in exchange you get some kind of free product. You know, that's the deal with television, that's been the deal for a long time with radio.
But because it's sort of an invisible bargain, it's hard to make the bargain, and the price can be increased in ways that you don't necessarily notice. For example, we had one deal with Google in, let's say, around the year 2010 - if you go on Google now, it's an entirely different bargain. It's as if there's been a massive inflation in these so-called free products. In terms of how much data has been taken, in terms of how much you're exposed to, how much ad load you get. It's as if sneakers went from 30 dollars to 1,000 dollars! CINDY COHN That's Tim Wu – author, law professor, White House advisor. He’s something of a swiss army knife for technology law and policy. He spent two years on the National Economic Council, working with the Biden administration as an advisor on competition and tech policy. He worked on antitrust legislation to try and check some of the country’s biggest corporations, especially, of course, the tech giants. I’m Cindy Cohn - executive director of the Electronic Frontier Foundation. JASON KELLEY And I’m Jason Kelley - EFF’s Activism Director. This is our podcast, How to Fix the Internet. Our guest today is Tim Wu. His stint with the Biden administration was the second White House administration he advised. And in between, he ran for statewide office in New York. And that whole thing is just a sideline from his day job as a law professor at Columbia University. Plus, he coined the term net neutrality! CINDY COHN On top of that, Tim basically writes a book every few years that I read in order to tell me what's going to happen next in technology. And before that he's been a programmer and a more traditional lab based scientist. So he's kind of got it all. TIM WU Sounds like I'm a dilettante. CINDY COHN Well, I think you've got a lot of skills in a lot of different departments, and I think that in some ways, I've heard you call yourself a translator, and I think that that's really what all of that experience gives you as a superpower is the ability to kind of talk between these kinds of spaces in the rest of the world. TIM WU Well, I guess you could say that. I've always been inspired by Wilhelm Humboldt, who had this theory that in order to have a full life, you had to try to do a lot of different stuff. So somehow that factors into it somewhere. CINDY COHN That's wonderful. We want to talk about a lot of things in this conversation, but I kind of wanted to start off with the central story of the podcast, which is, what does the world look like if we get this right? You know, you and I have spent a lot of years talking about all the problems, trying to lift up obstacles and get rid of obstacles. But if we reach this end state where we get a lot of these problems right, in Tim Wu's world, what, what does it look like? Like, what does your day look like? What do people's experience of technology look like? TIM WU I think it looks like a world in which economic power surrounding the internet and surrounding the platforms is very much more distributed. And, you know, what that means practically is it means a lot of people are able to make a good living, I guess, based on being a small producer or having a service based skill in a way that feels sustainable and where the sort of riches of the Internet are more broadly shared. 
So that's less about what kind of things you click on or, you know, what kind of apps you use and more about, I guess, the economic structure surrounding the Internet, which I think, you know, um, I don't think I'm the only person who thinks this, you know, the structure could be fairer and could work for more people. It does feel like the potential and, you know, we've all lived through that potential starting in the 90s of this kind of economically liberating force that would be the basis for a lot of people to make a decent living has seemed to turn into something more where a lot of money aggregates in a few places. CINDY COHN Yeah, I remember, people still talk about the long tail, right, as a way in which the digitization of materials created a revenue stream that's more than just, you know, the flavor of the week that a movie studio or a book publisher might want us to pay attention to on kind of the cultural side, right? That there was space for this. And that also makes me think of a conversation we just had with the folks in the right to repair movement talking about like their world includes a place where there's mom and pop shops that will help you fix your devices all over the place. Like this is another way in which we have centralized economic power. We've centralized power and if we decentralize this or, or, or spread it more broadly, uh, we're going to create a lot of jobs and opportunities for people, not just as users of technology, but as the people who help build and offer it to us. TIM WU I'm writing a new book, um, working title, Platform Capitalism, that has caused me to go back and look at the, you know, the early promise of the internet. And I went back and I was struck by a book, some of you may remember, called "An Army of Davids," by Glenn Reynolds the Instapundit. Yeah, and he wrote a book and he said, you know, the future of the American economy is going to be all these kind of mom and pop sellers who, who take over everything – he wrote this about 2006 – and he says, you know, bloggers are already competing with news operations, small sellers on eBay are already competing with retail stores, and so on, journalists, so on down the line that, uh, you know, the age of the big, centralized Goliath is over and the little guys are going to rule the future. Kind of dovetailed, I went back and read Yochai Benkler's early work about a production commons model and how, you know, there'll be a new node of production. Those books have not aged all that well. In fact, I think the book that wins is Blitzscaling. That somewhere along the line, instead of the internet favoring small business, small production, things went in the exact opposite direction. And when I think about Yochai Benkler's idea of sort of production-based commons, you know, Waze was like that, the mapping program, until one day Waze was just bought by Google. So, I was just thinking about those as I was writing that chapter of the book. CINDY COHN Yeah, I think that's right. I think that identifying and, and you've done a lot of work on this, identify the way in which we started with this promise and we ended up in this other place can help us figure out, and Cory Doctorow, our colleague and friend has been doing a lot of work on this with choke point capitalism and other work that he's done for EFF and elsewhere. And I also agree with him that, like, we don't really want to create the good old days. We want to create the good new days, right? 
Like, we want to experience the benefits of an Internet post-1990s, but also have those, those riches decentralized or shared a little more broadly, or a lot more broadly, honestly. TIM WU Yeah, I think that's right, and so I think part of what I'm saying, you know, what would fix the internet, or what would make it something that people feel excited about. You know, I think people are always excited about apps and videos, but also people are excited about their livelihood and making money. And if we can figure out the kind of structure that makes capitalism more distributed surrounding platforms, you know, it's not abandoning the idea of you have to have a good site or a product or something to, to gain customers. It's not a total surrender of that idea, but a return to that idea working for more people. CINDY COHN I mean, one of the things that you taught me in the early days is how kind of 'twas ever so, right? If you think about radio or broadcast medium or other previous mediums, they kind of started out with this promise of a broader impact and broader empowerment and, and didn't end up that way as much as well. And I know that's something you've thought about a lot. TIM WU Yeah, the first book I wrote by myself, The Master Switch, had that theme and at the time when I wrote it, um, I wrote a lot of it in the, '09, '08, '07 kind of period, and I think at that point I had more optimism that the internet could hold out, that it wouldn't be subject to the sort of monopolizing tendencies that had taken over the radio, which originally was thousands of radio stations, or the telephone system – which started as this 'go west young man and start your own telephone company' kind of technology – the film industry, and many others. I was firmly of the view that things would be different. Um, I think I thought that, uh, because of the TCP/IP protocol, because of the platforms like HTML that were, you know, the center of the web, because of net neutrality, lasting influence. But frankly, I was wrong. I was wrong, at least when I was writing the book. JASON KELLEY As you've been talking about the sort of almost inevitable funneling of the power that these technologies have into a single or, or a few small platforms or companies, I wonder what you think about newer ideas around decentralization that have sort of started over the last few years, in particular with platforms like Mastodon or something like that, these kinds of APIs or protocols, not platforms, that idea. Do you see any promise in that sort of thing? Because we see some, but I'm wondering what you think. TIM WU I do see some promise. I think that in some ways, it's a long overdue effort. I mean, it's not the first. I can't say it's the first. Um, and part of me wishes that we had been, you know, the idealistic people. Even the idealistic people at some of these companies, such as they were, had been a bit more careful about their design in the first place. You know, I guess what I would hope … the problem with Mastodon and some of these is they're trying to compete with entities that already are operating with all the full benefits of scale and which are already tied to sort of a Delaware private corporate model. Uh, now this is a little bit, I'm not saying that hindsight is 20/20, but when I think about the major platforms and entities of the early 21st century, it's really only Wikipedia that got it right in my view by structurally insulating themselves from certain forces and temptations.
So I guess what I'm trying to say is that, uh, part of me wishes we'd done more of this earlier. I do think there's hope in them. I think it's very challenging in current economics to succeed. And sometimes you'd have to wonder if you go in a different, you know, that it might be, I don't want to say impossible, very challenging when you're competing with existing structures. And if you're starting something new, you should start it right. That said, AI started in a way structurally different and we've seen how that's gone recently. CINDY COHN Oh, say more, say more! JASON KELLEY Yeah. Yeah. Keep, keep talking about AI. CINDY COHN I'm very curious about your thinking about that. TIM WU Well, you know, I said that, The Holy Roman Empire was neither holy, nor Roman, nor an empire. And OpenAI is now no longer open, nor non-profit, nor anything else. You know, it's kind of, uh, been extraordinary that the circuit breakers they tried to install have just been blown straight through. Um, and I think there's been a lot of negative coverage of the board. Um, because, you know, the business press is kind of narrow on these topics. But, um, you know, OpenAI, I guess, at some point, tried to structure itself more carefully and, um, and, uh, you know, now the board is run by people whose main experience has been, um, uh, taking good organizations and making them worse, like Quora, so, yeah, I, I, that is not exactly an inspiring story, uh, I guess of OpenAI in the sense of it's trying to structure itself a little differently and, and it, uh, failing to hold. CINDY COHN I mean, I think Mozilla has managed to have a structure that has a, you know, kind of complicated for profit/not-for-profit strategy that has worked a little better, but II hear you. I think that if you do a power analysis, right, you know, a nonprofit is going to have a very hard time up against all the money in the world. And I think that that seems to be what happened for OpenAI. Uh, once all the money in the world showed up, it was pretty hard to, uh, actually impossible for the public interest nonprofit side to hold sway. TIM WU When I think about it over and over, I think engineers and the people who set up these, uh, structures have been repeatedly very naive about, um, the power of their own good intentions. And I agree. Mozilla is a good example. Wikipedia is a good example. Google, I remember when they IPO'd, they had some set up, and they said, ‘We're not going to be an ordinary company,’ or something like that. And they sort of had preferred stock for some of the owners. You know, Google is still in some ways an impressive company, but it's hard to differentiate them from any other slightly money grubbing, non-innovative colossus, um, of the kind they were determined not to become. And, you know, there was this like, well, it's not going to be us, because we're different. You know, we're young and idealistic, and why would we want to become, I don't know, like Xerox or IBM, but like all of us, you begin by saying, I'm never going to become like my parents, and then next thing you know, you're yelling at your kids or whatever. CINDY COHN Yeah, it's, it's the, you know, meet the new boss the same as the old boss, right? When we, what we were hoping was that we would be free of some of the old bosses and have a different way to approach, but, but the forces are pretty powerful that stick people back in line, I think. TIM WU And some of the old structures, you know, look a little better. 
Like, I'm not going to say newspapers are perfect, but a structure like the New York Times structure, for example, basically is better than Google's. And I just think there was this sense that, Well, we can solve that problem with code and good vibes. And that turned out to be the great mistake. CINDY COHN One of the conversations that you and I have had over the years is kind of the role of regulation on, on the internet. I think the fight about whether to regulate or not to regulate the Internet was always a little beside the point. The question is how. And I'm wondering what you're thinking now. You've been in the government a couple times. You've tried to push some things that were pretty regulatory. How are you thinking now about something like a centralized regulatory agency or another approach to, you know, regulating the Internet? TIM WU Yeah, I, you know, I continue to have mixed feelings about something like the central internet commission, mostly for some of the reasons you said, but on the other hand, sometimes, if I want to achieve what I mentioned, which is the idea of platforms that are an input into a lot of people being able to operate on top of them and run businesses-like, you know, at times, the roads have been, or the electric system, or the phone network, um, it's hard to get away from the idea of having some hard rules, sometimes I think my sort of platonic form of, of government regulation or rules was the 1956 AT&T consent decree, which, for those who are not as deep in those weeds as I am, told AT&T that it could do nothing but telecom, and therefore not do computing and also force them to license every single one of their patents for free. And the impact of that was more than one -  one is because they were out of computing. They were not able to dominate it and you had companies then new to computing like IBM and others that got into that space and developed the American computing industry completely separate from AT&T. And you also ended up, semiconductor companies start that time with the transistor patent and other patents they used for free. So you know, I don't know exactly how you achieve that, but I'm drawn to basically keeping the main platforms in their lane. I would like there to be more competition. The antitrust side of me would love it. And I think that in some areas we are starting to have it, like in social media, for better or for worse. But maybe for some of the more basic fundamentals, online markets and, you know, as much competition as we can get – but some rule to stay out of other businesses, some rule to stop eating the ecosystem. I do think we need some kind of structural separation rules. Who runs those is a little bit of a harder question. CINDY COHN Yeah, we're not opposed to structural separation at EFF. I think we, we think a lot more about interoperability to start with as a way to, you know, help people have other choices, but we haven't been opposed to structural separation, and I think there are situations in which it might make a lot of good sense, especially, you know, in the context of mergers, right? Where the company has actually swallowed another company that did another thing. That's, kind of the low hanging fruit, and EFF has participated a lot in commenting on potential mergers. TIM WU I'm not opposed the idea of pushing interoperability. I think that it's based on the experience of the last 100 years. It is a tricky thing to get right. I'm not saying it's impossible. 
We do have examples: Phone network, in the early 20th century, and interconnection was relatively successful. And right now, you know, when you change between, let's say, T-Mobile and Verizon, there's only three left, but you get to take your phone number with you, which is a form of interoperability. But it has the risk of being something you put a lot of effort into and it not necessarily working that well in terms of actually stimulating competition, particularly because of the problem of sabotage, as we saw in the ‘96 Act. So it's actually not about the theory, it's about the practice, the legal engineering of it. Can you find the right thing where you've got kind of a cut point where you could have a good interoperability scheme? JASON KELLEY Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Tim Wu. I was intrigued by what he said about keeping platforms in their lane. I wanted to hear him speak more about how that relates to antitrust – is that spreading into other ecosystems what sets his antitrust alarm bells off? How does he think about that? TIM WU I guess the phrase I might use is quarantine, is you want to quarantine businesses, I guess, from others. And it's less of a traditional antitrust kind of remedy, although it, obviously, in the ‘56 consent decree, which was out of an antitrust suit against AT&T, it can be a remedy. And the basic idea of it is, it's explicitly distributional in its ideas. It wants more players in the ecosystem, in the economy. It's almost like an ecosystem promoting a device, which is you say, okay, you know, you are the unquestioned master of this particular area of commerce. Maybe we're talking about Amazon and it's online shopping and other forms of e-commerce, or Google and search. We're not going to give up on the hope of competition, but we think that in terms of having a more distributed economy where more people have their say, um, almost in the way that you might insulate the college students from the elementary school students or something. We're going to give other, you know, room for other people to develop their own industries in these side markets. Now, you know, there's resistance say, well, okay, but Google is going to do a better job in, uh, I don't know, shopping or something, you know, they might do a good job. They might not, but you know, they've got their returns and they're always going to be an advantage as a platform owner and also as a monopoly owner of having the ability to cross-subsidize and the ability to help themselves. So I think you get healthier ecosystems with quarantines. That's basically my instinct. And, you know, we do quarantines either legally or de facto all the time. As I said, the phone network has long been barred from being involved in a lot of businesses. Banking is kept out of a lot of businesses because of obvious problems of corruption. The electric network, I guess they could make toasters if they want, but it was never set up to allow them to dominate the appliance markets. And, you know, if they did dominate the appliance markets, I think it would be a much poorer world, a lot less interesting innovation, and frankly, a lot less wealth for everyone. 
So, yeah, I have strong feelings. It's more of my net neutrality side that drives this thinking than my antitrust side, I'll put it that way. JASON KELLEY You specifically worked in both the Obama and Biden administration sort of on these issues. I'm wondering if your thinking on this has changed. In experiencing those things from the sort of White House perspective and also just how different those two, sort of, experiences were, obviously the moments are different in time and everything like that, but they're not so far apart – maybe light years in terms of technology, but what was your sort of experience between those two, and how do you think we're doing now on this issue? TIM WU I want to go back to a slightly earlier time in government, not the Obama, actually it was the Obama administration, but my first job in the, okay, sorry, my third job in the federal government, uh, I guess I'm a, one of these recidivists or something, was at the Federal Trade Commission. CINDY COHN Oh yeah, I remember. TIM WU Taking the first hard look at big tech and, in fact, we were investigating Google for the first time for possible antitrust offenses, and we also did the first privacy remedy on Facebook, which I will concede was a complete and absolute failure of government, one of the weakest remedies, I think. We did that right before Cambridge Analytica. And obviously had no effect on Facebook's conduct at all. So, one of the failed remedies. I think that when I think back about that period, the main difference was that the tech platforms were different in a lot of ways. I believe that, uh, monopolies and big companies have, have a life cycle. And they were relatively early in that life cycle, maybe even in a golden age. A company like Amazon seemed to be making life possible for a lot of sellers. Google was still in its early phase and didn't have a huge number of verticals. Still had limited advertising. Most searches still didn't turn up that many ads. You know, they were in a different stage of their life. And they also still felt somewhat, they were still already big companies. They still felt relatively in some sense, vulnerable to even more powerful economic forces. So they hadn't sort of reached that maturity. You know, 10 years later, I think the life cycle has turned. I think companies have largely abandoned innovation in their core products and turned to defense and trying to improve – most of their innovations are attempting to raise more revenue as opposed to making the product better. Uh, kind of reminds me of the airline industry, which stopped innovating somewhere in the seventies and started trying to innovate in, um, terms of price structures and seats being smaller, that kind of thing. You know, there's, you reach this end point, I think the airlines are the end point where you take a high tech industry at one point and just completely give up on anything other than trying to innovate in terms of your pricing models. CINDY COHN Yeah, I mean, I, you know, our, our, we, Cory keeps coming up, but of course Cory calls it the "enshittification" of, uh, of services, and I think that, in typical Cory fashion, captures this stage of the process. TIM WU Yeah, just to speak more broadly.
I, you know, I think there's a lot of faith and belief that a company like Google, you know, in its heart meant well, and I do still think the people working there mean well, but I feel that, you know, the structure they set up, which requires showing increasing revenue and profit every quarter, began to catch up with it much more, and we're at a much later stage of the process. CINDY COHN Yep. TIM WU Or the life cycle, I guess I'd put it. CINDY COHN And then for you, kind of coming in as a government actor on this, like, what did that mean in terms of, like, was it, I'm assuming, I kind of want to finish the sentence for you. And that, you know, that meant it was harder to get them to do the right thing. It meant that their defenses were better against trying to do the right thing. Like how did that impact the governmental interventions that you were trying to help make happen? TIM WU I think it was both. I think there was both, in terms of government action, a sense that the record was very different. The Google story in 2012 is very different than in 2023. And the main difference is in 2023 Google is paying out $26.3 billion a year to other companies to keep its search engine where it is, and arguably to split the market with Apple. You know, there wasn't that kind of record back in 2012. Maybe we still should have acted, but there wasn't that much money being so obviously spent on pure defensive monopoly. But also people were less willing. They thought the companies were great. They overall, I mean, there's a broader ideological change that people still felt, many people from the Clinton administration felt the government was the problem. Private industry was the solution. Had kind of a sort of magical thinking about the ability of this industry to be different in some fundamental way. So the chair of the FTC wasn't willing to pull the trigger. The economists all said it was a terrible idea. You know, they failed to block over a thousand mergers that big tech did during that period, and it's, I think, very low odds that none of those thousands were anti-competitive or, in the aggregate, that maybe, you know, that was a way of building up market power. Um, it did enrich a lot of small company people, but I, I think people at companies like Waze really regret selling out and, you know, end up not really building anything of their own but becoming a tiny outpost of the Google empire.
I mean, look, being acquired, you can obviously become a very wealthy person, but you don't become a person of significance. You can go fund a charity or something, but you haven't really done something with your life. CINDY COHN I'm going to flip it around again. And so we get to the place where the Tim Wu vision that the power is spread more broadly. We've got lots of little businesses all around. We've got many choices for consumers. What else, what else do you see in this world? Like what role does the advertising business model play in this kind of a better future. That's just one example there of many, that we could give. TIM WU Yeah, no, I like your vision of a different future. I think, uh, just like focus on it goes back to the sense of opportunity and, you know, you could have a life where you run a small business that's on the internet that is a respectable business and you're neither a billionaire nor you're impoverished, but you know, you just had to have your own business the way people have, like, in New York or used to run like stores and in other parts of the country, and in that world, I mean, in my ideal world, there is advertising, but advertising is primarily informational, if that makes sense. It provides useful information. And it's a long way to go between here and there, but where, um, you know, it's not the default business model for informational sources such that it, it has much less corrupting effects. Um, you know, I think that advertising obviously everyone's business model is going to affect them, but advertising has some of the more, corrupting business models around. So, in my ideal world, we would not, it's not that advertising will go away, people want information, but we'd strike a better bargain. Exactly how you do that. I guess more competition helps, you know, lower advertising, um, sites you might frequent, better privacy protecting sites, but, you know, also passing privacy legislation might help too. CINDY COHN I think that’s right, I think EFF has taken a position that we think we should ban behavioral ads. That's a pretty strong position for us and not what we normally do, um, to, to say, well, we need to ban something. But also that we need, of course, comprehensive privacy law, which is, you know, kind of underlines so many of the harms that we're seeing online right now is this, this lack of a baseline privacy protection. I don't know if you see it the same way, but it's certainly it seems to be the through line for a lot of harms that are coming up as things people are concerned about. Yeah. TIM WU I mean, absolutely, and I, you know, don't want to give EFF advice on their views, but I would say that I think it's wise to see the totally unregulated collection of data from, you know, millions, if not billions of people as a source of so many of the problems that we have. It drives unhealthy business models, it leads to real-world consequences, in terms of identity theft and, and so many others, but I think I, I'd focus first on what, yeah, the kind of behavior that encourages the kind of business model is encourages, which are ones that just don't in the aggregate, feel very good for the businesses or for, for us in particular. So yeah, my first priority legislatively, I think if I were acting at this moment would be starting right there with, um, a privacy law that is not just something that gives supposed user rights to take a look at the data that's collected, but that meaningfully stops the collection of data. 
And I think we'll all just shrug our shoulders and say, oh, we're better off without that. Yes, it supported some, but we will still have some of the things – it's not as if we didn't have friends before Facebook. It's not as if we didn't have video content before YouTube, you know, these things will survive with less without behavioral advertising. I think your stance on this is entirely, uh, correct. CINDY COHN Great. Thank you, I always love it when Tim agrees with me and you know, it pains me when we disagree, but one of the things I know is that you are one of the people who was inspired by Larry Lessig and we cite Larry a lot on the show because we like to think about things or organize them in terms of the four levels of, um, You know, digital regulation, you know, laws, norms, markets, and code as four ways that we could control things online. And I know you've been focusing a lot on laws lately and markets as well. How do you think about, you know, these four levers and where we are and, and how we should be deploying them? TIM WU Good question. I regard Larry as a prophet. He was my mentor in law school, and in fact, he is responsible for most of my life direction. Larry saw that there was a force arising through code that already was somewhat, in that time, 90s, early 2000s, not particularly subject to any kind of accountability, and he saw that it could take forms that might not be consistent with the kind of liberties you would like to have or expect and he was right about that. You know, you can say whatever you want about law or government and there are many examples of terrible government, but at least the United States Constitution we think well, there is this problem called tyranny and we need to do something about it. There's no real equivalent for the development of abusive technologies unless you get government to do something about it and government hasn't done much about it. You know, I think the interactions are what interests me about the four forces. So if we agree that code has a certain kind of sovereignty over our lives in many ways and most of us on a day-to-day basis are probably more affected by the code of the devices we use than by the laws we operate under. And the question is, what controls code? And the two main contenders are the market and law. And right now the winner by far is just the market, which has led codemakers in directions that even they find kind of unfortunate and disgraceful. I don't remember who had that quote, but it was some Facebook engineer that said the greatest minds of our generation are writing code to try to have people click on random ads, and we have sort of wasted a generation of talent on meaningless revenue generation when they could be building things that make people's lives better. So, you know, the answer is not easy is to use law to counter the market. And that's where I think we are with Larry's four factors. 
CINDY COHN Yeah, I think that that's right, and I agree that it's a little ro-sham-bo, right, that you can control code with laws and, and markets and you can control markets with code, which is kind of where interoperability comes in sometimes and laws and you know, norms play a role in kind of a slightly different whammy role in all of these things, but I do think that those interactions are really important and we've, again, I've always thought it was a somewhat phony conversation about, you know, "to regulate or not to regulate, that is the question" because that's not actually particularly useful in terms of thinking about things because we were embedded in a set of laws. It's just the ones we pay attention to and the ones that we might not notice, but I do think we're in a time when we have to think a lot harder about how to make laws that will be flexible enough to empower people and empower competition and not lock in the winners of today's markets. And we spend a lot of time thinking about that issue. TIM WU Well, let me say this much. This might sound a little contradictory in my life story, but I'm not actually a fan of big government, certainly not overly prescriptive government. Having been in government, I see government's limits, and they are real. But I do think the people together are powerful. I think laws can be powerful, but what they most usefully do is balance out the market. You know what I'm saying? And create different incentives or different forces against it. I think trying to have government decide exactly how tech should run is usually a terrible idea. But to cut off incentives – you talked about behavioral advertising. So let's say you ban behavioral advertising just the way we ban child labor or something. You know, you can live without it. And, yeah, maybe we're less productive because we don't let 12 year olds work in factories. There's a marginal loss of revenue, but I frankly think it's worth it. And, you know, and some of the other practices that have shown up are in some ways the equivalent. And we can live without them. And that's the, you know, it's sort of easy to say. we should ban child labor. But when you look for those kind of practices, that's where we need law to be active. JASON KELLEY Well, Cindy, I came away from that with a reading list. I'm sure a lot of people are familiar with those authors and those books, but I am going to have to catch up. I think we'll put some of them, maybe all the books, in the, in the show notes so that people who are wondering can, can catch up on their end. You, as someone who's already read all those books, probably have different takeaways from this conversation than me. CINDY COHN You know what I really, I really like how Tim thinks he's, you know, he comes out of this, especially most recently from an economics perspective. So his future is really an economics one. It's about an internet that has lots of spaces for people to make a reasonable living as opposed to the few people make a killing, or sell their companies to the big tech giants. And I think that that vision dovetails a lot with a lot of the people that we've talked. to on this show that, you know, in some ways we've got to think about how do we redistribute the internet and that includes redistributing the economic benefits. JASON KELLEY Yeah. 
And thinking about, you know, something you've said many times, which is this idea of rather than going backwards to the internet we used to have, or the world we used to have, we're really trying to build a better world with the one we do have. So another thing he did mention that I really pulled away from this conversation was when antitrust makes sense. And that sort of idea of, well, what do you do when companies start spreading into other ecosystems? That's when you really have to start thinking about the problems that they're creating for competition. And I think the word he used was quarantine. Is that right? CINDY COHN Yeah I love that image. JASON KELLEY Yeah, that was just a helpful, I think, way for people to think about how antitrust can work. And that was something that I'll take away from this probably forever. CINDY COHN Yeah, I also liked his vision of what kind of deal we have with a lot of these free tools or AKA free tools, which is, you know, at one time when we signed up for, you know, a Gmail account, it's, you know, the, the deal was that it was going to look at what you searched on and what you wrote and then place you ads based on the context and what you did. And now that deal is much, much worse. And I think he, he's right to likening that to something that, you know, has secretly gotten much more expensive for us, that the deal for us as consumers has gotten worse and worse. And I really like that framing because again, it kind of translates out from the issues that where we live, which is, you know, privacy and free speech and fairness and turns it into something that is actually kind of an economic framing of some of the same points. I think that the kind of upshot of Tim and, and honestly, some of the other people we've talked to is this idea of ‘blitzscaling’, um, and growing gigantic platforms is really at the heart of a lot of the problems that we're seeing in free speech and in privacy and also in economic fairness. And I think that's a point that Tim makes very well. I think that from, you know, The Attention Merchants, The Curse of Bigness, Tim has been writing in this space for a while, and he, what I appreciate is Tim is really a person, um, who came up in the Internet, he understands the Internet, he understands a lot of the values, and so he's, he's not writing as an outsider throwing rocks as much as an insider who is kind of dismayed at how things have gone and looking to try to unpack all of the problems. And I think his observation, which is shared by a lot of people, is that a lot of the problems that we're seeing inside tech are also problems we're seeing outside tech. It's just that tech is new enough that they really took over pretty fast. But I think that it's important for us to both recognize the problems inside tech and it doesn't let tech off the hook. To note that these are broader societal problems, but it may help us in thinking about how we get out of them. JASON KELLEY Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we'd love to hear from you. Visit EFF. org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch and just see what's happening in digital rights this week and every week. 
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob, and Warm Vacuum Tube by Admiral Bob featuring starfrosch. You can find links to their music in our episode notes, or on our website at eff.org/podcast. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll talk to you again soon. I’m Jason Kelley. CINDY COHN And I’m Cindy Cohn.

"Infrastructures of Control": Q&A with the Geographers Behind University of Arizona's Border Surveillance Photo Exhibition (Fri, 05 Apr 2024)
Guided by EFF's map of Customs & Border Protection surveillance towers, University of Arizona geographers Colter Thomas and Dugan Meyer have been methodically traversing the U.S.-Mexico border and photographing the infrastructure that comprises the so-called "virtual wall." Anduril Sentry tower beside the Rio Grande River. Photo by Colter Thomas (CC BY-NC-ND 4.0) From April 12-26, their outdoor exhibition "Infrastructures of Control" will be on display on the University of Arizona campus in Tucson, featuring more than 30 photographs of surveillance technology, a replica surveillance tower, and a blown-up map based on EFF's data. Locals can join the researchers and EFF staff for an opening night tour at 5pm on April 12, followed by an EFF Speakeasy/Meetup. There will also be a panel discussion at 5pm on April 19, moderated by journalist Yael Grauer, co-author of EFF's Street-Level Surveillance hub. It will feature a variety of experts on the border, including Isaac Esposto (No More Deaths), Dora Rodriguez (Salvavision), Pedro De Velasco (Kino Border Initiative), Todd Miller (The Border Chronicle), and Daniel Torres (Daniel Torres Reports). In the meantime, we chatted with Colter and Dugan about what their project means to them. MAASS: Tell us what you hope people will take away from this project? MEYER: We think of our work as a way for us to contribute to a broader movement for border justice that has been alive and well in the U.S.-Mexico borderlands for decades. Using photography, mapping, and other forms of research, we are trying to make the constantly expanding infrastructure of U.S. border policing and surveillance more visible to public audiences everywhere. Our hope is that doing so will prompt more expansive and critical discussions about the extent to which these infrastructures are reshaping the social and environmental landscapes throughout this region and beyond. THOMAS: The diversity of landscapes that make up the borderlands can make it hard to see how these parts fit together, but the common thread of surveillance is an ominous sign for the future, and we hope that the work we make can encourage people from different places and experiences to find common cause for looking critically at these infrastructures and what they mean for the future of the borderlands. An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0) MAASS: So much is written about border surveillance by researchers working off documents, without seeing these towers firsthand. How did your real-world exploration affect your understanding of border technology? THOMAS: Personally I'm left with more questions than answers when doing this fieldwork. We have driven along the border from the Gulf of Mexico to the Pacific, and it is surprising just how much variation there is within this broad system of U.S. border security. It can sometimes seem like there isn't just one border at all, but instead a patchwork of infrastructural parts—technologies, architecture, policy, etc.—that only looks cohesive from a distance. An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0) MAASS: That makes me think of Trevor Paglen, an artist known for his work documenting surveillance programs. He often talks about the invisibility of surveillance technology. Is that also what you encountered? MEYER: The scale and scope of U.S. 
border policing is dizzying, and much of how this system functions is hidden from view. But we think many viewers of this exhibition might be surprised—as we were when we started doing this work—just how much of this infrastructure is hidden in plain sight, integrated into daily life in communities of all kinds. This is one of the classic characteristics of infrastructure: when it is working as intended, it often seems to recede into the background of life, taken for granted as though it always existed and couldn’t be otherwise. But these systems, from surveillance programs to the border itself, require tremendous amounts of labor and resources to function, and when you look closely, it is much easier to see the waste and brutality that are their real legacy. As Colter and I do this kind of looking, I often think about a line from the late David Graeber, who wrote that “the ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently.” THOMAS: Like Dugan said, infrastructure rarely draws direct attention. As artists and researchers, then, our challenge has been to find a way to disrupt this banality visually, to literally reframe the material landscapes of surveillance in ways that sort of pull this infrastructure back into focus. We aren’t trying to make this infrastructure beautiful, but we are trying to present it in a way that people will look at it more closely. I think this is also what makes Paglen’s work so powerful—it aims for something more than simply documenting or archiving a subject that has thus far escaped scrutiny. Like Paglen, we are trying to present our audiences with images that demand attention, and to contextualize those images in ways that open up opportunities and spaces for viewers to act collectively with their attention. For us, this means collaborating with a range of other people and organizations—like the EFF—to invite viewers into critical conversations that are already happening about what these technologies and infrastructures mean for ourselves and our neighbors, wherever they are coming from.

Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog (Fri, 05 Apr 2024)
This post was co-written by EFF legal intern Melda Gurakar. Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled. The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so. Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. These lawsuits chill speech because they burden speakers who engaged in protected First Amendment activity with the financial costs and stress of having to fight litigation, rather than seeking to vindicate legitimate legal claims. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court saw that X’s lawsuit was a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California’s anti-SLAPP statute. The lawsuit, filed in July 2023, accused the CCDH of unlawfully accessing and scraping data from its platform, which X argued CCDH used in order to harm X Corp.'s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X argued that CCDH had initiated this calculated “scare campaign” aimed at deterring advertisers from engaging with the platform, supposedly resulting in a significant financial loss for X. Moreover, X claimed that the CCDH breached its Terms of Service contract as a user of X. The court ruled that X’s accusations were insufficient to bypass the protective shield of California's anti-SLAPP statute. Furthermore, the court's decision to dismiss X Corp.'s claims, including those related to breach of contract and alleged infringements of the Computer Fraud and Abuse Act, stemmed from X Corp.'s inability to convincingly allege or demonstrate significant losses attributable to CCDH's activities. This outcome is not only a triumph for CCDH, but also validates the anti-SLAPP statute's role in safeguarding critical research efforts against baseless legal challenges. Thankfully, the court also rejected X’s claim under the federal Computer Fraud and Abuse Act (CFAA). X had argued that the CFAA barred CCDH’s scraping of public tweets—an erroneous reading of the law. The court found that regardless of that argument, X had not shown a “loss” of the type protected by the CFAA, such as technological harms to data or computers. EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.'s lawsuit mischaracterized a nonviable defamation claim as a breach of contract to retaliate against CCDH. The brief supported CCDH's motion to dismiss, arguing that the terms of service provision invoked against CCDH's data scraping should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information. 
The ramifications of X Corp v. CCDH reach far beyond this decision. X Corp v. CCDH affirms the Center for Countering Digital Hate's freedom to conduct and publish research that critiques X Corp., and sets precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions. EFF and ACLU Amicus Brief in Support of CCDH  

The White House is Wrong: Section 702 Needs Drastic Change (Thu, 04 Apr 2024)
With Section 702 of the Foreign Intelligence Surveillance Act set to expire later this month, the White House recently released a memo objecting to the SAFE Act—legislation introduced by Senators Dick Durbin and Mike Lee that would reauthorize Section 702 with some reforms. The White House is wrong. SAFE is a bipartisan bill that may be our most realistic chance of reforming a dangerous NSA mass surveillance program that even the federal government’s privacy watchdog and the White House itself have acknowledged needs reform. As we’ve written, the SAFE Act does not go nearly far enough in protecting us from the warrantless surveillance the government now conducts under Section 702. But, with surveillance hawks in the government pushing for a reauthorization of their favorite national security law without any meaningful reforms, the SAFE Act might be privacy and civil liberties advocates’ best hope for imposing some checks upon Section 702. Section 702 is a serious threat to the privacy of those in the United States. It authorizes the collection of overseas communications for national security purposes, and, in a globalized world, this allows the government to collect a massive amount of Americans’ communications. As Section 702 is currently written, intelligence agencies and domestic law enforcement have backdoor, warrantless access to millions of communications from people with clear constitutional rights. The White House objects to the SAFE Act’s two major reforms. The first requires the government to obtain court approval before accessing the content of communications of people in the United States that have been hoovered up and stored in Section 702 databases—just like police have to do to read your letters or emails. The SAFE Act’s second reform closes the “data broker loophole” by largely prohibiting the government from purchasing personal data they would otherwise need a warrant to collect. While the White House memo is just the latest attempt to scare lawmakers into reauthorizing Section 702, it omits important context and distorts the key SAFE Act amendments’ effects. The government has repeatedly abused Section 702 by searching its databases for Americans’ communications. Every time, the government claims it has learned from its mistakes and won’t repeat them, only for another abuse to come to light years later. The government asks you to trust it with the enormously powerful surveillance tool that is Section 702—but it has proven unworthy of that trust. The Government Should Get Judicial Approval Before Accessing Americans’ Communications Requiring the government to obtain judicial approval before it can access the communications of Americans and those in the United States is a necessary, minimum protection against Section 702’s warrantless surveillance. Because Section 702 does not require safeguards of particularity and probable cause when the government initially collects communications, it is essential to require the government to at least convince a judge that there is a justification before the “separate Fourth Amendment event” of the government accessing the communications of Americans it has collected. The White House’s memo claims that the government shouldn’t need to get court approval to access communications of Americans that were “lawfully obtained” under Section 702. But this ignores the fundamental differences between Section 702 and other surveillance. 
Intelligence agencies and law enforcement don’t get to play “finders keepers” with our communications just because they have a pre-existing program that warrantlessly vacuums them all up. The SAFE Act has exceptions from its general requirement of court approval for emergencies, consent, and—for malicious software—“defensive cybersecurity queries.” While the White House memo claims these are “dangerously narrow,” exigency and consent are longstanding, well-developed exceptions to the Fourth Amendment’s warrant requirement. And the SAFE Act gives the government even more leeway than the Fourth Amendment ordinarily does in also excluding “defensive cybersecurity queries” from its requirement of judicial approval. The Government Shouldn’t Be Able to Buy What It Would Otherwise Need a Warrant to Collect The SAFE Act properly imposes broad restrictions upon the government’s ability to purchase data—because way too much of our data is available for the government to purchase. Both the FBI and NSA have acknowledged knowingly buying data on Americans. As we’ve written many times, the commercially available information that the government purchases can be very revealing about our most intimate, private communications and associations. The Director of National Intelligence’s own report on government purchases of commercially available information recognizes this data can be “misused to pry into private lives, ruin reputations, and cause emotional distress and threaten the safety of individuals.” This report also recognizes that this data can “disclose, for example, the detailed movements and associations of individuals and groups, revealing political, religious, travel, and speech activities.” The SAFE Act would go a significant way towards closing the “data broker loophole” that the government has been exploiting. Contrary to the White House’s argument that Section 702 reauthorization is “not the vehicle” for protecting Americans’ data privacy, closing the “data broker loophole” goes hand-in-hand with putting crucial guardrails upon Section 702 surveillance: the necessary reform of requiring court approval for government access to Americans’ communications is undermined if the government is able to warrantlessly collect revealing information about Americans some other way. The White House further objects that the SAFE Act does not address data purchases by other countries and nongovernmental entities, but this misses the point. The best way Congress can protect Americans’ data privacy from these entities and others is to pass comprehensive data privacy regulation. But, in the context of Section 702 reauthorization, the government is effectively asking for special surveillance permissions for itself: that its surveillance continue to be subject to minimal oversight while other countries’ surveillance practices are regulated. (This has been a pattern as of late.) The Fourth Amendment prohibits intelligence agencies and law enforcement from giving themselves the prerogative to invade our privacy.  

In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists (Wed, 03 Apr 2024)
In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state government harassed, surveilled, and persecuted members of a lawyers' group that defends human rights defenders, activists, and indigenous people, putting the attorneys' lives at risk. The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization defending victims of political persecution and community activists for over 40 years. The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the Court said. It noted the particular impact experienced by women defenders and those who had to leave the country amid threats, attacks, and harassment for representing victims. The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. Colombia's intelligence network was behind abusive surveillance practices in violation of the American Convention and did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists. In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below. 
Prior Judicial Order for Communications Surveillance and Access to Data The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse from state authorities. Its ruling recalled European Court of Human Rights’ case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to 'freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights'.”  Building on its ruling in the case Escher et al. vs Brazil, the Inter-American Court stated that “[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added)  According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”   The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter. The court had already recognized in Escher that the American Convention protects not only the content of communications but also any related information like the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata compared to content. We hope the court's new ruling helps to repeal measures allowing state authorities to access metadata without a previous judicial order. Indeed, the court emphasized that the need for a prior judicial authorization "is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.”  To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure.  
Informational Self-Determination Recognized as an Autonomous Human Right In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. “In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.” The protections that Article 11 grants to human dignity and private life safeguard a person's autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases. The court has explained that the scope of this right includes several components. First, people have the right to know what data about them are contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it’s been kept, whether and why it’s being shared with outside parties, and how it’s being processed. Next is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data also in justified circumstances, and fifth is the right to data portability as regulated by law. According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Holding back requested information is always exceptional, limited in time, and justified according to specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said. The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files. 
The Vital Protection of the Right to Defend Human Rights The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights. For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities.  Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State.  Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices  In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago on the Necessary and Proportionate principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities.  In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights.  As we argued, that didn’t happen despite Colombia’s intelligence legal framework, rather it was enabled by its flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, lacking provisions for prior, well-founded, judicial authorization for specific surveillance measures, and robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records.  The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures.  
The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files.  These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to vehemently condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to rule it out. In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations.  The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.   When they do, violations must be fiercely investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court's findings to their intelligence activities. What’s certain is the dire need to fix a system that helped Colombia become the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such murders in Latin America. 

Speaking Freely: Emma Shapiro (Tue, 02 Apr 2024)
Emma Shapiro is an American artist, writer, and activist who is based in Valencia, Spain. She is the Editor-At-Large for the Don’t Delete Art campaign and the founder of the international art project and movement Exposure Therapy. Her work includes the use of video, collage, performance, and photography, while primarily using her own body and image. Through her use of layered video projection, self portraiture, and repeated encounters with her own image, Emma deconstructs and questions the meaning of our bodies, how we know them, and what they could be. Regular censorship of her artwork online and IRL has driven Emma to dedicate herself to advocacy for freedom of expression. Emma sat down with EFF’s Jillian York to discuss the need for greater protection of artistic expression across platforms, how the adult body is regulated in the digital world, the role of visual artists as defenders of cultural and digital rights, and more. York: What does free expression mean to you? Free expression, to me, as primarily an artist—I’ve now also become an arts writer and an advocate for artistry things including censorship and suppression online of those who make art —but, primarily, I’m an artist. So for me free expression is my own ability to make my work and see the artwork of others. That is what is, baseline, the most important thing to me. And whenever I encounter obstacles to those things is when I know I’m facing issues with free expression. Besides that, how free we are to express ourselves is kind of the barometer for what kind of society we’re living in. York: Can you tell me about an experience that shaped your views on freedom of expression? The first times I encountered suppression and erasure of my own art work, which is probably the most pivotal moment that I personally had that shaped my views around this in that I became indignant and that led me down a path of meeting other artists and other people who were expressing themselves and facing the exact same problem. Especially in the online space. The way it operates is, if you’re being censored, you’re being suppressed. You’re effectively not being seen. So unless you’re seeking out this conversation – and that’s usually because it’s happened to you – you’re easily not going to encounter this problem. You’re not going to be able to interact with the creators this is happening to. That was a completely ground-shifting and important experience for me when I first started experiencing this kind of suppression and erasure of my artwork online. I’ve always experienced misunderstanding of my work and I usually chalked that up to a puritan mindset or sexism in that I use my own body in my artwork. Even though I’m not dealing with sexual themes – I’m not even dealing with feminist themes – those topics are unavoidable as soon as you use a body. Especially a female-presenting body in your artwork. As soon as I started posting my artwork online that was when the experience of censorship became absolutely clear to me. York: Tell me about your project Exposure Therapy. We’ve both done a lot of work around how female-presenting bodies are allowed to exist on social media platforms. I would love to hear your take on this and what brought you to that project.  I’d be happy to talk about Exposure Therapy! Exposure Therapy came out of one of the first major instances of censorship that I experienced. Which was something that happened in real life. It happened at a WalMart in rural Virginia where I couldn’t get my work printed. 
They threatened me with the police and they destroyed my artwork in front of me. The reason they gave me was that it showed nipples. So I decided to put my nipples everywhere. Because I was like… this is so arbitrary. If I put my nipple on my car is my car now illicit and sexy or whatever you’re accusing me of? So that’s how Exposure Therapy started. It started as a physical offline intervention. And it was just my own body that I was using.  Then when I started an Instagram account to explore it a little further I, of course, faced online censorship of the female-presenting nipple. And so it became a more complex conversation after that. Because the online space and how we judge bodies online was a deep and confusing world. I ended up meeting a lot of other activists online who are dealing with the same topic and incorporating other bodies into the project. Out of that, I’ve grown nearly everything I’ve done since as having to do with online spaces, the censorship of bodies, and particularly censorship of female-presenting bodies. And it’s been an extremely rewarding experience. It’s been very interesting to monitor the temperature shifts over the last few years since I began the project, and to see how some things have remained the same. I mean, even when I go out and discuss the topic of censorship of the female-presenting nipple, the baseline understanding people often have is they think that female nipples are genitalia and they’re embarrassed by them. And that’s a lot of people in the world – even people who would attend a lecture of mine feel that way! York: If you were to be the CEO of a social media platform tomorrow how would you construct the rules when it comes to the human body?  When it comes to the adult human body. The adult consenting human body. I’m interested more in user choice in online spaces and social media platforms. I like the idea of me, as a user, going into a space with the ability to determine what I don’t want to see or what I do want to see. Instead of the space dictating what’s allowed to be on the space in the first place. And I also am interested in some models that I’ve seen where the artist or the person posting the content is labeling the images themselves, like self-tagging. And those tags end up creating their own sub-tags. And that is very interactive – it could be a much more interactive and user-experience based space rather than the way social media is operating right now which is completely dictated from the top down. There basically is no user choice now. There might be some toggles that you can say that you want to see or that you don’t want to see “sensitive content,” but they’re still the ones labeling what “sensitive content” is. I’m mostly interested in the user choice aspect. Lips social media run by Annie Brown I find to be a fascinating experiment. Something that she is proving is that there is a space that can be created that is LGBTQ and feminist-focused where it does put the user first. It puts the creator first. And there’s a sort of social contract that you’re a part of being in that space.  York: Let me ask you about the Don’t Delete Art Campaign. What prompted that and what’s been a success story from that campaign? I’m not a founding member of Don’t Delete Art. 
My fellow co-curators, Spencer Tunick and Savannah Spirit, were there in the very beginning when this was created with NCAC (National Coalition Against Censorship) and Freemuse and ARC (Artists at Risk Connection), and there were also some others involved at the beginning. But now it is those three organizations and three of us artists/ activists. Since its inception in 2020, I believe, or the end of 2019, we had seen a shift in the way that certain things were happening at Meta. We mostly are dealing with Meta platforms because they’re mostly image-based. Of course there are things that happen on other social media platforms, but visual artists are usually using these visual platforms. So most of our work has had to do with Meta platforms, previously Facebook platforms. And since the inception of Don’t Delete Art we actually have seen shifts in the way that they deal with the appeals processes and the way that there might be more nuance in how lens-based work is assessed. We can’t necessarily claim those as victories because no one has told us, “This is thanks to you, Don’t Delete Art, that we made this change!” Of course, they’re never going to do that. But we’re pretty confident that our input – our contact with them, the data that we gathered to give them – helps them hear a little more of our artistic perspectives and integrate that into their content moderation design. So that’s a win. For me personally, since I came on board – and I’m the Editor at Large of Don’t Delete Art – I have been very pleased with our interaction with artists and other groups including digital rights groups and free expression groups who really value what we do. And that we are able to collaborate with them, take part in events that they’re doing, and spread the message of Don’t Delete Art. And just let artists know that when this happens to them – this suppression or censorship – they’re not alone. Because it’s an extremely isolating situation. People feel ashamed. It’s hard to know you’re now inaugurated into a community when this happens to you. So I feel like that’s a win. The more I can educate my own community, the artist community, on this issue and advance the conversation and advance the cause. York: What would you say to someone who says nudity isn’t one of the most important topics in the discussion around content moderation? That is something that I encounter a lot. And basically it’s that there’s a lot of aspects to being an artist online—and then especially an artist who uses the body online—that faces suppression and censorship that people tend to think our concerns are frivolous. This also goes hand in hand with  the “free the nipple” movement and body equality. People tend to look upon those conversations—especially when they’re online—as being frivolous secondary concerns. And what I have to say to that is… my body is your body. If my body is not considered equal for any reason at all and not given the respect it deserves then no body is equal. It doesn’t matter what context it’s in. It doesn’t matter if I’m using my body or using the topic of female nipples or me as an artist. The fact that art using the body is so suppressed online means that there’s a whole set of artists who just aren’t being seen, who are unable to access the same kinds of tools as other artists who choose a different medium. And the medium that we choose to express ourselves with shouldn’t be subject to those kinds of restrictions. 
It shouldn’t be the case that artists have to change their entire practice just to get access to the same tools that other artists have. Which has happened.  Many artists, myself included, [and] Savannah Spirit, especially, speak to this: people have changed their entire practice or they don’t show entire bodies of work or they even stop creating because they’re facing suppression and censorship and even harassment online. And that extends to the offline space. If a gallery is showing an artist who faces censorship online, they would be less likely to include that artist’s work in their promotional material where they might have otherwise. Or, if they do host that artist’s work and the gallery faces suppression and censorship of their presence online because of that artist’s work, then in the future they might choose not to work with an artist who works with the body. Then we’re losing an entire field of art in which people are discussing body politics and identity and ancestry and everything that has to do with the body. I mean there’s a reason artists are working with the body. It’s important commentary, an important tool, and important visibility.  York: Is there anything else you’d like to share about your work that I haven’t asked you about?  I do want to have the opportunity to say that—and it relates to the way people might not take some artists seriously or take this issue seriously—and I think that extends to the digital rights conversation and artists. I think it’s a conversation that isn’t being had in art communities. But it’s something that affects visual artists completely. Visual artists aren’t necessarily—well, it’s hard to group us as a community because we don’t have unions for ourselves, it’s a pretty individualistic practice, obviously—but artists don’t tend to realize that they are cultural rights defenders. And that they need to step in and occupy their digital rights space. Digital rights conversations very rarely include the topic of visual art. For example, the Santa Clara Principles is a very important document that doesn’t mention visual art at all. And that’s a both sides problem. That artists don’t recognize the importance of digital art in their practice, and digital rights groups don’t realize that they should be inviting visual artists to the table. So in my work, especially in the writing I do for arts journals, I have very specifically focused on and tried to call this out. That artists need to step into the digital rights space and realize this is a conversation that needs to be had in our own community. York: That is a fantastic call to action to have in this interview, thank you. Now my final question- who, if anyone, is your free expression hero? I feel somewhat embarrassed by it because it comes from a very naive place, but when I was a young kid I saw Ragtime on Broadway and Emma Goldman became my icon as a very young child. And of course I was drawn to her probably because we have the same name! Just her character in the show, and then learning about her life, became very influential to me. I just loved the idea of a strong woman spending her life and her energy advocating for people, activating people, motivating people to fight for their rights and make sure the world is a more equal place. And that has always been a sort of model in my mind and it’s never really gone away. I feel like I backed into what I’m doing now and ended up being where I want to be. 
Because I, of course, pursued art and I didn’t anticipate that I would be encountering this issue. I didn’t anticipate that I’d become part of the Don’t Delete Campaign, I didn’t know any of that. I didn’t set out for that. I just always had Emma Goldman in the back of my mind as this strong female figure whose life was dedicated to free speech and equality. So that’s my biggest icon. But it also is one that I had as a very young kid who didn’t know much about the world.  York: Those are the icons that shape us! Thank you so much for this interview. 

Ola Bini Faces Ecuadorian Prosecutors Seeking to Overturn Acquittal of Cybercrime Charge (Mon, 01 Apr 2024)
Ola Bini, the software developer acquitted last year of cybercrime charges in a unanimous verdict in Ecuador, was back in court last week in Quito as prosecutors, using the same evidence that helped clear him, asked an appeals court to overturn the decision with bogus allegations of unauthorized access to a telecommunications system. Armed with a grainy image of a telnet session—which the lower court already ruled was not proof of criminal activity—and testimony of an expert witness to the lower court—who never had access to the devices and systems involved in the alleged intrusion—prosecutors presented the theory that, by connecting to a router, Bini made partial unauthorized access in an attempt to break into a system provided by Ecuador’s national telecommunications company (CNT) to the presidency's contingency center. If this all sounds familiar, that’s because it is. In an unfounded criminal case plagued by irregularities, delays, and due process violations, Ecuadorian prosecutors have for the last five years sought to prove Bini violated the law by allegedly accessing an information system without authorization. Bini, who resides in Ecuador, was arrested at the Quito airport in 2019 without being told why. He first learned about the charges from a TV news report depicting him as a criminal trying to destabilize the country. He spent 70 days in jail and still cannot leave Ecuador or use his bank accounts. Bini prevailed in a trial last year before a three-judge panel. The core evidence the Prosecutor’s Office and CNT’s lawyer presented to support the accusation of unauthorized access to a computer, telematic, or telecommunications system was a printed image of a telnet session allegedly taken from Bini’s mobile phone. The image shows the user requesting a telnet connection to an open server using their computer’s command line. The open server warns that unauthorized access is prohibited and asks for a username. No username is entered. The connection then times out and closes. Rather than demonstrating that Bini intruded into the Ecuadorian telephone network system, it shows the trail of someone who paid a visit to a publicly accessible server—and then politely obeyed the server's warnings about usage and access. Bini’s acquittal was a major victory for him and for the work of security researchers. By assessing the evidence presented, the court concluded that both the Prosecutor’s Office and CNT failed to demonstrate a crime had occurred. There was no evidence that unauthorized access had ever happened, nor anything to sustain the malicious intent that article 234 of Ecuador’s Penal Code requires to characterize the offense of unauthorized access. The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of cybercrime, since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge from, or cannot be verified through, digital forensics is not proper digital evidence. Prosecutors appealed the verdict and are back in court using the same image that didn’t prove any crime was committed. At the March 26 hearing, prosecutors said their expert witness’s analysis of the telnet image shows there was connectivity to the router. 
The witness compared it to entering the yard of someone’s property to see if the gate to the property is open or closed. Entering the yard is analogous to connecting to the router, the witness said. Actually, no. Our interpretation of the image, which was leaked to the media before Bini’s trial, is that it’s the internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and walking away. If this image could prove anything, it is that no unauthorized access happened. Yet no expert analysis was conducted on the systems allegedly affected. The expert witness’s testimony was based on his analysis of a CNT report—he didn’t have access to the CNT router to verify its configuration. He didn’t digitally validate whether what was shown in the report actually happened, and he was never asked to verify the existence of an IP address owned or managed by CNT. That’s not the only problem with the appeal proceedings. Deciding the appeal is a panel of three judges, two of whom ruled to keep Bini in detention after his arrest in 2019 because there were allegedly sufficient elements to establish a suspicion against him. The detention was later considered illegal and arbitrary because of a lack of such elements. Bini filed a lawsuit against the Ecuadorian state, including the two judges, for violating his rights. Bini’s defense team has sought to remove these two judges from the appeals case, but his requests were denied. The appeals court panel is expected to issue a final ruling in the coming days.  
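For readers who want to picture what the court was describing, here is a minimal sketch in Python of that kind of interaction: opening a connection to a telnet service, reading whatever warning banner the server sends, and closing without ever supplying a username or password. The host address below is a hypothetical placeholder from the documentation IP range, not any real CNT system; the snippet is purely illustrative, not a reconstruction of the actual evidence.

# Minimal illustration: connect to a telnet service, read its banner,
# and disconnect without authenticating. The host is a hypothetical,
# documentation-range address used only for this example.
import socket

HOST = "198.51.100.10"  # placeholder address, not a real system
PORT = 23               # standard telnet port

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    conn.settimeout(10)
    try:
        banner = conn.recv(4096)  # e.g., a notice that unauthorized access is prohibited
        print(banner.decode(errors="replace"))
    except socket.timeout:
        pass
    # No username or password is ever sent; the connection simply closes.

An interaction like this leaves exactly the kind of trail described above: a connection, a warning, and no attempt to log in.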

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media (Fri, 29 Mar 2024)
After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment. The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members. A Two-Part Test The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints that individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.   As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.” The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements: the official “possessed actual authority to speak” on the government’s behalf, and the official “purported to exercise that authority when he spoke on social media.” Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright. This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University. Element One: Does the official possess actual authority to speak on the government’s behalf? There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear about whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. 
On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to generally speak on behalf of the government would be easier to prove for plaintiffs and should always include any authority to speak on social media. Element One Should Be Interpreted Broadly We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in. Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government. “At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights. When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties? If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove. One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This is in contrast to what the Sixth Circuit had, essentially, held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”   We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties? Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one. 
Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.” The court explains: For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none. But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his right to delete the comments, as the constituent could not prove that the issue was within that particular government official’s purview, and they would thus fail to meet element one. Element Two: Did the official purport to exercise government authority when speaking on social media? Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two. Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.” This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference. Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. 
The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account. An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”   The court further explains: The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability. We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices. The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.” The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

Restricting Flipper is a Zero Accountability Approach to Security: Canadian Government Response to Car Hacking (Fri, 29 Mar 2024)
On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth. While it is useful as a penetration testing device, Flipper Zero is impractical in comparison to other, more specialized devices for car theft. It’s possible social media hype around the Flipper Zero has led people to believe that this device offers easier hacking opportunities for car thieves*. But government officials are also consuming such hype. That leads to policies that don’t secure systems, but rather impede important research that exposes potential vulnerabilities the industry should fix. Even with Canada walking back the original statement outright banning the devices, restricting devices and sales to “move forward with measures to restrict the use of such devices to legitimate actors only” is troublesome for security researchers. This is not the first government seeking to limit access to Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad-purpose devices such as the Flipper have a wide range of uses: penetration testing to facilitate hardening of a home network or organizational infrastructure, hardware research, security research, protocol development, use by radio hobbyists, and many more. Restricting access to these devices will hamper development of strong, secure technologies. When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero and as a result prevented the national postal service from delivering the devices, they were responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form-factor and range of hardware (including an infrared transceiver, RFID reader/emulator, sub-GHz radio, and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of this actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). It is the same media hype and pure speculation that has led Canadian regulators to err in their judgment to ban these devices. Still worse, law enforcement in other countries have signaled their own intentions to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face this same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicions. 
It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not. We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good. *Though concerns have been raised about Flipper Devices' connection to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices' servers, and much of the dedicated security and pen-testing hardware which hasn't been banned also suffers from similar problems.

EFF Asks Oregon Supreme Court Not to Limit Fourth Amendment Rights Based on Terms of Service (Thu, 28 Mar 2024)
This post was drafted by EFF legal intern Alissa Johnson. EFF signed on to an amicus brief drafted by the National Association of Criminal Defense Lawyers earlier this month petitioning the Oregon Supreme Court to review State v. Simons, a case involving law enforcement surveillance of over a year’s worth of private internet activity. We ask that the Court join the Ninth Circuit in recognizing that people have a reasonable expectation of privacy in their browsing histories, and that checking a box to access public Wi-Fi does not waive Fourth Amendment rights. Mr. Simons was convicted of downloading child pornography after police warrantlessly captured his browsing history on an A&W restaurant’s public Wi-Fi network, which he accessed from his home across the street. The network was not password-protected but did require users to agree to an acceptable use policy, which noted that while web activity would not be actively monitored under normal circumstances, A&W “may cooperate with legal authorities.” A private consultant hired by the restaurant noticed a device on the network accessing child pornography sites and turned over logs of all of the device’s unencrypted internet activity, both illegal and benign, to law enforcement. The Court of Appeals asserted that Mr. Simons had no reasonable expectation of privacy in his browsing history on A&W’s free Wi-Fi network. We disagree. Browsing history reveals some of the most sensitive personal information that exists—the very privacies of life that the Fourth Amendment was designed to protect. It can allow police to uncover political and religious affiliation, medical history, sexual orientation, or immigration status, among other personal details. Internet users know how much of their private information is exposed through browsing data, take steps to protect it, and expect it to remain private. Courts have also recognized that browsing history offers an extraordinarily detailed picture of someone’s private life. In Riley v. California, the Supreme Court cited browsing history as an example of the deeply private information that can be found on a cell phone. The Ninth Circuit went a step further in holding that people have a reasonable expectation of privacy in their browsing histories. People’s expectation of privacy in browsing history doesn’t disappear when tapping “I Agree” on a long scroll of Terms of Service to access public Wi-Fi. Private businesses monitoring internet activity to protect their commercial interests does not license the government to sidestep a warrant requirement, or otherwise waive constitutional rights. The price of participation in public society cannot be the loss of Fourth Amendment rights to be free of unreasonable government infringement on our privacy. As the Supreme Court noted in Carpenter v. United States, “A person does not surrender all Fourth Amendment protection by venturing into the public sphere.” People cannot negotiate the terms under which they use public Wi-Fi, and in practicality have no choice but to accept the terms dictated by the network provider. The Oregon Court of Appeals’ assertion that access to public Wi-Fi is convenient but not necessary for participation in modern life ignores well-documented inequalities in internet access across race and class. Fourth Amendment rights are for everyone, not just those with private residences and a Wi-Fi budget. 
Allowing private businesses’ Terms of Service to dictate our constitutional rights threatens to make a “crazy quilt” of the Fourth Amendment, as the U.S. Supreme Court pointed out in Smith v. Maryland. Pinning constitutional protection to the contractual provisions of private parties is absurd and impracticable. Almost all of us rely on Wi-Fi outside of our homes, and that access should be protected against government surveillance. We hope that the Oregon Supreme Court accepts Mr. Simons’ petition for review to address the important constitutional questions at stake in this case.

Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction (Tue, 26 Mar 2024)
EFF welcomes the latest and long-awaited policy advisory opinion from Meta’s Oversight Board calling on the company to end its blanket ban on the use of the Arabic-language term “shaheed” when referring to individuals listed under Meta’s policy on dangerous organizations and individuals, and calls on Meta to fully implement the Board’s recommendations. Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties. The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as 'shaheed,' an Arabic term that closely (though not exactly) translates to 'martyr,' when those same individuals have previously been designated by Meta as 'dangerous' under its dangerous organizations and individuals policy. The Board found that Meta’s approach to moderating content that contains the term to refer to individuals who are designated by the company’s policy on “dangerous organizations and individuals”—a policy that covers both government-proscribed organizations and others selected by the company—substantially and disproportionately restricts free expression. The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote: The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination. In addition to finding that Meta’s current approach to moderating such content restricts free expression, the Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.” We couldn’t agree more. We have long been concerned about the impact of corporate policies and government regulations designed to limit violent extremist content on human rights and evidentiary content, as well as journalism and art. We have worked directly with companies and with multistakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking. In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. 
Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF. We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

Speaking Freely: Robert Ssempala (Tue, 26 Mar 2024)
*This interview has been edited for length and clarity.  Robert Ssempala is a longtime press freedom and social justice advocate. He serves as Executive Director at Human Rights Network for Journalists-Uganda, a network of journalists in Uganda working towards enhancing the promotion, protection, and respect of human rights through defending and building the capacities of journalists, to effectively exercise their constitutional rights and fundamental freedoms for collective campaigning through the media. Under his leadership, his organization has supported hundreds of journalists who have been assaulted, imprisoned, and targeted in the course of their work.   York: What does free speech or free expression mean to you?  It means being able to give one’s opinions and ideas freely without fear of reprisals or without fearing facing criminal sanctions, and without being concerned about how another feels about their ideas or opinions. Sometimes even if it’s offensive, it’s one’s opinion. For me, it’s entirely how one wants to express themselves that is all about having the liberty to speak freely.  York: What are the qualities that make you passionate about free expression?  For me, it is the light for everyone when they’re able to give their ideas and opinions. It is having a sense of liberty to have an idea. I am very passionate about listening to ideas, about everyone getting to speak what they feel is right. The qualities that make me passionate about it are that, first, I’m from a media background. So, during that time I learned that we are going to receive the people’s ideas and opinions, disseminate them to the wider public, and there will be feedback from the public about what has come out from one side to the other. And that quality is so dear to my heart. And second, it is a sense of freedom that is expressed at all levels, in any part of the country or the world, being the people’s eyes and ears, especially at their critical times of need.  York: I want to ask you more about Uganda. Can you give us a short overview of what the situation for speech is like in the country right now?  The climate in Uganda is partly free and partly not free, depending on the nature of the issues at hand. Those that touch civil and political rights are very highly restricted and it has attracted so many reprisals for those that seek to express themselves that way. I work for the Human Rights Network for Journalists-Uganda (HRNJ-Uganda) which is a non-governmental media rights organization, so we monitor and document annually the incidents, trends, and patterns touching freedom of expression and journalists’ rights. Most of the cases that we have received, documented, and worked on are stemming from civil and political rights. We receive less of those that touch economic, social, and cultural rights. So depending on where you’re standing, those media houses and journalists that are critically independent and venture into investigative practices are highly targeted. They have been attacked physically, their gadgets have been confiscated and sometimes even damaged deliberately. Some have lost their jobs under duress because a majority of media ownership in this country is by the political class or lean toward the ruling political party. As such, they want to be seen to be supportive of the regime, so they kind of tighten the noose on all freedom of expression spaces within media houses and prevail over their journalists. This by any measure has led to heightened self-censorship.  
But also, those journalists that seem to take critical lines are targeted. Some are even blacklisted. We can say that from the looks of things that times around political campaigns and elections are the tightest for freedom of expression in this country, and most cases have been reported around such times. We normally have elections every five years. So every three years after an election electioneering starts. And that’s when we see a lot of restrictions coming from the government through its regulation bodies like the Uganda Communications Commission, which is the communications regulator in my country. Also from the Media Council of Uganda, which was put in place by an act of Parliament to oversee the practices of media. And from the police or security apparatus in this country. So it’s a very fragile environment within which to practice. The journalists operate under immense fear and there are very high levels of censorship. The law has increasingly been used to criminalize free speech. That’s how I’d describe the current environment.  York: I understand that the Computer Misuse Act as well as cybercrime legislation have been used to target journalists. Have you or any of your clients experienced censorship through abuse of computer crime laws?  We have a very Draconian law called the Computer Misuse Amendment Act. It was amended just last year to make it even worse. It has been now the walking stick of the proponents of the regime that don’t want to be subjected to public scrutiny, that don’t want to be held accountable politically in their offices. So abuses of public trust and power of their offices are hidden under the Computer Misuse Amendment Act. And most journalists, most editors, most managers have been, from time to time, interrogated at the Criminal Investigations Directorate of the police over what they have written about the powerful personalities especially in the political class – sometimes even from the business class – but mainly it’s from the political class. So it is used to insulate the powerful from being held accountable. Sadly, most of these cases are politically motivated. Most of them have not even ended up in courts of law, but have been used to open up charges against the media practitioners who have, from time to time, kept reporting and answering to the police for a long time without being presented to court or that are presented at a time when they realize that the journalists in question are becoming a bit unruly. So these laws are used to contain the journalists.  Since most of the stories that have been at the highlight of the regime have been factual, they have not had reason to run to Court, but the effect of this is very counterproductive to the journalists’ independence, to their ability to concentrate on more stories – because they’re always thinking about these cases pending before them. Also, media houses now become very fearful and learn how to behave to not be in many cases of that nature. So the Computer Misuse Act, criminal defamation and now the most recent one, the Anti-Homosexuality Act (AHA) – which was passed by Parliament with very drastic clauses– are clawback legislation for press freedom in Uganda. The AHA in itself fundamentally affected the practice of journalism. The legislation falls short of drawing a clear distinction between what amounts to promotion or education [with regards to sharing material related to homosexuality]. 
Yet one of the crucial roles of the media is to educate the population about many things, but here, it’s not clear when the media is promoting and when it is educating. So it wants to slap a blackout completely on when you’re discussing the LGBTQI+ issues in the country. So, this law is very ambiguous and therefore susceptible to abuse at the expense of freedom of speech. And it also introduces very drastic sanctions. For instance, if one writes about homosexuality, their media operating license is revoked for ten years. And I’m sure no media house can stand up again after ten years of closure and can still breathe life. Also, the AHA generalizes the practice of an individual journalist. If, for instance, one of your journalists writes something that the law looks at as against it, the entire media house license is revoked for ten years, but also you’re imprisoned for five years – you as the writer. In addition, you receive a hefty fine of the equivalent of 1 billion Uganda shillings, that’s about 250,000 euros. Which is really too much for any media house operating in Uganda.  So that alone has created a lot of fear to discuss these issues, even when the law was passed in such a rushed manner with total disregard for the input of key stakeholders like the media, among others. As a media rights organization, we had looked at the draft bill and we were planning to make a presentation before the Parliamentary Committee. But within a week they closed all public hearings, which limited the space for engagement. Within a few days the law had been written, presented again, and then assented to by the President. No wonder it’s being challenged in the Constitutional Court. This is the second time actually that such a law has been challenged. Of course, there are many other laws, like the Anti-Terrorism Act, which has not clearly defined the role of a journalist who speaks to a person who engages in subversive activities as terrorism. Where the law presupposes that before interviewing a person or before hosting them in your shows, you must have done a lot of background checks to make sure they have not engaged in such terrorism acts. So if you do not, the law here presses a criminal liability on the talk show host for promoting and abetting terrorism. And if there’s a conviction, the ultimate punishment is being sentenced to death. So these couple of laws are really used to curtail freedom of expression.  York: Wow, that’s incredible. I understand how this impacts media houses, but what would you say the impact is on ordinary citizens or individual activists, for example?  Under the Computer Misuse Amendment Act, the amended Act is restrictive and inhibitive to freedom of expression in regards to citizen journalism. It introduces such stringent conditions, like, if I’m going to record a video of you, say that I’m a journalist, citizen journalist or an activist who is not working for a media house, I must seek your permission before I record you in case you’re committing a crime. The law presupposes that I have no right to record you and later on disseminate the video without your explicit permission. Notably, the law is silent on the nature of the admissible permission, whether it is an email, SMS, WhatsApp, voice note, written note, etc. Also, the law presupposes that before I send you such a video, I must seek for your permission as the intended recipient of the said message. 
For instance, if I send you an email and you think you don’t need it, you can open a case against me for sending you unsolicited information. Unsolicited information – that’s the word that’s used.  So the law is so amorphous in this nature that it completely closes out the liberty of a free society where citizens can engage in discussions, dialogues, or give opinions or ideas. For instance, I could be a very successful farmer, and I think the public could benefit from my farming practices, and I record a lot of what I do and I disseminate those videos. Somebody who receives this, wherever they are, can run to court and use this amended Computer Misuse Act to open up charges against me. And the fines are also very hefty compared to the crimes that the law talks about. So it is so evident that the law is killing citizen journalism, dissent, and activism at all levels. The law does not seem to cater to a free society where the individual citizens can express themselves at any one time, can criticize their leaders, and can hold them accountable. In the presence of this law, we do not have a society that can hold anyone accountable or that can keep the powerful in check. So the spirit of the law is bad. The powerful fence themselves off from the ordinary citizens that are out there watching and not able to track their progress of things or raise red flags through the different social media platforms. But we have tried to challenge this law. There is a group of us, 13 individual activists and CSOs that have gone to the Constitutional Court to say, “this law is counterproductive to freedom of expression, democracy, rule of law and a free society.” We believe that the court will agree with us given its key function of promoting human rights, good governance, democracy, and the rule of law.  York: That was my next question- I was going to ask how are people fighting against these laws?  People are very active in terms of pushing back and to that extent we have many petitions that are in court. For instance, the Computer Misuse Amendment is being challenged. We had the Anti-pornographic Act of 2014 which was so amorphous in its nature that it didn’t clearly define what actually amounts to pornography. For instance, if I went around people in a swimming pool in their swimming trunks and took photos and carried those in the newspaper or on TV, that would be promoting pornography. So that was counterproductive to journalism so we went to court. And, fortunately, a court ruled in our favor. So the citizens are really up in arms to fight back because that’s the only way we can have civic engagements that are not restricted through a litany of such laws. There has been civic participation and engagement through mass media, dialogues with key actors, among others. However, many fear to speak out due to fear of reprisals, having seen the closure of media houses, the arrest and detention of activists and journalists, and the use of administrative sanctions to curtail free expression.  York: Are there ways in which international groups and activists can stand in solidarity with those of you who are fighting back against these laws?  There’s a lot of backlash on organizations, especially local ones, that tend to work a lot with international organizations. The government seems to be so threatened by the international eye as compared to local eyes, because recently it banned the UN Human Rights Office. They had to wind up business and leave the country. 
Also, the offices of the Democratic Governance Facility (DGF), which was a basket of embassies and the EU that were the biggest funding entity for the civil society. And actually for the government, too, because they were empowering citizens, you know, empowering the demand side to heighten its demand for services from the supply side. The government said no and they had to wind up their offices and leave. This has severely crippled the work of civil society, media, and, generally, governance. The UN played an important role before they left and we now have that gap. Yet this comes at a time when our national Uganda Human Rights Commission is at its weakest due to a number of structural challenges characterizing it. The current leadership of the Commission is always up in arms against the political opposition for accusing government of committing human rights excesses against its members. So we do our best to work with international organizations through sharing our voices. We have an African Hub, like the African IFEX, where the members try to replicate voices from here. In that nature we do try a lot, but it’s not very easy for them to come here and do their practices. Just like you will realize a lot of foreign correspondents, foreign journalists, who work in Uganda are highly restricted. It’s a tug of war to have their licenses renewed. Because it’s politically handled. It was taken away from the professional body of the Media Council of Uganda to the Media Centre of Uganda, which is a government mouthpiece.  So for the critical foreign correspondents their licenses are rarely renewed. When it comes to election times most of them are blocked from even coming here to cover the elections. The international media development bodies can help to build capacities of our media development organizations, facilitate research, provide legal aid support, and engage the government on the excesses of the security forces and some emergency responses for victims, among others.  York: Is there anything that I didn’t ask that you’d like to share with our readers?  One thing I wanted to add is about trying to have an international focus on Uganda in the build-up to elections. There’s a lot of havoc that happens to the citizens, but most importantly, to the activists and human rights defenders. Either cultural activists or media activists, a lot happens. And most of these things are not captured well because it is prior to the peak of campaigns or there is fear by the local media of capturing such situations. So by the time we get international attention, sometimes the damage is really too irreparable and a lot has happened. As opposed to if there was that international focus from the world. To me, that should really be captured because it would mitigate a lot that has happened.   

Podcast Episode: About Face (Recognition) (Tue, 26 Mar 2024)
Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more? The episode is available on Spotify, Apple Podcasts, and via RSS. (You can also find this episode on the Internet Archive and on YouTube.) New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here.  In this episode, you’ll learn about:  The difficulty of anticipating how information that you freely share might be used against you as technology advances.  How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use.  The racial biases that were built into many face recognition technologies.  How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow.  Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism.  Resources:  The New Yorker: “A Facial-Recognition Tour of New York” (Jan. 15, 2024)  EFF: “Clearview AI—Yet Another Example of Why We Need A Ban on Law Enforcement Use of Face Recognition Now” (Jan. 31, 2020)  EFF on Face Recognition Technology  Testimony of EFF’s Jennifer Lynch on Face Recognition to the House Oversight Committee (March 2017)  ACLU v. Clearview AI, a 2020 lawsuit alleging violation of Illinois residents’ privacy rights under the Illinois Biometric Information Privacy Act  EFF’s amicus brief in ACLU v. Clearview AI (Nov. 5, 2020)  EFF’s amicus brief in consolidated federal litigation against Clearview AI (July 9, 2021)  Frankenbook: “The Bitter Aftertaste of Technical Sweetness” (May 17, 2018)  What do you think of “How to Fix the Internet?” Share your feedback here.  
Transcript KASHMIR HILL Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats. You know, people they were worried about who'd been violent in the stadium before. Or perhaps the Taylor Swift model of, you know, known stalkers wanting to identify them if they're trying to come into concerts. But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money. And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in. And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used? CINDY COHN That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade. She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI. Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement. I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation. JASON KELLEY And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet. CINDY COHN The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right. JASON KELLEY It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode. Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation. KASHMIR HILL Clearview AI scraped billions of photos from the internet - JASON KELLEY Billions with a B. Sorry to interrupt you, just to make sure people hear that. KASHMIR HILL Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January, 2020, they had 3 billion faces in their database. They now have 30 billion and they say that they're adding something like 75 million images every day. 
So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears. So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there. JASON KELLEY Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person. Could you talk about that a little bit? KASHMIR HILL Yeah, so with a tool like this, um, you know, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it kind of catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are, where you work, and all of a sudden understand the context of the conversation. You know, a person walking out of an abortion clinic, if there's protesters outside, they can take a photo of that person. Now they know who they are and the health services they may have gotten. I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are. On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos. It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it. CINDY COHN Yeah, I think Jason's, like, ‘this is creepy’ is very widely shared, I think, by a lot of people. But you know the name of this is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. Like, what would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about on the main rather than, you know, worried about on the main. KASHMIR HILL Yeah, I mean, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future, you know, compares it to nuclear weapons and that there's just too many possible downsides that it's not worth the benefits and it should be banned altogether. I kind of don't think that's likely to happen just because I have talked to so many police officers who really appreciate facial recognition technology, think it's a very powerful tool that when used correctly can be such an important part of their tool set. I just don't see them giving it up. But when I look at what's happening right now, you have these companies like not just Clearview AI, but PimEyes, Facecheck, Eye-D. There's public face search engines that exist now. 
While Clearview is limited to police use, these are on the internet. Some are even free, some require a subscription. And right now in the U.S., we don't have much of a legal infrastructure, certainly at the national level, about whether they can do that or not. But there's been a very different approach in Europe where they say that citizens shouldn't be included in these databases without their consent. And, you know, after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia, investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases. So that's one way to handle facial recognition technology is you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, actually inspired by Clearview AI, issued a warning to other AI companies saying, hey, just because there's all these, there's all this information that's public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information. We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. When we have gotten privacy laws at the state level, it says you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, uh, to try to give people some rights over their face. And California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database, and you know, California's population is very much bigger than that, I think, you know, 34 million people or so, and so I'm not sure how effective those laws are at protecting people at large. CINDY COHN Here's what I hear from that. Our world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it's also one where if the police want something, it doesn't mean that they get it. It's a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That's the Illinois law called BIPA - the Biometric Information Privacy Act - or the foreign regulators you mention. It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission. I think of technologies like this as needing to have good answers to two questions. Number one, who is the technology serving - who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN'T get it right? For police use of facial recognition, the answers to both of these questions are bad. Regular people don't benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn't work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash. 
But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash? KASHMIR HILL Well, I'm not a policymaker. I am a journalist. So I kind of see my job as, as here's what has happened. Here's how we got here. And here's how different, you know, different people are dealing with it and trying to solve it. One thing that's interesting to me, you brought up Venmo, is that Venmo was one of the very first places that the kind of technical creator of Clearview AI, Hoan Ton-That, one of the first places he talked about getting faces from. And this was interesting to me as a privacy reporter because I very much remembered this criticism that the privacy community had for Venmo that, you know, when you've signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to. And there was such a big pushback saying, Hey, you know, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand, you know, how that could come back to be used against them. And, you know, some of the initial uses were, you know, people who were sending each other Venmo transactions and like putting syringes in it and you know, cannabis leaves and how that got used in criminal trials. But what was interesting with Clearview is that Venmo actually had this iPhone on their homepage on Venmo.com and they would show real transactions that were happening on the network. And it included people's profile photos and a link to their profile. So Hoan Ton-That sent this scraper to Venmo.com and it would just, he would just hit it every few seconds and pull down the photos and the links to the profile photos and he got, you know, millions of faces this way, and he says he remembered that the privacy people were kind of annoyed about Venmo making everything public, and he said it took them years to change it, though. JASON KELLEY We were very upset about this. CINDY COHN Yeah, we had them on our, we had a little list called Fix It Already in 2019. It wasn't a little, it was actually quite long for like kind of major privacy and other problems in tech companies. And the Venmo one was on there, right, in 2019, I think was when we launched it. In 2021, they fixed it, but that was right in between there was right when all that scraping happened. KASHMIR HILL And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public, you know, Facebook did that as well, but it was interesting when I exposed Clearview AI and said, you know, here are some of the companies that they scraped from Venmo and also Facebook and LinkedIn, Google sent Clearview cease and desist letters and said, Hey, you know, you, you violated our terms of service in collecting this data. We want you to delete it, and people often ask, well, then what happened after that? And as far as I know, Clearview did not change their practices. And these companies never did anything else beyond the cease and desist letters. You know, they didn't sue Clearview. 
Um, and so it's clear that the companies alone are not going to be protecting our data, and they've pushed us to, to be more public and now that is kind of coming full circle in a way that I don't think people, when they are putting their photos on the internet were expecting this to happen. CINDY COHN I think we should start from the source, which is, why are they gathering all these faces in the first place, the companies? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending like they're upset about scraping, which I think is all they did with the Clearview thing. Like there's problems all the way down here. But I don't think that, from our perspective, the answer isn't to make scraping, which is often over limited, even more limited. The answer is to try to give people back control over these images. KASHMIR HILL And I get it, I mean, I know why Venmo wants photos. I mean, when I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send it to, you know, @happy, you know, nappy on Venmo. So it's part of the trust, but it does seem like you could have a different architecture. So it doesn't necessarily mean that you're showing your face to the entire, you know, world. Maybe you could just be showing it to the people that you're doing transactions with. JASON KELLEY What we were pushing Venmo to do was what you mentioned was make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing, you know, that the ability to sort of comb through these financial transactions and get information from people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults, and that privacy settings can impact people that we don't even know about yet, right? KASHMIR HILL I do think this is one of the biggest challenges for people trying to protect their privacy is, it's so hard to anticipate how information that, you know, kind of freely give at one point might be used against you or weaponized in the future as technology improves. And so I do think that's really challenging. And I don't think that most people, when they're kind of freely putting Photos on the internet, their face on the internet were anticipating that the internet would be reorganized to be searchable by face. So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past. JASON KELLEY Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill. CINDY COHN So a supporter asked a question that I'm curious about too. 
You dove deep into the people who built these systems, not just the Clearview people, but people before them. And what did you find? Are these like Dr. Evil, evil geniuses who intended to, you know, build a dystopia? Or are there people who were, you know, good folks trying to do good things who either didn't see the consequences of what they're looking at or were surprised at the consequences of what they were building? KASHMIR HILL The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades. The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well. But I kind of like went back and asked people that were working on this for so many years when it was very clunky and it did not work very well, you know, were you thinking about what you are working towards? A kind of a world in which everybody is easily tracked by face, easily recognizable by face. And it was just interesting. I mean, these people working on it in the '70s, '80s, '90s, they just said it was impossible to imagine that because the computers were so bad at it, and we just never really thought that we'd ever reach this place where we are now, where we're basically, like, computers are better at facial recognition than humans. And so this was really striking to me, that, and I think this happens a lot, where people are working on a technology and they just want to solve that puzzle, you know, complete that technical challenge, and they're not thinking through the implications of what if they're successful. And so this is what a philosopher of science I talked to, Heather Douglas, called technical sweetness. CINDY COHN I love that term. KASHMIR HILL This kind of motivation where it's just like, I need to solve this, the kind of Jurassic Park, the Jurassic Park dilemma where it's like, it'd be really cool if we brought the dinosaurs back. So that was striking to me and all of these people that were working on this, I don't think any of them saw something like Clearview AI coming. And when I first heard about Clearview, this startup that had scraped the entire internet and kind of made it searchable by face, I was thinking there must be some, you know, technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first? And what I would come to figure out is that, you know, what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally and shockingly, you know, for these companies that have released many kind of unprecedented products, they decided facial recognition technology like this was too much and they held it back and they decided not to release it. And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting, and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology. 
So one of the first deployments of, of Clearview AI, before it was called Clearview AI, was at the Deploraball, this kind of inaugural event around Trump becoming president, and they were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were a part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left. And apparently this smartchecker worked for this and they identified two people who kind of were trying to get in who shouldn't have. And I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control. And so the idea was that you could use this background check product, this facial recognition technology, to keep out people you didn't want coming into the country. And they said that they had fine-tuned it so it would work on people that worked with the Open Society Foundations and George Soros because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd. And so for me, I just thought this just seemed kind of alarming that you would use it to identify essentially political dissidents, democracy activists and advocates, that that was kind of where their minds went to for their product when it was very early, basically still at the prototype stage. CINDY COHN I think that it's important to recognize these tools, like many technologies, they're dual use tools, right, and we have to think really hard about how they can be used and create laws and policies around that, because I'm not sure that you can use some kind of technological means to make sure only good guys use this tool to do good things and that bad guys don't. JASON KELLEY One of the things that you mentioned about sort of government research into facial recognition reminds me that shortly after you put out your first story on Clearview in January of 2020, I think, we put out a website called Who Has Your Face, which we'd been doing research for for, I don't know, four to six months or something before that, that was specifically trying to let people know which government entities had access to your, let's say, DMV photo or your passport photo for facial recognition purposes, and that's one of the great examples, I think, of how sort of like Venmo, you put information somewhere that's, even in this case, required by law, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on like a surveillance photo, for example. KASHMIR HILL So it makes me think of two things, and one is, you know, as part of the book I was looking back at the history of the US thinking about facial recognition technology and setting up guardrails or for the most part NOT setting up guardrails. And there was this hearing about it more than a decade ago. I think actually Jen Lynch from the EFF testified at it. And it was like 10 years ago when facial recognition technology was first getting kind of good enough to get deployed. And the FBI was starting to build a facial recognition database and police departments were starting to use these kind of early apps. 
It troubles me to think about, just knowing the bias problems that facial recognition technology had at that time, that they were kind of actively using it. But lawmakers were concerned and they were asking questions about whose photo is going to go in here? And the government representatives who were there, law enforcement, at the time they said, we're only using criminal mugshots. You know, we're not interested in the goings about of normal Americans. We just want to be able to recognize the faces of people that we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come, that would change, you know, they started pulling in state driver's license photos in some places, and it, it ended up not just being criminals that were being tracked or people, not always even criminals, just people who've had encounters with law enforcement where they ended up with a mugshot taken. But that is the kind of frog boiling of "well, we'll just start out with some of these photos, and then, you know, we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet, um, you know, everybody on the planet, in this facial recognition database." So it just speaks to this challenge of controlling it, you know, this kind of surveillance creep where once you start setting up the system, you just want to pull in more and more data and you want to surveil people in more and more ways. CINDY COHN And you tell some wonderful stories or actually horrific stories in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then. Right? We need everybody's driver's licenses, not just mugshots. And then that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about, which is, I believe the Chinese government was having a hard time with its facial recognition, recognizing Black faces, and they made some deals in Africa to just wholesale get a bunch of Black faces so they could train up on it. And, you know, to us, talking about bias in a way that doesn't really talk about comprehensive privacy reform and instead talks only about bias ends up in this technological world in which the solution is more people's faces into the system. And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data. KASHMIR HILL Yeah. So this is something, so bias has been a huge problem with facial recognition technology for a long time. And really a big part of the problem was that they were not getting diverse training databases. And, you know, a lot of the people that were working on facial recognition technology were white people, white men, and they would make sure that it worked well on them and the other people they worked with. And so we had, you know, technologies that just did not work as well on other people. One of those early facial recognition technology companies I talked to, who was in business, you know, in 2000, 2001, was actually used at the Super Bowl in Tampa in 2000 and in 2001 to secretly scan the faces of football fans looking for pickpockets and ticket scalpers. That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. 
But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets. And so now they are training their algorithms to work on different groups and the technology has improved a lot. It really has been addressed and these algorithms don't have those same kind of issues anymore. Despite that, you know, the handful of wrongful arrests that I've covered. where, um, people are arrested for the crime of looking like someone else. Uh, they've all involved people who are black. One woman so far, a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school. And so, you know, even if you fix the bias problem in the algorithms, you're still going to have the issue of, well, who is this technology deployed on? Who is this used to police? And so yeah, I think it'll still be a problem. And then there's just these bigger questions of the civil liberty questions that still need to be addressed. You know, do we want police using facial recognition technology? And if so, what should the limitations be? CINDY COHN I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives. KASHMIR HILL I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him. Same thing, Portia Woodruff, the woman who was pregnant, taken to jail, charged, even though the woman who they were looking for had committed the crime the month before and was not visibly pregnant, I mean it was so clear they had the wrong person. And yet, she had to hire a lawyer, fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated. And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on, on individual people's lives. CINDY COHN Yeah, I mean, one of my hopes is that when, you know, that those of us who are involved in tech trying to get privacy laws passed and other kinds of things passed can have some knock on effects on trying to make the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right? Because it's all a part of a system that itself needs reform. And so I think it's important that we recognize, um, that as well and not just try to extricate the technological piece from the rest of the system and that's why I think EFF's come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed. KASHMIR HILL In terms of talking about laws that have been effective We alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, rare law that moved faster than the technology. 
And it says if you want to use somebody's biometrics, like their face print or their fingerprint or their voice print, you, as a company, need to get their consent, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: The Beacon Theater, Radio City Music Hall, Madison Square Garden. The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling. CINDY COHN I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. And, um, I think it's important to point out that, like, that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right? Like the reason that a lot of the stuff that we're seeing is visual and not voice - you can do voice prints too, just like you can do face prints, but we don't see that. And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those circumstances expanded, but it still exists. And I think that, you know, kind of recognizing where, you know, that we do have legal structures that have provided us some protection, even as we work to make them better, is kind of an important thing for people who kind of swim in tech to recognize. KASHMIR HILL "Laws work" is one of the themes of the book. CINDY COHN Thank you so much, Kash, for joining us. It was really fun to talk about this important topic. KASHMIR HILL Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you. JASON KELLEY That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that sort of we see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF. CINDY COHN Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the, the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story. JASON KELLEY Yeah, I mean, what surprised me is that I think most of us saw that facial recognition sort of exploded really quickly, but it didn't, actually. A lot of the book, she writes, is about the history of its development and, um, you know, we could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one sort of expected that this would blow up in the way that it did until it kind of did. 
And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one. CINDY COHN Yeah, I love that perspective, right? JASON KELLEY I mean, it's a terrible thing, but it is helpful to think about, right? CINDY COHN Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the kind of four ways that you can control behavior online. There's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that were driven across. The thing that Clearview did that she says wasn't a technical breakthrough, it was an ethical breakthrough. I think it points the way towards, you know, where you might need laws. There's also an architecture piece though. You know, if Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, you know, that architectural decision could have had a pretty big impact on how vast this company was able to scale and where they could look. So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here. And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones. JASON KELLEY Exactly. CINDY COHN Once law enforcement decides they want something, I mean, when I asked Kash, you know, like, what do you think about ideas about banning facial recognition? Uh, she said, well, I think law enforcement really likes it. And so I don't think it'll be banned. And what that tells us is this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well. You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then we're done in terms of things, and I think that's really shown by this story. JASON KELLEY Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Cult Orrin by Alex featuring Starfrosh and Jerry Spoon, and Drops of H2O, The Filtered Water Treatment, by Jay Lang, featuring Airtone. You can find links to their music in our episode notes, or on our website at eff.org/podcast. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley. CINDY COHN And I'm Cindy Cohn.

No KOSA, No TikTok Ban | EFFector 36.4 (Mon, 25 Mar 2024)
Want to hear about the latest news in digital rights? Well, you're in luck! EFFector 36.4 is out now and covers the latest topics, including our stance on the unconstitutional TikTok ban (spoiler: it's bad), a victory helping Indybay resist an unlawful search warrant and gag order, and thought-provoking comments we got from thousands of young people regarding the Kids Online Safety Act. You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube. Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks (Sat, 23 Mar 2024)
This post was written by Rachel Hochhauser, an EFF legal intern We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool, Shotspotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product. On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers. In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies. ShotSpotter is the largest company which produces and distributes audio gunshot detection for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to humans who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring), and whether to deploy officers to the scene. ShotSpotter claims that its technology is “97% accurate,” a figure produced by the marketing department and not engineers. The recent Chicago shooting shows this is not accurate. Indeed, a 2021 study in Chicago found that, in a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and only resulted in arrest for 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter arrive at the scenes expecting gunfire and are on edge and therefore more likely to draw their firearms. Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk the capture and leaking of private conversation. In People v. Johnson in California, a court held such recordings from ShotSpotter to be admissible evidence. In February, Chicago’s Mayor announced that the city would not be renewing its contract with Shotspotter. Many other cities have cancelled or are considering cancelling use of the tool. This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.
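The gap between a headline accuracy figure and tens of thousands of dead-end deployments is easier to see with a little arithmetic. The sketch below is a rough back-of-the-envelope illustration, and every number in it (how many loud sounds get flagged, what fraction are really gunfire, how often each call is right) is a hypothetical assumption, not ShotSpotter's actual performance data.

```python
# Hypothetical base-rate illustration: a detector with a high per-sound
# accuracy can still produce mostly false alerts when the event it looks
# for (real gunfire) is rare among the sounds it flags.

def alert_breakdown(num_loud_sounds, gunfire_rate, sensitivity, specificity):
    """Return (true alerts, false alerts) for a batch of loud sounds.

    gunfire_rate: assumed fraction of loud sounds that are actually gunfire
    sensitivity:  assumed chance a real gunshot triggers an alert
    specificity:  assumed chance a non-gunshot (fireworks, a car backfiring) is ignored
    """
    gunshots = num_loud_sounds * gunfire_rate
    other_sounds = num_loud_sounds - gunshots
    true_alerts = gunshots * sensitivity
    false_alerts = other_sounds * (1 - specificity)
    return true_alerts, false_alerts

# Purely illustrative numbers, not measured data.
true_alerts, false_alerts = alert_breakdown(
    num_loud_sounds=100_000, gunfire_rate=0.02,
    sensitivity=0.97, specificity=0.97)

print(f"true alerts:  {true_alerts:,.0f}")   # 1,940
print(f"false alerts: {false_alerts:,.0f}")  # 2,940
```

The particular numbers are invented; the shape of the arithmetic is the point. When real gunfire is rare among the sounds a system flags, even a "97% accurate" call rate leaves officers responding mostly to something other than gunshots, which is consistent with the dead-end deployments documented in Chicago.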

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas (Fri, 22 Mar 2024)
In keeping with law enforcement's grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator's face looked like, and plugging this rendered image into face recognition software to build a suspect list. Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor's image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create "a proxy image from a sketch artist or artist rendering" to enhance images of potential suspects so that face recognition software can match these more accurately. Since 2014, law enforcement have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of the suspect's face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers with 3D scans of their faces. It is currently the only company offering this kind of phenotyping, and only in concert with a forensic genetic genealogy investigation. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software. Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face. But it gets worse. In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a "most likely match" for that face. These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. 
The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms to communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective's request violating Parabon NanoLabs' terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn't commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon's DNA phenotyping services to identify a suspect. EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested and how many more reckless schemes like this one need to be perpetrated before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up or Congress needs to act before more people are hurt and our rights are trampled. 
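One way to make the "second layer of speculation" concrete is to multiply the two stages' chances of being right. The probabilities below are invented purely for illustration; neither Parabon NanoLabs nor any face recognition vendor publishes figures like these, and the true values are unknown.

```python
# Hypothetical illustration of compounding error across two speculative stages:
# (1) a face rendered from crime-scene DNA actually resembling the suspect, and
# (2) a face recognition match on that rendering pointing at the right person.
# Both probabilities are made-up assumptions, used only to show the multiplication.

p_rendering_resembles_suspect = 0.30       # assumed
p_match_correct_given_resemblance = 0.60   # assumed

p_lead_is_right_person = p_rendering_resembles_suspect * p_match_correct_given_resemblance

print(f"chance the resulting lead is the right person: {p_lead_is_right_person:.0%}")  # 18%
```

Whatever the real numbers are, chaining two uncertain steps can only shrink the chance that the resulting lead points at the right person, while the consequences of acting on it fall on whoever happens to resemble the guess.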

EFF and 34 Civil Society Organizations Call on Ghana’s President to Reject the Anti-LGBTQ+ Bill  (Fri, 22 Mar 2024)
MPs in Ghana’s Parliament voted to pass the country’s draconian ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill’ on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law.  EFF has joined 34 civil society organizations to demand that President Akufo-Addo vetoes the Family Values Bill. The legislation criminalizes being LGBTQ+ or an ally of LGBTQ+ people, and also imposes custodial sentences for users and social media companies in punishment for vague, ill-defined offenses like promoting “change in public opinion of prohibited acts” on social media. This would effectively ban all speech and activity online and offline that even remotely supports LGBTQ+ rights. The letter concludes: “We also call on you to reaffirm Ghana’s obligation to prevent acts that violate and undermine LGBTQ+ people’s fundamental human rights, including the rights to life, to information, to free association, and to freedom of expression.” Read the full letter here.

Disinformation and Elections: EFF and ARTICLE 19 Submit Key Recommendations to EU Commission (Thu, 21 Mar 2024)
Global Elections and Platform Responsibility This year is a major one for elections around the world, with pivotal races in the U.S., the UK, the European Union, Russia, and India, to name just a few. Social media platforms play a crucial role in democratic engagement by enabling users to participate in public discourse and by providing access to information, especially as public figures increasingly engage with voters directly. Unfortunately elections also attract a sometimes dangerous amount of disinformation, filling users' news feed with ads touting conspiracy theories about candidates, false news stories about stolen elections, and so on. Online election disinformation and misinformation can have real world consequences in the U.S. and all over the world. The EU Commission and other regulators are therefore formulating measures platforms could take to address disinformation related to elections.  Given their dominance over the online information space, providers of Very Large Online Platforms (VLOPs), as sites with over 45 million users in the EU are called, have unique power to influence outcomes.  Platforms are driven by economic incentives that may not align with democratic values, and that disconnect  may be embedded in the design of their systems. For example, features like engagement-driven recommender systems may prioritize and amplify disinformation, divisive content, and incitement to violence. That effect, combined with a significant lack of transparency and targeting techniques, can too easily undermine free, fair, and well-informed electoral processes. Digital Services Act and EU Commission Guidelines The EU Digital Services Act (DSA) contains a set of sweeping regulations about online-content governance and responsibility for digital services that make X, Facebook, and other platforms subject in many ways to the European Commission and national authorities. It focuses on content moderation processes on platforms, limits targeted ads, and enhances transparency for users. However, the DSA also grants considerable power to authorities to flag content and investigate anonymous users - powers that they may be tempted to mis-use with elections looming. The DSA also obliges VLOPs to assess and mitigate systemic risks, but it is unclear what those obligations mean in practice. Much will depend on how social media platforms interpret their obligations under the DSA, and how European Union authorities enforce the regulation. We therefore support the initiative by the EU Commission to gather views about what measures the Commission should call on platforms to take to mitigate specific risks linked to disinformation and electoral processes. Together with ARTICLE 19, we have submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Furthermore, DSA risk assessment and mitigation compliance evaluations should focus primarily on ensuring respect for fundamental rights.  We further argue against using watermarking of AI content to curb disinformation, and caution against the draft guidelines’ broadly phrased recommendation that platforms should exchange information with national authorities. Any such exchanges should take care to respect human rights, beginning with a transparent process.  
We also recommend that the guidelines pay particular attention to attacks against minority groups or online harassment and abuse of female candidates, lest such attacks further silence those parts of the population who are already often denied a voice. EFF and ARTICLE 19 Submission: https://www.eff.org/document/joint-submission-euelections

EFF Seeks Greater Public Access to Patent Lawsuit Filed in Texas (Wed, 20 Mar 2024)
You’re not supposed to be able to litigate in secret in the U.S. That’s especially true in a patent case dealing with technology that most internet users rely on every day.  Unfortunately, that’s exactly what’s happening in a case called Entropic Communications, LLC v. Charter Communications, Inc. The parties have made so much of their dispute secret that it is hard to tell how the patents owned by Entropic might affect the Data Over Cable Service Interface Specifications (DOCSIS) standard, a key technical standard that ensures cable customers can access the internet. In Entropic, both sides are experienced litigants who should know that this type of sealing is improper. Unfortunately, overbroad secrecy is common in patent litigation, particularly in cases filed in the U.S. District Court for the Eastern District of Texas. EFF has sought to ensure public access to lawsuits in this district for years. In 2016, EFF intervened in another patent case in this very district, arguing that the heavy sealing by a patent owner called Blue Spike violated the public’s First Amendment and common law rights. A judge ordered the case unsealed. As Entropic shows, however, parties still believe they can shut down the public’s access to presumptively public legal disputes. This secrecy has to stop. That’s why EFF, represented by the Science, Health & Information Clinic at Columbia Law School, filed a motion today seeking to intervene in the case and unseal a variety of legal briefs and evidence submitted in the case. EFF’s motion argues that the legal issues in the case and their potential implications for the DOCSIS standard are a matter of public concern and asks the district court judge hearing the case to provide greater public access. Protective Orders Cannot Override The Public’s First Amendment Rights As EFF’s motion describes, the parties appear to have agreed to keep much of their filings secret via what is known as a protective order. These court orders are common in litigation and prevent the parties from disclosing information that they obtain from one another during the fact-gathering phase of a case. Importantly, protective orders set the rules for information exchanged between the parties, not what is filed on a public court docket. The parties in Entropic, however, are claiming that the protective order permits them to keep secret both legal arguments made in briefs filed with the court as well as evidence submitted with those filings. EFF’s motion argues that this contention is incorrect as a matter of law because the parties cannot use their agreement to abrogate the public’s First Amendment and common law rights to access court records. More generally, relying on protective orders to limit public access is problematic because parties in litigation often have little interest or incentive to make their filings public. Unfortunately, parties in patent litigation too often seek to seal a variety of information that should be public. EFF continues to push back on these claims. In addition to our work in Texas, we have also intervened in a California patent case, where we also won an important transparency ruling. The court in that case prevented Uniloc, a company that had filed hundreds of patent lawsuits, from keeping the public in the dark as to its licensing activities. 
That is why part of EFF’s motion asks the court to clarify that parties litigating in the Texas district court cannot rely on a protective order for secrecy and that they must instead seek permission from the court and justify any claim that material should be filed under seal. On top of clarifying that the parties’ protective orders cannot frustrate the public’s right to access federal court records, we hope the motion in Entropic helps shed light on the claims and defenses at issue in this case, which are themselves a matter of public concern. The DOCSIS standard is used in virtually all cable internet modems around the world, so the claims made by Entropic may have broader consequences for anyone who connects to the internet via a cable modem. It’s also impossible to tell if Entropic might want to sue more cable modem makers. So far, Entropic has sued five big cable modem vendors—Charter, Cox, Comcast, DISH TV, and DirecTV—in more than a dozen separate cases. EFF is hopeful that the records will shed light on how broadly Entropic believes its patents can reach cable modem technology. EFF is extremely grateful that Columbia Law School’s Science, Health & Information Clinic could represent us in this case. We especially thank the student attorneys who worked on the filing, including Sean Hong, Gloria Yi, Hiba Ismail, and Stephanie Lim, and the clinic’s director, Christopher Morten. Related Cases:  Entropic Communications, LLC v. Charter Communications, Inc.

The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies (Wed, 20 Mar 2024)
There has been a tremendous amount of hand wringing and nervousness about how so-called artificial intelligence might end up destroying the world. The fretting has only gotten worse as a result of a U.S. State Department-commissioned report on the security risk of weaponized AI. Whether these messages come from popular films like a War Games or The Terminator, reports that in digital simulations AI supposedly favors the nuclear option more than it should, or the idea that AI could assess nuclear threats quicker than humans—all of these scenarios have one thing in common: they end with nukes (almost) being launched because a computer either had the ability to pull the trigger or convinced humans to do so by simulating imminent nuclear threat. The purported risk of AI comes not just from yielding “control" to computers, but also the ability for advanced algorithmic systems to breach cybersecurity measures or manipulate and social engineer people with realistic voice, text, images, video, or digital impersonations.  But there is one easy way to avoid a lot of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons. This means both denying algorithms ultimate decision making powers, but it also means building in protocols and safeguards so that some kind of generative AI cannot be used to impersonate or simulate the orders capable of launching attacks. It’s really simple, and we’re by far not the only (or the first) people to suggest the radical idea that we just not integrate computer decision making into many important decisions–from deciding a person’s freedom to launching first or retaliatory strikes with nuclear weapons. First, let’s define terms. To start, I am using "Artificial Intelligence" purely for expediency and because it is the term most commonly used by vendors and government agencies to describe automated algorithmic decision making despite the fact that it is a problematic term that shields human agency from criticism. What we are talking about here is an algorithmic system, fed a tremendous amount of historical or hypothetical information, that leverages probability and context in order to choose what outcomes are expected based on the data it has been fed. It’s how training algorithmic chatbots on posts from social media resulted in the chatbot regurgitating the racist rhetoric it was trained on. It’s also how predictive policing algorithms reaffirm racially biased policing by sending police to neighborhoods where the police already patrol and where they make a majority of their arrests. From the vantage of the data it looks as if that is the only neighborhood with crime because police don’t typically arrest people in other neighborhoods. As AI expert and technologist Joy Buolamwini has said, "With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion... Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made." Military Tactics Shouldn’t Drive AI Use As EFF wrote in 2018, “Militaries must make sure they don't buy into the machine learning hype while missing the warning label. 
There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too.” (You can read EFF’s whole 2018 white paper: The Cautious Path to Advantage: How Militaries Should Plan for AI here)  Just like in policing, in the military there must be a compelling directive (not to mention the marketing from eager companies hoping to get rich off defense contracts) to constantly be innovating in order to claim technical superiority. But integrating technology for innovation’s sake alone creates a great risk of unforeseen danger. AI-enhanced targeting is liable to get things wrong. AI can be fooled or tricked. It can be hacked. And giving AI the power to escalate armed conflicts, especially on a global or nuclear scale, might just bring about the much-feared AI apocalypse that can be avoided just by keeping a human finger on the button. We’ve written before about how necessary it is to ban attempts for police to arm robots (either remote controlled or autonomous) in a domestic context for the same reasons. The idea of so-called autonomy among machines and robots creates the false sense of agency–the idea that only the computer is to blame for falsely targeting the wrong person or misreading signs of incoming missiles and launching a nuclear weapon in response–obscures who is really at fault. Humans put computers in charge of making the decisions, but humans also train the programs which make the decisions. AI Does What We Tell It To In the words of linguist Emily Bender,  “AI” and especially its text-based applications, is a “stochastic parrot” meaning that it echoes back to us things we taught it with as “determined by random, probabilistic distribution.” In short, we give it the material it learns, it learns it, and then draws conclusions and makes decisions based on that historical dataset. If you teach an algorithmic model that 9 times out of 10 a nation will launch a retaliatory strike when missiles are fired at them–the first time that model mistakes a flock of birds for inbound missiles, that is exactly what it will do. To that end, AI scholar Kate Crawford argues, “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes AI systems are ultimately designed to serve existing dominant interests.”  AI does what we teach it to. It mimics the decisions it is taught to make either through hypotheticals or historical data. This means that, yet again, we are not powerless to a coming AI doomsday. We teach AI how to operate. We give it control of escalation, weaponry, and military response. We could just not. Governing AI Doesn’t Mean Making it More Secret–It Means Regulating Use  Part of the recent report commissioned by the U.S. Department of State on the weaponization of AI included one troubling recommendation: making the inner workings of AI more secret. 
In order to keep algorithms from being tampered with or manipulated, the full report (as summarized by Time) suggests that a new governmental regulatory agency responsible for AI should criminalize publishing the inner workings of AI, potentially making it punishable by jail time. This means that how AI functions in our daily lives, and how the government uses it, could never be open source and would always live inside a black box where we could never learn the datasets informing its decision making. With so much of our lives already governed by automated decision making, from the criminal justice system to employment, criminalizing the only route for people to learn how those systems are trained seems counterproductive and wrong. Opening up the inner workings of AI puts more eyes on how a system functions and makes it easier, not harder, to spot manipulation and tampering… not to mention it might mitigate the biases and harms that skewed training datasets create in the first place. Conclusion Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with—but we have to understand what these technologies are and what they are not. They are neither "artificial" nor "intelligent"—they do not represent an alternate and spontaneously-occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, usually one can find their origins somewhere in the data systems they were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense context, and will hopefully alleviate some of our collective sci-fi panic. This doesn't mean that people won't weaponize AI—and they already are, in the form of political disinformation or realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common-sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control. 
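To make the retaliation hypothetical above concrete, here is a deliberately crude sketch, entirely ours and not any real system, of a model trained on incident data in which nine out of ten missile detections were followed by a retaliatory strike. Handed a false positive, such as a flock of birds misread as inbound missiles, it recommends exactly what its training data taught it.

    # Toy illustration of "AI does what we teach it to" (hypothetical data, not a real system).
    # The "model" is just the retaliation rate observed in its training examples.

    def train_retaliation_model(history):
        """history: list of (detected_inbound_missiles, retaliated) pairs from past incidents."""
        outcomes = [retaliated for detected, retaliated in history if detected]
        return sum(outcomes) / len(outcomes)   # fraction of detections followed by retaliation

    def recommend_strike(retaliation_rate, detected_inbound_missiles):
        # The model cannot tell a real attack from a sensor error; it only sees "detection = True".
        return detected_inbound_missiles and retaliation_rate > 0.5

    history = [(True, True)] * 9 + [(True, False)]    # "9 times out of 10" a strike followed
    rate = train_retaliation_model(history)           # 0.9

    birds_misread_as_missiles = True                  # a false positive
    print(recommend_strike(rate, birds_misread_as_missiles))   # True: it recommends striking back

The point of the sketch is not the arithmetic but the design choice the article argues for: keep such an output advisory, with a human making the actual decision.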

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests (Tue, 19 Mar 2024)
The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.   The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the state Open Records Act with respect to the functions it performs on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department’s Video Surveillance Center, which integrates footage from over 16,000 public and privately-held surveillance cameras across the city.  According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.”  Police foundations frequently operate in this space. They are private, non-profit organizations with boards made up of corporations and law firms that receive monetary or equipment donations that they then gift to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight.  Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.”  You can read the lawsuit here.

Speaking Freely: Maryam Al-Khawaja (Tue, 19 Mar 2024)
*This interview has been edited for length and clarity. Maryam Al-Khawaja is a Bahraini Woman Human Rights Defender who works as a consultant and trainer on Human Rights. She is a leading voice for human rights and political reform in Bahrain and the Gulf region. She has been influential in shaping official responses to human rights atrocities in Bahrain and the Gulf region by leading campaigns and engaging with prominent policymakers around the world. She played an instrumental role in the pro-democracy protests in Bahrain’s Pearl Roundabout in February 2011. These protests triggered a government response of widespread extra judicial killings, arrests, and torture, which she documented extensively over social media. Due to her human rights work, she was subjected to assault, threats, defamation campaigns, imprisonment and an unfair trial. She was arrested on illegitimate charges in 2014 and sentenced in absentia to one year in prison. She currently has an outstanding arrest warrant and four pending cases, one of which could carry a life sentence. She serves on the Boards of the International Service for Human Rights, Urgent Action Fund, CIVICUS and the Bahrain Institute for Rights and Democracy. She also previously served as Co-Director at the Gulf Center for Human Rights and Acting President of the Bahrain Centre for Human Rights. York: Can you introduce yourself and tell us a little about your work? Maybe provide us a brief outline of your history as a free expression advocate going back as far as you’d like. Maryam: Sure, so my name is Maryam Al-Khawaja. I’m a Bahraini-Danish human rights defender and advocate. I’ve worked in many different spaces around human rights and on many different thematic issues. Of course freedom of expression is an intricate part of nearly any kind of human rights advocacy work. And it’s one of the issues that is critical to the work that we do and critical to the civil society space because it not only affects people who live in dictatorships, but also people who live in democracies or pseudodemocracies. A lot of times there’s not necessarily an agreement around what freedom of expression is or a definition of what falls under the scope of freedom of expression. And also to who and how that applies. So while some things for some people might be considered free expression, for others it might be considered not as free expression and therefore it’s not protected. I think it’s something that I’ve both experienced having done the work and having taken part in the revolution in Bahrain and watching the difference between how we went from self-censorship prior to the uprising and then how people took to the streets and started saying whatever they wanted. That moment of just breaking down that wall and feeling almost like you could breathe again because you suddenly could express yourself. Not necessarily without fear – because the consequences were still there – but more so that you were doing it anyway, despite the fear. I think that’s one of the strongest memories I have of the importance of speech and that shift that happens even internally because, yes, there’s censorship in Bahrain, but censorship then creates self-censorship for protection and self preservation. It’s interesting because I then left Bahrain and came to Denmark and I started seeing how, as a Brown, Muslim woman, my right to free expression doesn’t look the same as someone who is White living in Europe. 
So I also had to learn those intricacies and how that works and how we stand up to that or fight against that. It’s… been a long struggle, to keep it short. York: That’s a really strong answer and I want to come back to something you said, and that’s that censorship creates self-censorship. I think we both know the moment we’re living in right now, and I’m seeing a lot of self-censorship even from people who typically are very staunch in standing up for freedom of expression. I’m curious, in the past decade, how has the idea that censorship creates self-censorship impacted you and the people around you or the activists that you know? One part of it is when you’re an advocate and you look how I look – especially when I was wearing the headscarf – you learn very quickly that there are things that people find acceptable coming from you, and things they find not acceptable. There are judgements and stereotypes that are applied to you and therefore what you can and cannot say actually has to also be taken into that context. Like to give you a small example, one of the things that I faced a lot during my advocacy and my work on Bahrain was I was constantly put in a space where I had to explain or… not justify – because I don’t support the use of violence generally – but I was put in a defensive position of “Why are you as civil society not telling these youth not to use Molotov cocktails on the street of Bahrain?” And I would try to explain that while I don’t justify the use of violence generally, it’s important to understand the context. And to understand that a small group of youth in Bahrain started using Molotov cocktails as a way to defend themselves, to try and get the riot police out of their villages when the riot police would come in in the middle of the night and basically go on a rampage, break into people’s homes, beat people to a pulp, and then take people and disappear them or torture them and so on. And so one of the ways for them to try and fight back was to use Molotov cocktails to at least get the riot police to stop coming into their villages. Of course this was always taken as me justifying violence or me supporting terrorism. Unfortunately, it wasn’t surprising, but it was such a clarifying moment. Then I watched those very same people at the very same media outlets literally put out tutorials on how to make Molotov cocktails for people in Ukraine fighting back against Russia. It’s not surprising because I know that’s how the world works, I know that in the world that we live in and the societies that we live in, my life is not equal to that of others – specific others. I very quickly learned that my work as a person of color – and I don’t really like that term – but as a person of the global majority, it’s my proximity to whiteness that decides my value as a human being. Unfortunately. So that’s one layer of it. Another layer of it is here in Europe. I live in Copenhagen. I travel in the West quite often. I’ve also seen the difference of how we’re positioned as – especially Muslims with the incredible amounts of Islamophobia especially in Copenhagen – and seeing how politicians can come out and say incredibly Islamophobic and racist things and be written off as freedom of expression. But if someone of the global majority were to do that they would immediately be dubbed as extremist or a radical. There is this extreme double standard when it comes to what freedom of expression looks like and how it’s implemented. 
And I’ll end with this example, with the Charlie Hebdo example. There was such a huge international solidarity movement when the attack on Charlie Hebdo happened in France. And obviously the killing that happened, there doesn’t even need to be a conversation around that, of course everyone should condemn that. What I find lacking in the conversation around freedom of expression when it comes to Charlie Hebdo is that Charlie Hebdo targets Muslim minorities that are already under attack, that are already discriminated against, and, in my mind, it actually incites violence against them when it does so. Because they’re already so targeted, because they’re vilified already in the media by politicians and so on. So my approach isn’t to say, “we should start censoring these media publications” or “we should start censoring people from being able to say what they say.” I’m saying that when we’re going to implement rules or understandings around freedom of expression it needs to be implemented equally. It needs to be implemented without double standards. Without picking and choosing who gets to have freedom of expression versus who doesn’t. York: That’s such a great point. And I’m glad you brought up Charlie Hebdo. Coming back to that, it reminded me about the different governments that we saw, from my perspective, pretending to march for free expression when that happened. We saw a number of states that ranked fairly poorly on press freedom at the time. My recollection is we saw a number of countries that don’t have a great track record on freedom of expression, I think including Russia, the UAE, and Saudi Arabia, take a stance at that time. What that evokes for me is the hypocrisy of various states. We think about censorship as a potent tool for those in power to maintain power and then of course that sort of political posturing is also a very potent tool. So what are your thoughts on that? How does that inform your advocacy? Like I said, we’ve already seen it throughout Europe and throughout the United States. Right now with the Gaza situation we’re seeing this with even more clarity – and it’s not like it was hidden before, those of us that work in these spaces already knew this – but I think right now it’s just so in-your-face where people are literally getting fired from their jobs and called into HR for liking posts, for posting things basically standing against an ongoing genocide. And I think, again, it brings to the surface the double standard and the hypocrisy that exists within the spaces that talk about freedom of expression. France is actually a great example. Even when we’re talking about Charlie Hebdo; Charlie Hebdo did the cover of the magazine before they were attacked. It was mocking the Rabaa Massacre, which was one of the largest massacres to happen in Egypt in recent history. Regardless of what you think of the Muslim Brotherhood, that was a massacre, it was wrong, it should be condemned. And they poked fun at that. They had this man with a long beard who looked like the Muslim Brotherhood holding up a Quran with bullets going through the Quran and hitting him, saying, “your Quran won’t protect you.” This was considered freedom of expression even though it was mocking a literal massacre that happened in Egypt. Which, in my opinion, the Egyptian regime should be considered as committing terrorist acts for that massacre. And so in some ways that could be considered as supporting terrorism. Just like I consider what is happening to the Palestinians as a form of terrorism. 
The same thing with Syria and so on. But, unfortunately, it’s the people who own the discourse that get to decide what phrases and what terminologies can be applied and used where. But the point that I was making about Charlie Hebdo is that not much later after the attack on Charlie Hebdo, there was a 16 year old in France who made a cartoon cover where he mocks the attack on Charlie Hebdo. He basically used the exact same type of cartoon that they had used around the Rabaa massacre. Where there’s a guy from Charlie Hebdo holding up a copy of Charlie Hebdo and being struck by bullets and saying “your magazine doesn’t stop bullets.” And he was arrested! This 16 year old kid does this cartoon – exactly the same as the magazine had done after the massacre – and he was arrested and charged with advocating terrorism. And I think this is one of the clearest examples of how freedom of expression is not implemented on an equal level when it comes to who’s practicing it. I think it’s the same thing as what we’re seeing right now happening with Palestine. When you look at what’s happening in Germany with the amount of people being arrested [for unauthorized protests] and now we’re even hearing about raids on people’s homes. I’ve spoken to some of my friends in Germany who say that they’re literally trying to hide and get rid of any pro-Palestinian flyers or flags that they have just in case their home gets raided. It’s interesting because quite a few Arabs in Germany now are referring to Germany as Assad’s Germany. Because a lot of what’s happening in Germany right now, to them, is reminiscent of what it was like to live in Syria under Assad. I think that tells you almost everything you need to know about the double standards of how these things are implemented. I think this is where the problem comes in. You can not talk about free expression and freedom of speech without talking about how it’s related to colonialism. About how it’s related to movements for freedom. About how it’s related to the fact that much of our human rights movements in civil society are currently based on institutionalized human rights – and I’m talking specifically about the West, obviously, because there are a lot of grassroots movements in the global majority countries. But we can not talk about these things without talking about the need and importance of decolonizing our activism. My thinking right now is very much inspired by Fanon’s The Wretched of the Earth, where he talks about how when colonizers colonized, they didn’t just colonize the country and the institutions and education and all these different things. They even colonized and decided for us and dictated for us how we’re allowed to fight back. How we’re allowed to resist. And I think that’s incredibly true. There’s a very rigid understanding of the space that you’re allowed to exist in or have to exist in to be regarded as a credible human rights activist. Whether it’s for free speech or for any other human right. And so, in my mind, what we need right now is to decolonize our activism. And to step away from that idea that it’s the West that decides for us what “appropriate” or “acceptable” activism actually looks like. And start deciding for ourselves what our activism needs to look like. Because we know now that none of these people that have supported the genocide in Gaza can in any way shape or form try to dictate what humans rights look like or what activism looks like. 
I’ve seen this over social media over the past period and people have been saying this over and over again that what died in Gaza is that pretense. That the West gets to tell the rest of us what human rights are and what freedoms are and how we should fight for them. York: Let’s change directions for a moment. What do you think of the control that corporations have over deciding what speech parameters look like right now?  [Laughs] Where do I start? I think it’s a struggle for a lot of us. I want to first acknowledge that I have a lot of privileges that other activists don’t. When I left Bahrain in 2011 I already had Danish citizenship. Which meant that I could travel. I already had a strong command of English. Which meant that I could do meetings without the need for a translator. That I could attend and be in certain spaces. And that’s not necessarily the case for so many other activists. And so I do have a lot of privileges that have put me in the position that I am in. And I believe that part of having privileges like that means that I need to use them to be also a loud speaker for others. And to try and make this world a better place, in whatever shape and form that I can. That being said, I think that for many of us even who have had privileges that other activists don’t, it’s been a real struggle to watch the mediums and tools that we have been using for the past, over a decade, as a means of raising pressure, communicating with the world, connecting, and so on, be taken away from us. In ways that we can’t control and in ways that we don’t have a say on. I think that for a lot of – and I know especially for myself – but I think for a lot of activists who really found their voices in 2011 as part of activism especially on platforms like Twitter. When Elon Musk bought Twitter and decided to remove the verification status from all of us activists who had that for a reason. I remember I received my verification status because of the amount of fake accounts that the Bahraini government was creating at that time to impersonate me to try to discredit me. And also because I was receiving death threats and rape threats and all kinds of threats, over and over again. I received that verification status as an acknowledgement that I need support against those attacks that I was being subjected to. And it was gone overnight. It’s not just about that blue tick. It’s that people don’t see my Tweets the way that they used to. It’s about the fact that my message can’t go as far as it used to go. It’s not just because we no longer show up in people’s feeds, but also because so many people have left the platform because of how problematic it’s become. In some ways I spent 13 years focused on Twitter, building a following—obviously, my work is so much more than Twitter—but Twitter has been a tool for the work that I do. And really building a following and making sure that people trusted me and the information that I shared and that I was a trusted and credible source of information. Not just on Bahrain, but on all of the different types of work that I do. And then suddenly overnight, at the age of 35, 36 having to recreate that all over again on Instagram. And on TikTok. And the thing is… we’re tired. We’re exhausted. We’re burnt out. We’re not doing well. Almost everyone I know is either depressed or sick or dealing with some form of health issue. Thirteen years after the uprisings we’re not doing well and we’re not okay. What’s happening with Gaza right now is hitting all of us. 
I think it’s incredibly triggering and hurtful. I think the idea that we now have to make that effort to rebuild platforms to be able to reach people, it’s not just “Oh my god, I don’t have the energy for it.” It’s like someone tore a limb from us and we have to try to regrow that limb. And how do you regrow a limb, right? It’s incredibly painful. Obviously, it’s nice to have a large following and for people to recognize you and know who you are and so on—and it’s hard work not letting that get to your head—but, for me, losing my voice is not about the follower count or how much people know who I am. It’s the fact that I can no longer get the same kind of attention for my father’s case. I can no longer get the same kind of attention for the hundreds of people who no one knows their names or their faces who are sitting in prison cells in Bahrain who are still being tortured. For the children who are still being arrested for protesting. For Palestine and Bahrain. I can no longer make sure that I’m a loudspeaker so that people know these things are happening. A lot of people talked about and wrote about the damage that Elon Musk did to Twitter and to that “public square” that we have. Twitter has always had its problems. And Meta has always had its problems. But it was a problem where we at least had a voice. We weren’t always heard and we weren’t always able to influence things, but at least it felt like we had a voice. Now it doesn’t feel like we have a voice. There was a lot of conversation around this, around the taking away of the public square, but there are these intricacies and details that affect us on such a personal level that I don’t think people outside of these circles can really understand or even think about. And how it affects when I need to make noise because my father might die from a heart attack because they’re refusing to give him medical treatment. And I can’t get retweets or I can’t get people to re-post. Or only 100 people are seeing the videos I’m posting on Instagram. It’s not that I care about having that following, it’s about literally being able to save my father’s life. So it takes such a toll on you on a personal level as well. I think that’s the part of the conversation that I think is missing when we talk about these things. I can’t imagine—but in some ways I can imagine—how it feels for Palestinians right now. To watch their family members, their people being subjected to an ongoing genocide and then have their voices taken away from them, to be subjected to shadowbans, to have their accounts shut down. It’s insult added to injury. You’re already hurting. You’re already in pain. You’re already not doing well. You’re already struggling just to survive another day and the only thing you have is your voice and then even that is taken away from you. I don’t think we can even begin to imagine the kind of damage on mental health and even physical health that that’s going to have in the coming years and in the coming generations because, of course, we pass down our trauma to the people around us as well.  York: I’m going to take a slight step back and a slight segue because I want to be able to use this interview for you to talk about your father’s case as well. Can you tell us about your father’s case and where it stands today? My father, Abdulhadi Al-Khawaja, dedicated his entire life to human rights activism. Which is why he spent half his life, if not more than that, in exile. And it’s why he spent the last thirteen years in prison. 
My father is the only Danish prisoner of conscience in the world today. And I very strongly believe that if my father was not a Brown, Muslim man he would not have spent this long as an EU citizen in a prison cell based on freedom of expression charges. And this is one of those cases where you really get to recognize those double standards. Where Denmark prides itself on being one of the countries that is the biggest protector of freedom of expression. And yet the entire case against my father – and my father was one of the human rights leaders of the uprising in 2011 – and he led the protests and he talked about human rights and freedom and he talked about the importance of us doing things the right way. And I think that’s why he was seen as such a threat. One of his speeches was about how even if we are able to change the government in Bahrain, we are not going to torture. We’re not going to be like them. We’re going to make sure that people who were perpetrators receive due process and fair trials. He always focused on the importance of people fighting for justice and fighting for change to do things the right way and from a human rights framework. He was arrested very violently from my sister’s home in front of my friends and family. He was beaten unconscious in front of my family. And he repeatedly said as he was being beaten, “I can’t breathe.” And every time I think of what happened with my father I think of Eric Garner as well – where he said over and over again “I can’t breathe” when he was basically killed by the United States police. Then my father was taken away. Interestingly enough, especially because we’re talking about freedom of expression, my father was charged with terrorism. In Bahrain, the terrorism law is so vague that even the work of a human rights defender can be regarded as terrorism. So even criticizing the police for committing violations can be seen as inciting terrorism. So my father was arrested and tried under the terrorism law, and they said he was trying to overthrow the government. But Human Rights Watch actually dissected the case that was brought against my father and the “evidence” that he was of course forced to sign under torture. He was subjected to very severe psychological and sexual torture for over two months during which he was disappeared as well – held in incommunicado detention. When they did that dissection of the case they found that all of the charges against my father were based on freedom of expression issues. It was all based on things that he had said during the protests around calling for democracy, around calling for representative government, the right to self determination, and more. It’s very much a freedom of expression issue. What I find horrifying – but also it says a lot about the case against my father and why he’s in prison today – is that one of the first things they did to my dad was they hit him with a hard object on his jaw and they broke his jaw. Even my father says that he feels they did that on purpose because they were hoping that he would never be able to speak again. They broke his jaw in six different places, or four different places. He had to undergo a four hour surgery where they reattached his jaw. They had to use more than twenty metal plates and screws to put his jaw back together. And he, of course, still has chronic pain and issues because of what they did. 
He was subjected to so much else like electrocutions and more, but that was a very specific intentional first blow that he received when he was arrested. To the face and to the mouth. As punishment, as retaliation, for having used his right to free expression to speak up and criticize the government. I think this tells you pretty much everything you need to know about what the situation of freedom of expression is in Bahrain. But it should also tell you a lot about the EU and the West and how they regard the importance of freedom of expression when the fact that my father is an EU citizen has not actually protected him. And 13 years later he continues to sit in a prison cell serving a life sentence because he practiced his right to free expression and because he practiced his right to freedom of assembly. Last year, my father decided to do a one-person protest in the prison yard. Both in solidarity with Palestine, but also because of the consistent and systematic denial of adequate medical treatment to prisoners of conscience in Bahrain. Because of that, and because he was again using his right to free expression inside prison, he was denied medical treatment for over a year. And my father had developed a heart condition. So a few months ago his condition started to get really bad, the doctors told us he might have a heart attack or a stroke at any time given that he was being denied access to a cardiologist. So I had to put myself and my freedom at risk. I’m already sentenced to one year in prison in Bahrain, I have four pending cases – basically, going back to Bahrain means that I am very likely to spend the rest of my life in prison, if not be subjected to torture. Which I have been in the past as well. But I decided to try and go back to Bahrain because the Danish government was refusing to step up. The West was refusing to step up. I mean we were asking for the bare minimum, which was access to a cardiologist. So I had to put myself at risk to try and bring attention. I ended up being denied boarding because there was too much international attention around my trip. So they denied me boarding because they didn’t want international coverage around me being arrested at the Bahrain airport again. I managed to get several very high profile human rights personalities to go with me on the trip. Because of that, and because we were able to raise so much international attention around my dad’s case, they actually ended up taking him to the cardiologist and now he’s on heart medication. But he’s never out of the danger zone, with Bahrain being what it is and because he’s still sitting in a prison cell. We’re still working hard on getting him out, but I think for my dad it’s always about his principles and his values and his ethics. For him, being a human rights defender, being in prison doesn’t mean the end of his activism. And that’s why he’s gone on more than seven hunger strikes in prison, that’s why he’s done multiple one-person protests in the prison yard. For him, his activism is an ongoing thing even from inside his prison cell. York: That’s an incredible story and I appreciate you sharing it with our readers—your father is incredibly brave. Last question- who is your free speech hero? Of course my dad, for sure. He always taught us the importance of using our voice not just to speak up for ourselves but for others especially. There’s so many that I’m drawing a blank! I can tell you that my favorite quote is by Edward Snowden. 
“Saying that you don’t care about the right to privacy because you have nothing to hide is like saying you don’t care about freedom of speech because you have nothing to say.” I think that really brings things to the point. There’s also an indigenous activist in the US who has been doing such a tremendous job using her voice to bring attention to what’s happening to the indigenous communities in the US. And I know it comes at a cost and it comes at great risk. There’s several Syrian activists and Palestinian activists. Motaz Azaiza and his reporting on what’s happening now in Gaza and the price that he’s paying for it, same thing with Bisan and Plestia. She’s also a Palestinian journalist who’s been reporting on Gaza. There’s just so many free expression heroes. People who have really excelled in understanding how to use their voice to make this world a better place. Those are my heroes. The everyday people who choose to do the right thing when it’s easier not to.

Decoding the California DMV's Mobile Driver's License (Tue, 19 Mar 2024)
The State of California is currently rolling out a “mobile driver’s license” (mDL), a form of digital identification that raises significant privacy and equity concerns. This post explains the new smartphone application, explores the risks, and calls on the state and its vendor to focus more on protecting users.  What is the California DMV Wallet?  The California DMV Wallet app came out in app stores last year as a pilot, offering the ability to store and display your mDL on your smartphone, without needing to carry and present a traditional physical document. Several features in this app replicate how we currently present the physical document with key information about our identity—like address, age, birthday, driver class, etc.  However, other features in the app provide new ways to present the data on your driver’s license. Right now, we only take out our driver’s license occasionally throughout the week. However, with the app’s QR Code and “add-on” features, the incentive to present it more often may grow. This concerns us, given the rise of age verification laws that burden everyone’s access to the internet, and the lack of comprehensive consumer data privacy laws that keep businesses from harvesting and selling identifying information and sensitive personal information.  For now, you can use the California DMV Wallet app with TSA in airports, and with select stores that have opted in to an age verification feature called TruAge. That feature generates a separate QR Code for age verification on age-restricted items in stores, like alcohol and tobacco. This is not simply a one-to-one exchange of going from a physical document to an mDL. Rather, it presents a wider scope of possible usage of mDLs that needs expanded protections for those who use them. While California is not the first state to do this, its app will be used as an example to explain the current landscape. What’s the QR Code?  There are two ways to present your information on the mDL: 1) a human readable presentation, or 2) a QR code.  Scanning the QR code with a normal QR code scanner will display an alphanumeric string of text that starts with “mdoc:”. For example:   “mdoc:owBjMS4wAY..." [shortened for brevity] This “mobile document” (mdoc) text is defined by the International Organization for Standardization’s ISO/IEC 18013-5. The string of text afterwards details driver’s license data that has been signed by the issuer (i.e., the California DMV), encrypted, and encoded. This data sequence includes technical specifications and standards, open and enclosed.   In the digital identity space, including mDLs, the most referenced and utilized are the ISO standard above, the American Association of Motor Vehicle Administrators (AAMVA) standard, and the W3C’s Verifiable Credentials (VCs). These standards are often not siloed, but rather used together since they offer directions on data formats, security, and methods of presentation that aren’t completely covered by just one. However, ISO and AAMVA are not open standards and are decided internally. VCs were created for digital credentials generally, not just for mDLs. These standards are relatively new and still need time to mature to address potential gaps. 
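For the technically curious, here is a minimal sketch of peeling off that outer encoding. It assumes, per our reading of ISO/IEC 18013-5, that the text after the “mdoc:” prefix is unpadded base64url-encoded CBOR; the decode_mdoc_qr() helper is our own name for the idea, it verifies no signatures and decrypts nothing, and the truncated example string above will not decode fully.

    # Minimal sketch: decode the outer envelope of an "mdoc:..." QR payload (assumption:
    # unpadded base64url-encoded CBOR per ISO/IEC 18013-5). This exposes only the outer
    # CBOR structure, not verified or decrypted license data.
    import base64
    import cbor2  # third-party CBOR parser: pip install cbor2

    def decode_mdoc_qr(qr_text):
        if not qr_text.startswith("mdoc:"):
            raise ValueError("not an mdoc QR payload")
        b64 = qr_text[len("mdoc:"):]
        b64 += "=" * (-len(b64) % 4)   # restore the '=' padding that base64url omits
        return cbor2.loads(base64.urlsafe_b64decode(b64))

    # decode_mdoc_qr("mdoc:owBjMS4wAY...")  # left commented out: the article's example is truncated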
The decrypted data could possibly look like this JSON blob:

    {
      "family_name": "Doe",
      "given_name": "John",
      "birth_date": "1980-10-10",
      "issue_date": "2020-08-10",
      "expiry_date": "2030-10-30",
      "issuing_country": "US",
      "issuing_authority": "CA DMV",
      "document_number": "I12345678",
      "portrait": "../../../../test/issuance/portrait.b64",
      "driving_privileges": [
        {
          "vehicle_category_code": "A",
          "issue_date": "2022-08-09",
          "expiry_date": "2030-10-20"
        },
        {
          "vehicle_category_code": "B",
          "issue_date": "2022-08-09",
          "expiry_date": "2030-10-20"
        }
      ],
      "un_distinguishing_sign": "USA",
      "weight": 70,
      "eye_colour": "hazel",
      "hair_colour": "red",
      "birth_place": "California",
      "resident_address": "2415 1st Avenue",
      "portrait_capture_date": "2020-08-10T12:00:00Z",
      "age_in_years": 42,
      "age_birth_year": 1980,
      "age_over_18": true,
      "age_over_21": true,
      "issuing_jurisdiction": "US-CA",
      "nationality": "US",
      "resident_city": "Sacramento",
      "resident_state": "California",
      "resident_postal_code": "95818",
      "resident_country": "US"
    }

Application Approach and Scope Problems  California decided to contract a vendor to build a wallet app rather than use Google Wallet or Apple Wallet (not to be conflated with Google and Apple Pay). A handful of other states use Google and Apple, perhaps because many people have one or the other. There are concerns about large companies being contracted by the states to deliver mDLs to the public, such as their controlling the public image of digital identity and device compatibility.   This isn’t the first time a state contracted with a vendor to build a digital credential application without much public input or consensus. For example, New York State contracted with IBM to roll out the Excelsior app during the beginning of COVID-19 vaccination availability. At the time, EFF raised privacy and other concerns about this form of digital proof of vaccination. The state ultimately paid the vendor a staggering $64 million. While initially proprietary, the application later opened to the SMART Health Card standard, which is based on the W3C’s VCs. The app was sunset last year. It’s not clear what effect it had on public health, but it’s good that it wound down as social distancing measures relaxed. The infrastructure should be dismantled, and the persistent data should be discarded. If another health crisis emerges, at least a law in New York now partially protects the privacy of this kind of data. The New York state legislature is currently working on a bill around mDLs after a round-table on their potential pilot. However, the New York DMV has already entered into a $1.75 million contract with the digital identity vendor IDEMIA. It will be a race to see if protections will be established prior to pilot deployment.  Scope is also a concern with California’s mDL. The state contracted with Spruce ID to build this app. The company states that its purpose is to empower “organizations to manage the entire lifecycle of digital credentials, such as mobile driver’s licenses, software audit statements, professional certifications, and more.” In the “add-ons” section of the app, TruAge’s age verification QR code is available. 
Another issue is selective disclosure, meaning the technical ability for the identity credential holder to choose which information to disclose to a person or entity asking for information from their credential. This is a long-time promise from enthusiasts of digital identity. The most used example is verification that the credential holder is over 21, without showing anything else about the holder, such as the name and address that appear on the face of their traditional driver’s license. But the California DMV Wallet app currently offers little in the way of selective disclosure:
- The holder has to agree to TruAge’s terms of service and generate a separate TruAge QR code.
- There is already an mDL reader option for age verification using the QR code of an mDL.
- There is no current option for the holder to use selective disclosure for their mDL, though one is planned for a future release, according to the California DMV via email.
Lastly, if selective disclosure is coming, that would make the TruAge add-on redundant. The over-21 example is only as meaningful as its implementation, including the convenience, privacy, and choice given to the mDL holder. TruAge appears to be piloting its product in at least 6 states. With “add-ons”, the scope of the wallet app indicates expansion beyond simply presenting your driver’s license. According to the California DMV’s Office of Public Affairs via email:  “The DMV is exploring the possibility of offering additional services including disabled person parking placard ID, registration card, vehicle ownership and occupational license in the add-ons in the coming months.”  This clearly displays how the scope of this pilot may expand and how the mDL could eventually be housed within an entire ecosystem of identity documentation. There are privacy-preserving ways to present mDLs, like unlinkable proofs. These mechanisms help keep colluding verifiers and issuers from establishing whether the holder was in different places with their mDL.  Privacy and Equity First  At the time of this post, about 325,000 California residents have the pilot app. We urge states to take their time with creating mDLs, and even to wait for more privacy-protective verification methods to mature. Deploying mDLs should prioritize holder control, privacy, and transparency. The speed of these pilots is possibly influenced by other factors, like the push for mDLs from the U.S. Department of Homeland Security.  Digital wallet initiatives like eIDAS in the European Union are forging conversations on what user control mechanisms might look like. These might include, for example, “bringing your own wallet” and using an “open wallet” that is secure, private, interoperable, and portable.  We also need governance that properly limits law enforcement access to information collected by mDLs, and to other information in the smartphones where holders place their mDLs. Further, we need safeguards against these state-created wallets being wedged into problematic realms like age verification mandates as a condition of accessing the internet.  We should be speedrunning privacy and providing better access for all to public services and government-issued documentation. That includes a right to stick with traditional paper or plastic identification, and accommodation of cases where a phone may not be accessible.   We urge the state to implement selective disclosure and other privacy-preserving tools; a toy sketch of the idea appears below. The app is not required anywhere. 
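To make the over-21 example concrete, here is a minimal conceptual sketch of selective disclosure. It is our illustration only, not the ISO/IEC 18013-5 presentation protocol and not a feature the current DMV Wallet app offers; the record and the present() helper are hypothetical, with attribute names borrowed from the sample data above.

    # Conceptual sketch of selective disclosure (hypothetical; not the real mDL protocol).
    # The holder's full record stays on their device; only requested attributes are released.

    full_mdl_record = {
        "family_name": "Doe",
        "given_name": "John",
        "resident_address": "2415 1st Avenue",
        "birth_date": "1980-10-10",
        "age_over_21": True,
    }

    def present(record, requested_attributes):
        """Return only the attributes the verifier asked for and the holder agrees to share."""
        return {key: record[key] for key in requested_attributes if key in record}

    # A store checking an age-restricted purchase only needs one claim:
    print(present(full_mdl_record, ["age_over_21"]))   # {'age_over_21': True}

In a real deployment, the disclosed attributes would also have to be cryptographically bound to the issuer's signature so the verifier can trust them without seeing the rest of the record, which is where techniques like the unlinkable proofs mentioned above come in.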
Use of the app should remain optional no matter how cryptographically secure the system purports to be, or how robust its privacy policies are. We also urge all governments to remain transparent and cautious about how they sign on vendors during pilot programs. If a contract takes away the public’s input on future protections, that is a bad start. If a state builds a pilot without much patience for privacy and public input, that is also turbulent ground for protecting users going forward.   Just because digital identity may feel inevitable doesn’t mean the dangers have to be. 

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media (Tue, 19 Mar 2024)
EFF legal intern Jack Beck contributed to this post. A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., the maker of Snapchat, the popular social media app, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users. At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke. The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals. Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent. In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content. We made two arguments in our brief supporting Snap’s appeal. First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, dissidents and journalists operating in repressive regimes, or domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability for users to exert control over who can see their messages and for how long, advances internet users’ privacy and security under legitimate circumstances. Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. 
Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down. There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows for them to be liable—as Snap knows from a lawsuit that put an end to its speed filter. If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation! (Sat, 16 Mar 2024)
In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal. Agencies can't even pass on information about websites state election officials have identified as disinformation, even if they don't request that any action be taken, they assert. Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans. On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction. We recognize both the hazards of government involvement in content moderation and the proper role in some situations for the government to share its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed and reasonably perceived as efforts to replace the platform’s editorial decision-making with the government’s. And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive. In contrast, the Members of Congress signed an amicus brief in Murthy supporting placing strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts. Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations about the FBI and CISA improperly communicating with social media sites that boil down to the agency passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought to understand how the sites' terms of service would apply. As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules. The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. 
The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries. As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director of National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S. Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited. Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies. But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear.  If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and if the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of getting the information away from U.S. users, which is one of the chief purposes of the TikTok ban. We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.

The SAFE Act to Reauthorize Section 702 is Two Steps Forward, One Step Back (Fri, 15 Mar 2024)
Section 702 of the Foreign Intelligence Surveillance Act (FISA) is one of the most insidious and secretive mass surveillance authorities still in operation today. The Security and Freedom Enhancement (SAFE) Act would make some much-needed and long-fought-for reforms, but it also does not go nearly far enough to rein in a surveillance law that the federal government has abused time and time again. You can read the full text of the bill here. While Section 702 was first sold as a tool necessary to stop foreign terrorists, it has since become clear that the government uses the communications it collects under this law as a domestic intelligence source. The program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between people overseas and U.S. persons. Now, it’s the U.S. side of these digital conversations that is being routinely sifted through by domestic law enforcement agencies—all without a warrant. The SAFE Act, like other reform bills introduced this Congress, attempts to roll back some of this warrantless surveillance. Despite its glaring flaws and omissions, in a Congress as dysfunctional as this one it might be the best that privacy-conscious people and organizations can hope for. For instance, it does not do as much as the Government Surveillance Reform Act, which EFF supported in November 2023. But imposing meaningful checks on the Intelligence Community (IC) is an urgent priority, especially because the Intelligence Community has been trying to sneak a "clean" reauthorization of Section 702 into government funding bills, and has even sought to have the renewal happen in secret in the hopes of keeping its favorite mass surveillance law intact. The administration is also reportedly planning to seek another year-long extension of the law without any congressional action. All the while, those advocating for renewing Section 702 have toyed with as many talking points as they can—from cybercrime or human trafficking to drug smuggling, terrorism, or even solidarity activism in the United States—to see what issue would scare people enough to allow for a clean reauthorization of mass surveillance. So let’s break down the SAFE Act: what’s good, what’s bad, and what aspects of it might actually cause more harm in the future.  What’s Good about the SAFE Act The SAFE Act would do at least two things that reform advocates have pressured Congress to include in any proposed bill to reauthorize Section 702. This speaks to the growing consensus that some reforms are absolutely necessary if this power is to remain operational. The first and most important reform the bill would make is to require the government to obtain a warrant before accessing the content of communications for people in the United States. Currently, relying on Section 702, the government vacuums up communications from all over the world, and a huge number of those intercepted communications are to or from U.S. persons. Those communications sit in a massive database. Both intelligence agencies and law enforcement have conducted millions of queries of this database for U.S.-based communications—all without a warrant—in order to pursue both national security concerns and run-of-the-mill criminal investigations. 
The SAFE Act would prohibit “warrantless access to the communications and other information of United States persons and persons located in the United States.” While this is the bare minimum a reform bill should do, it’s an important step. It is crucial to note, however, that this does not stop the IC or law enforcement from querying to see if the government has collected communications from specific individuals under Section 702—it merely stops them from reading those communications without a warrant. The second major reform the SAFE Act provides is to close the “data broker loophole,” which EFF has been calling attention to for years. As one example, mobile apps often collect user data to sell it to advertisers on the open market. The problem is that law enforcement and intelligence agencies increasingly buy this private user data, rather than obtain a warrant for it. This bill would largely prohibit the government from purchasing personal data it would otherwise need a warrant to collect. This provision does include a potentially significant exception for situations where the government cannot exclude Americans’ data from larger “compilations” that include foreigners’ data. This speaks not only to the unfair bifurcation of rights between Americans and everyone else under much of our surveillance law, but also to the risks of allowing any large-scale acquisition from data brokers at all. The SAFE Act would require the government to minimize the collection, search, and use of any Americans’ data in these compilations, but it remains to be seen how effective these minimization requirements will be.  What’s Missing from the SAFE Act The SAFE Act is missing a number of important reforms that we’ve called for—and which the Government Surveillance Reform Act would have addressed. These reforms include ensuring that individuals harmed by warrantless surveillance are able to challenge it in court, both in civil lawsuits like those brought by EFF in the past, and in criminal cases where the government may seek to shield its use of Section 702 from defendants. After nearly 14 years of Section 702 and countless court rulings slamming the courthouse door on such legal challenges, it’s well past time to ensure that those harmed by Section 702 surveillance have the opportunity to challenge it. New Problems Potentially Created by the SAFE Act While there may often be good reason to protect the secrecy of FISA proceedings, unofficial disclosures about these proceedings have from the very beginning played an indispensable role in reforming uncontested abuses of surveillance authorities. From the Bush administration’s warrantless wiretapping program, through the Snowden disclosures, up to present-day reporting about FISA applications on the front page of the New York Times, oversight of the intelligence community would be extremely difficult, if not impossible, without these disclosures. Unfortunately, the SAFE Act contains at least one truly nasty addition to current law: an entirely new crime that makes it a felony to disclose “the existence of an application” for foreign intelligence surveillance or any of the application’s contents. In addition to explicitly adding to the existing penalties in the Espionage Act—itself highly controversial—this new provision seems aimed at discouraging leaks by increasing the potential sentence to eight years in prison. There is no requirement that prosecutors show that the disclosure harmed national security, nor any consideration of the public interest.
Under the present climate, there’s simply no reason to give prosecutors even more tools like this one to punish whistleblowers who are seen as going through improper channels. EFF always aims to tell it like it is. This bill has some real improvements, but it’s nowhere near the surveillance reform we all deserve. On the other hand, the IC and its allies in Congress continue to have significant leverage to push fake reform bills, so the SAFE Act may well be the best we’re going to get. Either way, we’re not giving up the fight.  

Thousands of Young People Told Us Why the Kids Online Safety Act Will Be Harmful to Minors (Fri, 15 Mar 2024)
With KOSA passed, the information i can access as a minor will be limited and censored, under the guise of "protecting me", which is the responsibility of my parents, NOT the government. I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for! - Alan, 15   ___________________ If information is put through a filter, that’s bad. Any and all points of view should be accessible, even if harmful so everyone can get an understanding of all situations. Not to mention, as a young neurodivergent and queer person, I’m sure the information I’d be able to acquire and use to help myself would be severely impacted. I want to be free like anyone else. - Sunny, 15   ___________________ How young people feel about the Kids Online Safety Act (KOSA) matters. It will primarily affect them, and many, many teenagers oppose the bill. Some have been calling and emailing legislators to tell them how they feel. Others have been posting their concerns about the bill on social media. These teenagers have been baring their souls to explain how important social media access is to them, but lawmakers and civil liberties advocates, including us, have mostly been the ones talking about the bill and about what’s best for kids, and often we’re not hearing from minors in these debates at all. We should be — these young voices should be essential when talking about KOSA. So, a few weeks ago, we asked some of the young advocates fighting to stop the Kids Online Safety Act a few questions:   - How has access to social media improved your life? What do you gain from it?  - What would you lose if KOSA passed? How would your life be different if it was already law?  Within a week we received over 3,000 responses. As of today, we have received over 5,000. These answers are critical for legislators to hear. Below, you can read some of these comments, sorted into the following themes (though they often overlap):   KOSA Will Harm Rights That Young People Know They Ought to Have  KOSA Could Impact Young People’s Artistic Education and Opportunities  KOSA Will Hurt Young People’s Ability to Find Community Online  KOSA Could Seriously Hinder People’s Self-Discovery   KOSA Could Stop Young People from Learning True News and Valuable Information  These comments show that thoughtful young people are deeply concerned about the proposed law's fallout, and that many who would be affected think it will harm them, not help them. Over 700 of those who responded reported that they were currently sixteen or under—the age under which KOSA’s liability is applicable. The average age of those who answered the survey was 20 (of those who gave their age—the question was optional, and about 60% of people responded).  In addition to these two questions, we also asked those taking the survey if they were comfortable sharing their email address for any journalist who might want to speak with them; unfortunately much coverage usually only mentions one or two of the young people who would be most affected. So, journalists: We have contact info for over 300 young people who would be happy to speak to you about why social media matters to them, and why they oppose KOSA. Individually, these answers show that social media, despite its current problems, offer an overall positive experience for many, many young people. 
It helps people living in remote areas find connection; it helps those in abusive situations find solace and escape; it offers education in history, art, health, and world events for those who wouldn’t otherwise have it; it helps people learn about themselves and the world around them. (Research also suggests that social media is more helpful than harmful for young people.)  And as a whole, these answers tell a story that is 180° different from that which is regularly told by politicians and the media. In those stories, it is accepted as fact that the majority of young people’s experiences on social media platforms are harmful. But from these responses, it is clear that many, many young people also experience help, education, friendship, and a sense of belonging there—precisely because social media allows them to explore, something KOSA is likely to hinder. These kids are deeply engaged in the world around them through these platforms, and genuinely concerned that a law like KOSA could take that away from them and from other young people.   Here are just a few of the thousands of reasons they’re worried.   Note: We are sharing individuals’ opinions, without editing. We do not necessarily endorse them or their interpretation of KOSA. KOSA Will Harm Rights That Young People Know They Ought to Have  One of the most important things that would be lost is the freedom of speech - a given right that is crucial to a healthy, functioning environment. Not every speech is morally okay, but regulating what speech is deemed "acceptable" constricts people's rights; a clear violation of the First Amendment. Those who need or want to access certain information are not allowed to - not because the information will harm them or others, but for the reason that a certain portion of people disagree with the information. If the country only ran on what select people believed, we would be a bland, monotonous place. This country thrives on diversity, whether it be race, gender, sex, or any other personal belief. If KOSA was passed, I would lose my safe spaces, places where I can go to for mental health, places that make me feel more like a human than just some girl. No more would I be able to fight for ideas and beliefs I hold, nor enjoy my time on the internet either. - Anonymous, 16   ___________________ I, and many of my friends, grew up in an Internet where remaining anonymous was common sense, and where revealing your identity was foolish and dangerous, something only to be done sparingly, with a trusted ally at your side, meeting at a common, crowded public space like a convention or a college cafeteria. This bill spits in the face of these very practical instincts, forces you to dox yourself, and if you don’t want to be outed, you must be forced to withdraw from your communities. From your friends and allies. From the space you have made for yourself, somewhere you can truly be yourself with little judgment, where you can find out who you really are, alongside people who might be wildly different from you in some ways, and exactly like you in others. I am fortunate to have parents who are kind and accepting of who I am. I know many people are nowhere near as lucky as me. - Maeve, 25   ___________________  I couldn't do activism through social media and I couldn't connect with other queer individuals due to censorship and that would lead to loneliness, depression other mental health issues, and even suicide for some individuals such as myself. 
For some of us the internet is the only way to the world outside of our hateful environments, our only hope. Representation matters, and by KOSA passing queer children would see less of age appropriate representation and they would feel more alone. Not to mention that KOSA passing would lead to people being uninformed about things and it would start an era of censorship on the internet and by looking at the past censorship is never good, its a gateway to genocide and a way for the government to control. – Sage, 15    ___________________ Privacy, censorship, and freedom of speech are not just theoretical concepts to young people. Their rights are often already restricted, and they see the internet as a place where they can begin to learn about, understand, and exercise those freedoms. They know why censorship is dangerous; they understand why forcing people to identify themselves online is dangerous; they know the value of free speech and privacy, and they know what they’ve gained from an internet that doesn’t have guardrails put up by various government censors.   TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT KOSA Could Impact Young People’s Artistic Education and Opportunities  I found so many friends and new interests from social media. Inspirations for my art I find online, like others who have an art style I admire, or models who do poses I want to draw. I can connect with my friends, send them funny videos and pictures. I use social media to keep up with my favorite YouTubers, content creators, shows, books. When my dad gets drunk and hard to be around or my parents are arguing, I can go on YouTube or Instagram and watch something funny to laugh instead. It gives me a lot of comfort, being able to distract myself from my sometimes upsetting home life. I get to see what life is like for the billions of other people on this planet, in different cities, states, countries. I get to share my life with my friends too, freely speaking my thoughts, sharing pictures, videos, etc.  I have found my favorite YouTubers from other social media platforms like tiktok, this happened maybe about a year ago, and since then I think this is the happiest I have been in a while. Since joining social media I have become a much more open minded person, it made me interested in what others lives are like. It also brought awareness and educated me about others who are suffering in the world like hunger, poor quality of life, etc. Posting on social media also made me more confident in my art, in the past year my drawing skills have immensely improved and I’m shocked at myself. Because I wanted to make better fan art, inspire others, and make them happy with my art. I have been introduce to many styles of clothing that have helped develop my own fun clothing style. It powers my dreams and makes me want to try hard when I see videos shared by people who have worked hard and made it. - Anonymous, 15    ___________________ As a kid I was able to interact in queer and disabled and fandom spaces, so even as a disabled introverted child who wasn’t popular with my peers I still didn’t feel lonely. The internet is arguably a safer way to interact with other fans of media than going to cons with strangers, as long as internet safety is really taught to kids. I also get inspiration for my art and writing from things I’ve only discovered online, and as an artist I can’t make money without the internet and even minors do commissions. 
The issue isn’t that the internet is unsafe, it’s that internet safety isn’t taught anymore. - Rachel, 19    ___________________ i am an artist, and sharing my things online makes me feel happy and good about myself. i love seeing other people online and knowing that they like what i make. when i make art, im always nervous to show other people. but when i post it online i feel like im a part of something, and that im in a community where i feel that i belong. – Anonymous, 15   ___________________  Social media has saved my life, just like it has for many young people. I have found safe spaces and motivation because of social media, and I have never encountered anything negative or harmful to me. With social media I have been able to share my creativity (writing, art, and music) and thoughts safely without feeling like I'm being held back or oppressed. My creations have been able to inspire and reach so many people, just like how other people's work have reached me. Recently, I have also been able to help the library I volunteer at through the help of social media. What I do in life and all my future plans (career, school, volunteer projects, etc.) surrounds social media, and without it I wouldn't be able to share what I do and learn more to improve my works and life. I wouldn't be able to connect with wonderful artists, musicians, and writers like I do now. I would be lost and feel like I don't have a reason to do what I do. If KOSA is passed, I wouldn't be able to get the help I need in order to survive. I've made so many friends who have been saved because of social media, and if this bill gets passed they will also be affected. Guess what? They wouldn't be able to get the help they need either. If KOSA was already a law when I was just a bit younger, I wouldn't even be alive. I wouldn't have been able to reach help when I needed it. I wouldn't have been able to share my mind with the world. Social media was the reason I was able to receive help when I was undergoing abuse and almost died. If KOSA was already a law, I would've taken my life, or my abuser would have done it before I could. If KOSA becomes a law now, I'm certain that the likeliness of that happening to kids of any age will increase. – Anonymous, 15    ___________________ A huge number of young artists say they use social media to improve their skills, and in many cases, the avenue by which they discovered their interest in a type of art or music. Young people are rightfully worried that the magic moment where you first stumble upon an artist or a style that changes your entire life will be less and less common for future generations if KOSA passes. We agree: KOSA would likely lead platforms to limit that opportunity for young people to experience unexpected things, forcing their online experiences into a much smaller box under the guise of protecting them.   Also, a lot of young people told us they wanted to, or were developing, an online business—often an art business. Under KOSA, young people could have less opportunities in the online communities where artists share their work and build a customer base, and a harder time navigating the various communities where they can share their art.   KOSA Will Hurt Young People’s Ability to Find Community Online  Social media has allowed me to connect with some of my closest friends ever, probably deeper than some people in real life. i get to talk about anything i want unimpeded and people accept me for who i am. 
in my deepest and darkest moments, knowing that i had somewhere to go was truly more relieving than anything else. i've never had the courage to commit suicide, but still, if it weren't for social media, i probably wouldn't be here, mentally & emotionally at least. i'd lose the space that accepts me. i'd lose the only place where i can be me. in life, i put up a mask to appease my parents and in some cases, my friends. with how extreme the u.s. is becoming these days, i could even lose my life. i would live my days in fear. i'm terrified of how fast this country is changing and if this bill passes, saying i would fall into despair would be an understatement. people say to "be yourself", but they don't understand that if i were to be my true self tomorrow, i could be killed. – march, 14   ___________________  Without the internet, and especially the rhythm gaming community which I found through Discord, I would've most likely killed myself at 13. My time on here has not been perfect, as has anyone's but without the internet I wouldn't have been the person I am today. I wouldn't have gotten help recognizing that what my biological parents were doing to me was abuse, the support I've received for my identity (as queer youth) and the way I view things, with ways to help people all around the world and be a more mindful ally, activist, and thinker, and I wouldn't have met my mom. I love my chosen mom. We met at a Dance Dance Revolution tournament in April of last year and have been friends ever since. When I told her that she was the first person I saw as a mother figure in my life back in November, I was bawling my eyes out. I'm her mije, and she's my mom. love her so much that saying that doesn't even begin to express exactly how much I love her.  I love all my chosen family from the rhythm gaming community, my older sisters and siblings, I love them all. I have a few, some I talk with more regularly than others. Even if they and I may not talk as much as we used to, I still love them. They mean so much to me. – X86, 15    ___________________ i spent my time in public school from ages 9-13 getting physically and emotionally abused by special ed aides, i remember a few months after i left public school for good, i saw a post online that made me realize that what i went through wasn’t normal. if it wasn’t for the internet, i wouldn’t have come to terms with my autism, i would have still hated myself due to not knowing that i was genderqueer, my mental health would be significantly worse, and i would probably still be self harming, which is something i stopped doing at 13. besides the trauma and mental health side of things, something important to know is that spaces for teenagers to hang out have been eradicated years ago, minors can’t go to malls unless they’re with their parents, anti loitering laws are everywhere, and schools aren’t exactly the best place for teenagers to hang out, especially considering queer teens who were murdered by bullies (such as brianna ghey or nex benedict), the internet has become the third space that teenagers have flocked to as a result. – Anonymous, 17    ___________________ KOSA is anti-community. People online don’t only connect over shared interests in art and music—they also connect over the difficult parts of their lives. Over and over again, young people told us that one of the most valuable parts of social media was learning that they were not alone in their troubles. 
Finding others in similar circumstances gave them a community, as well as ideas to improve their situations, and even opportunities to escape dangerous situations.   KOSA will make this harder. As platforms limit the types of recommendations and public content they feel safe sharing with young people, those who would otherwise find communities or potential friends will not be as likely to do so. A number of young people explained that they simply would never have been able to overcome some of the worst parts of their lives alone, and they are concerned that KOSA’s passage would stop others from ever finding the help they did.  KOSA Could Seriously Hinder People’s Self-Discovery   I am a transgender person, and when I was a preteen, looking down the barrel of the gun of puberty, I was miserable. I didn't know what was wrong I just knew I'd rather do anything else but go through puberty. The internet taught me what that was. They told me it was okay. There were things like haircuts and binders that I could use now and medical treatment I could use when I grew up to fix things. The internet was there for me too when I was questioning my sexuality and again when my mental health was crashing and even again when I was realizing I'm not neurotypical. The internet is a crucial source of information for preteens and beyond and you cannot take it away. You cannot take away their only realistically reachable source of information for what the close-minded or undereducated adults around them don't know. - Jay, 17     ___________________ Social media has improved my life so much and led to how I met my best friend, I’ve known them for 6+ years now and they mean so much to me. Access to social media really helps me connect with people similar to me and that make me feel like less of an outcast among my peers, being able to communicate with other neurodivergent queer kids who like similar interests to me. Social media makes me feel like I’m actually apart of a community that won’t judge me for who I am. I feel like I can actually be myself and find others like me without being harassed or bullied, I can share my art with others and find people like me in a way I can’t in other spaces. The internet & social media raised me when my parents were busy and unavailable and genuinely shaped the way I am today and the person I’ve become. – Anonymous, 14     ___________________ The censorship likely to come from this bill would mean I would not see others who have similar struggles to me. The vagueness of KOSA allows for state attorney generals to decide what is and is not appropriate for children to see, a power that should never be placed in the hands of one person. If issues like LGBT rights and mental health were censored by KOSA, I would have never realized that I AM NOT ALONE. There are problems with children and the internet but KOSA is not the solution. I urge the senate to rethink this bill, and come up with solutions that actually protect children, not put them in more danger, and make them feel ever more alone. - Rae, 16    ___________________  KOSA would effectively censor anything the government deems "harmful," which could be anything from queerness and fandom spaces to anything else that deviates from "the norm." People would lose support systems, education, and in some cases, any way to find out about who they are. I'll stop beating around the bush, if it wasn't for places online, I would never have discovered my own queerness. 
My parents and the small circle of adults I know would be my only connection to "grown-up" opinions, exposing me to a narrow range of beliefs I would likely be forced to adopt. Any kids in positions like mine would have no place to speak out or ask questions, and anything they bring up would put them at risk. Schools and families can only teach so much, and in this age of information, why can't kids be trusted to learn things on their own? - Anonymous, 15     ___________________ Social media helped me escape a very traumatic childhood and helped me connect with others. quite frankly, it saved me from being brainwashed. – Milo, 16     ___________________ Social media introduced me to lifelong friends and communities of like-minded people; in an abusive home, online social media in the 2010s provided a haven of privacy, safety, and information. I honed my creativity, nurtured my interests and developed my identity through relating and talking to people to whom I would otherwise have been totally isolated from. Also, unrestricted internet access actually taught me how to spot shady websites and inappropriate content FAR more effectively than if censorship had been at play like it is today. A couple of the friends I made online, as young as thirteen, were adults; and being friends with adults who knew I was a child, who practiced safe boundaries with me yet treated me with respect, helped me recognise unhealthy patterns in predatory adults. I have befriended mothers and fathers online through games and forums, and they were instrumental in preventing me being groomed by actual pedophiles. Had it not been for them, I would have wound up terribly abused by an "in real life" adult "friend". Instead, I recognised the differences in how he was treating me (infantilising yet praising) vs how my adult friends had treated me (like a human being), and slowly tapered off the friendship and safely cut contact. As I grew older, I found a wealth of resources on safe sex and sexual health education online. Again, if not for these discoveries, I would most certainly have wound up abused and/or pregnant as a teenager. I was never taught about consent, safe sex, menstruation, cervical health, breast health, my own anatomy, puberty, etc. as a child or teenager. What I found online-- typically on Tumblr and written with an alarming degree of normalcy-- helped me understand my body and my boundaries far more effectively than "the talk" or in-school sex ed ever did. I learned that the things that made me panic were actually normal; the ins and outs of puberty and development, and, crucially, that my comfort mattered most. I was comfortable and unashamed of being a virgin my entire teen years because I knew it was okay that I wasn't ready. When I was ready, at twenty-one, I knew how to communicate with my partner and establish safe boundaries, and knew to check in and talk afterwards to make sure we both felt safe and happy. I knew there was no judgement for crying after sex and that it didn't necessarily mean I wasn't okay. I also knew about physical post-sex care; e.g. going to the bathroom and cleaning oneself safely. AGAIN, I would NOT have known any of this if not for social media. AT ALL. And seeing these topics did NOT turn me into a dreaded teenage whore; if anything, they prevented it by teaching me safety and self-care. I also found help with depression, anxiety, and eating disorders-- learning to define them enabled me to seek help. I would not have had this without online spaces and social media. 
As aforementioned too, learning, sometimes through trial of fire, to safely navigate the web and differentiate between safe and unsafe sites was far more effective without censored content. Censorship only hurts children; it has never, ever helped them. How else was I to know what I was experiencing at home was wrong? To call it "abuse"? I never would have found that out. I also would never have discovered how to establish safe sexual AND social boundaries, or how to stand up for myself, or how to handle harassment, or how to discover my own interests and identity through media. The list goes on and on and on. – June, 21     ___________________ One of the claims that KOSA’s proponents make is that it won’t stop young people from finding the things they already want to search for. But we read dozens and dozens of comments from people who didn’t know something about themselves until they heard others discussing it—a mental health diagnosis, their sexuality, that they were being abused, that they had an eating disorder, and much, much more.   Censorship that stops you from looking through a library is still dangerous even if it doesn’t stop you from checking out the books you already know. It’s still a problem to stop young people in particular from finding new things that they didn’t know they were looking for.    TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT KOSA Could Stop Young People from Getting Accurate News and Valuable Information  Social media taught me to be curious. It taught me caution and trust and faith and that simply being me is enough. It brought me up where my parents failed, it allowed me to look into stories that assured me I am not alone where I am now. I would be fucking dead right now if it weren't for the stories of my fellow transgender folk out there, assuring me that it gets better.  I'm young and I'm not smart but I know without social media, myself and plenty of the people I hold dear in person and online would not be alive. We wouldn't have news of the atrocities happening overseas that the news doesn't report on, we wouldn't have mentors to help teach us where our parents failed. - Anonymous, 16    ___________________  Through social media, I've learned about news and current events that weren't taught at school or home, things like politics or controversial topics that taught me nuance and solidified my concept of ethics. I learned about my identity and found numerous communities filled with people I could socialize with and relate to. I could talk about my interests with people who loved them just as much as I did. I found out about numerous different perspectives and cultures and experienced art and film like I never had before. My empathy and media literacy greatly improved with experience. I was also able to gain skills in gathering information and proper defences against misinformation. More technically, I learned how to organize my computer and work with files, programs, applications, etc; I could find guides on how to pursue my hobbies and improve my skills (I'm a self-taught artist, and I learned almost everything I know from things like YouTube or Tumblr for free). - Anonymous, 15    ___________________  A huge portion of my political identity has been shaped by news and information I could only find on social media because the mainstream news outlets wouldn’t cover it. (Climate Change, International Crisis, Corrupt Systems, etc.) KOSA seems to be intentionally working to stunt all of this. It’s horrifying. 
So much of modern life takes place on the internet, and to strip that away from kids is just another way to prevent them from formulating their own thoughts and ideas that the people in power are afraid of. Deeply sinister. I probably would have never learned about KOSA if it were in place! That’s terrifying! - Sarge, 17    ___________________ I’ve met many of my friends from [social media] and it has improved my mental health by giving me resources. I used to have an eating disorder and didn’t even realize it until I saw others on social media talking about it in a nuanced way and from personal experience. - Anonymous, 15     ___________________ Many young people told us that they’re worried KOSA will result in more biased news online, and a less diverse information ecosystem. This seems inevitable—we’ve written before that almost any content could fit into the categories that politicians believe will cause minors anxiety or depression, and so carrying that content could be legally dangerous for a platform. That could include truthful news about what’s going on in the world, including wars, gun violence, and climate change.  “Preventing and mitigating” depression and anxiety isn’t a requirement imposed on any other information outlet, and it shouldn’t be required of social media platforms. People have a right to access information—both news and opinion—in an open and democratic society, and sometimes that information is depressing or anxiety-inducing. To truly “prevent and mitigate” self-destructive behaviors, we must look beyond the media to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships—not hiding truthful information that is disappointing.   Young People’s Voices Matter  While KOSA’s sponsors intend to help these young people, those who responded to the survey don’t see it that way. You may have noticed that it’s impossible to fit these complex and detailed responses into single categories—many childhood abuse victims found help as well as arts education on social media; many children connected to communities that they otherwise couldn’t have and learned something essential about themselves in doing so. Many understand that KOSA would endanger their privacy, and also know it could harm marginalized kids the most.   In reading thousands of these comments, it becomes clear that social media was not in itself a solution to the issues they experienced. What helped these young people was other people. Social media was where they were able to find and stay connected with those friends, communities, artists, activists, and educators. When you look at it this way, of course KOSA seems absurd: social media has become an essential element of young people’s lives, and they are scared to death that if the law passes, that part of their lives will disappear. Older teens and twenty-somethings, meanwhile, worry that if the law had been passed a decade ago, they never would have become the people they are today. And all of these fears are reasonable.   There were thousands more comments like those above. We hope this helps balance the conversation, because if young people’s voices are suppressed now—and if KOSA becomes law—it will be much more difficult for them to elevate their voices in the future.   TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Analyzing KOSA’s Constitutional Problems In Depth  (Fri, 15 Mar 2024)
Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship  The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.   Below, we walk through some of the most common criticisms we’ve gotten—and those criticisms the bill has received—to help explain our view of its likely impacts.   KOSA’s Effectiveness   First, and most importantly: We have serious and important disagreements with KOSA’s advocates on whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe, and have advocated for, comprehensive privacy protections as a better way to begin to address harms done to young people (and adults) who have been targeted by platforms’ predatory business practices.   EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think that language added to KOSA to address that censorship concern solves the problem, and we do not think that focusing KOSA’s regulation on the design elements of online services addresses the bill’s First Amendment problems, either.  Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of child sex trafficking victims as the way to prevent future harms.
Sadly, even the politicians who initially championed it now agree that this law was not only ineffective at reducing sex trafficking online, but also created additional dangers for those same victims as well as others.   KOSA’s Duty of Care   KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors); patterns of use that indicate or encourage addiction-like behaviors; and physical violence, online bullying, and harassment. Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.   A line of U.S. Supreme Court cases involving efforts to prevent booksellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional.  In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a bookseller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a bookseller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.   In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.   KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies.
KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.   We think that online services will react to KOSA’s new liability in much the same way as the bookstore in Smith and the book distributor in Bantam Books: They will limit minors’ access to or simply remove any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books.  To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their site to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.)  But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.   Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.   EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As noted above, the broad censorship that will result will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. The fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative, too.   Why the Rule of Construction Doesn’t Solve the Censorship Concern  In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:” (1) minors from deliberately or independently searching for content, or (2) the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources."
We understand that some interpret this language as a safeguard for online services that limits their liability if a minor happens across information on topics that KOSA identifies, and consequently, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA.  TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or constitutional sense. As a practical matter, it’s not clear how an online service will be able to rely on the rule of construction’s safeguards given the diverse range of content it likely hosts.   Take, for example, an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views by users, some of which might describe addiction, drug use, and treatment, including negative and positive views on those points. KOSA’s rule of construction might protect the forum from a minor’s initial search for content that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA proscribes, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search for the forum precludes any liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search.  Further, the rule of construction’s protections for the forum, should it provide only resources regarding preventing or mitigating drug and alcohol abuse based on evidence-based information and clinical resources, are unlikely to be helpful. That provision assumes that the forum has the resources to review all existing content on the forum and effectively screen all future content to only permit user-generated content concerning mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes that there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not).  Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much like the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit minors’ ability to access the content, a far easier and safer alternative than making case-by-case expert decisions regarding every piece of content on the forum.  EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore still would have to exhaustively review every book it sold and separate the obscene books from the scientific.
The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.”  The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.”  Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care.  Finally, we do not believe the rule of construction helps the government overcome its burden on strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction here creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral or even mildly positive of those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.   KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns  KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service or platforms’ design features, the bill raises no First Amendment issues. We disagree.   It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content.  Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith regulated the way in which bookstores were designed, and imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we  suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.   
TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply  KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality to comply with KOSA.   EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.   KOSA creates liability for any regulated platform or service that presents certain content to minors that the bill deems harmful to them. To comply with that new liability, those platforms and services’ options are limited. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law. There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing the minors are minors.   We also don’t think KOSA lets platforms claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset.  EFF Remains Concerned About State Attorneys General Enforcing KOSA  Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents this harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, including the ability to scroll through content, notifications of other content or messages, and autoplaying content.   But letting an attorney general enforce KOSA’s design-safeguard requirements could serve as a proxy for targeting services that host content certain officials dislike.
The attorney general would simply target the same content or service it disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information.  KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms  Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”   There are some important exceptions—it doesn’t apply to services that only provide direct or group messages, such as Signal, or to schools, libraries, nonprofits, or to ISPs like Comcast generally. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.   But a wide variety of for-profit niche online services would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.   And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces, or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear.  Not All of KOSA’s Critics Are Right, Either  Just as we don’t agree with many of KOSA’s supporters on its likely outcomes, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill.  Large Social Media May Not Entirely Block Young People, But Smaller Services Might  Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.   They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content.
Meta’s update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this kind of content filtering, if required as a result of KOSA, would be harmful and would violate the First Amendment.  Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They may well decide that blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification and content moderation tools.  KOSA Won’t Necessarily Make Your Real Name Public by Default  One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this is true, but there is some nuance to it. No one should have to hand over their driver's license—or, worse, provide biometric information—just to access lawful speech on websites. But there's nothing in KOSA that would require online platforms to publicly tie your real name to your username.  Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn't technically require age verification, but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.  We Agree With Supporters: Government Should Study Social Media’s Effects on Minors  We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to both identifying the problems and crafting legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards. It creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.
Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end,  we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue.  TAKE ACTION TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

San Diego City Council Breaks TRUST (Fri, 15 Mar 2024)
In a stunning reversal against the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance, the San Diego city council voted earlier this year to cut many of the provisions that sought to ensure public transparency for law enforcement surveillance technologies.  Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations, including several members of the Electronic Frontier Alliance, responding to surprise smart streetlight surveillance that was not put under public or city council review.  The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t run into conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance. This included the exemption of a long list of technologies, including ARJIS databases and record management system data storage. These changes were later approved this past January.  But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance still stands, but it has been partly defanged, with carve-outs for many technologies that the San Diego police will not need to bring before democratically elected lawmakers and the public.  Now, opponents of the TRUST ordinance are emboldened by their recent victory, and are vowing to introduce even more amendments to further erode the gains of this ordinance, so that San Diegans won’t have a chance to know how their local law enforcement surveils them, and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the remaining portions of the TRUST ordinance, and to regain the wins for public oversight that were lost.  As Lilly Irani, from Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:  “City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.”  If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety.

5 Questions to Ask Before Backing the TikTok Ban (Fri, 15 Mar 2024)
With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk.  The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions.  1. Is the TikTok bill about privacy or content? Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users?  From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill  cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public." This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.” 2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws?  It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why the EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all. It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to them upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. 
government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home. 3. If the TikTok bill is about content, how will it avoid violating the First Amendment?  Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged. 4. Is the TikTok bill a ban or something else?  Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.”  Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment.  5. Does the U.S. support the free flow of information as a fundamental democratic principle?  Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic. In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. 
“Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.” Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle. TAKE ACTION TELL CONGRESS: DON'T BAN TIKTOK

Location Data Tracks Abortion Clinic Visits. Here’s What to Know (Fri, 15 Mar 2024)
Our concerns about the selling and misuse of location data for those seeking reproductive and gender healthcare are escalating amid a recent wave of cases and incidents demonstrating that the digital trail we leave is being used by anti-abortion activists. The good news is some states and tech companies are taking steps to better protect location data privacy, including information that endangers people needing or seeking information about reproductive and gender-affirming healthcare. But we know more must be done—by pharmacies, our email providers, and lawmakers—to plug gaping holes in location data protection. Location data is highly sensitive, as it paints a picture of our daily lives—where we go, who we visit, when we seek medical care, or what clinics we visit. That’s what makes it so attractive to data brokers and law enforcement in states outlawing abortion and gender-affirming healthcare and those seeking to exploit such data for ideological or commercial purposes. What we’re seeing is deeply troubling. Sen. Ron Wyden recently disclosed that vendor Near Intelligence allegedly gathered location data of people’s visits to nearly 600 Planned Parenthood locations across 48 states, without consent. It sold that data to an anti-abortion group, which used it in a massive anti-abortion ad campaign. The Wisconsin-based group used the geofenced data to send mobile ads to people who visited the clinics. It’s hardly a leap to imagine that law enforcement and bounty hunters in anti-abortion states would gladly buy the same data to find out who is visiting Planned Parenthood clinics and try to charge and imprison women, their families, doctors, and caregivers. That’s the real danger of an unregulated data broker industry; anyone can buy what’s gathered from warrantless surveillance, for whatever nefarious purpose they choose. For example, police in Idaho, where abortion is illegal, used cell phone data in an investigation against an Idaho woman and her son charged with kidnapping. The data showed that they had taken the son’s minor girlfriend to Oregon, where abortion is legal, to obtain an abortion. The exploitation of location data is not the only problem. Information about prescription medicines we take is not protected against law enforcement requests. The nation’s eight largest pharmacy chains, including CVS, Walgreens, and Rite Aid, have routinely turned over prescription records of thousands of Americans to law enforcement agencies or other government entities secretly and without a warrant, according to a congressional inquiry. Many people may not know that their prescription records can be obtained by law enforcement without too much trouble. There’s not much standing between someone’s self-managed abortion medication and a law enforcement records demand. In April, the U.S. Health and Human Services Department proposed a rule that would prevent healthcare providers and insurers from giving information to state officials trying to prosecute those seeking or providing a legal abortion. A final rule has not yet been published. Exploitation of location and healthcare data to target communities could easily expand to other groups working to protect bodily autonomy, especially those most likely to suffer targeted harassment and bigotry. With states passing and proposing bills restricting gender-affirming care and state law enforcement officials pursuing medical records of transgender youth across state lines, it’s not hard to imagine them buying or using location data to find people to prosecute.
To better protect people against police access to sensitive health information, lawmakers in a few states have taken action. In 2022, California enacted two laws protecting abortion data privacy and preventing California companies from sharing abortion data with out-of-state entities. Then, last September the state enacted a shield law prohibiting California-based companies, including social media and tech companies, from disclosing patients’ private communications regarding healthcare that is legally protected in the state. Massachusetts lawmakers have proposed the Location Shield Act, which would prohibit the sale of cellphone location information to data brokers. The act would make it harder to trace the path of those traveling to Massachusetts for abortion services. Of course, tech companies have a huge role to play in location data privacy. EFF was glad when Google said in 2022 it would delete users’ location history for visits to medical facilities, including abortion clinics and counseling and fertility centers. Google pledged that when the location history setting on a device was turned on, it would delete entries for particularly personal places like reproductive health clinics soon after such a visit. But a study by Accountable Tech testing Google’s pledge found that the company wasn’t living up to its promises and continued to collect and retain location data from individuals visiting abortion clinics. Accountable Tech reran the study in late 2023 and the results were again troubling—Google still retained location search query data for some visits to Planned Parenthood clinics. It appears users will have to manually delete location search history to remove information about the routes they take to visit sensitive locations. It doesn’t happen automatically. Late last year, Google announced plans to move saved Timeline entries in Google Maps to users’ devices. Users who want to keep the entries could choose to back up the data to the cloud, where it would be automatically encrypted and out of reach even to Google. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. But when these features are coming is uncertain—though Google said in December they’re “coming soon.” Google should implement the changes sooner rather than later. In the meantime, those seeking reproductive and gender information and healthcare can find tips on how to protect themselves in our Surveillance Self-Defense guide.

How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can) (Fri, 15 Mar 2024)
Cars collect a lot of our personal data, and car companies disclose a lot of that data to third parties. It’s often unclear what’s being collected, and what's being shared and with whom. A recent New York Times article highlighted how data is shared by G.M. with insurance companies, sometimes without clear knowledge from the driver. If you're curious about what your car knows about you, you might be able to find out. In some cases, you may even be able to opt out of some of that sharing of data. Why Your Car Collects and Shares Data A car (and its app, if you installed one on your phone) can collect all sorts of data in the background, whether or not you realize it. This in turn may be shared for a wide variety of purposes, including advertising and risk-assessment for insurance companies. The list of data collected is long and dependent on the car’s make, model, and trim.  But if you look through any car maker’s privacy policy, you'll see some trends: Diagnostics data, sometimes referred to as “vehicle health data,” may be used internally for quality assurance, research, recall tracking, service issues, and similar unsurprising car-related purposes. This type of data may also be shared with dealers or repair companies for service. Location information may be collected for emergency services, mapping, and to catalog other environmental information about where a car is operated. Some cars may give you access to the vehicle’s location in the app. Some usage data may be shared or used internally for advertising. Your daily driving or car maintenance habits, alongside location data, are a valuable asset to the targeted advertising ecosystem.  All of this data could be shared with law enforcement. Information about your driving habits, sometimes referred to as “driving data” or “driver behavior information,” may be shared with insurance companies and used to alter your premiums. This can range from odometer readings to braking and acceleration statistics and even data about what time of day you drive.  Surprise insurance sharing is the thrust of The New York Times article, and certainly not the only problem with car data. We've written previously about how insurance companies offer discounts for customers who opt into a usage-based insurance program. Every state except California currently allows the use of telematics data for insurance rating, but privacy protections for this data vary widely across states. When you sign up directly through an insurer, these opt-in insurance programs have a pretty clear tradeoff and sign-up process, and they'll likely send you a physical device that you plug into your car's OBD port that then collects and transmits data back to the insurer. But some cars have their own internal systems for sharing information with insurance companies that can piggyback off an app you may have installed, or the car’s own internet connection. Many of these programs operate behind dense legalese. You may have accidentally “agreed” to such sharing without realizing it, while buying a new car—likely in a state of exhaustion and excitement after finally completing a gauntlet of finance and legal forms. This gets more confusing: car makers use different terms for their insurance sharing programs. Some, like Toyota's “Insure Connect,” are pretty obviously named. But others, like Honda, tuck information about sharing with a data broker (that then shares with insurance companies) inside a privacy policy after you enable its “Driver Feedback” feature.
Others might include the insurance sharing opt-in alongside broader services you might associate more with safety or theft, like G.M.’s OnStar, Subaru’s Starlink, and Volkswagen’s Car-Net. The amount of data shared differs by company, too. Some car makers might share only small amounts of data, like an odometer reading, while others might share specific details about driving habits. That's just the insurance data sharing. There's little doubt that many cars sell other data for behavioral advertising, and like the rest of that industry, it's nearly impossible to track exactly where your data goes and how it's used. See What Data Your Car Has (and Stop the Sharing) This is a general guide to see what your car collects and who it shares it with. It does not include information about specific scenarios—like intimate partner violence—that may raise distinctive driver privacy issues. See How Your Car Handles (Data): Start by seeing what your car is equipped to collect using Privacy4Cars’ Vehicle Privacy Report. Once you enter your car’s VIN, the site provides a rough idea of what sorts of data your car collects. It's also worth reading about your car manufacturer’s more general practices on Mozilla's Privacy Not Included site. Check the Privacy Options In Your Car’s Apps and Infotainment System: If you use an app for your car, head into the app’s settings, and look for any sort of data sharing options. Look for settings like “Data Privacy” or “Data Usage.” When possible, opt out of sharing any data with third parties, or for behavioral advertising. As annoying as it may be, it’s important to read carefully here so you don’t accidentally disable something you want, like a car’s SOS feature. Be mindful that, at least according to Mozilla’s report on Tesla, opting out of certain data sharing might someday make the car undriveable. Now’s also a good time to disable ad tracking on your phone. When it comes to sharing with insurance companies, you’re looking for an option that may be something obvious, like Toyota’s “Insure Connect,” or less obvious, like Kia’s “Driving Score.” If your car’s app has any sort of driver scoring or feedback option—some other names include GM’s “Smart Driver,” Honda’s “Driver Feedback,” or Mitsubishi’s “Driving Score”—there’s a chance it’s sharing that data with an insurance company. Check for these options in both the app and the car’s infotainment system. If you did accidentally sign up for sharing data with insurance companies, you may want to call your insurance company to see how doing so may affect your premiums. Depending on your driving habits, your premiums might go up or down, and in either case you don’t want a surprise bill. File a Privacy Request with the Car Maker: Next, file a privacy request with the car manufacturer so you can see exactly what data the company has collected about you. Some car makers will provide this to anyone who asks. Others might only respond to requests from residents of states with a consumer data privacy law that requires their response. The International Association of Privacy Professionals has published this list of states with such laws. In these states, you have a “right to know” or “right to access” your data, which requires the company to send you a copy of what personal information it collected about you. Some of these states also guarantee “data portability,” meaning the right to access your data in a machine-readable format. File one of these requests, and you should receive a copy of your data.
In some states, you can also file a request for the car maker to not sell or share your information, or to delete it. While the car maker might not be legally required to respond to your request if you're not from a state with these privacy rights, it doesn’t hurt to ask anyway. Every company tends to word these requests a little differently, but you’re looking for options to get a copy of your data and to ask them to stop sharing it. This typically requires filling out a separate request form for each type of request. Here are the privacy request pages for the major car brands: BMW (BMW, Mini, Rolls-Royce); Ford (Ford, Lincoln); GM (Cadillac, GMC, Chevrolet, Buick); Honda (Honda, Acura); Hyundai; Jaguar (Jaguar, Land Rover); Kia; Mazda; Mercedes-Benz; Mitsubishi; Nissan; Rivian; Stellantis (Fiat, Chrysler, Jeep, Dodge); Subaru; Tesla; Toyota (Toyota, Lexus); Volkswagen (VW, Audi); Volvo. Sometimes, you will need to confirm the request in an email, so be sure to keep an eye on your inbox. Check for Data On Popular Data Brokers Known to Share with Insurers: Finally, request your data from data brokers known to hand car data to insurers. For example, do so with the two companies mentioned in The New York Times’ article: LexisNexis and Verisk. Now, you wait. In most states, within 45 to 90 days you should receive an email from the car maker, and another from the data brokers, which will often include a link to your data. You will typically get a CSV file, though it may also be a PDF, XLS, or even a folder with a whole webpage and an HTML file. If you don't have any sort of spreadsheet software on your computer, you might struggle to open it up, but most of the files you get can be opened in free programs, like Google Sheets or LibreOffice. Without a national law that puts privacy first, there is little that most people can do to stop this sort of data sharing. Moreover, the steps above clearly require far too much effort for most people to take. That’s why we need much more than these consumer rights to know, to delete, and to opt out of disclosure: we also need laws that automatically require corporations to minimize the data they process about us, and to get our opt-in consent before processing our data. As to car insurers, we've outlined exactly what sort of guardrails we'd like to see here.  As The New York Times' reporting revealed, many people were surprised to learn how their data is collected, disclosed, and used, even if there was an opt-in consent screen. This is a clear indication that car makers need to do better.
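If you'd rather skim a CSV export with a script before opening it in a spreadsheet, a short Python sketch like the one below can give you a quick overview. This is a hypothetical example: the file name and the "category" column are assumptions for illustration, since every car maker and data broker formats its export differently.

import csv
from collections import Counter

# Hypothetical file name; use whatever the car maker or broker actually sent you.
EXPORT_FILE = "vehicle_data_export.csv"

with open(EXPORT_FILE, newline="", encoding="utf-8-sig") as f:
    reader = csv.DictReader(f)
    rows = list(reader)
    columns = reader.fieldnames or []

print(f"{len(rows)} records, columns: {columns}")

# If the export labels each record with a type of data (location, diagnostics,
# driving behavior, and so on), count how often each label appears.
if "category" in columns:
    counts = Counter(row["category"] for row in rows)
    for category, count in counts.most_common():
        print(f"{category}: {count}")

Running it prints the column names, the total number of records, and a rough breakdown of what kinds of data the company kept, which is a useful starting point before you decide what to ask the company to delete.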

Making the Law Accessible in Europe and the USA (Thu, 14 Mar 2024)
Special thanks to EFF legal intern Alissa Johnson, who was the lead author of this post. Earlier this month, the European Union Court of Justice ruled that harmonized standards are a part of EU law, and thus must be accessible to EU citizens and residents free of charge. While it might seem like common sense that the laws that govern us should be freely accessible, this question has been in dispute in the EU for the past five years, and in the U.S. for over a decade. At the center of this debate are technical standards, developed by private organizations and later incorporated into law. Before they were challenged in court, standards-development organizations were able to limit access to these incorporated standards through assertions of copyright. Regulated parties or concerned citizens checking compliance with technical or safety standards had to do so by purchasing these standards, often at significant expense, from private organizations. While free alternatives, like proprietary online “reading rooms,” were sometimes available, these options had their own significant downsides, including limited functionality and privacy concerns. In 2018, two nonprofits, Public.Resource.Org and Right to Know, made a request to the European Commission for access to four harmonized standards—that is, standards that apply across the European Union—pertaining to the safety of toys. The Commission refused to grant them access on the grounds that the standards were copyrighted.    The nonprofits then brought an action before the General Court of the European Union seeking annulment of the Commission’s decision. They made two main arguments. First, that copyright couldn’t be applicable to the harmonized standards, and that open access to the standards would not harm the commercial interests of the European Committee for Standardization or other standard setting bodies. Second, they argued that the public interest in open access to the law should override whatever copyright interests might exist. The General Court rejected both arguments, finding that the threshold for originality that makes a work eligible for copyright protection had been met, the sale of standards was a vital part of standards bodies’ business model, and the public’s interest in ensuring the proper functioning of the European standardization system outweighed their interest in free access to harmonized standards. Last week, the EU Court of Justice overturned the General Court decision, holding that EU citizens and residents have an overriding interest in free access to the laws that govern them. Article 15(3) of the Treaty on the Functioning of the EU and Article 42 of the Charter of Fundamental Rights of the EU guarantee a right of access to documents of Union institutions, bodies, offices, and agencies. These bodies can refuse access to a document where its disclosure would undermine the protection of commercial interests, including intellectual property, unless there is an overriding public interest in disclosure. Under the ECJ’s ruling, standards written by private companies, but incorporated into legislation, now form part of EU law. People need access to these standards to determine their own compliance. While compliance with harmonized standards is not generally mandatory, it is in the case of the toy safety standards in question here. 
Even when compliance is not mandatory, products that meet technical standards benefit from a “presumption of conformity,” and failure to conform can impose significant administrative difficulties and additional costs. Given that harmonized standards are a part of EU law, citizens and residents of member states have an interest in free access that overrides potential copyright concerns. Free access is necessary for economic actors “to ascertain unequivocally what their rights and obligations are,” and to allow concerned citizens to examine compliance. As the U.S. Supreme Court noted in 2020, “[e]very citizen is presumed to know the law, and it needs no argument to show that all should have free access” to it. The Court of Justice’s decision has far-reaching effects beyond the four toy safety standards under dispute. Its reasoning classifying these standards as EU law applies more broadly to standards incorporated into law. We’re pleased that under this precedent, EU standards-development organizations will be required to disclose standards on request without locking these important parts of the law behind a paywall.

Why U.S. House Members Opposed the TikTok Ban Bill (Thu, 14 Mar 2024)
What do House Democrats like Alexandria Ocasio-Cortez and Barbara Lee have in common with House Republicans like Thomas Massie and Andy Biggs? Not a lot. But they do know an unconstitutional bill when they see one. These and others on both sides of the aisle were among the 65 House Members who voted "no" yesterday on the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521, which would effectively ban TikTok. The bill now goes to the Senate, where we hope cooler heads will prevail in demanding comprehensive data privacy legislation instead of this attack on Americans' First Amendment rights. We're saying plenty about this misguided, unfounded bill, and we want you to speak out about it too, but we thought you should see what some of the House Members who opposed it said, in their own words.  I am voting NO on the TikTok ban. Rather than target one company in a rushed and secretive process, Congress should pass comprehensive data privacy protections and do a better job of informing the public of the threats these companies may pose to national security. — Rep. Barbara Lee (@RepBarbaraLee) March 13, 2024  ___________________  Today, I voted against the so-called “TikTok Bill.” Here’s why: pic.twitter.com/Kbyh6hEhhj — Rep Andy Biggs (@RepAndyBiggsAZ) March 13, 2024  ___________________  Today, I voted against H.R. 7521. My full statement: pic.twitter.com/9QCFQ2yj5Q — Rep. Nadler (@RepJerryNadler) March 13, 2024  ___________________  Today I claimed 20 minutes in opposition to the TikTok ban bill, and yielded time to several likeminded colleagues. This bill gives the President far too much authority to determine what Americans can see and do on the internet. This is my closing statement, before I voted No. pic.twitter.com/xMxp9bU18t — Thomas Massie (@RepThomasMassie) March 13, 2024  ___________________  Why I voted no on the bill to potentially ban tik tok: pic.twitter.com/OGkfdxY8CR — Jim Himes (@jahimes) March 13, 2024  ___________________  I don’t use TikTok. I find it unwise to do so. But after careful review, I’m a no on this legislation. This bill infringes on the First Amendment and grants undue power to the administrative state. pic.twitter.com/oSpmYhCrV8 — Rep. Dan Bishop (@RepDanBishop) March 13, 2024  ___________________  I’m voting NO on the TikTok forced sale bill. This bill was incredibly rushed, from committee to vote in 4 days, with little explanation. There are serious antitrust and privacy questions here, and any national security concerns should be laid out to the public prior to a vote. — Alexandria Ocasio-Cortez (@AOC) March 13, 2024  ___________________  We should defend the free & open debate that our First Amendment protects. We should not take that power AWAY from the people & give it to the government. The answer to authoritarianism is NOT more authoritarianism. The answer to CCP-style propaganda is NOT CCP-style oppression. pic.twitter.com/z9HWgUSMpw — Tom McClintock (@RepMcClintock) March 13, 2024  ___________________  I'm voting no on the TikTok bill. Here's why: 1) It was rushed. 2) There's major free speech issues. 3) It would hurt small businesses. 4) America should be doing way more to protect data privacy & combatting misinformation online. Singling out one app isn't the answer. — Rep. Jim McGovern (@RepMcGovern) March 13, 2024  ___________________  Solve the correct problem. Privacy. Surveillance. Content moderation.
Who owns #TikTok? 60% investors - including Americans 20% +7,000 employees - including Americans 20% founders CEO & HQ Singapore Data in Texas held by Oracle What changes with ownership? I’ll be voting NO. pic.twitter.com/MrfROe02IS — Warren Davidson (@WarrenDavidson) March 13, 2024  ___________________  I voted no on the bill to force the sale of TikTok. Unlike our adversaries, we believe in freedom of speech and don’t ban social media platforms. Instead of this rushed bill, we need comprehensive data security legislation that protects all Americans. — Val Hoyle (@RepValHoyle) March 13, 2024  ___________________  Please tell the Senate to reject this bill and instead give Americans the comprehensive data privacy protections we so desperately need. TAKE ACTION TELL CONGRESS: DON'T BAN TIKTOK

SXSW Tried to Silence Critics with Bogus Trademark and Copyright Claims. EFF Fought Back. (Thu, 14 Mar 2024)
Special thanks to EFF legal intern Jack Beck, who was the lead author of this post. Amid heavy criticism for its ties to weapons manufacturers supplying Israel, South by Southwest—the organizer of an annual conference and music festival in Austin—has been on the defensive. One tool in their arsenal: bogus trademark and copyright claims against local advocacy group Austin for Palestine Coalition. The Austin for Palestine Coalition has been a major source of momentum behind recent anti-SXSW protests. Their efforts have included organizing rallies outside festival stages and hosting an alternative music festival in solidarity with Palestine. They have also created social media posts explaining the controversy, criticizing SXSW, and calling on readers to email SXSW with demands for action. The group’s posts include graphics that modify SXSW’s arrow logo to add blood-stained fighter jets. Other images incorporate patterns evoking SXSW marketing materials overlaid with imagery like a bomb or a bleeding dove. One of Austin for Palestine's graphics pairs a parody of the SXSW arrow logo with a bleeding dove in front of a geometric background and the text "If SXSW wishes to retain its credibility, it must change course by disavowing the normalization of militarization within the tech and entertainment industries." Days after the posts went up, SXSW sent a cease-and-desist letter to Austin for Palestine, accusing them of trademark and copyright infringement and demanding they take down the posts. Austin for Palestine later received an email from Instagram indicating that SXSW had reported the post for violating their trademark rights. We responded to SXSW on Austin for Palestine’s behalf, explaining that their claims are completely unsupported by the law and demanding they retract them. The law is clear on this point. The First Amendment protects your right to make a political statement using trademark parodies, whether or not the trademark owner likes it. That’s why trademark law applies a different standard (the “Rogers test”) to infringement claims involving expressive works. The Rogers test is a crucial defense against takedowns like these, and it clearly applies here. Even without Rogers’ extra protections, SXSW’s trademark claim would be bogus: Trademark law is about preventing consumer confusion, and no reasonable consumer would see Austin for Palestine’s posts and infer they were created or endorsed by SXSW. SXSW’s copyright claims are just as groundless. Basic symbols like their arrow logo are not copyrightable. Moreover, even if SXSW meant to challenge Austin for Palestine’s mimicking of their promotional material—and it’s questionable whether that is copyrightable as well—the posts are a clear example of non-infringing fair use. In a fair use analysis, courts weigh four factors, and each of those factors here either favors Austin for Palestine or is at worst neutral. Most importantly, it’s clear that the critical message conveyed by Austin for Palestine’s use is entirely different from the original purpose of these marketing materials, and the only injury to SXSW is reputational—which is not a cognizable copyright injury. SXSW has yet to respond to our letter. EFF has defended against bogus copyright and trademark claims in the past, and SXSW’s attempted takedown feels especially egregious considering the nature of Austin for Palestine’s advocacy.
Austin for Palestine used SXSW’s iconography to make a political point about the festival itself, and neither trademark nor copyright is a free pass to shut down criticism. As an organization that “dedicates itself to helping creative people achieve their goals,” SXSW should know better.

Protect Yourself from Election Misinformation (Wed, 13 Mar 2024)
Welcome to your U.S. presidential election year, when all kinds of bad actors will flood the internet with election-related disinformation and misinformation aimed at swaying or suppressing your vote in November.  So… what’re you going to do about it?  As EFF’s Corynne McSherry wrote in 2020, online election disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been linked to ethnic violence in Myanmar and India and to Kenya’s 2017 elections, among other events. Still, election misinformation and disinformation continue to proliferate online and off.  That being said, regulation is not typically an effective or human rights-respecting way to address election misinformation. Even well-meaning efforts to control election misinformation through regulation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression. Indeed, any content regulation must be scrutinized to avoid inadvertently affecting meaningful expression: Is the approach narrowly tailored or a categorical ban? Does it empower users? Is it transparent? Is it consistent with human rights principles?  While platforms and regulators struggle to get it right, internet users must be vigilant about checking the election information they receive for accuracy. There is help. Nonprofit journalism organization ProPublica published a handy guide about how to tell if what you’re reading is accurate or “fake news.” The International Federation of Library Associations and Institutions’ infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends. To make sure you’re getting good information about how your election is being conducted, check in with trusted sources including your state’s Secretary of State, Common Cause, and other nonpartisan voter protection groups, or call or text 866-OUR-VOTE (866-687-8683) to speak with a trained election protection volunteer.  And if you see something, say something: You can report election disinformation at https://reportdisinfo.org/, a project of the Common Cause Education Fund.  EFF also offers some election-year food for thought:  On EFF’s “How to Fix the Internet” podcast, Pamela Smith—president and CEO of Verified Voting—in 2022 talked with EFF’s Cindy Cohn and Jason Kelley about finding reliable information on how your elections are conducted, as part of ensuring ballot accessibility and election transparency. Also on “How to Fix the Internet,” Alice Marwick—cofounder and principal researcher at the University of North Carolina, Chapel Hill’s Center for Information, Technology and Public Life—in 2023 talked about finding ways to identify and leverage people’s commonalities to stem the flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out. She discussed why seemingly ludicrous conspiracy theories get so many views and followers; how disinformation is tied to personal identity and feelings of marginalization and disenfranchisement; and when fact-checking does and doesn’t work.
EFF’s Cory Doctorow wrote in 2020 about how big tech monopolies distort our public discourse: “By gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.”  An effective democracy requires an informed public, and participating in a democracy is a responsibility that requires work. Online platforms have a long way to go in providing the tools users need to discern legitimate sources from fake news. In the meantime, it’s on each of us. Don’t let anyone lie, cheat, or scare you away from making the most informed decision for your community at the ballot box.

Congress Should Give Up on Unconstitutional TikTok Bans (Wed, 13 Mar 2024)
Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521 — which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week. A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill.  TAKE ACTION TELL CONGRESS: DON'T BAN TIKTOK As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would either be a nationwide ban on TikTok, or a forced sale of the application to a different company. Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target.  The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now.  The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.  Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging them to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication.
The US has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms. And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana.  Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property.  TAKE ACTION TELL CONGRESS: DON'T BAN TIKTOK Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

Congress Must Stop Pushing Bills That Will Benefit Patent Trolls (Tue, 12 Mar 2024)
The U.S. Senate is moving forward with two bills that would enrich patent trolls, patent system insiders, and a few large companies that rely on flimsy patents, at the expense of everyone else.  One bill, the Patent Eligibility Restoration Act (PERA), would bring back some of the worst software patents we’ve seen, and even re-introduce types of patents on human genes that were banned years ago. Meanwhile, a similar group of senators is trying to push forward the PREVAIL Act (S. 2220), which would shut out most of the public from even petitioning the government to reconsider wrongly granted patents.  Take Action Tell Congress: No New Bills For Patent Trolls Patent trolls are companies that don’t focus on making products or selling services. Instead, they collect patents, then use them to threaten or sue other companies and individuals. They’re not a niche problem; patent trolls filed the majority of patent lawsuits last year and for all the years in which we have good data. In the tech sector, they file more than 80% of the lawsuits. These do-nothing companies continue to be vigorous users of the patent system, and they’ll be the big winners under the two bills the U.S. Senate is considering pushing forward.  Don’t Bring Back “Do It On A Computer” Patents  The Patent Eligibility Restoration Act, or PERA, would overturn key legal precedents that we all rely on to kick the worst-of-the-worst patents out of the system. PERA would throw out a landmark Supreme Court ruling called the Alice v. CLS Bank case, which made it clear that patents can’t just claim basic business or cultural processes by adding generic computer language.  The Alice rules are what—finally—allowed courts to throw out the most ridiculous “do it on a computer” software patents at an early stage. Under the Alice test, courts threw out patents on “matchmaking,” online picture menus, scavenger hunts, and online photo contests.  The rules under Alice are clear, fair, and they work. They aren’t perfect, and they haven’t ended patent trolling, because there are so many patent owners willing to ask for nuisance-value settlements that are far below the cost of legal defense. But Alice has done a good job of saving everyday internet users from some of the worst patent claims.  PERA would allow patents like the outrageous one brought forward in the Alice v. CLS Bank case, which claimed the idea of having a third party clear financial transactions—but on a computer. A patent on ordering restaurant food through a mobile phone, which was used to sue more than 100 restaurants, hotels, and fast-food chains before it was finally thrown out under the Alice rules, could survive if PERA becomes law.  Don’t Bring Back Patents On Human Genes  PERA goes further than software. It would also overturn a Supreme Court rule that prevents patents from being granted on naturally occurring human genes. For almost 30 years, some biotech and pharmaceutical companies used a cynical argument to patent genes and monopolize diagnostic tests that analyzed them. That let the patent owners run up the costs on tests like those for the BRCA genes, which are predictive of ovarian and breast cancers. When the Supreme Court disallowed patents on human genes found in nature, the prices of those tests plummeted.  Patenting naturally occurring human genes is a horrific practice and the Supreme Court was right to ban it. The fact that PERA sponsors want to bring back these patents is unconscionable.
Allowing extensive patenting of genetic information will also harm future health innovations, by blocking competition from those who may offer more affordable tests and treatments. It could affect our response to future pandemics. Imagine if the first lab to sequence the COVID-19 genome had filed for patent protection and gone on to threaten other labs seeking to create tests with patent infringement. As an ACLU attorney who litigated against the BRCA gene patents has pointed out, this scenario is not fantastical if a bill like PERA were to advance.  Take Action Tell Congress To Reject PERA and PREVAIL Don’t Shut Down The Public’s Right To Challenge Patents The PREVAIL Act would bar most people from petitioning the U.S. Patent and Trademark Office (USPTO) to revoke patents that never should have been granted in the first place.  The USPTO issues hundreds of thousands of patents every year, with fewer than 20 hours, on average, devoted to examining each patent. Mistakes happen.  That’s why Congress created a process for the public to ask the USPTO to double-check certain patents, to make sure they were not wrongly granted. This process, called inter partes review or IPR, is still expensive and difficult, but faster and cheaper than federal courts, where litigating a patent through a jury trial can cost millions of dollars. IPR has allowed the cancellation of thousands of patent claims that never should have been issued in the first place.  The PREVAIL Act would limit access to the IPR process to only people and companies that have been directly threatened or sued over a patent. No one else would have standing to even file a petition. That means that EFF, other non-profits, and membership-based patent defense companies won’t be able to access the IPR process to protect the public.  EFF used the IPR process back in 2013, when thousands of our supporters chipped in to raise more than $80,000 to fight against a patent that claimed to cover all podcasts. We won’t be able to do that if PREVAIL passes.  And EFF isn’t the only non-profit to use IPRs to protect users and developers. The Linux Foundation, for instance, funds an “open source zone” that uses IPR to knock out patents that may be used to sue open source projects. Dozens of lawsuits are filed each year against open source projects, the majority of them brought by patent trolls.  IPR is already too expensive and limited; Congress should be eliminating barriers to challenging bad patents, not raising more. Congress Should Work For the Public, Not For Patent Trolls The Senators pushing this agenda have chosen willful ignorance of the patent troll problem. The facts remain clear: the majority of patent lawsuits are brought by patent trolls. In the tech sector, it’s more than 80%. These numbers may even understate the problem, since threat letters from patent trolls don’t become visible in the public record.  These patent lawsuits don’t have much to do with what most people think of when they think about “inventors” or inventions. They’re brought by companies that have no business beyond making patent threats.  The Alice rules and IPR system, along with other important reforms, have weakened the power of these patent trolls. Patent trolls that used to receive regular multi-million dollar paydays have seen their incomes shrink (but not disappear). Some trolls, like Shipping and Transit LLC, finally wound up operations after being hit with sanctions (more than 500 lawsuits later).
Others, like IP Edge, are now being investigated by a federal judge after claiming that their true “owners” included a Texas food truck owner who turned out to be, essentially, a decoy.

There’s big money behind bringing back the patent troll business, along with a few huge tech and pharma companies that would rather rely on unjustified monopolies than compete fairly. Two former Federal Circuit judges, two former Directors of the U.S. Patent and Trademark Office, and many other well-placed patent insiders are all telling Congress that Alice should be overturned and patent trolls should be allowed to run amok. We can’t let that happen.

Take Action
Tell Congress: Don't Work For Patent Trolls

Reject Nevada’s Attack on Encrypted Messaging, EFF Tells Court (Tue, 12 Mar 2024)
Nevada Makes Backward Argument That Insecure Communication Makes Children Safer

LAS VEGAS — The Electronic Frontier Foundation (EFF) and a coalition of partners urged a court to protect default encrypted messaging and children’s privacy and security in a brief filed today. The brief by the American Civil Liberties Union (ACLU), the ACLU of Nevada, EFF, Stanford Internet Observatory Research Scholar Riana Pfefferkorn, and six other organizations asks the court to reject a request by Nevada’s attorney general to stop Meta from offering end-to-end encryption by default to Facebook Messenger users under 18 in the state. The brief was also signed by Access Now, Center for Democracy & Technology (CDT), Fight for the Future, Internet Society, Mozilla, and Signal Messenger LLC.

Communications are safer when third parties can’t listen in on them. That’s why EFF and others who care about privacy pushed Meta for years to make end-to-end encryption the default option in Messenger. Meta finally made the change, but Nevada wants to turn back the clock. As the brief notes, end-to-end encryption “means that even if someone intercepts the messages—whether they are a criminal, a domestic abuser, a foreign despot, or law enforcement—they will not be able to decipher or access the message.” The state of Nevada, however, bizarrely argues that young people would be better off without this protection.

“Encryption is the best tool we have for safeguarding our privacy and security online — and privacy and security are especially important for young people,” said EFF Surveillance Litigation Director Andrew Crocker. “Nevada’s argument that children need to be ‘protected’ from securely communicating isn’t just baffling; it’s dangerous.”

As explained in the friend-of-the-court brief filed by EFF and others today, encryption is one of the best ways to reclaim our privacy and security in a digital world full of cyberattacks and security breaches. It is increasingly being deployed across the internet as a way to protect users and data. For children and their families especially, encrypted communication is one of the strongest safeguards they have against malicious misuse of their private messages — a safeguard Nevada seeks to deny them.

“The European Court of Human Rights recently rejected a Russian law that would have imposed similar requirements on services that offer end-to-end message encryption – finding that it violated human rights and EU law to deny people the security and privacy that encryption provides,” said EFF Executive Director Cindy Cohn. “Nevada’s attempt should be similarly rejected.”

In its motion to the court, Nevada argues that it is necessary to block end-to-end encryption on Facebook Messenger because it can impede some criminal investigations involving children. This ignores the fact that law enforcement can and does conduct investigations involving encrypted messages, which can be reported by users and accessed on either the sender’s or the recipient’s device. It also ignores law enforcement’s use of the tremendous amount of additional information about users that Meta routinely collects.
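The sketch below (in Python, using the PyNaCl bindings for libsodium) is only meant to illustrate the property the brief describes; the names are hypothetical, and it says nothing about how Messenger itself is implemented. The point is that a message intercepted in transit cannot be read without a private key that lives only on one of the two devices, yet either endpoint can still read it, which is also why users remain able to report abusive messages.

# Hypothetical illustration of the end-to-end property, not Messenger's actual design.
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

# Each endpoint generates its own key pair; private keys never leave the devices.
parent_key = PrivateKey.generate()
child_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
plaintext = b"practice pickup is at 6pm"
ciphertext = Box(parent_key, child_key.public_key).encrypt(plaintext)

# Anyone in the middle -- a server, an eavesdropper, or a subpoena recipient --
# sees only the ciphertext. Without one of the two private keys, decryption fails.
eavesdropper_key = PrivateKey.generate()
try:
    Box(eavesdropper_key, parent_key.public_key).decrypt(ciphertext)
except CryptoError:
    print("interceptor cannot read the message")

# The recipient's device, holding its own private key, decrypts normally.
print(Box(child_key, parent_key.public_key).decrypt(ciphertext).decode())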
The brief notes that co-amicus Pfefferkorn recently authored a study confirming that Nevada does not, in fact, need to block encryption to conduct its investigations. The study found that “content-oblivious” investigation methods are “considered more useful than monitoring the contents of users’ communications when it comes to detecting nearly every kind of online abuse.”

“The court should reject Nevada’s motion,” said EFF’s Crocker. “Making children more vulnerable just to make law enforcement investigators’ jobs slightly easier is an unnecessary and dangerous trade-off.”

For the brief: https://www.eff.org/document/nevada-v-meta-amicus-brief

Contact:
Andrew Crocker
Surveillance Litigation Director
andrew@eff.org