Infrastructure that works well rarely stands out. Cloudflare, which operates a content delivery network that safeguards millions of websites, is a notable exception. Last year, Cloudflare came under intense pressure to stop providing its services to 8chan, the online message board popular among white supremacists, after the gunmen in three separate shootings posted manifestos on the site prior to their attacks. 8chan had relied on the company’s content delivery network to keep its message board online and accessible. After initially maintaining that it had no legal obligation to act, the company relented and denied 8chan the use of its services.
Cloudflare’s decision highlights a fundamental question about internet infrastructure companies: What is the political process behind their content moderation decisions?
The services that companies like Cloudflare provide are twofold. First, a content delivery network provides faster load times. Because of the sheer size of the globe, as well as the physical limits of wires and fiber-optic cables, content housed on a server far from a requesting user will usually take longer to load. Content delivery networks, or CDNs, solve this problem by storing cached copies of a site’s content in data centers around the world, as close to the requesting user as possible. Without this service, streaming music or video would slow down considerably. Yet CDNs do more than speed up load times; they also provide a distinct form of security. One way to take down a website is to overload it with requests until it must deny service altogether, in what is known as a distributed denial-of-service (DDoS) attack. DDoS attacks are far less effective against websites that rely on companies like Cloudflare, because requests are directed to the CDN’s edge servers rather than to the website’s origin server. As the biggest of many such infrastructural service providers, Cloudflare keeps clients’ websites afloat by making sure that they can always meet users’ demands for the content they provide.
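To make the mechanics concrete, here is a minimal sketch of edge caching in Python. Everything in it (the edge locations, the routing table, the page contents) is hypothetical, and real CDNs use anycast routing and far more sophisticated cache logic; the point is simply that cached requests are answered at the edge, so only a small fraction of traffic, including attack traffic, ever reaches the origin server.

```python
# A toy sketch of CDN edge caching. All names and data are hypothetical.

# Cached copies of a site's pages, keyed by edge data center.
EDGE_CACHES = {
    "frankfurt": {"/index.html": "<html>...</html>"},
    "singapore": {"/index.html": "<html>...</html>"},
    "virginia": {},  # cold cache
}

def nearest_edge(user_region: str) -> str:
    """Pretend geo-routing: pick the edge data center closest to the user."""
    routing = {"eu": "frankfurt", "apac": "singapore", "us": "virginia"}
    return routing.get(user_region, "virginia")

def fetch_from_origin(path: str) -> str:
    """Reached only on a cache miss, so the origin server sees a tiny
    fraction of total traffic; a flood of requests (a DDoS) is largely
    absorbed at the edges instead of hitting the origin."""
    return f"<html>origin copy of {path}</html>"

def serve(user_region: str, path: str) -> str:
    edge = nearest_edge(user_region)
    cache = EDGE_CACHES[edge]
    if path not in cache:
        cache[path] = fetch_from_origin(path)  # populate the edge cache
    return cache[path]

print(serve("eu", "/index.html"))  # served from Frankfurt's cache
print(serve("us", "/index.html"))  # cache miss: one origin fetch, then cached
```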
Cloudflare and other CDN providers usually offer their services even when the content hosted and streamed on their clients’ websites is objectionable. Until recently, Cloudflare in particular maintained that content should never be regulated at the level of infrastructural delivery, clinging to a vision of infrastructure untainted by politics and insisting that it should not be in the business of content moderation at all. But the question of whether infrastructure companies should make decisions about content, often at the heart of the debates over hate speech and its continued online presence, is a distraction from the reality that they already do, just not in ways that most users of those infrastructures can see. Content moderation does not happen only at the moment of termination: It happens every day a website is kept up and available by the infrastructure below it.
The modern world runs on “big data,” the massive data sets used by governments, firms, and academic researchers to conduct analyses, unearth patterns, and drive decision-making. When it comes to data analysis, bigger can be better: The more high-quality data is incorporated, the more robust the analysis will be. Large-scale data analysis is becoming increasingly powerful thanks to machine learning and has a wide range of benefits, such as informing public-health research, reducing traffic, and identifying systemic discrimination in loan applications.
But there’s a downside to big data, as it requires aggregating vast amounts of potentially sensitive personal information. Whether amassing medical records, scraping social media profiles, or tracking banking and credit card transactions, data scientists risk jeopardizing the privacy of the individuals whose records they collect. And once data is stored on a server, it may be stolen, shared, or compromised.
Computer scientists have worked for years to find ways to make data more private, but even when a data set is de-identified, for example by removing individuals’ names or other identifying fields, it is often possible for others to “connect the dots,” piecing together information from multiple sources to determine a supposedly anonymous individual’s identity in what is known as a re-identification, or linkage, attack.
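A toy example makes the risk concrete. The records and field names below are hypothetical, but the pattern mirrors real linkage attacks: quasi-identifiers left in a “de-identified” data set (here ZIP code, birth date, and sex) are joined against a second, public data set that carries names.

```python
# A minimal sketch of a linkage (re-identification) attack.
# All records and field names are hypothetical.

# "Anonymized" medical records: names removed, quasi-identifiers kept.
anonymized_medical = [
    {"zip": "02139", "birth": "1985-07-21", "sex": "F", "diagnosis": "asthma"},
    {"zip": "60614", "birth": "1990-01-02", "sex": "M", "diagnosis": "diabetes"},
]

# A second, public data set (say, a voter roll) that includes names.
public_voter_roll = [
    {"name": "Alice Smith", "zip": "02139", "birth": "1985-07-21", "sex": "F"},
    {"name": "Bob Jones", "zip": "60614", "birth": "1990-01-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def key(record):
    """Project a record onto its quasi-identifiers, for use as a join key."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Join the two data sets on the quasi-identifiers to re-attach names.
names_by_key = {key(r): r["name"] for r in public_voter_roll}
for record in anonymized_medical:
    name = names_by_key.get(key(record))
    if name is not None:
        print(f"{name} -> {record['diagnosis']}")
```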
Fortunately, in recent years, computer scientists have developed a promising approach to privacy-preserving data analysis known as “differential privacy” that allows researchers to unearth the patterns within a data set, and to derive observations about the population as a whole, while obscuring information about any individual’s records. The core idea is to inject carefully calibrated statistical noise into query results, so that the output reveals almost nothing about whether any single person’s record is in the data set.
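As a minimal sketch of the idea, here is the Laplace mechanism, the canonical way to answer a counting query with differential privacy. The records and the epsilon value are hypothetical, and a production system would use a vetted library rather than hand-rolled noise.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale): the difference of two independent
    exponentials with mean `scale` has exactly this distribution."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Answer "how many records satisfy predicate?" with epsilon-differential
    privacy. Adding or removing one person changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: an analyst learns the approximate prevalence of a
# diagnosis without learning whether any particular patient has it.
records = [{"diagnosis": "asthma"}, {"diagnosis": "flu"}, {"diagnosis": "asthma"}]
print(private_count(records, lambda r: r["diagnosis"] == "asthma", epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy; the analyst trades some accuracy on any single query for a guarantee that holds for every individual in the data set.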
Ben Collins and Brandy Zadrozny are NBC News reporters focusing on mis- and disinformation in health and politics. Here, they speak with Lawfare’s Quinta Jurecic and Evelyn Douek about QAnon, a far-right conspiracy theory built around anonymous internet posts heralding Donald Trump as the righteous leader of a struggle against a cabal of “deep state” elites running an extensive child sex trafficking operation. They discuss how social media platforms have allowed QAnon to spread and how its ability to plug into other conspiracy theories, permeate the mainstream, and incite offline violence has made it particularly pernicious.
Today, our lives happen in video conferences. The big events we used to share in person with friends and family, such as graduations, weddings, and birthdays, now take place in small boxes on a screen. While increasing our ability to be in multiple places at once, video conferences have brought hair-pulling frustrations: the garbled connection, people talking over each other, the family members who can’t unmute themselves or turn on their cameras.
While the stakes are low for personal calls, these same issues take on far greater weight in virtual court hearings, in which judges decide the fate of defendants. This new reality of virtual criminal hearings around the country has created problems ranging from the embarrassing to the potentially unconstitutional. While we should applaud efforts to keep the justice system running during the pandemic, we must assess the impact of these changes and ensure that new technologies and processes comply with the Constitution and provide equal access. If we don’t, these developments will exacerbate existing power dynamics that favor the prosecution and punish those with the most to lose: defendants.
Jillian C. York is a free-expression activist and Director for International Freedom of Expression at the Electronic Frontier Foundation. Here, she speaks with Kate Klonick, Assistant Professor of Law at St. John’s University, and Lawfare’s Quinta Jurecic about ongoing debates over disinformation and internet governance. They discuss platform accountability and transparency issues, content moderation as a broken concept, and the problem with focusing platform governance conversations on the United States to the exclusion of much of the rest of the world.
As public pressure has mounted for social media platforms to take action against online harassment and abuse, most of the policy debate has centered on Section 230 of the Communications Decency Act, which shields social media companies like Facebook and Twitter from being sued over most of the content their users publish. The presumptive Democratic presidential nominee, Joe Biden, has called for revoking it because of platforms’ tolerance of hate speech. President Donald Trump appears to agree, but for different reasons: what he claims is the platforms’ anti-conservative bias.
Whether Section 230 is tweaked, repealed, or left unchanged, platforms will likely respond to online harm in fundamentally the same way they do now. Their main strategy echoes the American criminal justice system: remove offensive material, and sometimes the users who post it, from communities.
Rather than removing content and users, we argue for a different approach to content moderation, one based on the principles of restorative justice: focus on repairing harm rather than punishing offenders. Offenders are usually capable of remorse and change, and victims are better served by processes that meet their specific needs than by processes aimed at punishing those who harmed them. Transformative justice, a closely related idea, emphasizes that an incident of harm is an opportunity for a community to change the conditions and norms that allowed the harm, so that the same harm does not happen again. By building content moderation policies and practices around restorative and transformative justice, social media platforms have the opportunity to make their spaces healthier and more resilient.
Hany Farid, a professor in the School of Information at the University of California, Berkeley, speaks with Lawfare’s Quinta Jurecic and Evelyn Douek about deep fakes—realistic AI-generated content in which a person’s likeness is altered to show them doing or saying something they never did or said. They discuss the danger posed by deep fakes, whether the danger stems primarily from the technology itself or the way platforms amplify the content, and what the tech industry response should look like.
Since the COVID-19 pandemic began, the search for an effective treatment has been fraught. In the United States, public attention has focused on the anti-malarial drug hydroxychloroquine, in large part because of President Trump’s endorsement of the drug. After he amplified a small study in France suggesting the drug could be an effective treatment, prescription sales of hydroxychloroquine skyrocketed. Well-designed randomized clinical trials have since found that hydroxychloroquine is not an effective treatment for COVID-19, but this has done little to reduce interest in the drug, especially in the White House, where Trump and his aides most recently latched on to a less rigorous observational study that purported to demonstrate the drug’s effectiveness.
The debate over hydroxychloroquine has become deeply politicized, which has obscured more nuanced debates within the scientific community over what constitutes actionable evidence. With social media platforms acting as key arbiters in the circulation of health information, these nuances are particularly important for the major platforms to understand. By framing decisions about removing hydroxychloroquine-related content as a binary choice between harmful medical misinformation and science, platforms may be closing off space for inquiry and debate, and incentivizing the consumption of more medical misinformation instead of less. We advocate an approach that gives voice to experts and allows dialogue about different approaches to medical uncertainty.
Across the United States, COVID-19 has had an unequal impact on different communities. As numerous studies have shown, Black people, immigrants, and low-income people are more likely to contract the disease and more likely to suffer hospitalization and death when they do. Such disparate impacts also extend to the disabled community, which has similarly experienced far higher rates of hospitalization and mortality.
Advocates for social justice and civil rights have repeatedly called for the policy and technology sectors to effectively address COVID-19’s unequal impact on marginalized communities. Yet those advocates, as well as policymakers and technologists themselves, first need to understand the specific concerns that disabled people have. As a disabled person who is also queer, trans, and East Asian, I would argue that centering disability justice as a framework is necessary for achieving racial, gender, and economic justice, especially in light of COVID-19.
Nowhere is that framework more necessary than in the technology industry. On the one hand, technology holds enormous promise for helping disabled people cope with, and perhaps even thrive amid, the pandemic, such as by enabling consistent access to meaningful social interaction. On the other, technology threatens to exacerbate long-standing structural problems, such as widespread medical discrimination resulting in denial of care. For disabled people, the stakes have never been higher, and that urgency requires the tech policy community to make careful, well-designed proposals in collaboration with the communities most affected.
Jane Lytvynenko is a senior reporter at BuzzFeed News and a noted debunker of online hoaxes and misinformation. Here, she speaks with Lawfare’s Quinta Jurecic and Evelyn Douek about analyzing and reporting on mis- and disinformation in real time, especially in the context of COVID-19, where “fake experts” espousing misleading stories about the virus, along with conspiracy theories such as the “Plandemic” video, have proliferated.