How hate speech reveals the invisible politics of internet infrastructure

People gather in front of Twitter Japan headquarters to pressure the company to be more active against hate speech and discrimination on the platform on June 6, 2020 in Tokyo, Japan.

Infrastructure that works well rarely stands out. Cloudflare, which provides a content delivery network that safeguards millions of sites online, is a notable exception. Last year, Cloudflare came under intense pressure to stop providing its services to 8chan, the online message board popular among white supremacists, after the gunmen in three separate shootings posted manifestos on the site prior to their attacks. 8chan had relied on the company’s content delivery network to keep its message board online and accessible. After initially saying it had no legal obligation to act, the company eventually relented and denied 8chan the use of its services.

Cloudflare’s decision highlights a fundamental question about internet infrastructure companies: What is the political process behind their content moderation decisions?

The services that companies like Cloudflare provide are twofold. First, a content delivery network provides faster load times. Due to the sheer size of the globe, as well as the physical limits of wires and fiber optic cables, content housed on a server farther away from a requesting user will usually take longer to load. Content delivery networks, or CDNs, solve this problem by storing cached copies of a site’s content in datacenters around the world, as close to the user requesting it as possible. Without this service, streaming music or video would slow down considerably. Yet CDNs don’t just offer faster load times—they also provide a unique form of security. One way to take down a website is to overload it with requests, to the point where it has to deny service altogether, in what is known as a distributed denial of service (DDoS) attack. However, DDoS attacks aren’t as effective against websites that rely on companies like Cloudflare, because requests are directed to the CDN rather than the website’s own server. As the biggest of many such infrastructural service providers, Cloudflare keeps clients’ websites afloat by making sure that they can always meet users’ demands for the content they provide.
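The routing logic described above can be sketched in a few lines of code. This is a minimal toy model, not Cloudflare’s actual system: the edge names, the one-millisecond-per-hundred-kilometers latency estimate, and the fixed origin round trip are all illustrative assumptions, chosen only to show why a nearby cached copy loads faster than a distant origin server.

```python
# Toy model of CDN edge caching: route each request to the nearest edge
# datacenter; on a cache miss, fetch from the distant origin and keep a copy.
ORIGIN_LATENCY_MS = 200  # assumed round trip to the origin server


class Edge:
    """A hypothetical edge datacenter at some distance from the user."""

    def __init__(self, name, distance_km):
        self.name = name
        self.distance_km = distance_km
        self.cache = {}

    def latency_ms(self):
        # Rough assumption: ~1 ms of round-trip latency per 100 km of fiber.
        return self.distance_km / 100


def serve(edges, url, origin):
    """Serve a request from the closest edge, filling its cache on a miss."""
    edge = min(edges, key=Edge.latency_ms)
    if url in edge.cache:
        # Cache hit: only the short hop to the nearby edge is paid.
        return edge.cache[url], edge.latency_ms()
    # Cache miss: the edge travels all the way to the origin, then caches.
    body = origin[url]
    edge.cache[url] = body
    return body, edge.latency_ms() + ORIGIN_LATENCY_MS


origin = {"/video": "video bytes"}
edges = [Edge("Tokyo", 8000), Edge("Frankfurt", 400)]

body, first_ms = serve(edges, "/video", origin)   # miss: pays origin latency
body, second_ms = serve(edges, "/video", origin)  # hit: served from the edge
```

The same structure explains the security benefit: a flood of requests for cached content is absorbed across many edge datacenters and never reaches the origin server, which is what blunts a DDoS attack.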

Cloudflare and other CDN providers usually offer their services even when the content to be hosted and streamed on their clients’ websites is objectionable. Up until recently, Cloudflare in particular maintained that content should never be regulated at the level of infrastructural delivery, clinging to a vision of infrastructure untainted by politics: in its view, it should not be making content moderation decisions at all. But the question of whether infrastructure companies should make decisions about content, often at the heart of the debates over hate speech and its continued online presence, is a distraction from the reality that they already do—just not in ways that most users of those infrastructures can see. Content moderation does not just happen at the moment of termination: It happens every day a website is kept up and available by the infrastructure below it.

From rational justice to mere conduits

While Cloudflare has long been able to maintain that it makes no decisions about content and is not political, the last three years have made that stance untenable. In 2017, Cloudflare decided to remove its protections for the neo-Nazi outlet The Daily Stormer. In a blog post announcing the decision, company CEO Matthew Prince said the decision troubled him. The first people who had gotten in touch with him about removing the site’s Cloudflare protections were “vigilante hackers,” who wanted the company to “get out of the way so we can DDoS this site off the internet.” Though many people besides “vigilante hackers” criticized Cloudflare’s decision to continue protecting The Daily Stormer’s online presence, Prince argued that “having the mechanism of content control be vigilante hackers launching DDoS attacks subverts any rational concept of justice.” As he announced that Cloudflare would cut services to the platform, Prince emphasized that there was a need for future content moderation decisions to be “clear, transparent, consistent and respectful of due process.”

Two years later, Cloudflare’s argument for removing protections for a hateful site was very different. After the El Paso shooting, a company lawyer said the company was under no legal obligation to boot the site from its client roster. A day later, Prince announced Cloudflare had cut ties with 8chan because the site had “repeatedly proven itself to be a cesspool of hate.” The “tipping point” for the Daily Stormer decision two years earlier had been the neo-Nazi site’s suggestion that Cloudflare secretly agreed with its views, but in 2019, 8chan being a “cesspool of hate” was ostensibly sufficient grounds for termination. “We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often,” Prince wrote. But the rationale for removing 8chan, Prince argued, was simple: “They have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths.”

The Daily Stormer and 8chan decisions represent a shift in the political responsibilities Cloudflare is willing to assume. When removing The Daily Stormer in 2017, Prince argued that the decision could portend a future in which “a small number of companies,” such as Cloudflare, Amazon, and Facebook would “largely determine what can and cannot be online.” But when removing 8chan in 2019, Prince drew a sharp distinction between Cloudflare and companies like YouTube and Facebook. While the latter organize or promote content, Prince wrote, companies like Cloudflare are “mere conduits” for content. Two years after the Daily Stormer announcement in which he emphasized transparency, Prince argued that “conduits, like Cloudflare, are not visible to users and therefore cannot be transparent and consistent about their policies.”

This reversal indicates the extent to which infrastructure companies can shift their standards on the fly—despite the fact that such decisions by a company of Cloudflare’s size affect almost all internet users.

Invisible politics

Prince’s argument that Cloudflare cannot be held accountable for what passes through its servers because its role is invisible to users has not received enough attention. Internet companies routinely claim to be neutral conduits in order to abdicate responsibility for online hate. But the fact that internet infrastructure is largely invisible to its users does not mean that it is apolitical until the companies in question decide they would like their politics recognized.

This claim to invisibility and political neutrality has become increasingly tenuous in recent months. Following the wave of protests sparked by the murder of George Floyd, Cloudflare has jumped at the chance to publicly discuss its governance choices. Writing that the protests can be catalysts for change, but only if they can be heard, Cloudflare proudly claims to make this possible by offering activists protection from cyberattacks. It is far from the only tech company seeking to ally itself with antiracist protests in the United States: IBM, Amazon, and Microsoft all announced that they would no longer provide facial recognition technology to US-based law enforcement, for example. But infrastructural companies like Cloudflare are uniquely positioned to have it both ways: they can charitably and very visibly decide to protect some content while categorically refusing accountability for their decisions to protect, or eventually remove, other content.

If companies like Cloudflare are “not a government”—as Prince emphasized in 2019—and the internet is a Wild West, then the CEOs, lawyers, and trust and safety teams who make decisions about what content gets to remain online resemble the sheriffs in Western movies, free to choose when to relax or enforce the rules.

Pandemic internet, public good?

Thinking of a company like Cloudflare as an internet sheriff is especially ironic given the increasingly common comparison between the internet and public utilities. When the pandemic shifted education, work, and social life online, the need for working internet infrastructures became acute enough to make it feel like an essential good. Cloudflare has not hesitated to point this out, publishing cheery blog posts about the pandemic’s effects on internet usage and its own ability to handle increased traffic. One of these self-congratulatory posts remarks that it is “hard to imagine another utility (say electricity, water or gas) coping with a sudden and continuous increase in demand of 50%.” In this light, the company’s “mere conduits” transport an essential good, and Cloudflare is an indispensable public utility service. 

The analogy emphasizes the “essential” importance of an internet connection but underplays the reciprocal character of that connection. Public utilities are defined by the understanding that everyone needs access to them. Water, for example, is a public good because we all need to open our taps and drink. If internet content is a public good to be delivered by infrastructural service providers, the implication is that everyone needs to be connected—just like everyone needs access to water. But companies like Cloudflare work for websites that host content, not the users who request it. Being connected to a content delivery network means that you can provide, not consume, the “public good” that is internet content in this analogy. Cloudflare does not meet a demand to drink water—it protects clients who want to contribute to the supply.

The presentation of internet infrastructural services as “conduits” that bring us the internet, just like pipelines bring us water or wires transmit electrical currents, allows the qualitative dimension of content to disappear from view. Provided they meet quality standards, water and electricity are the same for everyone on the network. Individual preferences or values do not factor into what is delivered. For a network like Cloudflare, which allows websites to deliver the particular content users request, that simply is not the case. The public utility metaphor obscures whom the internet’s infrastructures serve and what content they decide to make available to users.

Comparing themselves to public utilities allows companies like Cloudflare to camouflage how their Wild West approach to content moderation concentrates decision-making power at the top. Actual public utility companies, like water providers, are almost always publicly owned and run. Even when they are private businesses, electric companies are subject to civil oversight by public utilities commissions. The provision of a public good is a deeply political matter, too important to be provided without transparent governance and the resulting accountability. The inhabitants of Flint, Michigan are still fighting to hold the government responsible for the lack of clean drinking water running through their pipes. Their efforts are but one example of how important it is for citizens to be able to hold public utility providers accountable.

There is, of course, little recourse for internet users wanting to hold Cloudflare to its 2017 promises of transparency and consistency about its content-related decision-making. In its 2019 statement, Cloudflare rightly reminds us that it neither has to make any promises nor follow through on them. If it’s hard to imagine public utilities keeping up with massive increases in demand, it is even harder to imagine those utilities being governed by the vacillating whims of corporate executives and policy teams. The public utility metaphor just keeps the internet sheriffs out of view.

Traffic controllers

A more appropriate metaphor for the service companies like Cloudflare provide is that of a traffic controller. If the internet is a network of private roads, Cloudflare is hired by websites to direct traffic in order to prevent congestion or collisions on their estates. Infrastructure companies like Cloudflare permit certain kinds of transportation flows, determine what kinds of vehicles can travel along which roads, and enforce traffic rules. Thinking of infrastructural services in this way highlights the foolishness of pretending that there are no political choices involved: Directing traffic is always a matter of prioritization, and traffic flows shape what destinations remain on the map. More importantly, the traffic metaphor underlines that the choice not to close a road with frequent collisions is just as consequential as the choice to close it, if not more.

On the public roads we traverse every day, we expect the rules to be consistent and known to all parties. But even on a network of private roads, we can expect consistency—our safety depends on it. A place where we all travel cannot be a Wild West, and commitments to transparency should be more robust than a kind of cowboy’s honor. Cloudflare’s blog posts do indicate a thoughtful sheriff, and it may be the case that the company makes its decisions about content in measured and considered ways. But we cannot actually know this, because the company continues to describe itself as a public utility provider or cloak itself as a “mere conduit,” depending on who is asking. 

Cloudflare’s haphazard public responses to its white supremacist clientele reveal the limits of the underlying model of governance, in which infrastructure companies owe nothing to users, not even insight into who and what their “conduits” serve. It is more than reasonable—and long overdue—to expect that the CEOs of the internet are not just reluctant sheriffs, but instead begin to do their job as traffic controllers with greater transparency and consistency. The choices that companies make about the flow of internet traffic need to be visible and available for reference to everyone on the road. Every time we let companies like Cloudflare either proudly present or quickly obscure their infrastructural politics, we move further away from an internet that is safe and answerable to the users exploring its trails.

Suzanne van Geuns researches online antifeminism. She is a Ph.D. candidate in the Department for the Study of Religion at the University of Toronto, where she is also an Ethics of AI fellow.

Corinne Cath-Speth researches internet governance culture. She is a Ph.D. Candidate at the Oxford Internet Institute and the Alan Turing Institute for Data Science and AI.

Amazon, Facebook, Google, IBM International Foundation, and Microsoft provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.