
Commentary

A guide for conceptualizing the debate over Section 230

Tweets from President Donald Trump are seen as Sen. Tammy Baldwin (D-Wis.) asks questions during a Senate Commerce, Science, and Transportation Committee hearing to discuss reforming Section 230 of the Communications Decency Act with big tech companies on Wednesday, October 28, 2020. Photo by Greg Nash/Pool/ABACAPRESS.COM

Though they were once the darlings of society, major tech companies have decidedly fallen out of favor. A range of real and perceived abuses by the large tech companies has spurred widespread skepticism and distrust. The shift in public opinion has not gone unnoticed: Policymakers from D.C. to Brussels have not only brought major antitrust lawsuits against large firms like Facebook and Google, but are also contemplating major overhauls of their tech regulatory regimes. In the United States in particular, the desire for reform has crystallized in a contested debate over how best to overhaul Section 230 of the Communications Decency Act, which provides online platforms with immunity from liability for most content posted by their users.

Yet the ongoing debate around Section 230 feels like a shouting match between strangers wearing earmuffs, each yelling in a different language and hoping that the louder they yell, the more easily they will be understood. The chaotic nature of this debate stems in part from the difficulty of providing a unified theory of the harms associated with the law, and efforts to overhaul the law will never succeed without a clear understanding of online harms more broadly. Although Section 230 is not the proximate cause of any of those harms, the law is typically criticized for either enabling or failing to prevent them. Harms unrelated to Section 230’s focus on liability protections for content and content moderation, such as privacy violations, nonetheless provide much of the energy behind efforts to reform it. Reforming the law effectively thus first requires a good understanding of the individual and generalized harms posed by digital platforms, as well as the key legal and policy challenges to implementing content-moderation regimes.

Individual harms

Discrimination, harassment, and other uncontained harms

From discriminatory and hateful content to the non-consensual publishing of sexualized imagery (revenge porn), users of digital platforms have experienced a wide variety of uncontained harms. Yet efforts to use the courts to provide redress for harassment or discrimination online are typically dismissed on Section 230 immunity grounds—even though reasonable minds may well feel that an efficient or fair allocation of remediation would place more of the burden on the platform, as calls for “cyber civil rights” have stressed. The problem is not limited to large platforms. “Bad Samaritans” (to borrow a phrase from a paper by Danielle Keats Citron and Benjamin Wittes) can also be small platforms that have invested few or no resources in responding to the concerns of their users—or, as illustrated by recent controversies surrounding Clubhouse, new platforms that face novel structural challenges in developing effective content-moderation regimes.

Perceived lack of agency

Another motivation for Section 230 reform is a perceived loss of user agency, as one of the authors of this piece recently theorized. The centralization of the services that facilitate a user’s experience of the internet typically makes online life easier by abstracting away complex technical functionality and choice. Open up Facebook, for example, and the company’s algorithm will display what it has learned to be most interesting to you. That process subjects complex human conversations and interactions to background filtering, recommendation and optimization processes that are not readily understandable, as they are powered by advanced machine-learning systems trained on data sets that would cost billions of dollars to replicate. As a consequence, platforms all too often fall into the category of “can’t live with them, can’t live without them” as facilitators of our online experience and all that it entails—family photos presented alongside a vaccine conspiracy theory, let’s say. It is no wonder that the platforms that serve up this information and filter the internet for their users become the target of frustration, together with the law that grants them immunity from liability for carrying out this facilitation.

Privacy concerns

More than a decade ago, Mark Zuckerberg declared privacy dead. In the years since, concerns over what is perceived to be the ubiquitous collection and unchecked use of data generated by and about everyday internet users have continued to motivate a substantial portion of the antagonism toward tech, as articulated poignantly by Dr. Shoshana Zuboff in her writings on what she labels “surveillance capitalism.” While privacy and content moderation remain separate subjects for many, the actions—or inactions—of social media companies can lead to harmful circumstances that very much conflate the issues, as in cases where companies fail to act against harassment and are protected from liability for the harassment appearing on their platforms in the first place. Some policymakers are also conflating privacy and content moderation, as in the Don’t Push My Buttons Act introduced in both the House and Senate in 2020, which would create an exception to immunity where user data is collected and used for automated personalization.

Generalized harms

Mis- and disinformation

Conversations around the continued spread of disinformation and misinformation online—particularly through the larger social media services—shape much of the public’s generalized perception of harm online. As an extension of their cybersecurity resilience efforts, larger platforms have invested substantial resources in defending against targeted influence operations and the spread of disinformation. But the growing number of influence operations taken down and documented in transparency reports provides far from perfect reassurance. More broadly, persistent mis- and disinformation regarding topics as varied as the efficacy of vaccines, the integrity of the U.S. election system and viral conspiracy theory movements have negative consequences for societies around the world. But misinformation, being so closely tied to free expression, is a hard target to address. Though experiments have introduced various forms of friction as remedies, such efforts tend to stand in opposition to market demands for growth and greater engagement. So far, the demand to address mis- and disinformation has given rise to more congressional hearings than draft laws and policy frameworks, but it nevertheless remains a major unresolved concern and a major impetus behind the desire to reform Section 230.

Conservative perception of anti-conservative bias

One widely cited harm is the view that platform content policies, particularly those of Facebook, Twitter and YouTube, are calibrated to disfavor conservative content. Rep. Jim Jordan, the Ohio Republican, summed it up when he declared at a recent congressional hearing featuring the CEOs of major technology companies: “I’ll just cut to the chase, Big Tech is out to get conservatives.” Content policies are generally facially neutral, without content-specific references, but the way they are crafted leaves ample room to perceive bias. A Pew Research Center survey in August 2020 found that 90 percent of Republicans surveyed believe it at least somewhat likely that social media sites intentionally censor political viewpoints, even though conservative voices continue to thrive on social media, as a report from New York University showed. That the companies themselves are predominantly based in California and that their employees donate overwhelmingly to Democratic politicians likely bolsters this view. And while social media services profess that they do not discriminate on political grounds in practice, as private sector entities they are not legally required to be politically neutral, the recent concurring opinion of Justice Clarence Thomas notwithstanding.

Progressive perception of pro-conservative bias

Conversely, even as conservatives see evidence of bias against their views, progressives frequently assert that platforms are not aggressive enough in their content moderation. According to this view, platforms permit or even promote harmful content and accounts that violate the platforms’ own policies because they fear the political blowback they would receive from conservatives. Facebook receives the brunt of this criticism, which is exacerbated by the right wing’s persistently high engagement numbers on the platform and by high-profile interactions between the company’s top executives and conservative political figures. As with concerns about anti-conservative bias, the foremost challenge here is the generality of scope coupled with the inherent subjectivity of speech. Where a policy prohibits hate speech, for example, different audiences will hold the same standard up to the same speech and reach different results.

Policy challenges

Standards of free speech

Because free expression is a fundamental right, government limits on speech face high bars to legitimacy—especially in the United States, thanks to the First Amendment. Although private sector actors are not subject to the same legal limitations as governments, their content moderation decisions can still feel like infringements of free speech—a person wants to speak and is barred from doing so. Yet even lawful speech can cause harm, which is why many communities have evolved contextual standards for what can and cannot be said. Offline environments, like schools and workplaces, often adopt cultural norms and standards as checks on behavior and enforce them through informal social sanction and other mechanisms. But such structures do not always translate well to online fora, particularly at scale. Ironically, this gap is precisely why platforms have content policies in the first place, and why immunity provisions were built into legal systems to protect online platforms’ ability to enact and enforce those policies. Platform policies and actions that resemble law and government action blur the distinction between legal and community standards and introduce substantial tension.

Transparency tradeoffs

The platform companies’ toolkit for mitigating harm is extensive. But much of their use of it occurs at either a speed or a level of sensitivity that does not readily facilitate transparency and trust. Consider the thoughtfulness of the “transparency reports” companies release on government requests to hand over information or take down content. These exhaustive and thorough products benefit from a deliberative process that typically spans a variety of teams, including legal offices. Contrast that with the constant flow of experimentation and tweaking of recommendation systems and content moderation algorithms. Such systems typically involve balancing mitigation techniques against business-model-driven engagement factors, which are considered proprietary and highly sensitive, making full transparency infeasible. There are occasional examples of individual tweaks shared via company blog posts or leaks, such as YouTube’s changes to reduce recommendations of so-called “borderline content.” But these are evaluated with only limited frames of reference and without robust independent expertise to help determine whether such moves are in fact properly calibrated for the harm they are designed to address.

The centralization of power

Effective governance requires balance, and for some reform advocates, the sheer power and success of a few companies is the most powerful underlying motivation for legal change. If the tech sector is causing individual and public harms at scale, then in theory the market should correct that behavior as individuals and communities leave those platforms for others associated with fewer harms. Yet that hasn’t happened. Though some high-profile users have left Facebook and Twitter, the companies remain wildly successful overall, despite their perceived shortcomings. For a few reform advocates, therefore, Section 230 reform is principally a means of exacting punishment where markets and existing law have thus far failed to act.

Conclusion

The debate on reforming Section 230 is driven by a complex combination of these perceived harms and policy challenges, most of which are rooted in a specific understanding of the world. The conversation around Section 230 reform, however, has mostly focused on the messaging, communications and rhetorical flourishes of each new bill, followed by a trove of responses explaining either how Section 230 is not the reason for the original concern or how the proposed reform is not possible given the limitations on government action. Constructive criticism ends up cataloging the downsides of proposed solutions, and the air is sucked out of a conversation that might be better spent trying to forge some shared sense of what problem is being solved. Much like an ambigram, the problem reads differently to different people, even though they are looking at the same thing. Agreement on the single most important challenge that platforms and their use create for society is not necessary, but mutual understanding seems like a necessary precondition for effective solutions.

David Morar is a post-doctoral data policy fellow at NYU Steinhardt, a visiting researcher in technology and governance at the Hans Bredow Institute, and a consultant for R Street.
Chris Riley is a resident senior fellow of internet governance at R Street.

Facebook and Google provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
