Commentary

What does the day after Section 230 reform look like?

A photo illustration shows the suspended Twitter account of former President Donald Trump on a smartphone in front of the White House.

The Jan. 6 attack on the U.S. Capitol by supporters of President Donald Trump has brought simmering controversies over social media platforms to a boil. In the wake of Trump's incitement of the attack, Twitter first suspended and then permanently disabled his account, albeit reluctantly. Reactions from other firms—including Amazon, Apple, Facebook, and Google—followed quickly. Some suspended Trump's accounts. Others targeted disinformation, either directly, as when YouTube began limiting false claims about the 2020 presidential election, or indirectly, as when Apple and Google removed Parler from their app stores.

Prior to the election, lawmakers on both sides of the aisle were already demanding that social media platforms be regulated more closely. During the campaign, Joe Biden said he would like to see Section 230 of the Communications Decency Act repealed, though he has given no indication that he will make doing so a priority as president. The role of online misinformation and conspiracy theories in fomenting the attack on the Capitol may generate additional urgency for reform, but there is no consensus on what such regulation ought to achieve. If increased political pressure translates into legislative activity, it will mean alterations to or the outright repeal of Section 230, which provides platforms with liability protection for most content posted by users and which has emerged as the central battleground in debates over platform regulation.

One Democratic proposal, floated by Reps. Anna Eshoo and Tom Malinowski in the aftermath of the attack, would limit Section 230 protections for content that encourages civil rights abuses or terrorism. A bipartisan proposal authored by Democratic Sen. Brian Schatz and Republican Sen. John Thune aims to encourage transparency and consistency in platforms' content moderation decisions. But Democrats and Republicans are deeply divided on their reasons for reforming the law: Republicans want to strip Section 230 protections because social media companies moderate too much content, while Democrats want the law reformed because platforms aren't doing enough to quash harmful misinformation. Platforms thus face a Goldilocks problem: They can neither remove nor host too much content, but must get their curation exactly right to satisfy both camps.

These political divides haven't stopped a wide range of observers from predicting Section 230's demise as a result of the attack on the Capitol. Getting rid of Section 230 is a seemingly straightforward way to press platforms to intervene more frequently. But a repeal is likely to cause significant disruptions in the short to medium term. In the long run, the changes will be far less dramatic than either proponents or critics envision: The information available online will probably remain relatively constant, but the entities that carry it may shift, especially if increased costs create disadvantages for start-up companies.

The first and most predictable effect of a diminution of Section 230 will be a wave of litigation. Aggrieved social media users and enterprising plaintiffs' attorneys will challenge platforms' decisions to leave up or take down content, to ban or restrict users, and to label posts as inaccurate or disputed. Conservative activists have already begun suing sites such as YouTube and Twitter to challenge restrictions imposed on their accounts for posting fake news. To date, these claims have failed. But unhappy users will re-litigate existing theories and try out new ones to compel platforms to remove or restore content. The wave of lawsuits will only gather steam if repeal frees state legislatures to draft and pass their own platform regulations, as legislators in North Dakota have already shown. This development will be helpful in at least one way: It will gradually clarify the boundaries of post-230 safeguards, such as firms' terms of service, in enabling platforms to engage in content curation. But the process will be expensive and protracted.

In addition, different courts are likely to come to different conclusions—an undesirable outcome for global sites that would prefer to enforce consistent rules. While the U.S. Supreme Court might ultimately resolve some disputes, such as those over First Amendment claims, others, such as those grounded in state contract doctrine, do not fall directly under its purview. A patchwork of laws governing the same internet conduct makes full compliance difficult and expensive for national entities, as past experience with anti-spam rules, privacy laws, and the contours of Section 230's exception for intellectual property has demonstrated.

The second immediate effect is that internet sites will likely become much more cautious about content. Smaller and less well-funded sites, such as local journalism outlets or start-up platform firms, may disable user comments and other forms of interaction to avoid the risk of liability. Larger platforms are likely to struggle and to generate incoherent responses, restricting political debate in some instances and leaving up conspiracy theories in others. A muddled response is an inevitable reaction to the uncertainty that the change in the regulatory landscape will produce, but it will also fuel the outrage that some users feel about inconsistent standards and purported bias by large technology companies. Changes to the sorts of user-generated content that platforms allow could have significant social consequences. Fewer cat videos on YouTube would be no great sacrifice; it would be a major loss if the site's new caution reduced copwatching videos that document police conduct.

A secondary effect of this new hesitancy will be increased costs for sites that do allow user-generated content, as they will have to review, if not actively police, it much more extensively. Users may not feel higher costs directly—most major platforms will remain nominally free, deriving revenue from ads and user data. However, to the extent one is concerned about the size of platforms—if we worry about the "Big" part of "Big Tech"—these increased expenditures will be counterproductive. The larger the platform, the more readily it can absorb new costs. Smaller firms and startups may not be able to do so as easily, or at all. Thus, the current set of antitrust concerns about platform size and market dominance is in tension with reforming Section 230, as larger firms will be much better poised to navigate the changing legal landscape—and to shape it.

In the medium term, other areas of law will have to resolve internet-related questions that were previously dealt with under Section 230. For example, Section 230(c)(1) immunized platforms and their users against liability for most content created by someone else. If I posted a defamatory tweet about you, you could sue me, but not Twitter—and not anyone who retweeted it. This meant that courts rarely had to consider how to classify platforms under standard tort doctrine. Is Twitter more like a newspaper, which faces greater risk of liability as a publisher, or a bookstore, which is generally liable only when it has actual knowledge that it is offering defamatory content? Does retweeting make a user an author of the underlying content in the original tweet? Or will courts, in their role of promulgating common law standards, arrive at an entirely new scheme for online information exchange?

Contract law will also have to adapt in the medium term to efforts to limit the disclaimers and protections that internet firms can build into their terms of service. It may no longer be possible, for example, to require a user to forfeit any legal claims based on the removal (or retention) of content. These types of public policy choices instantiated through contract doctrine are not new, but they are likely to proliferate. Federal courts will have to further elucidate the question of who has standing to bring suit—an issue that the Supreme Court famously ducked in Spokeo, Inc. v. Robins. Standing doctrine often serves as a "get out of jail free" card for federal courts, which can dispose of litigation that does not seem meritorious on a procedural question rather than through a substantive evaluation of the claims. In short, without Section 230 to manage these issues, other areas of the law will be under pressure to adapt so as to create new boundaries that put users and firms on notice about the contours of potential liability.

A reform or repeal of Section 230 is also likely in the medium term to fuel another round of the encryption wars. The policy goal of reducing undesirable content such as incitement and child sexual abuse material (CSAM) will probably encourage law enforcement agencies to renew their push for limits on encryption, particularly the end-to-end variety present in apps such as Signal. Most observers would support steps that reduce harms to children, but it is not clear how much of a barrier encryption currently poses for law enforcement. In 2019, for example, state and federal courts authorized 3,225 wiretaps. Of those, 121, or 3.75%, encountered encryption; in 104 of those cases, law enforcement could not ultimately decrypt the information exchanged. A New York Times investigation found that law enforcement agencies devoted to investigating CSAM were "understaffed and underfunded"; the Justice Department has not even bothered to write reports required under federal anti-CSAM legislation. If encryption use by criminals is on the rise, calls for back doors, key escrow, and other limits on apps' capabilities are likely to resonate. There are, however, important tradeoffs for the privacy and security of user communications, especially for political dissidents, journalists, and others with well-founded fears of surveillance. The past few cycles of the encryption wars have largely ended in stalemate, but another flare-up is likely brewing.

In the long term, and perhaps surprisingly, the content in the social media ecosystem will probably look much like what is available in the current environment with Section 230. This is because Section 230 is, in significant part, a statutory clearinghouse for constitutional concerns grounded in the First Amendment. As a rule, courts avoid deciding constitutional questions when there is another path for resolving an issue or case. Right now, Section 230 does this work: For the most part, it has not been necessary to decide whether and under what circumstances content moderation by internet sites constitutes protected expression by those sites (although some state courts have taken up the question).

It is likely, though, that editorial decisions about what information to permit on a platform are protected by the First Amendment. The internet is not broadcast radio or cable television. The first major Supreme Court case to address First Amendment questions about federal regulation of online indecency, Reno v. ACLU, ended in a resounding defeat for the government and established the principle that internet communication would be protected as strongly as more traditional media such as newspapers. Subsequent cases have confirmed this approach, and the Roberts Court has been highly speech-protective. New attempts to regulate platforms are not likely to fare well when confronted with the powerful protections for free information exchange under the First Amendment. Thus, constitutional law will likely block most new regulation by state and federal governments alike, particularly where the underlying speech is lawful, even if repugnant. And platforms are likely to enjoy some protection for hosting even unlawful communication in some instances, particularly as they evolve toward being more traditional gatekeepers rather than simply hosts for anything users produce.

The day after Section 230 is repealed or reduced is likely to be chaotic. Users and platforms will confront uncertainty about what is permissible, and both sides are likely to test those boundaries, on must-carry and must-remove questions alike. In time, though, disputes will be resolved, and the online information ecosystem will probably adapt to the statutory change—at least, the larger platforms will. We may, though, lose the next Twitter to the added costs that a repeal would inevitably entail. New platforms may not be able to afford systems that screen material, making it harder for them to gain funding, to be acquired by larger tech firms, and ultimately to survive. Section 230 embodies American free speech norms that favor open discussion and dialogue. Even if our shared commitment to those norms wavers at times, it ultimately endures, and the longer-term online landscape for free expression will reflect that.

Derek E. Bambauer is a professor of law at the University of Arizona, where he teaches internet law and intellectual property.

Amazon, Apple, Facebook, Google, and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 