
Interpreting the ambiguities of Section 230

Alan Z. Rozenshtein, Associate Professor of Law, University of Minnesota Law School; Senior Editor, Lawfare

October 26, 2023


  • In most Section 230 cases, which involve a person suing an online platform or service for harm caused by third-party content, the key question is what it means for the platform or service to be “treated as the publisher or speaker” of third-party content.
  • Algorithmic recommendations are at the heart of modern social media—and courts, including the Supreme Court, will no doubt continue to grapple with whether Section 230 protects platforms for such recommendations.
  • Courts need to recognize that many of the dominant interpretations of Section 230 differ substantially from what Congress originally intended. Otherwise, courts risk substituting their own policy views for those of the democratic process.
Committee Chairman Roger Wicker (R-MS) waits for Facebook CEO Mark Zuckerberg to fix a technical glitch with his connection during the Senate Commerce, Science, and Transportation Committee hearing 'Does Section 230's Sweeping Immunity Enable Big Tech Bad Behavior?', on Capitol Hill in Washington, DC, U.S., October 28, 2020. Greg Nash/Pool via REUTERS
Introduction

No statute has had a bigger impact on the internet than Section 230. The law, which prevents any online platform from being “treated as the publisher or speaker” of third-party content, has enabled the business models of the technology giants that dominate the digital public sphere. Whether one champions it as the “Magna Carta of the internet” or vilifies it as the “law that ruined the internet,” there is no doubting that the statute “made Silicon Valley.”

And so it is remarkable that, nearly 30 years after its enactment, basic questions about Section 230’s meaning and scope remain uncertain.

Consider the oral argument in Gonzalez v. Google during the Supreme Court’s 2022-2023 term. The Court was set to decide whether Section 230 immunizes platforms for the act of recommending third-party content to users, a question of immense practical importance to platforms and hardly an esoteric corner case of platform intermediary liability. Yet multiple justices expressed uncertainty, even bewilderment, over how to apply Section 230 to this core issue: Justice Thomas was “confused,” Justice Jackson was “thoroughly confused,” and Justice Alito was “completely confused.” Justice Kagan, in the most memorable portion of argument, quipped, “We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.” Given the tone of oral argument, it was unsurprising when, several months later, the Court punted, resolving Gonzalez on unrelated grounds—and, in a tacit admission of the difficulty of the problem, never reaching the Section 230 issue at all.

How could the Supreme Court have failed so dramatically to provide basic clarity to such an important law? And what will happen if and when the issue returns?

The answer is that Section 230, despite its seemingly simple language, is a deeply ambiguous statute. This ambiguity stems from a repeated series of errors committed by Congress, the lower courts, and the Supreme Court in the drafting, enactment, and early judicial interpretation of the statute. In particular, beginning with the U.S. Court of Appeals for the 4th Circuit’s 1997 decision in Zeran v. America Online, lower courts have adopted an interpretation that elides the ambiguities present in the legislative record.

The key provision of Section 230—what the statute’s leading historian Jeff Kosseff calls the “twenty-six words that created the internet”—is (c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In most Section 230 cases, which involve a person suing an online platform or service for harm caused by third-party content, the key question is what it means for the platform or service to be “treated as the publisher or speaker” of third-party content. On one extreme, this language can be interpreted broadly, so as to prohibit virtually all lawsuits against platforms for harm involving third-party conduct. On the other extreme, the language can be interpreted very narrowly, permitting platform liability in a variety of contexts—such as when the platform knowingly hosts harmful third-party content or affirmatively recommends or promotes such content on its service. With some notable exceptions, courts have read Section 230 expansively.

As debates continue to rage over the future of Section 230—both in the courts post-Gonzalez and in Congress, where dozens of amendments to and replacements of the law have been proposed—it is important to recognize that the dominant judicial interpretation of Section 230 is not the only plausible one. Particularly when it comes to understanding Congress’s intent in enacting the statute, that dominant interpretation should not necessarily be considered the baseline against which new caselaw and legislation should be measured. Indeed, if courts stick with the interpretative status quo, they risk exacerbating the democratic deficit that has existed ever since the courts first interpreted Section 230 to apply far more broadly than was intended by lawmakers over a quarter-century ago.

To see why Section 230 is ambiguous as to its scope, it is crucial to situate the statute within its broader legal and policy context. Two historical facts are particularly important: first, the pre–Section 230 caselaw that spurred the statute’s drafting; and second, the broader legislative package of which Section 230 was only one part.

The judicial context of Section 230

Section 230 arose because of idiosyncrasies in how courts applied the common law of distributor liability to defamation claims against online platforms. At the time of Section 230’s enactment, liability for transmission of defamatory third-party content depended on the nature of the transmission. As described in the Restatement (Second) of Torts, a “publisher” of such content could be held strictly liable—i.e., even if the publisher did not know that the content was defamatory.

By contrast, someone who merely “delivers or transmits defamatory matter published by a third person”—i.e., a “distributor”—would be liable only if they knew or had reason to know that the content was defamatory. Thus, while bookstores and libraries did not need to review in advance every book they offered in order to avoid liability for selling or circulating a defamatory book, they could be held liable as distributors if they circulated books already known to them to be defamatory. Similarly, a telegraph operator would not be liable for transmitting a message that the operator did not know (or have reason to know) was libelous.

For courts applying distributor liability to internet platforms, the question was whether the platforms were publishers of their third-party content—and so could be held strictly liable for it—or whether they were merely distributors—and so could only be held liable if they knew or had reason to know about the defamatory nature of the material. Two cases, both decided in the early 1990s, addressed this question and established the emerging law to which Section 230 was a reaction.

In the first case, Cubby, Inc. v. CompuServe, the owner of a media news outlet sued CompuServe, one of the main early online services—along with America Online (AOL) and Prodigy—because CompuServe hosted a forum in which allegedly defamatory material was posted about the news outlet. The court held that CompuServe was not liable for the defamatory material because CompuServe was a distributor, not a publisher, of the forum and thus could only be held liable if it knew that the forum it hosted was transmitting defamatory content. In reaching this conclusion, the court emphasized that CompuServe did not review content before it was released on the forum and made available across CompuServe. Thus, in the court’s view, “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.”

The second case, which was decided in 1995 and led directly to Section 230’s introduction into Congress a year later, was Stratton Oakmont, Inc. v. Prodigy. Stratton Oakmont, the securities firm led by the soon-to-be disgraced Jordan Belfort, sued Prodigy for allegedly defamatory comments made about the firm on Prodigy’s finance-related “Money Talk” message board. Unlike CompuServe, which did not review the content on its forums, Prodigy moderated the Money Talk board in a variety of ways, and more generally “held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper,” as the court wrote. On this basis, and explicitly distinguishing Cubby, the court held that Prodigy should be subject to publisher, not merely distributor, liability for the allegedly defamatory content.


From the beginning, it was clear that Stratton Oakmont perversely incentivized platforms not to moderate content, since it was Prodigy’s decision to moderate some content that led the court to hold it liable as a publisher for any content it allowed to remain on its platform. For this reason, the decision, despite arising from a state trial court, received national attention. In stories published the day after the case was decided, the New York Times described AOL’s general counsel as “hop[ing] that on-line services would not be forced to choose between monitoring bulletin boards and assuming liability for users’ messages,” and Time noted that Prodigy was “ironically” being held more liable for its users’ speech than were other, non-moderated, services.

It is indisputable that Section 230 was written in large part to overturn Stratton Oakmont. After all, to overturn the New York court’s judgment, Congress need only have prohibited treating platforms as publishers “on the basis of their content-moderation practices.” Instead, Section 230 prohibits outright treating platforms as publishers of third-party content. Thus, as Christopher Cox and Ron Wyden, Section 230’s sponsors, have confirmed in recent years, Section 230 was intended to go beyond reversing Stratton Oakmont and establish greater protections for platforms. The question that has bedeviled interpreters of Section 230 ever since is just how much further beyond Stratton Oakmont it was meant to go. Specifically, what does it mean for a platform to be “treated as the publisher or speaker of any information provided by another”?

But Section 230’s text cannot be considered in isolation, nor can the views of its sponsors be treated as conclusive evidence of Congress’s intent. To understand the intent behind Section 230, it is necessary to turn to the law of which it was only a small part: the Communications Decency Act of 1996.

The legislative context of Section 230

When Section 230 was enacted in 1996, it was part of a broader congressional response to the perceived dangers of the internet—specifically, the problem of children being exposed to inappropriate content, especially pornography. In fact, Section 230, although it has come to assume a central role in internet law, originally emerged as a relatively obscure part of a much broader legislative package: the Communications Decency Act (CDA) of 1996, itself part of the Telecommunications Act of 1996.

The CDA, introduced by Senator James Exon (D-NE), criminalized the knowing transmission of “obscene” or “indecent” messages to minors. Internet advocates were concerned that the combination of criminal penalties and vague, broad language would cripple the then-nascent internet by causing platforms to censor large quantities of content so as not to risk violating the CDA.

As an alternative, Christopher Cox (R-CA) and Ron Wyden (D-OR), then both members of the House of Representatives, introduced their “Internet Freedom and Family Empowerment Act,” which sought to encourage platforms to moderate content on a voluntary, rather than mandatory, basis. This text would become Section 230.

The first two subsections of the statute set out various findings and policy statements and illustrate Section 230’s multiple—and potentially conflicting—goals. Section 230 was intended to accomplish the CDA’s goal of protecting children, but through a different mechanism: the removal of “disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.”

But Cox and Wyden also had loftier free expression goals in mind. Recognizing that the internet represented an “extraordinary advance in the availability of educational and informational resources” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity,” Cox and Wyden sought “to promote the continued development of the Internet,” in particular the “vibrant and competitive free market that presently exists for the Internet . . . , unfettered by Federal or State regulation.”

After the House and Senate passed their respective versions of the Telecommunications Act of 1996—the House including Cox and Wyden’s Internet Freedom and Family Empowerment Act and the Senate including Exon’s CDA—the bills went to the conference committee. For reasons that remain unclear, the conference committee, rather than taking the logical step of choosing between the Exon and Cox-Wyden proposals, included both provisions as part of a single “Communications Decency Act,” with the Cox-Wyden proposal as an added final section to Exon’s original legislation.

The conference committee report devoted only half a page to describing Section 230 and focused entirely on its effect of “protecting from civil liability those providers and users of interactive computer services for actions to restrict or to enable restriction of access to objectionable online material.” It specifically listed overruling Stratton Oakmont as “one of the specific purposes” of Section 230. By treating “providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material,” the report stated, Stratton Oakmont created “serious obstacles to the important federal policy of empowering parents to determine the content of communications their children receive through interactive computer services.”

Shortly after the reconciled Telecommunications Act was enacted into law, civil liberties groups led by the ACLU challenged the CDA in court. Specifically, they alleged that the criminal penalties included in Exon’s original CDA violated the First Amendment. The Supreme Court agreed in Reno v. ACLU, leaving Section 230 as the only remaining operative provision of the CDA.

As this convoluted drafting history shows, understanding the intent of Congress in enacting Section 230 requires more than simply examining the statute in isolation. Cox and Wyden might have preferred for Section 230 to fully replace Exon’s original CDA, but Congress chose to enact both bills. Whether the conference committee was intentional or merely sloppy in combining two dramatically different and arguably inconsistent approaches to platform liability, its choice commits future interpreters to interpret both provisions such that “effect is given to all . . . provisions” of the ultimately enacted CDA so that “no part will be inoperative or superfluous, void or insignificant.”

The competing purposes behind the CDA demonstrate how ambiguous its provisions really are. For example, Section 230 was not enacted as part of the “Internet Freedom and Family Empowerment Act,” a title that, in emphasizing platform and user control, accurately reflects Cox and Wyden’s personal intentions. Rather, it became law as part of the “Communications Decency Act”—“by no stretch of imagination a libertarian enactment,” as Danielle Citron and Benjamin Wittes have observed. This suggests that a central if not primary goal of the bill was to encourage the removal of “indecent” content online. And yet, that title sits alongside the “freedom of speech” language—creating a puzzling if not outright contradictory text for courts to grapple with. Thus, it is a mistake, as some courts have done, to single out the promotion of “freedom of speech in the new and burgeoning Internet medium” as Congress’s overriding purpose in enacting Section 230.

Distributor vs. publisher liability

After Section 230 was enacted, the first major interpretative question that courts and litigants grappled with was whether Section 230 permitted treating a platform as merely a distributor rather than a full-fledged publisher of third-party content. This was decided—as it turns out, decisively—in the first major case interpreting Section 230, Zeran v. AOL, handed down by the 4th Circuit in 1997. Zeran’s holding—that Section 230 applies broadly to any attempt to hold a platform liable for third-party content—has remained canonical and continues to shape jurisprudence on Section 230 to this day. But the background legal and legislative context described above demonstrates that Zeran was based on flawed reasoning that failed to recognize Section 230’s ambiguity.

Zeran involved Kenneth Zeran, an otherwise ordinary person whose life was turned upside down after a series of messages on an AOL bulletin board falsely connected him to the Oklahoma City terrorist bombing. Even after Zeran repeatedly informed AOL about this content and asked AOL to take down the offending messages, the company refused. Zeran sued, arguing that AOL was negligent in not responding adequately to the false messages about him. But because Section 230 had just been enacted, Zeran couldn’t argue that AOL was strictly liable in the way a publisher would be, whether it knew about the offending messages or not. So Zeran instead argued that the company was liable as a distributor—i.e., because it knew about the messages, given that Zeran kept telling AOL about them.

The court rejected Zeran’s argument that Section 230 should be read narrowly, as applying only to publisher liability. First, it argued that distributor liability was a subset of publisher liability, and thus Section 230’s prohibition on treating a platform as a publisher of third-party content extended to attempts to treat a platform as a distributor of that content. To support this argument, the court relied primarily on an influential contemporary tort law treatise, which used a particularly broad definition of “publication” to describe any act of transmission of defamatory material, including by distributors (which the treatise described as “secondary publishers”). Thus, the fact that newspapers could be held strictly liable for transmitting defamatory content, while bookstores could only be liable if they had knowledge, was merely a difference in liability between different types of publishers.

Second, citing Section 230’s findings and policy statements, the court argued that Congress’s purpose in enacting Section 230 was to counter “the threat that tort-based lawsuits pose to freedom of speech in the new and burgeoning Internet medium.” Because the “specter of tort liability in an area of such prolific speech would have an obvious chilling effect,” the court reasoned, “Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.” And because distributor liability would expose platforms to “potential liability each time they receive notice of a potentially defamatory statement,” platforms would have “a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not.” Thus, “like strict liability, liability upon notice has a chilling effect on the freedom of Internet speech.”

Yet both these arguments overlook important features of Section 230’s legal and legislative history that push in the opposite interpretative direction. First, even assuming that the common law at the time of Section 230’s enactment treated distribution as a subset of publication (and thus that Zeran correctly relied on the treatise’s expansive definition of “publication”), Section 230 was enacted as a direct response to the Stratton Oakmont decision, which did not treat distribution as a subset of publication, but rather as two distinct categories. Even though legal terms are presumed to carry their established common law meanings when they are used in legislation, this presumption can be overridden when there is evidence of contrary legislative intent. Thus, as a matter of congressional intent, it is at least ambiguous as to whether the 1996 Congress, rather than just Cox and Wyden, meant to include distribution within the scope of Section 230’s immunity provision.

Second, putting Section 230 in its proper legislative context—as part of the broader Communications Decency Act—demonstrates that Congress’s overall purpose could not possibly have been, as Zeran argued, to “keep government interference in the medium to a minimum.” To the contrary, imputing such a purpose to Congress would lead to a distorted understanding of the legislature’s intent. It’s clear, as the court argued, that notice-based liability would likely result in platforms taking down large swaths of legitimate content. But it’s hardly obvious that the same Congress that voted for criminal liability for transmitting obscene material to children, with all the predictable chilling effects, would have suddenly objected to those chilling effects when they resulted from tort liability.

Even the court’s more limited claim that Section 230 represented a “policy choice . . . not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for other parties’ potentially injurious messages” is questionable. As Justice Thomas has noted, because the criminal law provisions of the Communications Decency Act included civil enforcement for the “knowing . . . display” of indecent material to children—i.e., distributor liability—it is unlikely that Section 230 was intended to eliminate such liability altogether.

As a matter of statutory interpretation that remains faithful to Congress’s intent, the best argument for reading Section 230 to eliminate distributor liability is that such immunity is necessary to encourage platforms to aggressively moderate content. The Zeran court was right to recognize the possibility that “notice-based liability would deter service providers from regulating the dissemination of offensive material over their own services,” since “any efforts by a service provider to investigate and screen material posted on its service would only lead to notice of potentially defamatory material more frequently and thereby create a stronger basis for liability.”

The question is whether that incentive would be stronger than the incentive to remove content. There are two reasons to doubt that it would be, at least in many circumstances. First, under a distributor-liability regime, platforms would still be liable for content that they were informed was illegal, so declining to moderate proactively would not spare them from having to respond to such notices.

Second, and more importantly, a completely unmoderated platform would drive away consumers. This is why the big platforms spend millions to actively moderate their services. Even with the litigation risk created by proactive moderation, platforms might carry it out anyway so as to create an environment that users want to be in.

Admittedly, this incentive to moderate is likely to be more powerful for larger rather than smaller platforms, since large platforms have both more to gain financially and more in the way of resources to spend on moderation. A narrow reading of Section 230 would thus favor large over small platforms, many of which would likely shut down rather than face the increased liability risk that a narrow interpretation of Section 230 would create. This is unlikely to be good for free expression on the internet, and it is not a policy outcome that I personally would favor. But given the importance that Congress placed on encouraging moderation and preventing harmful content in enacting the Communications Decency Act, an interpretation of Section 230 that favors large over small platforms is not inconsistent with congressional intent.

The point of the above argument is not to establish definitively that Section 230 was not intended to eliminate distributor liability (though I believe the weight of evidence does support that conclusion). Rather, the point is to establish a case that Section 230 is at the very least ambiguous as to whether it eliminated distributor liability and that Zeran was unwarranted in reading Section 230 as a surrogate First Amendment for the internet.

This would not have been such a problem had Zeran been an isolated decision. But Zeran is, by far, the most important Section 230 case to have been decided. It’s been cited by hundreds of cases, and virtually all courts that have addressed Section 230 in the decades since have adopted its general approach (even if they have sometimes found Section 230 protections inapplicable on the specific facts before them). In this way, a judicial consensus has arisen, at least in the lower courts, about the scope of Section 230 that is far broader than both the text and the legislative context of the statute allow.

Liability for algorithmic amplification

Because of Zeran’s influence, the question of whether Section 230 eliminates distributor liability has not been a live issue in the courts for decades. More recent legal debates around Section 230 have instead focused on whether platforms are liable when they affirmatively promote illegal content—for example, through algorithmic recommendations. Here, too, courts have generally concluded that Section 230 immunizes platform recommendation decisions, even though the statute is far more ambiguous on this point than the courts have generally recognized.

Liability for platform recommendations was the core question in the Gonzalez case. In November 2015, a series of attacks by the Islamic State (IS) killed more than 130 people in Paris, including Nohemi Gonzalez, a 23-year-old California college student who was participating in a foreign exchange program. Gonzalez’s family sued Google under a provision of federal law that imposes liability on anyone who aids, intentionally or not, a terrorist attack. In the family’s view, Google was liable because it promoted IS content on its YouTube platform to users by means of recommendation algorithms, thereby aiding in IS’s recruitment efforts and thus aiding, even if indirectly, IS in carrying out the 2015 attacks. The district and circuit courts held that Section 230 barred Gonzalez’s lawsuit, and, in February 2023, the Supreme Court heard oral arguments on the issue. (A related case, Twitter v. Taamneh, addressed whether, absent Section 230, the platforms would be substantively liable under the federal anti-terrorism statutes.)

The case hinged on what it means for a legal claim to “treat[]” a platform “as the publisher” of third-party content. The 9th Circuit’s decision in Gonzalez, like decisions from other circuits reaching similar conclusions, interpreted this provision broadly, using the everyday meaning of publisher: one who engages in the many activities associated with publishing, which include promotion and recommendation. By contrast, Gonzalez and the government characterized Section 230 as a narrow intervention in defamation law and argued for a narrow construction of the provision that focused on the nature of publisher liability under common law: the transmission of a communication whose content is defamatory (or otherwise tortious). On this reading, Section 230 should not bar suits that, rather than treating the defendant as liable simply for retransmitting the tortious communication, seek to hold the defendant liable for harms that go beyond mere retransmission—e.g., for personalized recommendations.

This distinction is a subtle one, and it’s hard to say that either side had a knock-down argument regarding Section 230’s textual meaning. Nor does legislative history provide much insight. Although recommendation algorithms were not unknown when Section 230 was enacted in 1996, they played nothing like the central role that they do today, and so it is unsurprising that neither the text of Section 230 nor the legislative history addresses the question of liability for recommendation algorithms. As Justice Kagan noted during oral arguments, “everyone is trying their best to figure out how [Section 230] . . . , which was a pre-algorithm statute[,] applies in a post-algorithm world.”

Ultimately, the Court did not decide the Section 230 issue in Gonzalez. Instead, in the companion Taamneh case, it held that the plaintiffs’ substantive tort claim—that the platforms had violated the Anti-Terrorism Act—was flawed because it did not allege that the platforms intentionally aided terrorist groups. Since Taamneh held that the plaintiffs would lose on the substance of their claim, there was no reason to address the platforms’ Section 230 defense, and so the Court was able to duck the issues in Gonzalez.

But the question is not going away. Algorithmic recommendations are at the heart of modern social media—and courts, including the Supreme Court, will no doubt continue to grapple with whether Section 230 protects platforms for such recommendations.

As with Zeran, the best argument for a broad reading of Section 230 in the context of algorithmic recommendation is that it would effectuate the primary goal of the statute: encouraging platforms to moderate content and provide tools for users to moderate that content. The use of algorithms for moderation—including content removal, downranking, and “shadowbanning”—is increasingly the key mechanism by which platforms moderate objectionable content. If platforms are held liable for personalized recommendations, they may decide not to perform any automated screening or ranking at all, out of an abundance of caution.

But the reason this argument should not have been decisive in Zeran is also why it should not be decisive here. If algorithms really are necessary for platforms to avoid becoming cesspools of offensive content, platforms will continue to use them no matter the litigation risk, at least to the extent necessary to maintain users and preserve advertising revenue.

As noted above, this will likely lead to some smaller platforms leaving the market entirely. But it is unlikely that Facebook or X (formerly Twitter) would choose to shut down rather than invest the necessary resources in content moderation. The question is whether, under a liability regime that permits platforms to be sued for the harms caused by their algorithmic recommendations, platforms will, on net, host more or less harmful content. The impossibility of answering this question is yet another reason why Section 230 is, properly interpreted, ambiguous as to the question of liability for algorithmic amplification.

Where do courts go from here?

Section 230 cases aren’t going away. District and circuit courts will continue to hear Section 230 cases and, if a circuit split develops in a case that presents a core issue of Section 230 immunity more cleanly than did Gonzalez and Taamneh, the Supreme Court will almost certainly have to step in to authoritatively interpret the statute.

When faced with statutory ambiguity that the traditional tools of statutory interpretation cannot resolve, courts essentially have three options. They can interpret the statute in a way that accords with their view of what the best policy outcome would be. They can maintain the legal status quo—e.g., by sticking with previous judicial interpretations of the statute, even if they believe the interpretation to be flawed—and hope that Congress clarifies its intent by amending or rewriting the statute. Or they can try to interpret the statute specifically so as to prod Congress to act.

Unfortunately, all three options have serious disadvantages when it comes to Section 230, and so there’s little room for optimism that courts can unwind the interpretative mess that is contemporary Section 230 doctrine.

Consider first the approach of interpreting a statute to achieve the best policy result. This is essentially the approach taken in Zeran, which was motivated by a laudable concern for free expression on the internet. Likewise, policy concerns were also clearly on the minds of many of the justices in the Gonzalez oral arguments. Justice Kavanaugh worried that a narrow reading of Section 230 could lead to “non-stop” lawsuits that would “create a lot of economic dislocation” and “really crash the digital economy”—or, as Chief Justice Roberts suggested, cause the internet to “be sunk.”

On the other hand, an overly broad reading of Section 230 could permit serious harms. For example, Justice Sotomayor worried about giving platforms carte blanche to use “algorithm[s] that inherently discriminates against people.” This disagreement between the justices illustrates the main problem with policy-based reasoning: Courts have neither the expertise to predict the policy outcomes of different interpretations of a statute, nor the democratic legitimacy to make contestable policy tradeoffs.

For this reason, it’s preferable that Congress, not the courts, decide the ultimate contours of intermediary platform liability. This raises the question: Is there anything courts can do to encourage Congress to act?


One possibility is for courts to maintain the interpretative status quo—i.e., continue Zeran’s broad reading of Section 230 immunity. A general principle of statutory interpretation is that precedent should be strictly upheld, even in the face of compelling arguments that the previous interpretation was incorrect. This is because such a principle articulates to Congress “a clear and unyielding division of responsibility”: If Congress wants the law to change, it will have to change it.

Several justices suggested this approach during the Gonzalez oral argument. Justice Kagan suggested that restricting the broad reach of Section 230 is “something for Congress to do, not the Court.” And Justice Kavanaugh doubted that the Court, rather than Congress, was the “right body to draw back from” a broad reading of Section 230. Instead, he argued that it would be “better for [the Supreme Court] to keep [Section 230] the way it is” and “to put the burden on Congress to change that” so that Congress could “consider the implications and make these predictive judgments.” In the end, of course, the Court chose not to address the merits of Gonzalez, leaving the invitation open for Congress to step in.

The problem with maintaining the interpretative status quo in the context of Section 230 is that it may instead discourage Congress from acting. The reason is rooted not in law but rather in political economy: Congress is more likely to act when pressured by powerful interest groups, and, when it comes to Section 230, the powerful interest groups are the large platforms, who are quite happy to see the Zeran approach to Section 230 continue, since it provides them with maximum protection against liability. In other words, interpreting Section 230 narrowly, even though this would reverse previous judicial interpretations, may be the most likely way to prod Congress into action.

But this approach is not without its dangers, either. Twenty-five years of the Zeran reading of Section 230 have created immense “interpretive debt” in the courts, which have been able to avoid grappling with foundational questions of common law liability online because Section 230 allowed them to dismiss nearly all (non-copyright) lawsuits alleging intermediary liability. Limiting Section 230 would create enormous uncertainty and flood lower courts with years of litigation. That uncertainty would in turn lead platforms to act far more conservatively when it comes to allowing speech on their services; they would likely censor and remove a great deal of non-tortious content just to avoid litigation risk. If Congress reacted quickly by enacting a comprehensive—and, unlike Section 230, clear—liability regime, these disruptive effects would be limited. If not, they could fester for years.

Conclusion

The current, expansive interpretation of Section 230 is, at best, only one possible reading of the statute. At worst, it goes substantially beyond what Congress intended in enacting the statute. This mismatch between the statute and its judicial interpretation is the result of repeated failures by Congress, the lower courts, and the Supreme Court—creating a situation in which the courts, in trying to interpret this important statute, are left with no good options.

If courts allow their interpretation to be driven by their evaluation of the policy consequences, they will be substituting their own, potentially inaccurate, views for those of the democratic process. If they continue the status quo, they will lock in the benefits of the current system, but also its harms, and will potentially lower the chance of congressional involvement. And if they interpret the statute narrowly, hoping to spur congressional action by mobilizing the political influence of giant internet platforms, they risk destabilizing the internet itself.

When it comes to fundamental questions of social and economic policy, there is no substitute for clear, comprehensive legislation. But whether Congress can produce such legislation—given its well-known pathologies and partisan gridlock—is a different question.


  • Acknowledgements and disclosures

    For helpful comments I thank Gregory Dickinson, Eric Goldman, Jeff Kosseff, Quinta Jurecic, Daphne Keller, Matt Perrault, Blake Reid, Chinny Sharma, Christopher Terry, Eugene Volokh, Ben Wittes, and workshop participants at the University of Minnesota Law School. For excellent research assistance I thank Adam George.

    Google is a general unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

  • Footnotes
    1. In this article I limit my analysis to the part of Section 230 that protects platforms for content that they host. In particular, I do not address the separate question of to what extent Section 230 protects platforms for their decision to remove content. See generally Adam Candeub & Eugene Volokh, Interpreting 47 U.S.C. § 230(c)(2), 1 J. Free Speech L. 175 (2021).
    2. Like many scholars, I have been immeasurably helped in understanding the winding road to Section 230 by Jeff Kosseff, Section 230’s preeminent historian, in particular Jeff Kosseff, The Twenty-Six Words that Created the Internet chs. 2–3 (2019); Jeff Kosseff, A User’s Guide to Section 230, and a Legislator’s Guide to Amending It (or Not), 37 Berk. Tech. L.J. 757, 761–73 (2022); and Jeff Kosseff, What Was the Purpose of Section 230? That’s a Tough Question, 103 B.U. L. Rev. (forthcoming 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4388216. I rely on Kosseff’s work throughout.
    3. 3 Restatement (Second) of Torts § 558.
    4. Id. § 581(1) & cmts. e–f (1977).
    5. Indeed, the judge in Stratton Oakmont himself recognized this, though he argued that the economic benefits that a platform would gain from moderating its content and thus being more user- and family-friendly would outweigh the additional litigation risk.
    6. Although the law is commonly described as “Section 230 of the Communications Decency Act,” it was actually Section 509 of the Telecommunications Act of 1996, of which Title V (covering sections 501–509) was the Communications Decency Act. Section 509 created a new section 230 in the Communications Act of 1934, codified at 47 U.S.C. § 230. Thus, the proper name of Section 230 should be “Section 230 of the Communications Act of 1934, as amended” or “Section 509 of the Telecommunications Act of 1996.” Jeff Kosseff, What’s in a Name? Quite a Bit, If You’re Talking About Section 230, Lawfare (Dec. 19, 2019). To avoid confusion, I will simply refer to the law as Section 230.
    7. Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans §230 Immunity, 86 Fordham L. Rev. 401, 404 (2017).
    8. There is an additional interpretive option: that (c)(1) was only ever meant as a definitional, rather than an immunity-granting provision, and that the only immunity granted by Section 230 was that of (c)(2), which immunizes platforms when they remove certain categories of content. See, e.g., Doe v. GTE Corp., 347 F.3d 655, 660 (7th Cir. 2003) (Easterbrook, J.); Shlomo Klapper, Reading Section 230, 70 Buff. L. Rev. 1237, 1281 (2022). I do not address this possibility, since my argument does not rely on this claim.
    9. In recent years, Cox and Wyden have stated that Zeran was rightly decided. See Jeff Kosseff, The Lawsuit Against America Online That Set Up Today’s Internet Battles, Slate (July 14, 2020) (“Did [Fourth Circuit Judge J. Harvie Wilkinson] misinterpret Section 230 when he ruled against Zeran? Both of its authors, Cox and Wyden, told me that Wilkinson got it right.”); see also Chris Cox & Ron Wyden, Reply Comments of Co-Authors of Section 230 of the Communications Act of 1934, In re NTIA Petition for Rulemaking to Clarify Provisions of Section 230 of the Communications Act of 1934, RM-11862 at 15–17 (2020), https://perma.cc/9DGT-8BCH (arguing that Section 230 does not permit the use of negligence standards).
    10. The question, in other words, is, if Zeran is correct, is there a situation in which recommendations are nevertheless unprotected by Section 230? If Zeran is wrong and Section 230 is narrowly about eliminating strict liability for platforms, then the answer is easier: Recommendations are unprotected at least when the platform knew or had reason to know that they were recommending illegal content.
    11. For an overview of how platforms can respond to objectionable content, see Eric Goldman, Content Moderation Remedies, 28 Mich. Tech. L. Rev. 1, 23–40 (2021).