Commentary

Trump’s Section 230 reform is repudiation in disguise

President Donald Trump speaks as US Attorney General William Barr listens during a discussion with State Attorneys General on Protecting Consumers from Social Media Abuses in the Cabinet Room of the White House on Wednesday, Sept. 23, 2020, in Washington, DC. (Photo by Oliver Contreras/For The New York Times)

President Donald Trump and former Vice President Joe Biden differ on most issues, but a new proposal from Trump’s Department of Justice reveals one point of agreement: Section 230 of the Communications Decency Act needs to go. Biden has openly called for its repeal. While the proposal purports to remedy flaws in the statute, its text shows that Trump has come to bury Section 230, not reform it. And though his Justice Department is advocating what it describes as reform, Trump made his personal opinion clear in a tweet on Tuesday: “REPEAL SECTION 230!!!”

Section 230 grants internet platforms legal immunity for most of the content posted by their users and provides the legal basis for platforms like Facebook and Twitter to operate without fear of ruinous liability: Thanks to Section 230, they mostly can’t be sued for the content their users create. Section 230 also provides platforms with “Good Samaritan” protection when they take down obscene or offensive speech, a measure that forms the primary legal basis for today’s content-moderation regimes.

These content-moderation regimes—which platforms use to limit the spread of misinformation, pornography, and violent imagery—have become a bête noire for the right, which views them as an effort to silence conservative voices. Stripping internet platforms of their liability protections under Section 230 has emerged as a major way for Trump and his allies to threaten Silicon Valley firms.

The Justice Department’s recent proposal to reform the law represents the Trump administration’s most concrete idea for overhauling these rules. The proposal is a mishmash of changes and grievances, poorly drafted in some parts and duplicative in others. Its contours, though, reflect a growing Republican consensus that popular internet platforms such as Twitter and Facebook are biased against conservative political views. Accordingly, the proposed legislation narrows the safe harbor that internet services and users currently enjoy against liability for content created by others.

The new provisions would penalize firms both for removing some material and for leaving other information available. These changes would likely increase costs for internet companies, in part by inviting litigation over questions such as what constitutes an “objectively reasonable” belief that material may be restricted under the statute.

On the whole, the Justice Department initiative is worse than an outright repeal of Section 230. It invites internet firms to chase an increasingly elusive quarry of immunity for third-party content while exposing them to a wider range of legal liability. And it would likely have the practical effect of pushing internet sites to curate information in ways the Trump administration specifically prefers. Forcing platforms to litigate compliance with the new regime will inevitably consume resources, and a narrower Section 230 might force companies to rely on a First Amendment defense with uncertain prospects of success.

The most familiar part of Section 230 protects internet sites and their users from most civil liability for information provided by a third party. If I post a comment to Facebook that defames you, you can sue me, but not Facebook. The Trump proposal changes this immunity in four important ways.

First, it significantly restricts the safe harbor if a platform removes or restricts content. At present, Section 230 protects internet sites both when they leave material up and when they take it down. It’s up to each site to decide what third-party communications it will or won’t allow. The suggested changes would shelter platforms that remove content only if the information at issue falls within one of ten categories of objectionable material, such as promoting violent extremism or self-harm. (The proposal adds these two terms but doesn’t bother to define them.) A decision to take down content would receive liability protection only if that decision was both based on an “objectively reasonable” belief and made in “good faith.” This good-faith standard is stringent: Content-moderation decisions must be consistent with a platform’s terms of service and official representations; implemented uniformly across similar material; and accompanied by notice, to the individuals whose content is taken down, of the factual basis for the restriction. Content removal that fails to meet these requirements will expose a provider to potential liability.

Second, the proposal dramatically broadens liability by redefining the term “information content provider.” The existing definition generally tracks our intuition: An information content provider is the author or speaker of the material at issue. Platforms are treated as information content providers only in the rare instances in which they require authors to include content, such as a vacation home rental service that mandates listing which gender an owner will rent to, or when they participate in its creation, such as having an employee write part of the material. The Department of Justice proposal, though, turns anyone who comments on, funds, solicits, or otherwise affirmatively and substantively contributes to someone else’s material into an information content provider. This would also make Twitter’s practice of labeling certain posts as false, or removing them, a source of liability for the service. If you click “like” on a defamatory Facebook post, you’re potentially liable under the Trump administration’s approach. Even if the underlying law does not impose liability, a person or entity accused of a violation would have to defend against the substance of the allegation, whereas the current law generally lets them dispose of the suit early in the process.

Third, the new legislation would expand the reach of liability that states could impose on internet platform providers. One reason for establishing a single federal approach to civil immunity is that information circulates on the internet with little regard for national borders, let alone state ones. Currently, Section 230 lets states enforce laws that are consistent with its provisions. Put more bluntly, it bars enforcement of any state statutes that run afoul of Section 230, such as defamation laws targeting entities that publish someone else’s content. The new language seems intended to enable states to use both civil and criminal prohibitions to target providers that purposefully promote, solicit, or facilitate certain content, or that fail to offer a takedown mechanism for designated unlawful material of which the platform has actual notice. This would give state attorneys general and other plaintiffs far more power to challenge sites’ decisions to host content. However, as described below, the language is confusing and badly drafted, and may not achieve this effect.

And fourth, the proposal would try to legislatively reverse judicial interpretations of Section 230 that make platforms immune from most civil liability, whether as publishers of material or as distributors of it. Right now, if someone wins a defamation lawsuit over a comment posted on Facebook, the speaker might have to delete it, but Facebook can’t be forced to do so. The administration’s draft would let plaintiffs who win a suit in any court in the United States go after platforms that fail to take down the material at issue. Many of these lawsuits result in default judgments: the speaker doesn’t know about the suit or doesn’t think the issue is worth contesting. Plaintiffs can often find a sympathetic local court to hear their case, and have been alarmingly successful at getting those courts to issue overbroad injunctions that purport to bar platforms from hosting contested information. The proposed language would reverse decades of established federal precedent and expose platforms to higher costs, since they would have to investigate the merits of each lawsuit or simply take down anything covered by a court judgment.

For such a sweeping alteration, the Department of Justice language is badly drafted, probably intentionally in places and almost certainly unintentionally in others. For example, it lets sites ban material that promotes terrorism (a term typically defined with care in federal law) without specifying what counts. It also allows sites to ban “violent extremism” as a category separate from terrorism, but without any indication of how the two differ. The concern is that this is strategic ambiguity: The administration wants sites to remove posts that support the Black Lives Matter movement, but to leave up content extolling the Proud Boys. Trump’s Department of Justice has made plain that it intends to pursue activity that it views as censoring conservative political views, but not liberal ones.

There are also provisions that conflict, probably due to the sort of careless error or outright indifference that has characterized the Trump administration’s approach to other issues, such as its proposed travel ban or Census questions. For example, the proposal removes the safe harbor altogether for content removals in its new Section 230(c)(1)(B), but then conditions the safe harbor on voluntary, good faith, and objectively reasonable actions in removing content in Section 230(c)(1)(C). The Department of Justice cannot eat its cake and have it, too. Immunity is either conditional or void altogether, but not both.

Then, there are parts of the proposal that are simply confusing. The proposal grants states the ability to pursue civil and criminal liability for providers, but only where the provider’s actions violate federal criminal law. This might contemplate a hybrid enforcement model, such as state civil lawsuits based on federal criminal law, which would raise a host of questions about pre-emption, federalism, and other complex considerations. The language might simply restate federalism principles, or it might seek to dramatically expand the scope of state enforcement powers; this part of the proposal is too opaque even to tell what it seeks to accomplish.

Time before the 2020 election is short, and congressional focus on a Supreme Court nomination and COVID-19 relief means that the Justice Department proposal to alter—and effectively abolish—Section 230 is mostly a signaling act. The proposal marks a path (albeit a muddy one) toward future legislation. And it suggests ways that sympathetic courts might arrive at similar ends by interpreting the current statute, such as reading the term “information content provider” more broadly, thereby imposing liability on a wider range of actors.

Finally, the Justice Department language demonstrates a dangerous understanding of the ways in which information costs can influence the behavior of key actors in the internet information ecosystem. It is not necessary to successfully impose liability on platforms to change their behavior. Rather, the cost of evaluating every piece of content to which someone objects, and of potentially having to defend it in court, will push internet sites to take material down regardless of whether it is lawful. This sort of jawboning is pernicious because it is hard to challenge in court but makes free expression costly.

Derek E. Bambauer is a professor of law at the University of Arizona, where he teaches internet law and intellectual property.

Facebook and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
