The long reach of Taamneh: Carriage and removal requirements for internet platforms

Daphne Keller, Director, Program on Platform Regulation, Cyber Policy Center, Stanford University

October 19, 2023


  • Taamneh shows us what a world without the much-maligned platform immunities under Section 230 would look like, and it is not the world that many critics of the law would actually want.
  • People who want platforms to remove more speech and people who want them to remove less speech—including Texas and Florida in major pending Supreme Court cases—will both claim that Taamneh supports them.
  • If the Court upholds the Texas and Florida “must-carry” laws, internet platforms will face competing legal mandates to both remove and carry content.
Flowers and pictures of the victims are placed near the entrance of Reina nightclub, which was attacked by a gunman, in Istanbul, Turkey, January 3, 2017. REUTERS/Osman Orsal
Introduction

Efforts to regulate internet platforms in the U.S. are often stymied by a fundamental disagreement. Some voters and leaders—mostly Democrats—want platforms to remove more content, including “lawful but awful” material like medical misinformation or hate speech. Some—mostly Republicans—want platforms to remove less content. These same competing impulses also appear in litigation. Some plaintiffs sue to prevent platforms from removing content, and others to hold them liable for failing to do so.

Congress doesn’t have to resolve those competing imperatives. It can continue its long-running stalemate on platform issues. It can even keep convening hearings in which members yell at platforms for diametrically opposed reasons, chastising them for doing both too much and too little content moderation. Courts don’t have that option, though. If litigants file cases about these issues, eventually judges will have to decide them.

Questions about when platforms must remove content to avoid liability came to the Supreme Court last term in a pair of closely linked cases. In Twitter v. Taamneh, the Court unanimously held that Twitter, Facebook, and YouTube were not liable for harms from Islamic State group (IS) attacks. In Gonzalez v. Google, it declined to decide whether the platforms were also immunized under the law known as Section 230, which protects platforms from many claims about unlawful user content. Litigation on other claims that would effectively hold platforms liable for failing to remove user speech—which I will call “must-remove” cases—continues in lower courts.

The Court’s docket for this coming term will feature questions about when platforms must not remove content. In a second pair of cases, platforms raise First Amendment challenges to so-called “must-carry” laws in Texas and Florida, which take away much of their discretion to set editorial policies.

Must-carry and must-remove claims generally come up in separate cases and are assessed as unrelated questions by courts. But they are deeply intertwined. Taamneh, as the Court’s most significant holding on platforms and speech since the 1990s, will likely matter to both. This piece examines the connections between must-remove and must-carry claims through the lens of the tort law questions examined in Taamneh. It argues that Taamneh’s emphasis on platform “passivity,” which permeates the opinion, may have troubling ramifications. Indeed, Texas relied heavily on Taamneh in a brief supporting its must-carry law, arguing that platforms’ passivity demonstrates that they have no First Amendment interest in enforcing editorial policies.

Taamneh is largely a victory for platforms. It powerfully rejects the idea that basic platform functions, including algorithmic ranking of user content, aid and abet users’ harmful acts. Because its analysis turns on general tort law, it will influence cases far beyond the antiterrorism context.

Despite its overall pro-platform holding, though, Taamneh may have unexpected consequences for future cases because of its emphasis on platforms’ supposedly “passive[,]” “agnostic[,]” and “indifferent” attitude toward users’ posts. I review those passages in detail in this separate article on Lawfare. As I discuss in Lawfare and below, this depiction mostly doesn’t matter for Taamneh’s legal analysis, and thus should have no formal precedential significance for future cases. But parties and perhaps courts will look for ways to make Taamneh’s characterizations matter all the same. Its omissions (like not mentioning that platforms enforce their own speech rules), understatements (saying platforms only “perhaps” “attempted” to remove posts and accounts), and inaccuracies (saying platforms in 2017 did not proactively screen uploads) are particularly striking in light of the many briefs the Court received, from parties and amici, detailing all of these practices. The Supreme Court clerks who read those briefs are gone, though. Their successors, and lower courts, may consider Taamneh a reliable primer on platforms’ operations.

Taamneh’s characterization of platform moderation practices may shape both future must-carry and must-remove cases. Regarding must-carry, Taamneh makes it sound as if platforms are already more or less what Texas and Florida want them to be: common carriers with no rules or preferences for user speech. It should come as no surprise that Texas has seized on these parts of the opinion as justifications for the state’s must-carry law. Taamneh does not prejudge anything about the Texas and Florida cases, of course. But the facts about platforms’ current content moderation (or lack thereof) do matter for must-carry cases. Texas and Florida argue that platforms’ hands-off behavior demonstrates their lack of expressive interest in curating or editing user speech, and that platforms have sacrificed any First Amendment objections to must-carry laws precisely because they allegedly hold themselves out as open to all comers. Justices Thomas, Alito, and Gorsuch have signaled receptivity to this line of argument.

Taamneh’s emphasis on passivity will also come up in future must-remove cases. Plaintiffs are sure to quote from it in arguing that the Taamneh platforms won because they were passive. Platforms exercising more editorial discretion—which is to say, every major platform—might, by that logic, assume more legal responsibility for users’ unlawful posts. I don’t think Taamneh really supports this argument as a legal matter. But its broad statements about platform passivity will give future plaintiffs a lot of jurisprudential spaghetti to throw at the wall. Some of it could stick. If it does, and plaintiffs win some of those cases, platforms will be further deterred from moderating “lawful but awful” speech.

In this sense, Taamneh may prove to be a double setback for advocates who want platforms to more actively moderate and remove user content. It directly rejects plaintiffs’ claims that platforms must moderate more in order to avoid liability—an argument for which, in the Section 230 context, Justice Jackson voiced sympathy. And it indirectly nudges platforms to avoid even voluntary content moderation. That nudge will become a lot more direct if plaintiffs succeed in persuading courts that moderation creates culpable knowledge and liability for platforms. The Court could have spoken to this in Taamneh, sending a clear message that platforms don’t assume legal risk under tort law by taking down terrorist content, hate speech, harassment, and other illegal or harmful online content. Instead, the ruling is at best silent on this question.

Taamneh illustrates how easily tort law standards can give rise to the so-called “moderator’s dilemma,” in which platforms want to moderate content but are deterred by fear of future liability. As I will discuss below, the basic elements of many tort claims give plaintiffs room to argue that content moderation gives platforms culpable knowledge about, or overall responsibility for, users’ speech. Even if platforms ultimately win such suits, the expense and nuisance of litigating remains a major consideration, particularly for smaller platforms. The specific elements of potential claims, and the ways that courts interpret them, shape the legal advice that lawyers provide to platforms about whether or how to moderate content.

The moderator’s dilemma is sometimes seen as a pre-Section 230 problem. Section 230, which was enacted in 1996, expressly immunizes platforms from most claims in both must-remove and must-carry cases. Its drafters’ goal was to avoid the moderator’s dilemma, and encourage companies to set and enforce rules to govern speech on their platforms. But Section 230 doesn’t immunize all claims—it has never immunized federal criminal or intellectual property claims, for example. As a result, the moderator’s dilemma has been alive and well in other areas of U.S. law all along. It is far more pervasive, though, in a post-230 legal landscape. Taamneh gives us a glimpse of how that might play out.

 Legal disputes like the one in Taamneh, about whether plaintiffs have any cause of action against platforms in the first place, are relatively common in litigation. But academic and public policy discussion tends not to focus on these merits issues. Instead, academics and policymakers often focus on the other two major sources of law governing platforms and speech: immunity statutes and the Constitution. Statutory immunities under laws like Section 230 have been the subject of noisy debates for a number of years, while constitutional questions rose to prominence more recently. It is clear that the First Amendment sets real limits on the laws that govern platforms. But it is less clear exactly what those limits are.

Basic liability standards, immunity standards, and First Amendment standards all combine to shape platform regulation. Together, they profoundly influence platforms’ content moderation and users’ ability to share information online. The tort law questions explored in this article are closely tied to the First Amendment issues I discussed in a previous article, Who Do You Sue?. Like this article, it explored links between must-carry and must-remove claims. It reviewed the Supreme Court precedent and competing speech rights claims that are now at issue in the Texas and Florida cases, as well as in “jawboning” cases about informal state pressure on platforms. Most relevantly for this article, it also described First Amendment limits on actual laws—including the laws invoked by plaintiffs in cases like Taamneh—that hold platforms liable for failing to remove content.

A predictable consequence of laws that hold platforms liable or potentially liable for user speech is that platforms will cautiously remove both unlawful and lawful material in order to protect themselves. As a midcentury Supreme Court case about bookstores tells us, this can be a problem of constitutional dimension. In Smith v. California, the Court struck down a law making booksellers strictly liable for obscene books on their shelves, noting that the resulting “censorship affecting the whole public” would be the result of state action, and “hardly less virulent for being privately administered[.]” A similar issue arguably arose in Taamneh, where at least one amicus brief pointed out the First Amendment implications of expansive liability claims like those asserted by the plaintiffs. The Court avoided those questions by rejecting plaintiffs’ claims and sticking narrowly to familiar tort law principles. That doesn’t always work, though. Sometimes, as I will discuss here, courts interpreting tort law must consider First Amendment limits.

By illustrating the problems with applying tort standards to online speech platforms, Taamneh also provides a timely reminder of the work that Section 230 does—both to mitigate moderator’s dilemma issues and to avoid constitutional conflict. The statute has earned its bipartisan opposition by thwarting the speech-regulation goals of lawmakers on both sides of the aisle. It lets platforms that want to avoid “censorship” leave content up; and lets platforms that prefer to remove content and enforce editorial policies do that. Critically, it keeps governments out of the business of telling platforms how to moderate content. It tries, instead, to create a diversity of forums offering distinct and competing rules for online speech. Must-remove laws and must-carry laws do the opposite. Must-remove laws make it harder for users to see and share speech online. Must-carry laws restrict platforms’ editorial choices, and effectively force users to view unwanted speech as the cost of seeing the speech they are actually interested in. Whatever its other strengths and weaknesses, Section 230 at least largely avoids the constitutional questions that arise when control over speech sits in state hands.

Following this Introduction, Section I of this essay will discuss the overall landscape of must-remove and must-carry claims. It will describe the role that liability standards like Taamneh’s already play, despite Section 230, in shaping platform behavior. It will also discuss legal, policy, and practical issues with must-carry laws like the ones in Texas and Florida. Section II will describe the Taamneh ruling in more detail and sketch out ways it might shape future must-remove and must-carry claims—and perhaps influence courts’ reasoning about Section 230 itself. It will also examine a scenario that courts and advocates would do well to consider now: What if the Court upholds the Texas and Florida laws, leaving platforms subject to simultaneous must-carry and must-remove obligations?

A few caveats are in order. My analysis is informed by my own experience, which includes serving as an associate general counsel to Google until 2015 and consulting for smaller platforms. This article is about platforms that allow users to host and share content, like the defendants in Taamneh—and not about more complex entities like Wikipedia or CloudFlare, which play different roles in the internet ecosystem. I will use some other common terms as if they had stable meanings, when they actually do not. “Unlawful content,” in particular, is rarely a clearcut category. Most speech is lawful in at least some circumstances, like news reporting or parody. The word “remove” is also an oversimplification, since platforms may also demonetize, de-index, demote, or take other adverse actions against user content. I’ve argued that those distinctions make little difference for constitutional analysis or for Section 230 purposes.

This essay largely sets aside those variables to focus on Taamneh and the connections between must-carry and must-remove mandates. For both policy and doctrinal purposes, must-carry and must-remove claims are two parts of the same puzzle. Courts should be wary of considering either in isolation.

I. The Moderator’s Dilemma and the Lure of Common Carriage

Cases about platforms’ liability for user speech tell them, as a practical matter, when they must remove that speech in order to avoid liability. A single must-remove case can have sweeping consequences, shaping numerous platforms’ future approaches to content moderation. Taamneh re-raises longstanding questions about the moderator’s dilemma, and whether rational, risk-averse platforms should attempt to avoid liability by refusing to moderate user speech. That would be a perverse result for plaintiffs whose goal was to make platforms moderate speech more. It could lead platforms to opt for the very passivity that, according to Texas and Florida, supports must-carry mandates. This section lays out the overall state of play in disputes about these interlinked must-remove and must-carry questions.

Must-remove claims and the moderator’s dilemma

In the U.S., must-remove cases—that is, cases that effectively seek to hold platforms liable for failing to remove users’ speech—come in many forms. Some common claims, including defamation, fail because of platforms’ statutory immunities. But claims not immunized by Section 230, including federal criminal law, intellectual property, and trafficking or prostitution claims, can all potentially impose liability if platforms do not remove content.

Platforms’ incentives vary significantly depending on the details of must-remove liability standards, as James Grimmelmann and Pengfei Zhang recently illustrated through rigorous economic modeling. Laws imposing liability for unlawful content that a platform “knows” about can create moderator’s-dilemma incentives and lead platforms to avoid any possible knowledge by not reviewing or attempting to moderate user content. Knowledge-based liability can also drive platforms to the opposite extreme—that is, removing any content that comes to employees’ attention and is even slightly suspicious.
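
To make the first of those incentives concrete, here is a deliberately simple back-of-the-envelope calculation of my own—not Grimmelmann and Zhang’s model—using invented numbers. It shows how a pure “actual knowledge” rule can make declining to review content the cheaper strategy, at least in a model that omits the commercial benefits of moderation (user trust, advertiser demand) that push the other way.

```python
# Toy comparison of a platform's expected costs under a pure "actual knowledge"
# liability rule. All numbers are hypothetical; this illustrates the incentive
# structure only, and omits the business reasons to moderate anyway.

def expected_cost(reviews_content: bool,
                  posts: int = 1_000_000,
                  unlawful_share: float = 0.001,     # fraction of posts that are unlawful
                  found_if_reviewed: float = 0.7,    # share of unlawful posts reviewers actually see
                  left_up_after_seen: float = 0.1,   # share seen but mistakenly left up
                  damages_per_known_item: float = 1_000.0,
                  review_cost_per_post: float = 0.02) -> float:
    unlawful_posts = posts * unlawful_share
    if not reviews_content:
        # No review means no "knowledge," so a pure knowledge-based rule imposes no liability.
        return 0.0
    review_bill = posts * review_cost_per_post
    # Items a reviewer saw but left up are the ones a plaintiff can say the platform "knew" about.
    known_but_unremoved = unlawful_posts * found_if_reviewed * left_up_after_seen
    return review_bill + known_but_unremoved * damages_per_known_item

print("Expected cost if the platform reviews:        ", expected_cost(True))   # 90,000.0
print("Expected cost if the platform does not review:", expected_cost(False))  # 0.0
```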

Laws that immunize platforms until they know about or receive “notice” of illegal content can combine these two incentives. Platforms operating under laws like the Digital Millennium Copyright Act (DMCA) are sometimes reluctant to moderate content or allow employees to review it until they receive a notice alleging that content is illegal. At that point, many err strongly on the side of simply removing any content identified in the notice. This can be a real problem for users’ free expression interests, because “notice-and-takedown” regimes under laws like the DMCA in the U.S. and the Right to Be Forgotten in the EU attract an enormous number of false allegations. In some notorious examples, politicians have used DMCA notices to try to remove material about corruption charges, and the Ecuadorian government has used them to suppress critical journalism.
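
A minimal sketch of the remove-first logic this dynamic tends to produce, with invented post identifiers and notice fields—this is not the DMCA’s actual procedure, only the behavior risk-averse platforms often adopt once a notice arrives:

```python
# No proactive review; once a notice arrives, remove whatever it identifies
# without judging the claim's merit. The in-memory "posts" store and the
# notice fields are invented for illustration.

posts = {"p1": "fan remix of a pop song", "p2": "original vacation video"}

def handle_takedown_notice(notice: dict) -> None:
    """Remove every item the notice identifies. Erring toward removal preserves the
    safe harbor, but also takes down lawful speech when a notice is mistaken or abusive."""
    for post_id in notice.get("identified_posts", []):
        posts.pop(post_id, None)

# A notice that sweeps in lawful content along with the allegedly infringing post.
handle_takedown_notice({"claimant": "Example Rights LLC",
                        "identified_posts": ["p1", "p2"]})
print(posts)  # {} -- both posts removed, including the lawful one
```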

Platform immunity legislation like Section 230, the DMCA, or the EU’s new Digital Services Act (DSA) often aims to better align platforms’ incentives with societal goals. At the simplest, the goal might be to make platforms remove unlawful content, while avoiding incentives for them to remove lawful speech. The DMCA tries to do this, for example, by penalizing bad faith notices and creating appeals possibilities for accused speakers. Section 230 is in some ways a blunter legal instrument, because its immunity is simple and unconditional. But in conjunction with other federal laws, it also creates tiers of legal responsibility for different kinds of harms. It does not, for example, immunize platforms from claims about content that violates federal criminal law. A platform that distributes child sexual abuse material faces the same federal criminal law standards as anyone else. Section 230 is also in some ways more ambitious than laws like the DMCA, because of its engagement with competition and technical innovation. It avoids over-specifying the design of immunized technologies, expressly articulates pro-competitive goals, and immunizes providers of user-empowerment tools to maximize individuals’ control over the content they see.

Taamneh somewhat artificially isolated tort liability questions from platforms’ defenses under Section 230, because only the tort questions were at issue on appeal. As a result, it provides a valuable look at how cases might play out in a world without Section 230. It is in many ways a fairly typical platform liability case, turning on legal elements that have close analogs in other areas of law. Plaintiffs argued that by displaying IS content in ranked features like Twitter’s newsfeed, the platforms violated the Justice Against Sponsors of Terrorism Act (JASTA). Liability under that statute turned on two key questions. One is what the platforms did. Lawyers sometimes call this the actus reus. The culpable act that plaintiffs needed to establish in Taamneh was “substantial assistance.” Similar culpable acts for platforms include “material contribution” (in copyright cases) and “facilitation” (in trafficking and prostitution cases). The second question is what the platforms knew. This culpable mental state (sometimes called mens rea or scienter) was, in the statute applied in Taamneh, “knowingly.” Other laws might hold platforms liable for things they “should have known” or for mental states like “intent” or “reckless disregard.”

Both of these legal elements—the culpable act and mental state—contribute to the moderator’s dilemma for platforms. Platforms that hire moderators to enforce speech rules and remove objectionable content may find those actions characterized as “assistance” for whatever content doesn’t get removed. Or plaintiffs may argue that a platform’s entire business “assists” or “facilitates” users’ actions—a claim that is almost by definition true of virtually any platform, at least given the ordinary meaning of those terms. If courts accept that reasoning, then the only element that matters in litigation, and the only thing platforms can try to control ahead of time, is their knowledge or other relevant mental state. Moderating user content makes that much harder. Once platform moderators have looked at user posts in order to enforce rules against things like nudity or harassment, plaintiffs can argue that they “knew” or “should have known” about illegality, even on unrelated legal grounds. In Taamneh, plaintiffs made precisely this argument about YouTube’s knowledge of IS’s presence on the platform, because employees would generally review videos on the platform to determine their eligibility for monetization.

These concerns matter to U.S. platforms, because Section 230 is not the only source of law shaping their choices about content moderation. For one thing, major platforms operate internationally. Outside the U.S., tort standards like the ones in Taamneh are commonplace and lead to claims that platforms “contributed to” and “knew” or “should have known about” users’ illegal conduct. Immunity standards can work the same way. In the EU, for example, courts and lawmakers struggled for years to encourage content moderation under an imprecise law that exposed platforms to liability if they had “knowledge” or “control over” online content.

The moderator’s dilemma persists in U.S. law, too. Platforms must assess whether particular moderation efforts are worth it, given potentially increased exposure to must-remove claims. The biggest platforms seem generally very willing to moderate and accept the resulting litigation risk these days. But for smaller or more risk-averse platforms, the moderator’s dilemma can influence moderation practices to a degree often underappreciated in discussions about Section 230. The biggest source of pressure generally comes from copyright law. Copyright claims against platforms often involve well-funded, highly motivated claimants, and statutory damages that could bankrupt most internet companies. Platforms are generally immunized under the DMCA, but lose that statutory immunity if they disregard notices provided under the statute. They can also lose immunity under a separate, hotly litigated “knowledge” exception in the DMCA. At that point, they can face claims like common law contributory liability, which turns on “knowledge” and “material contribution” to infringement.

Copyright law provides some of the richest information about what U.S. litigation and platform behavior might look like in a world without Section 230. One lesson is the consistent tendency, discussed above, to remove lawful speech. Another lesson, particularly compelling to investors and platform operators, is that platforms can prevail in must-remove cases, but still join what Wired called the “long list of promising startups driven into bankruptcy” by litigation. One such platform, Veoh, litigated a case almost identical to YouTube’s dispute with Viacom over liability for copyright infringing content uploaded to platforms. Both platforms won their cases. But only YouTube—backed by Google’s deep litigation coffers—avoided bankruptcy and remains in business today.

Other lessons are more granular. The 2016 Capitol Records v. Vimeo case, for example, illustrates the moderator’s dilemma in action. Vimeo employed moderators to evaluate whether videos violated its Terms of Service. Plaintiffs argued that when these platform employees saw videos that included entire popular songs, they could be assumed to “know” the use was infringing. The 2nd Circuit Court of Appeals rejected this theory—pointing out that people might recognize different music based on their age or musical taste. It also noted that moderators who reviewed videos might have focused on other issues, like “obscenity or bigotry[,]” rather than copyright.

Yet the fact that employees were carrying out the very content moderation encouraged by Section 230 did not protect Vimeo from liability. The 2nd Circuit said that plaintiffs could seek discovery, asking individual employees about what videos they looked at and what they “knew” about them. By undertaking content moderation, the platform exposed itself to litigation and discovery costs, at minimum—and perhaps to extremely steep statutory damages.

A platform in Vimeo’s position—particularly a smaller platform—might rationally decide it is better off not trying to enforce content rules at all, even under current U.S. law. Some American platforms have always avoided certain content moderation efforts on this basis. That some do build expensive moderation tools and hire armies of content moderators is partly a testament to the power of norms and markets in shaping company behavior. It also reflects platforms’ strong immunities under Section 230 and relatively clear conditional immunities under the DMCA—as well as courts’ general reluctance to puncture DMCA immunity absent a very clear indication of knowledge.

Judges are poorly equipped to resolve the problems generated by the moderator’s dilemma. As Justice Kagan said during oral arguments in Gonzalez, the Supreme Court justices themselves are “not the nine greatest experts on the internet.” Beyond the question of expertise, though, judges of any sort have a very limited toolkit for shaping platforms’ behavior or incentives. They can’t create notice-and-takedown systems to balance the rights of online speakers and victims of harm, like the ones legislators created in the DMCA and DSA or like those recommended in international human rights literature. Even if judges could somehow craft such a system, litigation would not provide all the relevant information needed for balancing, as it generally just surfaces the facts that two parties care about. In must-remove cases, courts hear from platforms and victims of harms. Internet users who are concerned about speech and information rights—and who might argue against legal standards that encourage substantial over-removal of lawful speech—are not represented. This “three-body problem” with litigation is one of many reasons why, as Justices Kagan and Kavanaugh both suggested in oral arguments, Congress is the better venue for complex changes to internet policy.

Courts in must-remove cases do sometimes look beyond merits issues, though, and consider the constitutional backdrop. That’s what the Supreme Court did in Smith, rejecting overly stringent liability for booksellers. In X-Citement Video, similarly, it managed to read a single statutory passage as establishing strict liability for creators of pornographic videos, but knowledge-based liability for distributors, in order to avoid “imput[ing] to Congress the intent to pass unconstitutional legislation.” This method for reconciling tort and constitutional law, using mental state liability standards to protect speech, is also familiar from the seminal New York Times v. Sullivan case. The Court there adopted the stringent “actual malice” mental state standard for defamation claims by public figures in order to protect First Amendment rights.

Rulings in other countries, drawing in part on U.S. First Amendment precedent, do the same thing for platform liability. Supreme Courts in both Argentina and India, for example, ruled platforms cannot be deemed to “know” which online content is unlawful until a court or competent authority has ruled on the speech in question. A Mexican court held that speakers must be given notice and an opportunity to contest allegations about their speech. Even under Europe’s softer free expression rules, courts have cited users’ speech rights as a basis for limiting when platforms can be said to “know” about unlawful content. Reinterpreting the mental state element of tort claims is often the most direct way that courts can align tort law with free expression protections.

The Court in Taamneh did not address this constitutional backdrop, nor did it attempt to wrangle with the moderator’s dilemma. By limiting its analysis to tort doctrine, it avoided fraught and complex questions—perhaps wisely, since the issues had not been extensively briefed to the Supreme Court or addressed in the lower court opinion under review. But the result is a must-remove liability standard that arguably encourages platforms to remain passive and avoid moderating content.

Must-carry claims and common carriage

Must-remove cases like Taamneh are relatively familiar legal territory. U.S. courts have reviewed hundreds and perhaps thousands of must-remove cases. But must-carry cases, in which claimants assert a right to make platforms host and transmit content against their will, are relative terra incognita.

The total number of must-carry cases is likely in the dozens. As I discussed in Who Do You Sue?, platforms have historically won such cases—a run of victories that ended with the 11th Circuit’s ruling upholding Texas’s must-carry law. Previous plaintiffs often could not make out the merits of their legal claims; or platforms won based on their Terms of Service, Section 230 immunities, or First Amendment rights to make editorial decisions.

While must-carry claims are not new, the motivation for them has largely shifted. Early plaintiffs often sued because of their commercial interests in reaching customers through particular platforms. Recent cases are often political, with plaintiffs asserting rights against platform “censorship”—and alleging platform bias against politically conservative viewpoints.

Critics have increasingly expressed concern about platform moderation that, as Justice Thomas put it, “stifled” or “smothered” lawful speech. Thomas and others have suggested that platforms should instead be required to carry users’ speech regardless of its content or viewpoint. This idea has a lot of legal and practical problems. But it stems from very valid concerns about platforms’ concentrated, private power over public discourse.

Accusations of platform “censorship” in the U.S. currently come more from the political right, as in the ongoing Missouri v. Biden case over government “jawboning” and platforms’ enforcement of policies against COVID-19 disinformation. But the underlying concerns are not innately partisan. The most important international ruling on jawboning to date, for example—a decision by the Israeli Supreme Court—involves more traditionally liberal concerns. The ruling rejected Palestinians’ arguments that the Israeli government violated their rights by requesting, through private extra-judicial channels, that platforms remove their posts as “terrorist” content.

Platforms may have been controlled by comparatively liberal Californians for the past few years. But as Elon Musk’s Twitter takeover demonstrates, when regime change happens, platforms’ speech rules can change overnight. In general, we should expect corporate decisions about online speech to be predictably shaped by economic motives, not by political conviction. Speech rules may be influenced by advertisers’ preferences, broader business interests, political expediency, and desire for access to markets outside the U.S. In the long term, those pressures seem likeliest to harm speech and speakers that are unpopular, marginalized, and politically or economically powerless.

Policymakers’ and judges’ political alignment on must-carry questions has shifted over time. The GOP platform long included opposition to the “fairness doctrine” for television stations. Republican-appointed FCC Commissioners deemed that carriage mandate unconstitutional, and President Reagan vetoed a bill that would have reinstated it. As recently as 1996, in Denver Area Educational Telecommunications Consortium v. FCC, Justices Thomas and Scalia expressed great skepticism about carriage mandates for cable companies, comparing them to the “government forc[ing] the editor of a collection of essays to print other essays.” Justices Ginsburg and Kennedy, by contrast, endorsed the dedication of cable channels for public use, invoking common-carriage or public-forum doctrines.

The political and legal landscape began to shift during President Trump’s time in office. Throughout the Trump administration, Republicans increasingly called for must-carry mandates for internet platforms. In 2021, Justice Thomas took the unusual step of writing about the idea in an opinion, framed as a concurrence to an otherwise pro forma order vacating the 2nd Circuit’s ruling in Knight Institute v. Trump. (That case, about state actors’ social media accounts, became moot when President Trump left office.) He suggested that lawmakers might constitutionally compel platforms to carry speech against their will.

Within a few months after the publication of Justice Thomas’s opinion, two states enacted must-carry laws. Texas’s law prohibits platforms from moderating most speech based on its “viewpoint.” Florida’s law requires platforms to carry almost any speech by “journalistic enterprises” and by or about political candidates—seemingly even if they post defamation, copyright infringement, or offers to sell illegal drugs. Platforms challenged both laws, leading to a circuit split on a First Amendment question: Whether the laws violate platforms’ own rights to set and enforce editorial policies. The Court agreed to hear these cases, NetChoice v. Paxton and NetChoice v. Moody (“the NetChoice cases”), in the 2023-24 term.

Thomas’s Knight concurrence compared platforms to common carriers, public accommodations, and designated public forums—all entities with special obligations under the law. He noted that under common carriage precedent, states could potentially impose obligations on any platform that already “holds itself out as open to the public.” Texas and Florida advance the same theory in the NetChoice cases. Texas argues that platforms are “twenty-first century descendants of telegraph and telephone companies[,]” with only limited First Amendment protections because they are “open to the general public.” In a brief earlier foray to the Supreme Court—an emergency petition that kept Texas’s law from coming into effect—Justices Alito, Thomas, and Gorsuch signaled openness to this line of argument, writing that platforms likely forfeited their constitutional objections to must-carry laws by “hold[ing] themselves out as ‘open to the public’” and as “neutral forums for the speech of others.”

The Taamneh opinion, authored by Justice Thomas, uses quite similar language in describing platforms. It characterizes them as passive entities that are “generally available to the internet-using public,” and as functional substitutes for traditional common carriers. Most users, it asserts, “use the platforms for interactions that once took place via mail, on the phone, or in public areas.” A reader who learned about platforms only from Taamneh would think that they only “perhaps” ever removed users’ content or “attempted” to deplatform speakers—even though all parties in Taamneh agreed that platforms did both things, and Twitter described removing hundreds of thousands of IS accounts. Such a reader would also learn, incorrectly, that platforms never pre-screened content at upload. (All three did that at the time of the case for child sexual abuse material, and at least two pre-screened for copyright infringement.) They would learn that ranking algorithms are “agnostic” as to content of users’ posts. (I think that’s incorrect, too.) And they would encounter no indication that platforms set and enforce their own discretionary rules against “lawful but awful” material like disinformation and hate speech. Taamneh largely omits any reference to the activities that platforms, in NetChoice, identify as expressive acts and exercises of editorial discretion. Its descriptions are aligned with Texas’s claim that platforms already behave like common carriers—and that they have, as a result, forfeited First Amendment rights to set editorial policy.

Taamneh was not the Justices’ last chance to preview theories, forge alliances, or lay the foundation for future positions before addressing Texas’s and Florida’s must-carry rules. Those opportunities will arise in two other cases with oral arguments earlier in the same 2023-24 term, Lindke v. Freed and Garnier v. O’Connor-Ratcliff. Both involve government officials who operated social media accounts and blocked the plaintiffs, who argue that this violated the First Amendment. In principle, the Court could easily resolve these cases about government officials without discussing platforms’ rights to block users or content. But—particularly if the Court holds that defendants were not acting in their governmental capacity—Lindke and Garnier also offer ample room for justices to strategically expound on must-carry and other high-profile questions about the relationship between state actors and platforms in setting rules for online speech.

Must-carry rules for platforms create a daunting array of constitutional, doctrinal, and practical problems. Very few users would want to spend time on YouTube or Facebook if it meant seeing all the hate speech, extreme pornography, and scams that major platforms currently exclude. Users who do want to see content of this sort can find it now on barely moderated sites like 4chan or 8chan—but most choose not to. If the same material became common on mainstream platforms, many users would almost certainly leave. So would advertisers. Platforms would lose value for their operators, but also for many users, both as listeners and as speakers. Content creators ranging from emerging hip hop artists to providers of makeup tutorials would lose both audiences and revenue streams if sites like YouTube and Facebook turned into free speech mosh pits and drove away key audiences.

Americans may be deeply divided about what speech rules platforms should apply, as a legal or moral matter. But very few people actually want to waste their own time on the illegal or lawful-but-awful content that common carriage laws would unleash. And while requiring carriage just for some lawful speech may sound appealing, devising legal rules to this effect would be a practical, political, and constitutional nightmare. (The idea is not entirely without supporters, though. FCC Commissioner Brendan Carr appears to favor it.)

On the constitutional front, platforms’ First Amendment arguments against must-carry rules are just the start. There are also Dormant Commerce Clause questions about subjecting shared national communications services to competing obligations in different states. Platforms raised those issues in NetChoice, but they were not reviewed by the appellate courts and are not part of the petitions for Supreme Court review. If carriage obligations were to destroy much of platforms’ commercial value, this could also raise Takings Clause questions.

And there are many as-yet-underexplored questions about users’ First Amendment rights when government regulation of platforms determines what information they see online, and what opportunities they have to speak and reach a desired audience. Individual speakers’ rights would be dramatically affected, for example, if platforms opted to comply with Texas’s viewpoint-neutrality rule or Florida’s requirement for “consistent” content moderation by simply banning all speech about topics like race or abortion. Platforms might readily conclude that, as both a commercial and moral matter, suppressing all of that speech is preferable to carrying the most outrageous, harmful, or hateful things that users have to say on those topics. In NetChoice, the platforms challenge the laws as “must-carry” mandates and speech compulsions. But that framing should not obscure the fact that Texas and Florida’s laws could just as easily lead to removal and suppression of online speech.

If platforms did choose to act more like common carriers, allowing previously prohibited content to appear, existing users might have other First Amendment concerns. Online speakers would surely object if state interference drowned their speech in a sea of noise, leaving them unable to reach existing audiences. Internet users’ First Amendment rights as listeners are equally relevant: Must-carry laws effectively force them to hear unwanted speech, as the state-imposed cost of hearing the speech they are actually interested in.

The doctrinal questions are also tricky. As Blake Reid has written, “there is no coherent, widely-agreed-upon understanding” of which companies qualify as common carriers, or what legal obligations they assume as a result. The Court could change that, of course—but NetChoice would be a strange case in which to do so. For one thing, the Texas and Florida statutes use the language of common carriage, but their actual rules impose or permit a complicated assortment of speaker, content, and even viewpoint-based distinctions between users’ posts. Florida’s law uses state power to favor certain speakers and topics. Texas’s law, as I read it, disfavors some lawful-but-awful speech, giving platforms free rein to remove speech on a few, state-selected topics. Courts applying the laws would also be put in the technically and constitutionally difficult position of deciding what the correct ranking of content in search results or news feeds would be, in order to determine whether platforms’ actual ranking violates the law. All of this makes the NetChoice laws odd versions of “common carriage” to review. There is, in any case, no circuit split on this issue. In the NetChoice cases to date, only one judge—Judge Oldham in the 5th Circuit’s ruling on the Texas law—endorsed states’ common carriage arguments.

Public accommodations arguments—which were not part of the rulings below in NetChoice, but which appear in Texas’s briefs to the Supreme Court, and which Thomas has raised elsewhere—may be doctrinally even thornier. For one thing, the Court’s conservative majority recently held that Colorado’s public accommodations law could not require a web developer to build websites for same-sex weddings, because of the developer’s First Amendment rights. Interpreting public accommodations laws to nonetheless strip platforms of First Amendment rights, and compelling them to carry hate speech as a consequence, would be both doctrinally awkward and extremely troubling as a matter of public policy. Public accommodations laws are intended to promote equality goals, and protect people from discrimination based on identity attributes like race, religion, or disability. Reinterpreting those laws to prevent discrimination based on people’s speech would be a remarkable shift. It would use anti-discrimination laws to make platforms disseminate racist, homophobic, antisemitic, and otherwise discriminatory messages.

The internet was quite literally designed to avoid problems of this sort—to reap the benefits of common carriage while also allowing content moderation. Following “end-to-end” technical design principles, the “dumb pipes” that transmit information, like undersea cables or telecommunications carriers, were expected to be neutral—to play no role in approving or disapproving of the speech they carry. Power to discriminate based on content was reserved for applications or software at the edge of the network, under the control of end users. Users might, for example, decide what content to retrieve using the File Transfer Protocol or web browsers; or configure email clients to block profanity; or use services like NetNanny to restrict pornography websites. This distributed control, under diverse standards, was to be layered on top of carriage mandates at the infrastructure level. End-to-end design aimed to keep decisions about speech out of the hands of any centralized authority, be it a government or a company.
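
As a rough illustration of that division of labor—with hypothetical authors, terms, and rules throughout—the filtering logic sits in software the end user configures, while the underlying network and platform carry everything:

```python
# A sketch of "edge" filtering in the end-to-end spirit: the network and the platform
# transmit all posts, and display decisions are made by rules the individual user sets.
# The Post class, authors, and blocked terms are invented examples.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Each user maintains their own preferences; nothing is removed upstream.
MY_BLOCKED_AUTHORS = {"spam_bot_42"}
MY_BLOCKED_TERMS = {"crypto giveaway", "spoiler"}

def user_filter(feed: list[Post]) -> list[Post]:
    """Hide posts this user chose to block; other users, applying other rules, see a different feed."""
    visible = []
    for post in feed:
        if post.author in MY_BLOCKED_AUTHORS:
            continue
        if any(term in post.text.lower() for term in MY_BLOCKED_TERMS):
            continue
        visible.append(post)
    return visible

feed = [Post("alice", "Great hike today"),
        Post("spam_bot_42", "Free crypto giveaway!!"),
        Post("bob", "Spoiler: the ending is wild")]
print([p.text for p in user_filter(feed)])  # ['Great hike today']
```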

Policymakers in the 1990s, too, tried to keep individual users in control of decisions about speech. Congress in Section 230 lauded and immunized the providers of technologies for end-user control over content. And the Supreme Court in Reno v. ACLU discussed parental control technologies in the household as tools superior to, and more narrowly tailored than, top-down state regulation of online speech.

Demands for common carriage on platforms like Facebook or YouTube are not entirely out of keeping with end-to-end network design. Proponents want to treat major platforms as unavoidable and essential public utilities and relegate them to the status of “infrastructure,” to use Justice Thomas’s language in Taamneh. But that only works if some new technology emerges to take on the “end” function, giving users the ability to choose their preferred rules for online speech. Otherwise, common carriage will just remove all controls, subjecting users to a barrage of online garbage.

Technical models that treat major platforms more like infrastructure, and layer new end-user controls and competition on top, do exist. Functioning commercial versions are in their infancy, though. They can be blocked or sued by platforms and also face challenges relating to privacy, technical feasibility, costs, and revenue models. Lawmakers could help address these problems—and indeed, Texas’s law makes a gesture in this direction, in a brief provision seemingly allowing platforms themselves to offer end-user control mechanisms. But lawmakers could do much more to foster the development of tools for user control, from diverse, antagonistic, and competitive sources—not just incumbent platforms. At the very minimum, they could remove legal impediments to platform competitors that wish to build such tools.
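
A sketch of what such a middleware layer could look like, assuming a hypothetical interface in which the platform hands over a raw feed and user-chosen rankers decide what surfaces first—no major platform exposes an API like this today:

```python
# Middleware sketch: the platform is treated as neutral infrastructure returning an
# unranked feed, and the user plugs in a third-party ranker of their choosing.
# Every name and field here is invented for illustration.

from typing import Callable

Item = dict  # e.g. {"id": ..., "text": ..., "topic": ..., "age_minutes": ...}
Ranker = Callable[[list[Item]], list[Item]]

def chronological_ranker(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: i["age_minutes"])

def local_news_first_ranker(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: (i["topic"] != "local news", i["age_minutes"]))

# Competing providers could offer rankers; the user, not the platform, picks one.
AVAILABLE_RANKERS: dict[str, Ranker] = {
    "chronological": chronological_ranker,
    "local_news_first": local_news_first_ranker,
}

def render_feed(raw_feed: list[Item], users_choice: str) -> list[Item]:
    """Apply the user-selected middleware ranker to the platform's raw feed."""
    return AVAILABLE_RANKERS[users_choice](raw_feed)

raw = [{"id": 1, "text": "City council vote tonight", "topic": "local news", "age_minutes": 90},
       {"id": 2, "text": "Celebrity gossip", "topic": "entertainment", "age_minutes": 5}]
print([i["id"] for i in render_feed(raw, "local_news_first")])  # [1, 2]
print([i["id"] for i in render_feed(raw, "chronological")])     # [2, 1]
```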

The Supreme Court has identified individual control over content as a less restrictive alternative in striking down past speech regulations. Those older cases involved speech restrictions, but principles for speech compulsion should be no different—and in any case, the Texas and Florida laws do effectively restrict speech by both platforms and users. In U.S. v. Playboy, for example, the Court struck down a broad requirement for cable companies to block or scramble pornographic content, saying that a less restrictive option would be to let individual subscribers decide what channels to block. “[T]argeted blocking is less restrictive than banning,” it wrote, and “the Government cannot ban speech if targeted blocking is a feasible and effective means” of furthering state interests. “Technology expands the capacity to choose[,]” the Court continued. Where individual choice is feasible, lawmakers should not “assume the Government is best positioned to make these choices for us.”

Technologies expanding capacity to choose are eminently feasible on the internet and relevant under Reno v. ACLU. As the Playboy Court noted, the “mere possibility that user-based Internet screening software would ‘soon be widely available’” contributed to its rejection of overbroad internet speech regulation in Reno.

II. Taamneh

Literal must-carry mandates are not the only ways the law can push platforms toward common carriage. Liability standards in must-remove cases like Taamneh can do the same thing, by indicating to platforms that treating all content neutrally is the way to avoid liability. The irony is that no party to Taamneh wanted or asked for this outcome. Plaintiffs, like many advocates and legal scholars, wanted platforms to face liability unless they do more to combat unlawful user content. Defendant platforms wanted to avoid liability while remaining free to moderate. No party wanted an outcome suggesting that platforms should do less moderation.

Taamneh does not directly tell platforms to engage in less moderation, of course. But future plaintiffs will try to give it that effect. Whether or not they win, platforms may try to avoid those cases by moderating less. If those plaintiffs do prevail in arguing that platforms assume more liability by moderating content, platforms will be more reluctant to engage in voluntary moderation going forward. That would leave far fewer tools against “lawful but awful” online content—material like Holocaust denial or pro-suicide videos that cannot constitutionally be regulated, but that violates most Americans’ personal moral beliefs or social norms.

This section will explain the ruling in broader strokes, and discuss how it might play into this dynamic. It will offer predictions and conjectures about its impact on future must-remove cases, future must-carry cases, and future rulings that attempt to reconcile must-carry and must-remove obligations.

Case overview

Taamneh and Gonzalez both arose from horrific events. The plaintiffs in each case lost family members in IS attacks. The attack at issue in Taamneh took place in 2017 at the Reina nightclub in Istanbul. The Taamneh plaintiffs argued that three platform defendants—YouTube, Facebook, and Twitter—were liable under JASTA because of their role in spreading IS’s message online. They did not claim that terrorists used the platforms to plan or execute the attacks; in fact, the Istanbul attacker appears not to have used Twitter at all. All parties agreed that defendants had policies against IS content and systems in place to remove IS material once they became aware of it. Plaintiffs did not identify any specific IS posts or accounts as the basis for their claims.

Rather, plaintiffs argued that platforms took insufficient action against IS—allegedly, only removing specific accounts after being notified. The platforms knew that additional IS content remained on the platform and was boosted by ranking algorithms, plaintiffs argued, because reliable sources including law enforcement and news media told them so. But beyond removing accounts or material specifically identified to them, the platforms opted to “avoid reviewing their files for terrorist materials,” and took “no steps of their own” to detect such additional material.

The 9th Circuit, which reviewed the two cases together, upheld the JASTA claim in Taamneh, but said platforms were immune under Section 230 in Gonzalez. Its 2021 ruling held that plaintiffs’ allegations met JASTA’s statutory prohibition on “aid[ing] and abet[ting], by knowingly providing substantial assistance[.]” Twitter successfully petitioned for Supreme Court review in the 2022-23 term. This made Twitter the only platform to appear at oral argument, though Google and Meta filed briefs as parties to Taamneh.

Before the Supreme Court, the plaintiffs shifted focus, arguing that platforms’ ranked newsfeeds and recommendations were the source of liability, because they amplified IS content and helped the group recruit new members. But the Court held that the plaintiffs had not established a sufficient nexus between the platforms’ actions and IS’s attack. Providing “generally available virtual platforms” and “fail[ing] to stop IS despite knowing it was using those platforms” did not, it said, violate JASTA.

The Court considered three possible culpable acts that might support liability for platforms—allowing IS to “upload content[,]” ranking the content, and taking “insufficient steps to… remove” content—and rejected all three. Its analysis drew extensively on common law aiding and abetting precedent, which it said did not support extending liability to a platform “merely for knowing that the wrongdoers were using its services and failing to stop them[.]” Instead, aiding and abetting liability required more active involvement, such as “encouraging, soliciting, or advising the commission” of harmful acts—none of which had been alleged in Taamneh.

The statute’s two key elements—that defendants must have acted knowingly, and that they must have provided substantial assistance—worked “in tandem” under common law principles, the Court said. A “lesser showing of one” would mean plaintiffs could prevail only with “a greater showing of the other.” If a defendant’s assistance is not very substantial, for example, a plaintiff would have to make a stronger showing of culpable knowledge. Aiding and abetting claims also, the Court noted, generally depend on a defendant’s intent. To prevail, the Taamneh plaintiffs would have had to establish that platforms “intentionally” assisted in IS’s attacks—a showing they did not make.

The Court reinforced this emphasis on intent in another 2023 case, United States v. Hansen. That ruling rejected a First Amendment challenge to a statute that prohibited “encourag[ing] or induc[ing]” unlawful immigration, “knowing or in reckless disregard of” the resulting violation of law. The statute, it held, merely established familiar and constitutionally permissible “aiding and abetting” liability, under which prosecutors must establish the “provision of assistance to a wrongdoer with the intent to further” a “particular unlawful act[.]”

This reasoning about culpable mental states has already had an impact on platform law. The District of Columbia Circuit followed it, citing Hansen in rejecting a First Amendment challenge to a 2018 law that removed Section 230 immunity for platforms that “knowingly” facilitated prostitution or sex trafficking. (I was one of the plaintiffs’ counsel in that case.)

These stringent readings of statutes that on their face mention only knowledge stand in sharp contrast to the interpretation of “knowingly” advanced by plaintiffs in Taamneh and many other must-remove cases. Plaintiffs’ theory was that platforms should be liable, even if they removed each specific unlawful post or account they knew of, because they still generally knew that other prohibited content remained on the platform. That reading would effectively have turned JASTA’s “knowingly” standard into something more like a “constructive knowledge” standard, meaning that platforms would face liability for specific items of IS content that they did not actually know about, but should have known about. Under that standard, platforms seeking to avoid liability would have to proactively monitor or police users’ online speech in search of prohibited content. In rejecting this theory, Taamneh improved platforms’ position for the next time this issue is litigated in copyright or other claims that turn on “knowledge.”

Future cases

Platforms and the lawyers who sue them will, in future cases, reshape arguments based on Taamneh. Rightsholders in a major copyright case against Twitter, filed a few weeks after Taamneh came out, already seem to have done so. Echoing the Court’s emphasis on passivity, their complaint describes Twitter as not “content neutral”—implying that a content-neutral defendant might face less risk of liability. The platform’s infringement, it further argues, is “not merely the result of automated activity that occurs as a result of how Twitter designed its platform[.]”

This section offers some predictions and speculations about how claims building on Taamneh will play out, beginning with comparatively straightforward must-remove claims and continuing to the relatively uncertain landscape of must-carry claims. It will also discuss the very difficult and consequential issues—both legal and practical—that would arise if the Court upheld some form of must-carry obligations in the NetChoice cases, leaving lower courts to determine the ruling’s consequences for must-remove cases.

Must-remove claims

The most obvious place for parties to cite Taamneh will be in future rulings about platforms’ liability for content posted by users. That will surely happen for cases not immunized by Section 230, like copyright or prostitution claims. As I will explain below, though, it may also shape future cases involving Section 230.

How Taamneh could be cited in future legal arguments

In one sense, Taamneh is a ruling from an alternate legal universe—one in which platforms cannot assert Section 230 immunity, and instead must litigate must-remove claims on the merits. But that world is not hypothetical. The Court never resolved whether the claims in Taamneh were immunized, for one thing. And powerful lawmakers of all political stripes have called for the abolition of Section 230. If that happens, Taamneh will be a guide to a very real future.

Taamneh is a good guide to future must-remove cases because its aiding and abetting standard, with its specific knowledge and substantial assistance elements, has close analogs in other claims litigated against platforms. Those analogs include the civil claims that fall outside of Section 230 today: prostitution and sex trafficking claims under FOSTA, federal intellectual property claims like copyright, and state intellectual property claims like the right of publicity in some parts of the country. And platforms have never been immune from federal criminal prosecution. Criminal laws commonly spell out aiding and abetting offenses, or can be charged as aiding and abetting under 18 U.S.C. § 2. Some, including anti-terrorism laws beyond JASTA and laws governing child sexual abuse material, also predicate liability on knowledge. Taamneh will matter, for example, if the Justice Department follows through on not-so-veiled threats to prosecute social media companies in relation to the fentanyl crisis.

Taamneh could matter even for must-remove claims that are currently understood to be immunized under Section 230. For one thing, courts in those cases often assess the merits of plaintiffs’ claims in addition to or in lieu of ruling on statutory immunities. An analysis of over 500 Section 230 cases found courts did so 28 percent of the time. Defendants and judges may both prefer resolving meritless claims under familiar liability principles, rather than an internet-specific and politically unpopular immunity statute.

Taamneh could also matter for cases interpreting Section 230 itself. As Eric Goldman points out, if Justice Thomas is “playing 4D chess,” Taamneh might “lay the foundation for a future SCOTUS evisceration of Section 230, on the basis that the Internet services shouldn’t be too upset because they will have other common law defenses[.]” That evisceration could come about through new judicial reasoning about Section 230, drawing on Taamneh’s discussion of platforms’ culpable mental state or actions.

Platforms’ knowledge or mental state has long been considered irrelevant to Section 230 immunity. The statute does not mention a mental state, and courts have noted that the point of Section 230 immunity would be largely defeated if claimants could remove immunity simply by alleging that speech is illegal. But a number of thinkers—including many Taamneh amici and Justice Thomas—have questioned whether this is correct. Their argument, which builds on common law distinctions between “publisher” and “distributor” liability for defamation, suggests that platforms lose immunity once they know about unlawful content. That analysis has doctrinal and policy problems that are unpacked in the seminal Zeran case, and its common law basis is debatable. If courts prove more open to it in the future, though, they might look to Taamneh’s analysis of platform knowledge in defining the scope of immunity.

Platforms’ actions, unlike their mental states, can clearly be relevant to Section 230 immunity today. Indeed, some concept of which actions are and are not immunized is innate to platform immunity laws—otherwise we wouldn’t know which defendants can claim protection. Under Section 230, platforms are not immunized if they are “responsible, in whole or in part, for the creation or development” of actionable content. In other words, there is something like a culpable act or actus reus standard baked into Section 230. Courts have referred to this, rather imprecisely, as a “material contribution” standard. A primary theory left unaddressed in Gonzalez was that plaintiffs’ claims involved platforms’ own “creation or development” of ranking algorithms, for which platforms should have no immunity.

We will surely see variations on the Gonzalez theory litigated again, whether they are based on algorithms or on currently fashionable arguments about platforms’ “design” or “systems.” When that happens, courts may look to Taamneh’s analysis of substantial assistance as an interpretive aid, especially if they focus on the judicially created material contribution standard, rather than Section 230’s wording or the facts of important cases applying it. Taamneh’s emphasis on passivity leaves such plaintiffs with something of an uphill battle, though. It is hard to argue that ordinary ranking algorithms make platforms responsible for content if the algorithms are also, in Taamneh’s characterization, “merely part of [the] infrastructure” for platforms’ services. But ample room remains for litigation about the relationship between standards for immunity and liability, and how Taamneh fits in.

How Taamneh could affect later must-remove cases

On its face, Taamneh is a very defense-friendly ruling. In cases where courts reach merits questions about liability, it will largely favor platform defendants. It emphasizes "intent" as a requirement for liability under a statute that mentions only knowledge, and it has strong language about platforms' lack of responsibility for users' actions. That said, in arriving at its conclusions, the Court repeatedly characterizes platforms as more passive than they actually were in 2017, and far more passive than they are today. Plaintiffs may point to this in alleging that platforms' existing content moderation practices make them ineligible for protection under Taamneh.

In principle, those arguments should not work. Taamneh’s statements about platform passivity are generally not paired with legal analysis in the text. There is little to suggest that platforms’ alleged hands-off attitude mattered for liability purposes. What mattered was that platforms were passive toward IS and did nothing special to help that group—platforms’ active or passive attitude toward user content generally was immaterial. The case also arose on a motion to dismiss, meaning that the only relevant “facts” were those alleged by the plaintiffs. Platforms should be on strong ground defending their moderation practices as irrelevant under Taamneh. But they likely will have to mount those defenses, as plaintiffs test what opportunities the ruling might have created for must-remove claims.

Taamneh also leaves open other avenues of attack for plaintiffs simply because, as must-remove cases go, Taamneh presented an unusually weak claim. The causal chain from defendants’ actions to plaintiffs’ harms from IS attacks was, as the ruling put it, “highly attenuated.” Multiple lower courts had rejected JASTA claims against platforms for this very reason.

But future plaintiffs bringing suit under other statutes may have an easier case to make. Establishing causation and harm from platforms’ actions is much simpler for many common claims against platforms. The mere act of copying or displaying content can be the basis for copyright liability, for example, or might violate state laws about non-consensual sexual images. In defamation law, similarly, the harm comes from publishing content. Plaintiffs need not establish subsequent offline consequences.

If plaintiffs in these more traditional must-remove cases can get around platforms’ statutory immunities, they will presumably point to Taamneh’s statement that culpable knowledge and substantial assistance work “in tandem, with a lesser showing of one demanding a greater showing of the other.” Given the direct role of platforms in displaying or publishing content, they may argue, plaintiffs need not make the more significant mental state showing discussed in Taamneh.

Plaintiffs may also argue that Taamneh—despite its examination of commonly used terms like JASTA's substantial assistance and knowledge elements—applies only to claims that platforms aided and abetted wrongdoers. Following that argument, Taamneh would leave platforms exposed to claims under statutory or common law standards that do not refer to aiding and abetting. The Court's 2023 Hansen ruling will make that argument less likely to succeed, for better or for worse. Hansen identified an aiding and abetting standard in a statute that did not use those words but instead prohibited "encourag[ing] or induc[ing]" unlawful acts. The District of Columbia Circuit subsequently followed Hansen's reasoning in a case about platform liability, identifying an aiding and abetting standard in a statute that referred to "promot[ing] or facilitat[ing]" wrongful acts. On this reasoning, aiding and abetting standards could be relevant to many other civil claims that use similar verbs. And while criminal law has a large body of precedent about aiding and abetting, the standard is less common in civil cases—making Taamneh the leading case in a relatively small body of case law.

Plaintiffs could also bring claims under the burgeoning list of state laws that "impose duties" on platforms to protect children. California's Age-Appropriate Design Code, for example, says platforms must not take actions "materially detrimental to the… well-being of a child[.]" Claims under such laws, plaintiffs may argue, are not immunized by Section 230, because they regulate the platforms' own conduct—and they create the "independent duty to act" that the Court said was missing in Taamneh. Such a duty, the Court suggested, could open the door to liability for mere "inaction" by platforms. That said, the Court also stated that on the facts of Taamneh, including plaintiffs' allegations about ranking algorithms, even a duty would "not transform defendants' distant inaction into knowing and substantial assistance[.]" Following that logic, even plaintiffs who can identify a statutory duty of care would have trouble prevailing based on platform behavior like that alleged in Taamneh.

Must-carry claims

Taamneh's relevance for future must-carry claims, including the Texas and Florida cases the Court will hear in the 2023-24 term, is harder to forecast. In theory, the Court's analysis in those cases could be entirely unrelated. The legal questions in Taamneh about platforms' responsibilities for harms caused by users should, in principle, have no direct bearing on the questions in NetChoice about when platforms can be forced to carry content against their will. But Texas's brief shows how Taamneh's odd characterizations can be put to use. Platforms' supposed passivity matters, Texas argues, because it shows that they have no First Amendment expressive interest in setting editorial policies.

The simplest connection between the cases involves the facts about platforms’ behavior. Justices Thomas, Alito, and Gorsuch have already expressed their openness to arguments that carriage obligations may be imposed on a platform that already “holds itself out as open to the public[.]” To the extent that clerks or justices see Taamneh’s description of platform behavior as a source of truth, it may lend support to this argument.

Of course, if platforms actually did offer their services to all comers, Texas and Florida would not have passed their laws—which explicitly responded to perceived political bias in platforms' content moderation and account terminations. Platforms' extensive content moderation is, outside of the Taamneh ruling, no secret. Politicians and pundits talk about it daily. Platforms proclaim it in their Terms of Service, Community Guidelines, and blog posts—not to mention endless hours of testimony, TED talks, and other public statements going back more than a decade. Given this backdrop, reasoning that platforms already accept all speakers, and thus have no cognizable First Amendment rights to set editorial rules, seems perverse. Justice Kavanaugh, who as a circuit court judge called similarly circular reasoning in a net neutrality case "mystifying[,]" may be particularly impatient with this line of argument.

Justice Thomas's analysis and characterization of platforms in Taamneh play into an oft-repeated critique of platforms' alleged inconsistency in must-remove and must-carry cases. Texas makes a version of this criticism in a 2023 NetChoice brief, saying that platforms' "inaction" or "nonfeasance" protected them from liability in Taamneh. The platforms cannot claim to be passive for tort purposes, Texas argues, and "simultaneously demand First Amendment protection for this same conduct" in opposing must-carry obligations.

I don't think this reasoning bears much scrutiny as a legal matter, because it conflates two different kinds of expression. One is the platforms' editorial decisionmaking, which they identify as First Amendment-protected activity. The other is users' speech, in the form of online posts or comments. The distinction between these two kinds of expression is well recognized in First Amendment law. Moviemakers have First Amendment rights, for example—but so, too, do cable companies when they select which video content to carry and exclude. Copyright law, similarly, recognizes separate rights for editors who select and arrange third-party works, such as poems in an anthology, and for the authors of each individual poem. There is no reason why platforms' First Amendment rights as editors should rise or fall based on their degree of liability for the separate expression conveyed by users' speech.

More to the point, the companies did not claim to be passive in Taamneh or Gonzalez, or argue that they needed to refrain from moderating content in order to prevail in must-remove cases. That would be rather profoundly against their interests, given all the moderation the major platforms actually do. It is the Taamneh ruling, and not the platforms' arguments, that repeatedly emphasizes passivity and implies that being agnostic about user speech might be legally beneficial for platforms. Briefs from the parties and amici in Gonzalez, by contrast, strongly emphasized platforms' very active role in designing and implementing algorithms—a key issue, since platforms argued that this editorial activity was protected by Section 230. Those briefs detailed at length the rules and methods platforms use to exclude users and content, including hate speech, disinformation, and other often-lawful speech. In Taamneh, Meta and Google's brief described "extensive efforts to prevent" IS activity on their platforms, and Twitter described removing hundreds of thousands of IS accounts under its policies. Even the plaintiffs' allegations credit platforms with more agency and active opposition to IS than the ruling does, recognizing platforms' notice-and-takedown systems for IS content and accounts.

The deeper force of the argument that platforms must be passive in order to be immune comes from a widely held intuition that platforms should have to choose between being responsible editors and being passive carriers. Those are categories familiar from pre-internet law and communications technology. But forcing internet platforms to behave like these older entities would forfeit much of the value that users get from the internet in the first place. Anyone who could not afford to speak through legally vetted publications would be left to fend for themselves in forums that permit every kind of barely legal harassment and invective. Nothing in Taamneh requires that outcome.

Taamneh's inaccuracies are largely inconsequential in the case itself, as discussed above. If the Court continues to see platforms as passive carriers open to all comers, however, that view could shape the justices' receptiveness to Texas's arguments, in NetChoice, that platforms have demonstrated no editorial interest in moderating user content and can thus be required to carry content against their will.

Simultaneous must-carry and must-remove obligations

If courts recognized simultaneous must-carry and must-remove obligations for platforms, the intersection would be unpredictable and chaotic. Major rulings in either area should, explicitly or implicitly, take account of the other.

Making communications channels carry some speech while suppressing other speech is not unheard of. Obligations of both kinds exist in U.S. law for broadcast and cable. Occasionally they even appear in the same case—though the last time that happened, in Denver Area, the Court issued six separate and deeply fractured opinions. Those cable and broadcast rules, however, are administered by an expert agency under detailed regulation, and they are constitutionally justified by attributes unique to those media. Building similar mandates for the internet, using an ad hoc mix of federal and state liability laws and hastily drafted must-carry statutes like the ones at issue in NetChoice, would be profoundly different.

Competing obligations and real world options for platforms

The relationship between must-remove and must-carry mandates is worth exploring as the Court considers NetChoice. As things stand, the moment when courts grapple with tensions between the two could in principle be deferred. The Supreme Court could uphold the Texas or Florida laws without mentioning removal obligations at all. The outcome of Taamneh makes that easier. But major tensions persist under existing must-remove laws like FOSTA, or immunity regimes predicated on content removal like the DMCA. If the Supreme Court did not address the intersection of must-carry and must-remove obligations, it would kick these difficult questions down the road to lower courts in run-of-the-mill cases. Judges would have to decide, for example, whether or how the Texas and Florida laws might tie their hands in assessing platforms’ obligations under copyright law or federal criminal law.

The simplest potential conflicts between must-carry and must-remove obligations involve content that platforms are required to host under must-carry laws, but that simultaneously violates other laws. Florida's law, for example, would seemingly require platforms to leave up even child abuse material, as well as copyright-infringing content identified in DMCA notices. It requires that platforms carry nearly all "journalistic" speech (except obscenity) and all speech "by or about" political candidates (with no exceptions). Imagine that a political candidate posted entire pirated copies of current Hollywood movies—what is a platform to do then?

As a legislative drafting problem, that direct conflict between must-carry and must-remove mandates could be fixed. Florida’s legislators could revise their law to let platforms remove illegal content—much as they already revised it to remove a carve-out initially granted to Disney, but withdrawn after that company publicly supported LGBTQ+ rights. (For their part, Texas lawmakers considered, and rejected, a carveout that would have allowed platforms to remove terrorist content under the state’s must-carry law.) Yet that wouldn’t solve the law’s deeper problem, which involves speech that is merely potentially unlawful.

Simultaneously complying with must-remove and must-carry laws like the ones in Texas and Florida is a practical impossibility. Platforms can’t just take down all the illegal content and leave up all the legal content, because the internet is awash in speech that might violate laws. Content that is regulated by federal criminal law in one context (like an IS recruitment video) might be lawful and important in another (like academic work or news reporting). Content that is lawful in one state might be unlawful in another, because of varying standards for claims like defamation or intentional infliction of emotional distress. Legality can depend on doctrinal determinations that platforms are poorly equipped to make, like whether a local business leader or ex-politician counts as a public figure. It can depend on facts that platforms are even less able to assess, like the truthfulness of sexual harassment allegations.

Platforms receive an enormous number of notices alleging that online speech is illegal. Many demand removal of clearly lawful speech; many more target speech that might or might not be lawful. Platforms encounter still more legally ambiguous content through content moderation. If they are subject to must-remove claims for that material, then they must decide—rapidly and at scale—which user speech breaks the law. Perfect enforcement of complex speech laws is simply not possible, and platforms' systems will inevitably err on the side of either over- or under-removal. A platform that calibrates its systems to protect lawful speech will leave some illegal content online, and risk liability in must-remove cases. A platform that protects itself from liability by erring on the side of removing too much content will risk violating must-carry laws. Such a regime essentially requires platforms to turn their workers into proxies for judges, guessing at how future courts might rule.

It's tempting to imagine that new technological advances might allow platforms to increase accuracy in moderation and so thread this needle. But technology, including artificial intelligence, won't fix this. Accepting some rate of false positives or false negatives is intrinsic to automated content moderation using machine learning. The kinds of duplicate-detection systems that the Taamneh plaintiffs urged platforms to adopt for terrorist content (and that platforms later did adopt) push toward over-removal, because the software can't distinguish context, such as news reporting about terrorism. Human moderators make foreseeable errors, too. People have biases, and are prone to rubber-stamp machines' conclusions. And platforms can't possibly train a huge, distributed workforce to apply every law perfectly to every piece of content.
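
To make the tradeoff concrete, here is a minimal sketch of threshold-based automated moderation. The posts, classifier scores, threshold values, and labels are invented for illustration only; they do not describe any real platform's system. The structural point is that wherever the removal threshold sits, errors shift between leaving unlawful content up and taking lawful content down.

```python
# Minimal sketch of the over-/under-removal tradeoff in threshold-based moderation.
# All posts, scores, and labels below are hypothetical illustrations, not real data.

posts = [
    # (description, classifier score for "prohibited terrorist content", true legal status)
    ("IS recruitment video, re-edited to evade exact matching", 0.70, "unlawful"),
    ("News report embedding the same IS footage",               0.85, "lawful"),
    ("Academic analysis quoting IS propaganda",                 0.55, "lawful"),
    ("Vacation video",                                          0.05, "lawful"),
]

def moderate(threshold: float):
    """Remove every post scoring at or above the threshold; count both error types."""
    removed = [(desc, status) for desc, score, status in posts if score >= threshold]
    over_removal = sum(1 for _, status in removed if status == "lawful")   # lawful posts taken down
    under_removal = sum(1 for desc, score, status in posts
                        if status == "unlawful" and score < threshold)     # unlawful posts left up
    return len(removed), over_removal, under_removal

for threshold in (0.9, 0.6):
    removed, over, under = moderate(threshold)
    print(f"threshold={threshold}: removed {removed} posts, "
          f"{over} lawful posts taken down, {under} unlawful posts left up")

# A strict threshold (0.9) leaves the unlawful video up; a lenient one (0.6)
# removes it but also takes down the lawful news report.
```

Real systems score vastly more content across far murkier legal categories, but the same structural choice between error types applies.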

Whatever platforms do will skew toward over-removal, under-removal, or a combination. Platforms may take down too much potentially pornographic content, for example, but not enough potentially harassing content. These problems compound rapidly. If Facebook were somehow to achieve an astonishing 99.9 percent accuracy rate in reviewing the over 350 million photos uploaded daily, that would still lead to hundreds of thousands of errors every day, each of which might support must-remove or must-carry claims.
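
The arithmetic behind that figure is simple. The back-of-the-envelope sketch below just multiplies the two numbers cited above, assuming for illustration that each uploaded photo receives exactly one review decision:

```python
# Back-of-the-envelope calculation using the figures cited in the text:
# roughly 350 million photos uploaded daily and a hypothetical 99.9 percent
# accuracy rate, assuming one review decision per photo.

daily_uploads = 350_000_000
accuracy = 0.999

errors_per_day = daily_uploads * (1 - accuracy)
print(f"{errors_per_day:,.0f} erroneous moderation decisions per day")  # about 350,000
```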

Platforms' over-removal errors will not be neutral as to the viewpoint or content of posts. They will systematically penalize material that resembles unlawful speech or that correlates with liability risks. In seeking to avoid liability under laws like FOSTA, for example, platforms have reason to remove ambiguously legal speech supporting commercial sex work, but not speech opposing prostitution. Platforms trying to avoid liability under JASTA or criminal anti-terrorism laws have reason to restrict content ranging from praise for Osama bin Laden to criticism of U.S. and Israeli military actions. These big-picture patterns and individual choices will burden speech expressing one viewpoint more than speech on the other side.

How would this interact with Texas’s prohibition on viewpoint discrimination, or Florida’s requirement that platforms moderate in a “consistent” manner? If the Supreme Court were to uphold those laws, platforms would have to make difficult decisions about how to navigate this landscape. And before long, courts surely would be called on to evaluate these choices as well.

These decisions get even messier if platforms must remove or restrict content that is not illegal on its face, but could contribute to future harms. Under California’s child safety law, for example, a platform risks liability when it “knows, or has reason to know” that its actions may prove “materially detrimental to the physical health, mental health, or well-being of a child[.]” Standards like that can also implicate politically fraught culture war issues. An attorney general in Arkansas may think platforms should protect children from speech supporting transgender rights but not speech supporting gun ownership. A California official might think the opposite. In either case, courts would wind up deciding who is right.

Taamneh points to a seemingly simple way out of this overall legal bind: A platform that enforced few or no speech rules would satisfy must-carry laws and—under Taamneh’s logic—fend off must-remove claims. But this solution would essentially give Florida and Texas lawmakers what they want. It would forfeit platforms’ own editorial rights and the content moderation measures sought by many other people—including the plaintiffs and their aligned amici in Taamneh.

Other problems

Two other points of tension between claims like Taamneh and laws like the ones in Texas and Florida warrant discussion. Neither is as profound as the competing carriage and removal obligations described so far. But both are important—and deserve more attention than this abbreviated treatment can provide.

The first issue involves identifying which platform features will be subject to new obligations: the content ranking function or the content hosting function. The must-remove claims in Taamneh and Gonzalez focused on platforms’ algorithmic ranking systems. Under the plaintiffs’ theory, platforms would have needed to restrict content in ranked features like YouTube recommendations or Twitter feeds in order to avoid liability—but the legal status quo would have remained intact for hosted features like Twitter users’ profile pages. Must-carry obligations would do just the opposite, under an argument from Eugene Volokh commonly referenced in NetChoice briefs. Following that argument, the legal status quo would remain in place for ranked features, to protect platforms’ editorial rights. But states could tell platforms what content to host in the first place. Together, these two theories would leave no major, public-facing aspect of platforms untouched. Platforms would have to tightly curate newsfeeds to avoid liability but be prevented from curating hosted content at all.

The other area of intersection involves Section 230. Arguments about that law were left unresolved in Gonzalez, and are formally not part of the cert petitions in NetChoice. Nonetheless, the Court may have to interpret Section 230 in that case. Texas argues that its statute must be interpreted in light of its carve-out allowing platforms to remove content when “specifically authorized” by federal law. The bill’s primary drafter says that is a reference to Section 230, and specifically to subsection 230(c)(2). If that’s Texas’s argument, then a cert grant in NetChoice would bring Section 230 back to the Court much sooner than expected.

Texas has argued that the only relevant part of Section 230 for must-carry cases is subsection (c)(2)—and not (c)(1), which was at issue in Gonzalez and other must-remove cases. But determining which part of the statute applies to carriage mandates would itself be a fraught decision, with consequences far outside of Texas and Florida. This statutory dispute is complex and has been years in the making. In brief, Texas's interpretation of Section 230 is one long pursued by conservatives but generally rejected by courts. It says that platforms have no immunity from must-carry claims under the statute's broadly worded subsection (c)(1). Must-carry claims, Texas's argument goes, are immunized only under 230(c)(2). As a result, immunity applies only when platforms remove speech in the categories that 230(c)(2) lists: "obscene, lewd, lascivious, filthy, excessively violent, [or] harassing[.]" By extension, Texas's law—by exempting these content moderation decisions from its carriage mandate—would permit platforms to discriminate based on viewpoint only for speech in the listed categories. And even outside of Texas, this interpretation of 230(c)(2) would change platforms' defenses against must-carry claims based on theories like unfair competition or breach of contract.

That argument has a number of problems. For one thing, 230(c)(2) also allows platforms to remove "otherwise objectionable" content. And important court rulings say that platforms' removal decisions are also protected by 230(c)(1), which provides content-neutral immunity for platforms' editorial decisions. Texas's interpretation of Section 230 would turn the statute, and Texas's own law, into content-based rules, putting the state's thumb on the scale to shape platforms' moderation of online speech. Platforms could freely remove sexual or violent content, but not disinformation or non-harassing hate speech. The politely expressed theories of white supremacy that got white nationalist Jared Taylor kicked off Twitter, for example, could seemingly not be taken down in Texas. (Or at least, not unless Twitter also removed other viewpoints about racism.) Texas's interpretation, in other words, would raise a whole new round of First Amendment questions about state-backed preferences for online speech.

Conclusion

The legal landscape for platform moderation of user content, outside the protected world of Section 230, is vast and thorny. The streamlined and defense-favorable tort rules announced in Taamneh will help simplify future litigation over nonimmunized claims. But by over-emphasizing platforms' putative passivity, Taamneh may also have unintended consequences for both must-remove and must-carry claims. These ripple effects illustrate how interconnected the two kinds of claims are, and always have been. Advocates and courts should consider these intersections in future must-remove cases like Taamneh and in must-carry cases like NetChoice.
