
The Supreme Court takes up Section 230

The portico of the US Supreme Court with the statue of the Guardian of Law in front.

On February 21 and 22, 2023, the United States Supreme Court is scheduled to hear arguments in cases involving the content moderation practices of social media platforms. The Court has also indicated that it could later address the First Amendment issues involved in conflicting Court of Appeals decisions regarding content moderation laws passed by Texas and Florida. The February oral arguments will, no doubt, be revealing. At this point, however, the fact that the Court has bifurcated the content moderation issue into questions of platform behavior and state authority could be telling as to the intentions of at least some of its justices.

About two percent of appeals to the Supreme Court are granted certiorari and heard by the justices. That the February cases have made it over that hurdle suggests at least some members of the Court might have something to say on an issue that has become a fixture in the culture wars (and the trigger for the Texas and Florida laws).

Although only one of the February cases explicitly mentions it, at the heart of the content moderation issue is Section 230 of the Communications Decency Act. For almost 30 years, Section 230 has been the foundation governing expression on digital platforms. The provision was enacted in 1996, at a time when the online experience was dominated by America Online (AOL), Prodigy, CompuServe, and similar services that ran commentary bulletin boards. The goal of Section 230 was to protect online platforms like these from liability for the third-party content that they distribute. In the intervening decades, technology has changed online experiences dramatically, and the U.S. Congress has failed to revisit existing and emerging policy issues in light of those changes. It now falls to the Supreme Court to grapple with the statute based on the practices of 21st-century social media.

Famously labeled “The Twenty-Six Words That Created the Internet,” Section 230 did not “create the internet” but rather allowed for the creation of the economic model of social media platforms. What the statute “created” was the protected monetization of users’ personal information through the application of software algorithms to target both advertisements and information and to sell access to those targets. This is a legitimate online activity. The question is whether technology and marketplace changes since 1996 have also changed what society has a right to expect from the online platforms engaged in that activity.

The Section 230 Life Cycle

The societal effects of Section 230 have gone through three stages. The original intent of Section 230, according to its authors, was to clarify the liability of online services for material published by others on their platforms. As online services evolved from bulletin boards to social media, however, the new social media companies took advantage of strict construction judicial interpretations to turn Section 230 from the protection of speech to the protection of a business model that profited from unfettered controversy. In its third phase, Section 230 has become a fixture in the culture wars.

Particularly when it comes to the culture wars incarnation, federal elected officials have used Section 230 as a tool for performance politics but have done very little substantively. Concurrent with the lack of congressional action, courts have read the rigidity of Section 230’s black-letter law as short-circuiting their ability to apply common law principles, such as liability, in light of new developments.

The Supreme Court appears primed to go where Congress and lower courts have feared to tread – and to do it in a bifurcated manner.

The February Cases

Scheduled for February arguments are two cases in which private citizens are challenging the behavior of social media companies. Both February cases involve social media’s relationship to terrorist activity.

In Gonzalez v. Google, the family of Nohemi Gonzalez alleges Google was complicit in the November 2015 ISIS attack in Paris that killed 130 people – among them Ms. Gonzalez. The plaintiffs submit that the Google-owned service YouTube was used by ISIS to recruit and radicalize combatants in violation of the Anti-Terrorism Act (ATA) and the Justice Against Sponsors of Terrorism Act (JASTA). In addition, they allege that, because YouTube sold advertising on the ISIS videos and shared the revenue with ISIS, the platform provided material support to terrorists. The Ninth Circuit Court of Appeals dismissed the suit, finding that Section 230 protected YouTube from liability for videos produced by someone else, and that the sharing of revenue was simply the normal course of business and not in support of a specific group or ideology.

In Twitter v. Taamneh, relatives of Nawras Alassaf, who was killed in a 2017 ISIS attack in Istanbul, take a related but different approach to assigning culpability. They allege that by allowing the distribution of ISIS material without editorial supervision, companies such as Twitter, Google, and Facebook (now Meta Platforms) aided and abetted ISIS’ activity in violation of the ATA and JASTA. Interestingly, Section 230 is not a part of the Taamneh appeal: although it was raised by the companies, the lower court never reached a conclusion on it, and thus assessment of Section 230’s applicability was not part of the Ninth Circuit’s decision. The Taamneh plaintiffs did raise the shared revenue issue, however. The appeals court reversed the district court’s dismissal, finding that Twitter (along with Google and Facebook) could face claims that, by failing to identify and remove ISIS videos, they played an assistive role.

The decision of the Supreme Court to hold the state action cases in abeyance while moving forward with the cases dealing with online behavior perhaps suggests a judicial strategy. Specifically, will the Court seek to deal with the topic of online content in a manner that is orthogonal to the absolutist debate that habitually surrounds Section 230?

Do Algorithms Change the Nature of Liability?

The Gonzalez and Taamneh plaintiffs, along with the United States Department of Justice in its brief, assert that the Section 230 assumption that the “provider or user of an interactive computer service” is simply transporting the work of a third party does not reflect how the companies have utilized advances in digital technology.

In 1996, at the time of Section 230’s enactment, online platforms such as Prodigy or AOL operated bulletin boards that hosted information posted by third parties. Today, the major online platforms have built their business around algorithms that utilize data collected from each user to select which postings to share with which users. This algorithmic recommendation, it is argued, transforms the platforms from a Section 230-protected “interactive computer service” into an unprotected “information content provider.” The platform companies argue that “recommending” is actually “organizing” and that there is no other way to present information to users.
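To make that distinction concrete, below is a minimal, purely illustrative sketch of the difference between passively hosting third-party posts in the order they arrive and ranking those same posts for a particular user based on a profile inferred from that user’s collected data. The function names, data structures, and scoring rule are hypothetical and are not drawn from any platform’s actual system; the sketch is offered only to show where “hosting” ends and “recommending” begins under the plaintiffs’ theory.

    # Purely illustrative sketch; all names and the scoring rule are hypothetical.
    def chronological_feed(posts):
        """Passive hosting: present third-party posts in the order they were published."""
        return sorted(posts, key=lambda post: post["timestamp"])

    def recommended_feed(posts, user_profile):
        """Algorithmic recommendation: rank the same third-party posts for one user,
        using interest weights inferred from data collected about that user."""
        def score(post):
            # Posts on topics the user has engaged with before score higher.
            return sum(user_profile.get(topic, 0.0) for topic in post["topics"])
        return sorted(posts, key=score, reverse=True)

    # The same three posts, two different orderings.
    posts = [
        {"id": 1, "timestamp": 100, "topics": ["sports"]},
        {"id": 2, "timestamp": 200, "topics": ["politics"]},
        {"id": 3, "timestamp": 300, "topics": ["cooking"]},
    ]
    user_profile = {"politics": 0.9, "cooking": 0.2}  # inferred from past clicks and watch time
    print([p["id"] for p in chronological_feed(posts)])              # [1, 2, 3]
    print([p["id"] for p in recommended_feed(posts, user_profile)])  # [2, 3, 1]

In this toy example the posts themselves remain unchanged third-party content; what differs is that the second function uses information about the user to decide what that user sees, which is the conduct the plaintiffs argue falls outside Section 230’s protection.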

The co-authors of Section 230, Senator (then-Rep.) Ron Wyden (D-OR) and former Rep. Chris Cox (R-CA), filed an amicus curiae brief with the Court in which they, among other things, assert that Section 230 anticipated recommendation algorithms and the ability to “filter, screen, allow, or disallow content” as well as “pick, choose, analyze, or digest content.” The authors explain, “[r]ecommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230.”[1]

The brief of the United States Department of Justice argued that the recommendation constitutes the site’s own conduct and is thus outside the protections developed for third-party content. “If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, ‘You should watch this,’ that message would fall outside Section 230(c)(1),” the brief argues. “Encouraging a user to watch a selected video [e.g., by placing it on the “Up Next” sidebar] is conduct distinct from the video’s publication (i.e., hosting).”


Whether algorithmic promotion changes the nature of an online platform, and thus its liability protection, will no doubt be one of the major issues addressed by the Court in the Gonzalez case. While there are credible arguments on all sides, one thing is certain: such recommendation within a closed and controlled platform moves today’s online activities away from the metaphorical open public square.

Such algorithmic promotion also differs from the idealized public square in that it is a compensated service. The internet per se is a public square in which anyone can set up their soapbox and in which all the world’s information and opinions are readily available. In contrast, social media, although constructed on an open platform, is a closed business in which algorithms are programmed to maximize revenue by selecting points of view and targeting their audience. How such construction affects the liability protections of Section 230 will, no doubt, be a major question before the Court.

Tea Leaves

Choosing to hear the two terrorist-related appeals before jumping into the state authority issue perhaps provides the Court with the opportunity to redefine the debate on its own terms with its own solutions prior to dealing with the state legislation.

It is not as if some members of the Court have been shy about expressing their thoughts on the topic, including proposing their own ideas. Justice Clarence Thomas has been the most vocal in sharing his opinions. “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms,” he wrote in 2021.

But no one really knows how the Court might act. There are multiple directions in which the Court could go on the content moderation issue. Among the possibilities are three: two have been put forward by Justice Thomas; the third is going into practice in the European Union.

Paring Back Immunity

In a 2020 case in which the Court declined to hear an appeal over whether Section 230 protected a software company against claims of anticompetitive conduct, Justice Thomas observed, “many courts have construed the law broadly to confer sweeping immunity on some of the largest companies in the world… Paring back the sweeping immunity courts have read into §230 would not necessarily render defendants liable for online misconduct. It would simply give plaintiffs a chance to raise claims in the first place.”

Should the Court adopt this approach, it would allow the business model of advertising-supported online platforms to continue. At the same time, however, it could necessitate pre-clearance of content that, even with the help of technology such as artificial intelligence, would add to costs, delay the time to display, and impose other constraints that could change the user experience and corporate returns.

Common Carrier Status

Justice Thomas has also championed another approach. “There is a fair argument,” he concluded, “that some digital platforms are sufficiently akin to common carriers or places of accommodation as to be regulated in this [mandatory non-discrimination] manner.” How, and whether, the Court could “legislate” platforms to be common carriers is problematic. The fact that in both the Gonzalez and Taamneh cases the plaintiffs assert the platforms are a part of the communications infrastructure could, however, provide an opening to argue for this communications concept traditionally applied to telephone companies.

A challenge to this approach, however, might come from Justice Kavanaugh, who, as a member of the Court of Appeals for the DC Circuit, dissented from the decision affirming the Obama FCC’s 2015 net neutrality order declaring internet service providers such as Verizon or Comcast to be common carriers, in part because “the net neutrality rule violates the First Amendment to the U.S. Constitution.” The judge who argued, “The rule transforms the Internet by imposing common-carrier obligations on Internet service providers and thereby prohibiting Internet service providers from exercising editorial control over the content they transmit to consumers,” could have a difficult time prohibiting those that use the internet pathways from exercising editorial control.

European Union

In 2000, the European Union adopted the Electronic Commerce Directive. Like Section 230, the eDirective protected online platforms from liability for the passive retransmission of third-party content. In 2022, the EU’s Digital Services Act (DSA), while leaving the eDirective undisturbed, established a “duty of care” for online platforms, with the most expansive duties reserved for the largest platforms.

At the heart of the DSA are disclosure and transparency requirements, including disclosure of both algorithmic and human content moderation. In the case of recommendation algorithms, all platforms must describe how they work, and the largest platforms must provide a recommendation system that does not use individual profiling as its basis.

The DSA also establishes an ex-post “notice-and-action” requirement: upon receiving a notice asserting illegal content, the platform must rapidly assess the claim and take appropriate action. For large platforms, the DSA also requires an ex-ante effort to assess the risks “stemming from the design, functioning and use of their services” and to “deploy the necessary means to diligently mitigate the systemic risks identified.” Under the DSA, this could include content that may not be illegal but is deemed harmful (which could be problematic under the First Amendment).

The Game is Afoot

Regardless of what the Court decides, its ruling can be counted on to ignite a firestorm of public debate and calls for congressional, rather than judicial, decision-making. The Court’s decision(s), therefore, could end up as a challenge to Congress to overcome its fragmentation and deal with the matter.

Google, Verizon, Comcast, and Meta (formerly Facebook) are general unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.


[1] Interestingly, and somewhat quizzically, the authors’ brief contains a footnote that seems to suggest there could be “good” and “bad” algorithms that could affect the application of Section 230: “The discussion in this brief pertains only to the algorithmic recommendation systems at issue in this case. Some algorithmic recommendations are alleged to be designed and trained to use information that is different in kind than the information at issue in this case…to cause harms not at issue in this case.”