
Section 230 reform deserves careful and focused consideration

Facebook CEO Mark Zuckerberg testifies remotely during a Senate Commerce, Science, and Transportation Committee hearing to discuss reforming Section 230 of the Communications Decency Act with big tech companies on Wednesday, October 28, 2020. Photo by Greg Nash/Pool/ABACAPRESS.COM
Editor's note:

This blog post is based on opening remarks to an April 2021 workshop of the National Academies of Science, Engineering, and Medicine’s Committee on Science, Technology, and Law. The public workshop examined how law, policy, and technology can help to manage third party content while preserving free speech and democracy online.

Today’s politics and public discourse are refracted through a prism of polarization. That is evident in the debate over Section 230, the once-obscure provision of communications law that has become a lightning rod for concerns about the power of social media platforms like Facebook, YouTube, and Twitter, and about the content they carry.

In recent years, there has been rising recognition of a range of problems with social media, relating both to the content posted on platforms and to the responses (or lack of them) by the platforms themselves. Many blame Section 230 or seize on it as a vehicle to force changes on platforms. But there is little agreement among political leaders as to what the real problems are, much less the right solutions. The result is that many proposals to amend or repeal Section 230 fail to appreciate collateral consequences—and would ultimately end up doing more harm than good.

Section 230 is part of a set of policies passed in 1996 to protect the internet and the innovation taking shape within it. The statute distinguishes between internet platforms and content producers—resulting in what some consider a free pass for social media companies from legal responsibility for user-generated content. In a sense, these policies embody what Section 230 expert Jeff Kosseff (among others) has termed “internet exceptionalism”: they aim to enable the internet to grow, to encourage uptake of and innovation in online services, and to insulate platforms from regulation that might impede these goals.

When Section 230 was first passed over two decades ago, the internet was a hothouse flower that policy sought to nurture. It is worth recalling the politics of the time: through decisions of a Senate-House conference committee, Section 230 was inserted into the Communications Decency Act, which was in turn folded into the Telecommunications Act of 1996. The Telecommunications Act contains another pillar of internet policy: it differentiates between Title I information services and Title II telecommunications services (otherwise known as common carriers). Under this model, information services, which now include social media platforms, are subject to less stringent oversight than telecommunications services.

It’s an irony, perhaps, that it was the Communications Decency Act that provoked the late Grateful Dead lyricist John Perry Barlow’s “A Declaration of the Independence of Cyberspace”—perhaps the ultimate expression of internet exceptionalism—which proclaimed, “Governments of the Industrial World … You are not welcome among us. … Cyberspace does not lie within your borders.” Even more ironic, perhaps, Barlow’s place as the law’s most prominent critic has since been taken by Donald Trump with his “REPEAL SECTION 230” tweets. During last year’s lame-duck session, Congress overrode a Trump veto for the only time in his presidency, passing the National Defense Authorization Act over his objection that it did not repeal Section 230 (one has to wonder whether Republicans today would be as willing to override such a veto as they were six months ago).

In the end, Section 230 is the only part of the Communications Decency Act that remains relevant today. Meanwhile, the Telecommunications Act’s differentiated model has been widely followed in much of the world, particularly in Europe. Even as Europe now considers regulation of platform content and platform services, it is doing so within the context of that original framework.

One year after the passage of the Telecommunications Act, the Clinton administration’s 1997 e-commerce white paper framed a hands-off approach to regulation of internet services, stating that the government should support industry self-regulation and avoid unnecessary restrictions on activity over the internet. Soon after, the 1998 Internet Tax Freedom Act created a three-year exemption from state taxation for e-commerce services and certain other services. That exemption was renewed eight times, until President Obama signed the Trade Facilitation and Trade Enforcement Act of 2015, which made it permanent.

When I served in the Obama administration, we took these policies as a fundamental starting point for internet policy. In particular, we worked with U.S. trading partners to develop the 2011 OECD “Principles for Internet Policy Making,” approved by the OECD Council of 38 member governments after thorough study. These principles aimed to promote the openness of the internet, including by limiting intermediary liability. That approach is still reflected in U.S. trade policy today, such as in the U.S.-Mexico-Canada trade agreement.

Early in the Obama administration, we also recognized a set of issues arising primarily online that needed to be addressed. As I characterized it at the time, 15 years into the internet era, that hothouse flower was running rampant and needed to be pruned back. We focused, in particular, on privacy, cybersecurity, and intellectual property.

Today, more than another decade on, the internet and some platforms have grown to a scale beyond anybody’s imagination back in 2011, much less in 1996. In that time, we have been undergoing an information Big Bang. Over five billion people around the world now have cell phones, the majority of them smartphones. Add to these billions of sensors and smart devices, all interconnected by high-bandwidth networks that transmit data almost instantaneously. The result is that Moore’s Law, the doubling of computing power roughly every two years, is compounded by Metcalfe’s Law, by which a network’s value is proportional to the square of its number of nodes. In other words, the volume and velocity of information are compounded exponentially by the number of connected devices and the doubling of bandwidth.
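To make that compounding concrete, here is a minimal illustrative sketch of the two laws as stated above. The baseline values and constants are hypothetical, chosen only to show the shape of the growth:

```python
# Illustrative sketch of Moore's Law and Metcalfe's Law as described above.
# Baselines and constants are hypothetical; real-world growth is messier.

def moore_capacity(years: float, base: float = 1.0) -> float:
    """Computing power that doubles every two years (Moore's Law)."""
    return base * 2 ** (years / 2)

def metcalfe_value(nodes: int, k: float = 1.0) -> float:
    """Network value proportional to the square of its node count (Metcalfe's Law)."""
    return k * nodes ** 2

# Roughly 25 years from the 1996 Telecommunications Act to this writing:
print(f"Computing power: ~{moore_capacity(25):,.0f}x the 1996 baseline")

# A network on the scale of today's five billion connected people:
print(f"Network value: {metcalfe_value(5_000_000_000):.1e} (arbitrary units)")
```

Even on these toy assumptions, computing capacity grows by a factor in the thousands while network value scales into the quintillions, which is the compounding of volume and velocity described above.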

Many of the concerns about platforms today—whether about the spread of misinformation and offensive communications or about the power of social media to prevent or promote such communications—are a function of this scale and these augmented network effects rather than anything endemic to platforms as such. Just as network effects enable new social movements like Black Lives Matter to gain adherents via social media, they allow misinformation and hate to spread virally. And while network effects confer power on the operators of those networks, they also benefit users.

A great deal of the understandable concern about online content is affected by Section 230, but not necessarily caused by it. Many—but not all—of the concerns focus on the largest social media platforms like Facebook, YouTube, and Twitter, or on fairly small pockets of apps and websites, such as subreddits devoted to conspiracy theories. Section 230, however, also benefits millions of apps, websites, and platforms, small and large, that enable users to post comments, blogs, photos, videos, product reviews, and other user-generated content.

Let me offer a few thoughts about Section 230 reform in this light.

First, policymakers should do no harm. While the warning that regulating technology may break the internet has been overused, ill-conceived changes to Section 230 actually could break the internet. Many proposed solutions—such as mandating content moderation, imposing common carrier obligations, or outright repeal—carry potential unintended consequences, including diminished freedom of expression. Platforms allow many people to speak and offer them an expanded audience. These benefits come with untoward effects—but such consequences are mainly a function of specific communications or categories of speech, not of the network effects themselves.

Repeal of Section 230 would take a blunderbuss to a problem that calls for a laser knife. It is not just Facebook, Twitter, and YouTube that benefit from Section 230. Repeal would create liability exposure for the many apps and hosting services, offered by startups and established companies alike, that accept user-generated content, and it could impose burdens on new services and on competition in the marketplace.

The same would be true of mandated content moderation. Section 230 enables providers of internet services to moderate user content without acquiring legal responsibility for it. That is a good thing: it spares them an untenable choice between allowing any kind of offensive content onto their platforms and facing liability for offensive content that slips through.

Requiring content moderation would impose untenable obligations on many smaller providers, which lack the capacity of a corporation like Facebook both to automate some of the content screening and to engage (in Facebook’s case) many thousands of human moderators to review posts and make content decisions. That experience demonstrates that identifying much of what we consider offensive speech or misinformation online requires contextual human judgment about what is over the line. There are many good suggestions for curbing flows of offensive content that could become best practices or codes of conduct, but these could raise serious constitutional challenges if imposed by the government.

Similarly, the proposed application of common carrier obligations to websites or social media platforms, comparable to those under Title II of the 1996 Telecommunications Act, would essentially neuter the ability of platforms and service providers to moderate content. The result would only exacerbate some of the problems with offensive content. And barring social media platforms from limiting user content would itself implicate values of freedom of expression: most Americans would rather not have the government make such judgments.

In the end, there are no panaceas for the problems arising from online content and social networks. And while there may be changes to Section 230 that could adjust incentives to moderate content and block offensive material, I believe it will take a range of very specific measures aimed at different aspects of platforms to accomplish what many people seem broadly to expect from Section 230 revision. These include competition enforcement under the Sherman and Clayton Acts and new comprehensive privacy legislation that sets normative boundaries on the collection, use, and sharing of personal information.

In whatever solutions we adopt, America should not act alone. We need to conduct this work with an eye on international impact and in cooperation with international partners. As the European Union takes up its Digital Services Act and other digital platform legislation, we should cooperate to find common ground that addresses the problems both sides perceive while protecting shared values of freedom of expression. Above all, we must preserve the extraordinary human value and connectivity that online information provides to the world while curbing its excesses.


Facebook is a general, unrestricted donor to the Brookings Institution. The findings, interpretations and conclusions in this piece are solely those of the author and not influenced by any donation.
