Commentary

How Section 230 reform endangers internet free speech

A tweet by U.S. President Donald Trump is flagged by Twitter as inciting violence in this photo illustration on an Apple iPhone in Warsaw, Poland, on May 29, 2020. Twitter had also applied a fact-checking label to a Trump tweet about voting by mail that the company considered misleading. The labeling of Trump's tweets prompted the president to sign an executive order targeting Section 230 of the Communications Decency Act, which shields social media companies from lawsuits over content created by their users. (Photo by Jaap Arriens/Sipa USA)

Everywhere one looks in Washington one finds proposals to reform Section 230 of the Communications Decency Act, which grants internet platforms legal immunity for most of the content posted by their users. Former Vice President Joe Biden wants to repeal it and even has an ally of sorts in President Donald Trump, who has wielded threats to dismantle Section 230 against his perceived enemies in Silicon Valley. One congressional proposal would condition immunity on an impossible standard of neutral content moderation. Another would condition immunity on undermining encryption.

Some of these proposals are not intended to become law. If they did, the courts would likely strike some down as violations of the protections for freedom of speech guaranteed by the U.S. Constitution’s First Amendment. Instead, they are intended as blunt tools of coercion—attempts to jawbone internet platforms into favoring a particular point of view. Even if Congress and the Trump administration fail to enact new rules, the additional pressure on internet platforms is likely to have a chilling effect that will make it harder for all users to communicate openly. Jawboning platforms gives political figures the best of both worlds: They can push internet firms to curate content that protects their own point of view without having to do the work of passing and then defending legislation mandating censorship.

The movement for Section 230 reform

Today, platforms such as Twitter, Facebook, and TikTok are the primary source of information for many Americans, just as network television and newspapers were in the 20th century. Social media sites have one key difference from those older media sources—their popularity comes from content that users create, rather than from the sites themselves. You can tweet at comedian Patton Oswalt, and he may well tweet back, without any involvement from Twitter aside from distributing your conversation. Normally, that’s good for everyone involved: Platforms get free content, and we get tools that enable easy communication with billions of other people.

But this setup also makes platforms wary about taking risks. If I post criticism of a politician, the politician might threaten to sue the platform that carries the critique. For a social media site, the choice is clear: taking down my post avoids the threat of liability, and while I may object, I’m only one of millions of users.

To reduce the risk that platforms will quash speech due to fears of lawsuits, in 1996 Congress protected internet intermediaries with a limited shield from liability. In most cases, platforms and other interactive computer services cannot be held liable for content created by someone else, such as one of their users (although they remain liable for information created by their employees). The immunity provisions of Section 230 of the Communications Decency Act have important exceptions, such as for violations of federal criminal law, wiretapping statutes, intellectual property rules, and (most recently) online sex trafficking. But the safe harbor has been broad enough and stable enough to enable American firms to offer a vibrant array of internet activities.

Recently, Section 230 has come under increasing political pressure, from members of both political parties in Congress and the executive branch. Most people would like to see greater limitations on some sort of internet content, whether it be non-consensual pornography (“revenge porn”), anti-vaccination claims, political advocacy by foreign countries, or fake news more generally.

The Trump administration, angered by Twitter’s efforts to cabin the president’s tweets containing falsehoods or incitement to violence, promulgated an executive order that asks the Federal Communications Commission to issue regulations reworking Section 230; directs federal agencies to review their spending on platforms that engage in undesirable censorship; and orders the Federal Trade Commission and the U.S. attorney general to assess whether internet firms are committing deceptive and unfair trade practices through their content moderation. Trump’s Department of Justice recommended that Congress remove 230’s protections for intermediaries that deliberately facilitate behavior that violates federal criminal law. In addition, the Justice Department proposes that platforms be required to implement mechanisms that allow users to complain about allegedly unlawful material, and that firms be mandated to keep records of reports and other activity that could aid law enforcement. Things might be even more stark if former Vice President Joe Biden wins the presidency in November: Biden has called for the outright repeal of Section 230.

Congress has also weighed in. Sen. Josh Hawley, the Missouri Republican, has introduced several pieces of legislation that would either condition Section 230’s immunity on verifiably neutral content moderation practices (an impossibility), or strip the liability shield altogether for firms that selectively curate political information. Speaker of the House Nancy Pelosi has expressed a willingness to alter how Section 230 works. And there have been several bipartisan proposals. One, titled the EARN IT Act, would condition immunity on firms adopting congressionally mandated methods for eliminating child sex abuse material, which would include rolling back encryption protections for consumers. Another, the PACT Act, would require platforms to disclose the policies they use to police content, mandate that firms implement a user complaint system with an appeals process, and obligate firms to remove putatively illegal content within 24 hours. Although the time remaining in the current legislative session is short, there is considerable congressional attention on Section 230.

At first glance, Section 230 seems ripe for reform. After all, it protects internet intermediaries from a broad swath of legal liability for content most people dislike, from falsehoods that defame someone to fake news to posts generated by bots. And platforms are often our first source for information about important public issues, from protests to pandemics. But there are several major problems with the reform proposals put forward so far.

The problem of scale

The major internet platforms have to manage massive amounts of data. Twitter gets half a billion posts every day. Facebook has over 1.7 billion daily active users. YouTube has over 720,000 hours of video uploaded to its site each day. The scale of the data means that platforms have to rely primarily on automated software programs—algorithms—to curate content. And algorithms, while constantly improving, make mistakes: They flag innocent content as suspect, and vice versa. At Twitter’s volume, even a filter that is 99 percent accurate would misjudge roughly five million posts every day.

Proposals that seek to force platforms to engage in more monitoring—especially analysis before content is publicly available—will push internet firms to favor removing challenged content over keeping it. That’s precisely the chilling effect that Section 230 was intended to avoid.

Increasing costs

Additional procedures, such as appeals for complaints and requirements to track posts, will increase costs for platforms. Right now, most popular internet sites do not charge their users; instead, they earn revenues through advertising. If costs increase enough, some platforms would need to charge consumers an admission fee. The ticket price might not be high, but it would affect people with less disposable income. That could widen the already existing digital divide. Even if Twitter earns enough money to keep its service free, the regulatory cost of these proposals could make it harder for start-up companies to compete with established internet firms.

Rising costs would only worsen the antitrust and competition concerns that the Department of Justice and state attorneys general are already investigating. And there is no guarantee that reforms would justify their expense. Spam e-mail didn’t dry up when the United States adopted anti-spam legislation (the CAN-SPAM Act) in 2003; spammers simply moved their base of operations abroad, where it is harder for American law enforcement to operate.

Truth is in the eye of the beholder

Some proposals for Section 230 reform ask companies to make difficult, if not impossible, decisions about contested concepts like truth and neutrality. Political truth looks very different depending on whether you ask Biden or Trump. Neutral curation of political content presumably would require platforms to treat Republican and Democratic press releases the same way they do information put out by white supremacist organizations. The only way to achieve neutrality is not to filter information at all, which leaves platform users at the mercy of the loudest and most sensational voices in political discourse.

Some internet firms, like Twitter, have responded to criticism by banning political advertising as a category. Being freed from seeing political ads during campaign season may seem like a blessing. But such a ban inevitably hurts upstart and third-party candidates, who do not have the ability to command attention from standard media outlets in the way that established politicians with the backing of one of the two major American parties can.

Some of the proposals put forth by Republicans, such as Hawley’s and Trump’s, are responding to phantom controversies. There is no evidence that internet platforms systematically discriminate against conservatives – or progressives, for that matter. Forcing firms to filter information based on viewpoint is likely to be popular only with the political party currently in power.

But that doesn’t diminish these proposals’ political utility. Firms have to spend resources responding to bad-faith attacks and may be pushed by market forces to react to the public outrage that these critiques generate. Indeed, political grandstanding is difficult to prevent.

The Section 230 safe harbor that protects internet intermediaries promotes, and is in the best traditions of, American respect for the value of free information exchange. Even well-intended proposals to alter that immunity should be viewed with skepticism.

Derek E. Bambauer is a professor of law at the University of Arizona, where he teaches internet law and intellectual property.

Facebook and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
