With a fact-check heard round the web, Twitter did what its “big tech” counterparts have been too afraid to do: hold the president of the United States accountable for his actions. Following the momentous decision to flag Trump’s false claims about mail-in ballots, the president—and his frenzied fan base—unleashed a fury of tech-lash. Their target is a cyber law from 1996, credited with creating the modern-day internet and broadly known as Section 230.
Core to 47 U.S.C. Section 230 is the basic principle that websites are not liable for third-party, user-generated content. To many, this principle is understandably confounding. Traditional print and broadcast media assume liability for disseminating third-party materials all the time. For example, the New York Times can be held liable for publishing a defamatory article written by a third-party author. But that’s not the case for websites like Twitter.
It wasn’t always that way. In 1995, a New York state court in Stratton Oakmont, Inc. v. Prodigy Services Co. found the popular online service Prodigy liable for defamatory material posted to its “Money Talk” bulletin board. In the interest of maintaining a “family-friendly” service, Prodigy regularly engaged in content moderation, attempting to screen and remove offensive content. But because Prodigy exercised editorial control—like its print and broadcast counterparts—it was liable as a publisher of the defamatory content.
The Prodigy decision came several years after a New York federal district court in Cubby, Inc. v. CompuServe Inc. dismissed a similar defamation suit against CompuServe—another popular, competing online service from the ’90s. Like Prodigy, CompuServe was sued over defamatory content published in a third-party newsletter, “Rumorville.” Unlike Prodigy, however, CompuServe employees did not engage in any moderation practices, such as pre-screening. The district court rewarded CompuServe’s hands-off approach, holding that CompuServe could not be held liable as a mere content distributor.
This left online services with two choices: forgo moderation entirely to avoid legal liability, ceding control over quality to their users; or attempt to clean up their services, with the understanding that they would be liable for anything that slipped through the cracks. This “moderator’s dilemma” is what Section 230 was enacted to resolve.
Section 230 provides for two key provisions under 230(c)(1) and 230(c)(2). Section 230(c)(1) famously comprises the twenty-six words that give the immunity its teeth:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Section 230(c)(2) provides an extra layer of protection:
“No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”
Under 230(c)(1), defendants must meet three prongs. The first is that the defendant is the “provider or user of an interactive computer service.” Resist the urge to complicate it; a long line of case law guarantees this prong applies to virtually any website, service, software, platform, bulletin board, conduit, or forum on the internet. The second prong is that the plaintiff is treating the defendant as a “publisher” or “speaker.” Courts interpret this prong broadly: in other words, the plaintiff is holding the defendant responsible for the third-party content. The third prong is that the plaintiff’s claim is based on “information provided by another information content provider,” i.e., third-party content. As long as the defendant (and, usually, its employees) did not author the content, the content will be attributed to a third party.
Understanding the provisions
There are some important observations about the 230(c)(1) provision. First, notice that Section 230(c)(1) says nothing about whether the website is a “neutral public forum.” Requiring websites to be “neutral” would be nearly impossible to achieve; any content decision is influenced by the viewpoint of the person making it. On that note, courts have also consistently held that websites run by private companies are not like town halls or public squares, places where viewpoint discrimination is impermissible. Second, Section 230(c)(1) applies regardless of whether the defendant “knew” about the objectionable content. It also doesn’t matter whether the defendant acted in “good faith.” Lastly, the immunity applies to websites regardless of their “platform” or “publisher” status.
Section 230(c)(1) is notably powerful. Years of defendant-friendly interpretation gives Section 230(c)(1) its edge, which is why it increasingly astounds Section 230 scholars when critics attack the law’s lesser-used provision, Section 230(c)(2).
Section 230(c)(2) provides two extra levels of protections to websites. Section 230(c)(2)(A) seemingly enshrines all content moderation decisions, protecting the “good faith” blocking or removal of “objectionable” content. Section 230(c)(2)(B) protects the blocking and filtering tools a website makes available to its users (think: anti-virus software and ad-blockers).
Critics of Section 230 direct extra animus towards Section 230(c)(2)(A), homing in on the provision’s “good faith” prerequisite. For example, the president’s May 28 “Executive Order on Preventing Online Censorship” states:
“When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.”
Yet Section 230(c)(2)(A) is rarely tested in court. Its “good faith” requirement makes it expensive and time-consuming to litigate, which is especially harmful for market entrants with limited legal resources. In practice, the majority of Section 230 cases turn on 230(c)(1), even when the plaintiff’s complaints are based on the service’s content moderation decisions.
Of course, Section 230 isn’t without its limits. The immunity has a set of exceptions, including intellectual property infringement claims (for the most part), federal criminal law, and the 2018 FOSTA-SESTA amendment aimed at combating sex trafficking. It also does not extend to any first-party content created by the website itself. For example, Twitter is responsible for the words it uses to describe its fact-checks. It is not liable, however, for any third-party content its fact-checks might link out to.
In many ways, we take the internet for granted. We enjoy information at our fingertips; we’re constantly connected to friends and family—a luxury we might especially appreciate amidst the pandemic; we frequent online marketplaces; we consult consumer reviews; we trade memes and 280-character quips; we share experiences; we engage in debate; we educate ourselves and each other; we’re part of global, public conversations; we stand up massive protests; we challenge our political leaders; we build communities; we start businesses; and we’re always innovating. It is important to retain these benefits as people debate revisions to Section 230.