Addressing Big Tech’s power over speech

[Photo: CEO of Google and Alphabet Sundar Pichai testifies remotely during the Senate Commerce, Science, and Transportation Committee hearing “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” on Capitol Hill in Washington, D.C., October 28, 2020. Greg Nash/Pool via REUTERS]
Editor's note: This blog post originally appeared on The Regulatory Review.

At many points during the 2020 U.S. presidential election cycle, social media platforms demonstrated their power over speech. In October 2019, Twitter announced a permanent ban on political advertisements, sparking a vigorous debate over free speech and so-called “paid disinformation.” A year later, Facebook and Google imposed temporary restrictions on political ads shortly after the polls closed. In May 2020, Twitter applied fact-check labels to two misleading tweets from then-President Donald J. Trump about mail-in ballots; Facebook initially declined to follow suit but later adopted its own fact-checking policy for politicians.

In June 2020, Twitter for the first time “hid” one of President Trump’s tweets, one that appeared to call for violence against Black Lives Matter protesters; Facebook chose to leave the same post up. Ultimately, after the attack on the U.S. Capitol on January 6, 2021, Twitter, Facebook, and YouTube all suspended Trump’s accounts. In the days that followed, online misinformation about election fraud dropped almost 75 percent across multiple platforms.

These events demonstrate the ability of Facebook, YouTube, Twitter, and others to amplify—or limit—the dissemination of information to their hundreds of millions of users. Although we applaud the steps these companies eventually took to counter political misinformation and extremism during the election cycle, their actions are also a sobering reminder of their power over our access to information. Raw power comes with the possibility of abuse—absent guardrails, there is no guarantee that dominant platforms will always use it to advance public discourse in the future.

Some leaders have suggested using antitrust law to limit the power of social media companies. U.S. Representative David Cicilline (D-R.I.) echoed this sentiment in a House Antitrust Subcommittee hearing last summer by accusing Facebook of “getting away” with disseminating misinformation because it is “the only game in town.” He continued by noting that, for social media giants, “there is no competition forcing you to police your own platform.”

And the focus on competition is understandable. After all, the political power of social media companies flows from their economic power. Facebook, Instagram, and YouTube benefit from network effects: their value to both users and advertisers increases with their number of active accounts. Large social media platforms also collect a significant amount of personal information about individuals, allowing them to monetize and target advertisements to users more effectively. Furthermore, some companies have engaged in conduct, such as Facebook’s acquisitions of Instagram and WhatsApp and Google’s preinstallation agreements for YouTube and other apps, that has cemented their market power. The Federal Trade Commission, the U.S. Department of Justice, and numerous state attorneys general recently filed lawsuits against Google (which owns YouTube) and Facebook, alleging that conduct of this kind violates the Sherman Act and harms consumers and competition.

These pending lawsuits reflect the state of antitrust law today: they focus on Facebook’s and Google’s economic impact on consumers and competition, not on political or other social effects. Chicago School jurisprudence, which has guided antitrust enforcement for the past four decades, is concerned principally with price effects on consumers, not with political harms or other risks associated with content moderation by powerful platforms. And since most social media platforms offer their services to consumers at no monetary cost, U.S. antitrust laws, under current interpretation, do not address the full scope of non-monetary effects stemming from a lack of competition.

Antitrust doctrine does not address how social media companies collect vast amounts of detailed personal information, control misinformation, address extremism, exhibit transparency and accountability, and, more generally, wield influence over democratic institutions. Yet, as former Chair of the Federal Trade Commission Robert Pitofsky wrote in 1979, the congressional intent underlying U.S. antitrust laws was never exclusively economic: “It is bad history, bad policy, and bad law to exclude certain political values in interpreting the antitrust laws.”

It is possible that the Facebook and Google antitrust lawsuits could reduce the companies’ control over the content we access, but such a change would be neither easy nor quick. If these lawsuits result in the breakup of either company, for example, they could create a more competitive environment with a broader dispersal of power over political information. But these cases will take years to litigate, and government enforcers must meet a high burden of proof in court.

Furthermore, courts have traditionally taken a conservative approach to antitrust enforcement, interpreting the Clayton and Sherman Acts over the past 40 years to require a high level of confidence that anticompetitive behavior will result in financial harm to consumers and competition, which leaves the resolution of these cases uncertain.

Although current antitrust laws fall short of addressing social media’s power to affect democratic processes, members of Congress have demonstrated interest in reassessing and updating them. U.S. Senator Amy Klobuchar (D-Minn.) recently proposed legislation to amend the Clayton and Sherman Acts. In addition, the House Antitrust Subcommittee released a majority staff report last year, and U.S. Representative Ken Buck (R-Colo.) released a separate report. Both reports called for reform, suggesting a bipartisan interest in curbing the raw power of a few dominant firms and thereby helping new social media platforms compete.

There are alternate paths as well: Congress could address the potential for platforms to misuse their power over information and hate speech by updating Section 230 of the Communications Act of 1934, which shields social media platforms from most liability for user-generated content and for good-faith content moderation.

No matter which direction Congress takes, the limits of current antitrust law in addressing the modern problems posed by dominant social media platforms demand a fresh look at how the United States handles the political and social consequences of economic power. As social media’s role in the 2020 election demonstrates, dominant tech platforms can limit the dissemination of dangerous disinformation. But the same power can be used irresponsibly, either to unreasonably limit access to important information or to perpetuate the “Big Lie.” Injury in that sense is not limited to direct price effects. It is becoming harder to overlook the reality that some change may be required to address the power and risks that accompany the dominance of social media platforms.


Facebook and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions in this piece are solely those of the author and are not influenced by any donation.
