How tech platforms fuel U.S. political polarization and what government can do about it

September 27, 2021

As both members of Congress and federal law enforcement agencies investigate the origins and execution of the January 6 insurrection at the U.S. Capitol, the role social media played in the mayhem is emerging as a crucial issue.
The House Select Committee probing the mob attack has asked a wide range of social media and telecommunications companies to preserve records related to several hundred people, including members of Congress, who could be relevant to the investigation. Beyond these specific requests, the Committee has signaled a broader interest in how false claims about the 2020 election spread on platforms like Facebook and Twitter, including how algorithms might contribute to the promotion of disinformation and extremism. Meanwhile, federal prosecutors pursuing more than 600 criminal cases are relying on evidence gathered from social media accounts used to organize the attempt by Trump supporters to stop Congress from certifying President Joe Biden’s victory.
A report we recently published through the Center for Business and Human Rights at New York University’s Stern School of Business sheds light on the relationship between tech platforms and the kind of extreme polarization that can lead to the erosion of democratic values and partisan violence. While Facebook, the largest social media platform, has gone out of its way to deny that it contributes to extreme divisiveness, a growing body of social science research, as well as Facebook’s own actions and leaked documents, indicates that an important relationship exists.
Our central conclusion, based on a review of more than 50 social science studies and interviews with more than 40 academics, policy experts, activists, and current and former industry insiders, is that platforms like Facebook, YouTube, and Twitter are likely not the root causes of political polarization, but they do exacerbate it. Clarifying this point is important for two reasons. First, Facebook’s disavowals, in congressional testimony and other public statements, may have clouded the issue in the minds of lawmakers and the public. Second, as the country simultaneously tries to make sense of what happened on January 6 and turns its attention to elections in 2022, 2024, and beyond, understanding the harmful role popular tech platforms can play in U.S. politics should be an urgent priority.
Social media contributes to partisan animosity
Facebook’s Mark Zuckerberg has on multiple occasions dismissed suggestions that his company stokes divisiveness. “Some people say that the problem is that social networks are polarizing us, but that’s not at all clear from the evidence or research,” he testified before a U.S. House of Representatives subcommittee in March 2021, instead pointing to “a political and media environment that drives Americans apart.” A few days later, Nick Clegg, Facebook’s vice president for global affairs and communications, argued that “what evidence there is simply does not support the idea that social media, or the filter bubbles it supposedly creates, are the unambiguous driver of polarization that many assert.”
Contrary to Facebook’s contentions, however, a range of experts have concluded that the use of social media contributes to partisan animosity in the U.S. In an article published in October 2020 in the journal Science, a group of 15 researchers summarized the scholarly consensus this way: “In recent years, social media companies like Facebook and Twitter have played an influential role in political discourse, intensifying political sectarianism.” In August 2021, a separate quintet of researchers summed up their review of the empirical evidence in an article in the journal Trends in Cognitive Sciences: “Although social media is unlikely to be the main driver of polarization,” they concluded, “we posit that it is often a key facilitator.”
Partisanship is complicated, but platforms do not fully escape responsibility
Polarization is a complicated phenomenon. Some divisiveness is natural in a democracy. In the U.S., struggles for social and racial justice have led to backlash and partisan animosity. But the extreme polarization we are now witnessing, especially on the political right, has consequences that threaten to undermine democracy itself. These include declining trust in institutions; scorn for facts; legislative dysfunction; erosion of democratic norms; and, in the worst case, real-world violence.
Not all of this can be attributed to the rise of Silicon Valley, of course. Polarization began growing in the U.S. decades before Facebook, Twitter, and YouTube appeared. Other factors—including the realignment of political party membership, the rise of hyper-partisan radio and cable TV outlets, and increased racial animus during Donald Trump’s uniquely divisive presidency—have contributed to the problem.
But that doesn’t exonerate the tech platforms, as Facebook would have us believe. One study published in March 2020 described an experiment in which subjects stopped using Facebook for a month and then were surveyed on their views. Staying off the platform “significantly reduced polarization of views on policy issues,” researchers found, although it didn’t diminish divisiveness based strictly on party identity. “That’s consistent with the view that people are seeing political content on social media that does tend to make them more upset, more angry at the other side [and more likely] to have stronger views on specific issues,” Matthew Gentzkow, a Stanford economist and co-author of the study, told us in an interview.
Facebook and others have pointed to research that raises questions about the relationship between social media and polarization. A 2017 study found that from 1996 to 2016, polarization rose most sharply among Americans aged 65 and older—the demographic least likely to use social media. A 2020 paper compared rising polarization levels in the U.S. over four decades to those in eight other developed democracies. The other countries experienced smaller increases in divisiveness or saw polarization decrease. These variations by country suggest that, over the long term, factors other than social media have driven polarization in America.
But notice that both the age-group and inter-country comparisons spanned decades, including extended stretches of time before the emergence of social media. More recent snapshots of the U.S. are thus more relevant. A paper published in March, based on a study of more than 17,000 Americans, found that Facebook’s content-ranking algorithm may limit users’ exposure to news outlets offering viewpoints contrary to their own—and thereby increase polarization.
Maximizing online engagement leads to increased polarization
The fundamental design of platform algorithms helps explain why they amplify divisive content. “Social media technology employs popularity-based algorithms that tailor content to maximize user engagement,” the co-authors of the Science paper wrote. Maximizing engagement increases polarization, especially within networks of like-minded users. This is “in part because of the contagious power of content that elicits sectarian fear or indignation,” the researchers said.
As we wrote in our report, “social media companies do not seek to boost user engagement because they want to intensify polarization. They do so because the amount of time users spend on a platform liking, sharing, and retweeting is also the amount of time they spend looking at the paid advertising that makes the major platforms so lucrative.”
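To make that mechanism concrete, here is a minimal, hypothetical sketch of popularity-based ranking. The engagement signals, weights, and field names are our illustrative assumptions, not Facebook’s or any platform’s actual system.

```python
# Minimal, hypothetical sketch of engagement-based feed ranking.
# Signals and weights are illustrative assumptions, not any
# platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted more heavily than likes
    # because they keep users interacting longer. Nothing here
    # distinguishes outrage from civility: a post that provokes
    # angry comments and shares scores just as well.
    return 1.0 * post.likes + 5.0 * post.shares + 3.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed purely by engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because the objective rewards engagement alone, material that reliably provokes reactions, including content that elicits sectarian fear or indignation, floats to the top of the feed.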
Facebook is fully aware of how its automated systems promote divisiveness. The company does extensive internal research on the polarization problem and periodically adjusts its algorithms to reduce the flow of content likely to stoke political extremism and hatred. But typically, it dials down the level of incendiary content for only limited periods, such as the tumultuous weeks immediately after the November 2020 election and the days before the April 2021 verdict in the trial of Derek Chauvin. Making the adjustments permanent would cut into user engagement.
It’s time for the government to intervene
In a series of investigative articles published the same week as our NYU report, The Wall Street Journal relied on internal Facebook documents to show that company researchers have repeatedly identified the harmful effects of its platforms, but top management has rejected proposed reforms. In one episode, a major algorithm modification in 2018 backfired and, according to Facebook’s own in-house studies, inadvertently heightened anger and divisiveness on the platform. But Zuckerberg reportedly resisted some proposed fixes because he worried that they might hurt user engagement.
Clearly, Facebook and its social media peers need to move beyond denial and come to grips with their role in heightening polarization. One place the industry could start is to make temperature-reducing algorithmic adjustments permanent rather than temporary. In doing so, tech platforms will have to continually refine their automated systems and content moderation policies to guard against the removal of legitimate political expression—admittedly a difficult challenge, but one they brought upon themselves by building such vast, pervasive networks. Another step the social media companies should take is to disclose how their now-secret algorithms rank, recommend, and remove content. With greater transparency, lawmakers, regulators, academics, and the public would be in a stronger position to assess how the platforms function and to demand accountability when warranted. Unfortunately, Facebook lately has moved in the opposite direction, in one case cutting off researchers at New York University who were studying whether the platform has been used to sow distrust in elections. The company accused the NYU team of gathering information improperly—an accusation the researchers denied.
It would be preferable for the social media companies to police themselves, but that’s not happening to a sufficient degree. As a result, the government needs to intervene and provide the sustained oversight that until now has been lacking. In our report, we propose that Congress empower the Federal Trade Commission to draft and enforce a social media code of conduct that would go beyond transparency and define the duties of tech companies when addressing hateful, extremist, or threatening content.
For example, the standards could set benchmarks for various categories of harmful content that remain on platforms even after automated and human moderation. If the benchmarks are exceeded, fines could be imposed. Congress could require social media companies to incorporate the new rules into their terms-of-service agreements with users. Then, if the companies fail to observe the standards, the FTC could initiate enforcement action under its existing authority to police “unfair or deceptive” commercial practices. Reps. Jan Schakowsky (D-Ill.) and Kathy Castor (D-Fla.) have introduced a bill that points generally in the direction that we recommend.
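To illustrate how benchmark-based enforcement of this kind might operate, here is a minimal sketch. The content categories, prevalence thresholds, and fine formula are hypothetical assumptions of ours, not provisions of our report, the FTC’s authority, or the Schakowsky-Castor bill.

```python
# Hypothetical sketch of benchmark-based compliance checking.
# Categories, thresholds, and the fine formula are illustrative
# assumptions, not actual regulatory standards.

# Maximum share of total content views permitted for each
# harmful-content category after moderation has run.
BENCHMARKS = {
    "hate_speech": 0.0005,        # 0.05% of views
    "violent_incitement": 0.0001, # 0.01% of views
}

FINE_PER_BASIS_POINT = 1_000_000  # hypothetical penalty scale

def assess_fines(views_by_category: dict[str, int], total_views: int) -> dict[str, float]:
    """Return a fine for each category whose prevalence exceeds its benchmark."""
    fines = {}
    for category, benchmark in BENCHMARKS.items():
        prevalence = views_by_category.get(category, 0) / total_views
        if prevalence > benchmark:
            # Fine scales with how far prevalence exceeds the benchmark,
            # measured in basis points (hundredths of a percent).
            excess_bps = (prevalence - benchmark) * 10_000
            fines[category] = excess_bps * FINE_PER_BASIS_POINT
    return fines
```

The design choice worth noting is that such a standard would regulate measurable outcomes (the prevalence of harmful content that slips through) rather than mandating any particular moderation technique.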
Widespread social media use has fueled the fire of extreme polarization, which, in turn, has contributed to the erosion of trust in democratic values, elections, and even scientific facts, such as the need for vaccination in the face of a lethal pandemic. Failing to recognize and counter these developments risks repetition of what happened at the Capitol on January 6—or worse.
Barrett is a senior research scholar at New York University’s Stern School of Business and deputy director of its Center for Business and Human Rights, where Sims is a research fellow. Hendrix, an associate research scientist and adjunct instructor at NYU’s Tandon School of Engineering, is the founder and editor of Tech Policy Press.
Facebook and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.