Commentary

How social media platforms can reduce polarization

Illustration showing divided groups of red and blue silhouettes. (Lightspring/Shutterstock.com)

Polarization is widely recognized as one of the most pressing issues now facing the United States. Stories about how the country has fractured along partisan lines, and how the internet and social media exacerbate those cleavages, are frequently in the news. Americans dislike their political adversaries more than they used to. Meanwhile, disinformation and hate speech, often produced by actors with strong incentives to inflame existing social and political divisions, proliferate in digital spaces. The real-world consequences are far from trivial—consider the violence at the Capitol on January 6 or even the more recent assault on Nancy Pelosi’s husband. Although the extent to which political polarization leads individuals to violate democratic norms is a matter of debate, it is hard to imagine an event like the Capitol riot occurring absent such a polarized political climate.

Of particular concern is affective polarization, which refers to the animus individuals feel toward those who disagree with them politically. If the free exchange of ideas between non-likeminded people is a basic tenet of democracy, then affective polarization threatens to undermine democracy itself. In the United States, affective polarization now underlies partisan standoffs over everything from COVID-19 policy to climate change.

For social networks and digital platforms, polarization is both a challenge and an opportunity. Social media companies are often blamed for driving greater polarization by virtue of the way they segment political audiences and personalize recommendations in line with their users' existing beliefs and preferences. Given their scale and reach, however, they are also uniquely positioned to help reduce polarization. Our recent review of more than half a century's worth of research on how best to bridge social divides points to clear steps digital platforms can take to curb polarization.

What we know about polarization and social media

Recent research suggests that social media can inflame polarization, even if the full relationship between digital platforms and polarized attitudes remains uncertain. Studies have found that polarization varies markedly across platforms, and the strength of those findings depends on how polarization is measured. How much polarization is attributable to online echo chambers and filter bubbles is also not well understood, with evidence pointing in countervailing directions.

Yet there is nonetheless a growing body of scholarship suggesting that social media applications are indeed fueling polarization, especially in established democracies. For instance, Jaime Settle's work demonstrates, through a combination of surveys and experiments, that affective polarization is likely to rise when social media users encounter content with partisan cues, even if the content is not explicitly political. A 2020 study by Hunt Allcott and colleagues echoes these concerns. The authors asked some participants to refrain from using Facebook for four weeks; afterward, those participants reported holding less polarized political views than those who had not been asked to do so. Deactivating Facebook also made people less hostile toward "the other party," although that was only the case for those who got news on Facebook regularly.

What, then, makes social media polarizing? A major problem is that divisive content tends to spread widely and quickly on social media. Posts that express moral outrage or attack one's political out-party, for example, tend to be particularly successful at going viral. This virality is driven by the way ranking algorithms prioritize engaging content, coupled with people's general inclination to favor sensational material. As a consequence, when people log on to social media, they are likely to see content that is divisive and presses their emotional buttons. What's more, these trends give politicians, news outlets, and would-be influencers an incentive to post divisive content, because that is what's most likely to yield the engagement they crave.
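To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-maximizing feed ranker in Python. The Post fields, the weights, and the scoring rule are illustrative assumptions, not any platform's actual system; the point is simply that ranking purely on predicted engagement surfaces whatever content people react to most, which is often the most divisive.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # hypothetical engagement predictions
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    """Toy ranking objective: a weighted sum of predicted engagement signals.
    The weights are illustrative, not any platform's actual values."""
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting purely by predicted engagement tends to surface outrage-bait,
    # because divisive posts reliably attract clicks, shares, and comments.
    return sorted(posts, key=engagement_score, reverse=True)
```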

What we know about how to reduce polarization

Our review of the scientific literature on how to bridge societal divides points to two key ideas for how to reduce polarization. First, decades of research show that when people interact with someone from their social “outgroup,” they often come to view that outgroup in a more favorable light. Significantly, individuals do not need to take part in these interactions themselves. Exposure to accounts of outgroup contact in the media, from news articles to online videos, can also have an impact. Both positive intergroup contact and stories about such contact have been shown to dampen prejudice toward various minority groups.

The second key finding of our review concerns how people perceive the problem of polarization. Even as polarization has increased in recent years, survey research has consistently shown that many Americans think the nation is more divided than it truly is. Meanwhile, Democrats and Republicans think they dislike each other more than they actually do. These misconceptions can, ironically, drive the two sides further apart. Any effort to reduce polarization thus also needs to correct perceptions about how bad polarization really is.

For social media platforms, the literature on bridging societal divides has important implications. In addition to implementing sound moderation policies, social media firms should consider the following:

Surface more positive interparty contact

Because negativity and moral outrage foster virality, social media algorithms tend to favor content that is hostile to an out-group. By contrast, positive interparty contact shown in users’ newsfeeds—whether from friends, politicians, or news outlets—might dampen affective polarization. Platforms should thus seek to surface more examples of positive interparty contact between authoritative voices on the left and right.
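As a rough illustration, a platform could apply a modest up-weight to candidate posts that an internal classifier flags as depicting positive contact across party lines. The sketch below assumes such a classifier and an existing relevance score; the function names and the boost value are hypothetical.

```python
def rerank_with_contact_boost(posts, base_score, is_positive_contact, boost=1.25):
    """Re-rank a candidate feed, up-weighting posts that a (hypothetical)
    classifier labels as showing positive contact across party lines.

    posts: list of candidate posts (any type the callables accept)
    base_score: callable, post -> float, the platform's existing relevance score
    is_positive_contact: callable, post -> bool, an assumed content classifier
    boost: multiplicative up-weight for qualifying posts (illustrative value)
    """
    def adjusted(post):
        score = base_score(post)
        return score * boost if is_positive_contact(post) else score

    return sorted(posts, key=adjusted, reverse=True)
```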

Prioritize content that’s popular among disparate user groups

One way to identify posts that are interesting without being polarizing would be to prioritize posts that receive substantial positive engagement from users across the political spectrum. In doing so, platforms could build on initiatives like Birdwatch/Community Notes, which prioritizes notes that are rated "helpful" by users who have disagreed with one another in the past.
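The core idea can be expressed as a simple scoring rule. The sketch below is a deliberately simplified illustration, not Community Notes' actual algorithm (which relies on a more sophisticated model); the coarse "left"/"right" grouping and the like-based reactions are assumptions for the example.

```python
from collections import defaultdict

def bridging_score(reactions):
    """Score a post by its cross-group appeal.

    reactions: iterable of (user_group, liked) pairs, where user_group is a
    coarse label such as "left" or "right" (an assumed grouping) and liked
    is a bool. Taking the minimum like-rate across groups rewards posts that
    resonate on every side rather than only one.
    """
    likes = defaultdict(int)
    totals = defaultdict(int)
    for group, liked in reactions:
        totals[group] += 1
        likes[group] += int(liked)
    if len(totals) < 2:
        return 0.0  # no evidence of cross-group appeal yet
    return min(likes[g] / totals[g] for g in totals)

# Example: a post liked by 70% of "left" raters and 60% of "right" raters
# scores 0.6, whereas a post loved by one side and panned by the other
# scores close to 0.
```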

Correct misconceptions

As noted above, Americans tend to think the nation is more polarized than it is—a finding that is unsurprising given that negative and extreme voices tend to be amplified on social media. Platforms could instead alert users when they engage with content that overstates the degree of polarization and insert links to more accurate survey results about how polarized the nation truly is.
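A minimal sketch of what such an intervention might look like appears below. The classifier that detects overstated claims about division and the survey link are assumed placeholders; the example only shows how a context label could be attached to flagged content.

```python
def maybe_add_context_label(post_text, overstates_division, survey_url):
    """Attach an informational label to posts that an (assumed) classifier
    flags as overstating how divided Americans are.

    overstates_division: callable, str -> bool, a hypothetical model
    survey_url: link to accurate polling on partisan attitudes (placeholder)
    """
    if overstates_division(post_text):
        return {
            "post": post_text,
            "label": "Surveys suggest Americans overestimate how divided they are.",
            "learn_more": survey_url,
        }
    return {"post": post_text}
```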

Design better user interfaces

A platform’s user interface has a material impact on how users interact with each other. Twitter’s “quote retweet” feature, for instance, has been widely used to quickly “dunk” on political opponents rather than engage in meaningful dialogue, which is one reason why Mastodon—the increasingly popular decentralized alternative to Twitter—was explicitly designed not to have the feature. Likewise, Tumblr and other networks have experimented with removing comments to ensure that the platform’s affordances lead to more positive and constructive discourse.

Collaborate with researchers

Several of these suggestions hinge on each platform's ability to identify certain kinds of content (e.g., depictions of positive intergroup contact) and to evaluate the impact of any measures it introduces. Both tasks are easier said than done. As a result, social media platforms would benefit from partnering with third-party researchers who are already studying these problems. Providing computational and qualitative researchers with greater access to platform data could also go a long way toward fostering an understanding of how best to reduce polarization and other potential societal harms.

By adopting the approaches above, social media platforms can take a leading role in curbing polarization online. Amplifying divisive content less often and offering fewer opportunities to engage with it would weaken the incentive for news outlets, elites, and would-be influencers to produce and publish divisive content in the first place. Meanwhile, surfacing more examples of positive intergroup contact and highlighting accurate data on polarization could go a long way toward correcting perceptions of how polarized we really are.

Especially in light of legislation like the European Union's Digital Services Act, which places greater pressure on platforms to take responsibility for the content they host and how it affects their users and society at large, adopting novel measures to reduce polarization online will be vital. Fortunately, there is a wide array of scalable ways to curb polarization and strengthen democratic societies. Given how extensively social media platforms have been blamed for causing polarization, they would be wise to avail themselves of the opportunity to reduce divisiveness and strengthen democracy instead.

Christian Staal Bruun Overgaard is a Knight Research Associate at the Center for Media Engagement at The University of Texas at Austin.

Samuel Woolley is an assistant professor of journalism and media and program director of the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin and the author of the forthcoming book Manufacturing Consensus: Propaganda in the Era of Automation and Anonymity.

Meta and Google provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.