COVID-19 is triggering a massive experiment in algorithmic content moderation

Major social media companies are having to adjust to a difficult reality: Because of social-distancing requirements, much of the human workforce that moderates their content has been sent home.

The timing is challenging, as platforms are fighting to contain an epidemic of misinformation while user traffic hits all-time records. To make up for the absence of human reviewers, platforms have largely handed the role of moderating content off to algorithmic systems. As a result, machines currently have more agency over the regulation of our public discourse than ever before.

This forced experiment is the greatest test to date of computers’ ability to police speech online. So far, the strategy has been relatively successful, as authoritative sources are largely promoted over disinformation. But the move to algorithmic content moderation is also exposing blind spots and resulting in enforcement that varies with the nature and origin of the content.

No borders for the infodemic

In recent years, tech platforms have started staffing special teams to monitor certain countries when the informational environment there becomes particularly critical, such as before an election. But they have never had to deal with a crisis impacting the whole world simultaneously.

Although some of the narratives around the pandemic are common to many countries and easily translatable for a machine (claims that garlic kills the virus, for instance), many can only be captured with a contextual understanding that is out of reach for automated systems.
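
To see why literal claims are easier targets than contextual ones, consider a deliberately naive, purely illustrative sketch of keyword-based moderation. The phrase lists and example posts below are hypothetical and do not reflect any platform's actual system; the point is only that a fixed pattern catches the garlic claim across languages, while sarcasm and locally phrased rumors defeat it.

```python
# Purely illustrative sketch of naive keyword-based moderation.
# Phrase lists and example posts are hypothetical, not any platform's real system.

# Literal claims such as "garlic kills the virus" keep roughly the same wording
# across languages, so a translated phrase list can match them.
GARLIC_CLAIM_PATTERNS = [
    "garlic kills the virus",   # English
    "el ajo mata el virus",     # Spanish
    "l'ail tue le virus",       # French
]

def flag_garlic_claim(post: str) -> bool:
    """Flag a post if it contains a known literal phrasing of the garlic claim."""
    text = post.lower()
    return any(pattern in text for pattern in GARLIC_CLAIM_PATTERNS)

example_posts = [
    "My aunt says garlic kills the virus, stock up now!",   # caught: literal match
    "Sure, and garlic kills the virus too...",               # false positive: sarcasm needs context
    "Drink the 'miracle tea' our neighbors sell, it cures everything.",  # missed: local rumor, no known keyword
]

for post in example_posts:
    print(flag_garlic_claim(post), "-", post)
```

The sketch flags both the literal claim and the sarcastic reply, and misses the locally phrased rumor entirely, which is the kind of contextual judgment that still requires human reviewers.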

These limits are inherent, and the platforms anticipated them, apologizing in advance for the legitimate content that would be demoted and the disinformation that would be missed as moderation shifted to algorithms during the pandemic. The reallocation of resources has also strained the remaining human moderators, who are now overwhelmed with appeals and content reports.

Social media companies are forced to make difficult trade-offs in allocating these scarce human resources, and they are prioritizing certain regions over others. The gap is widened by the scarcity of language-specific data to train the algorithms, which makes automated moderation less effective in less widely spoken languages.

Platforms, journalists, and researchers are all disproportionately focusing their attention on content moderation in the U.S. and in certain European markets, while the situation is much worse in many other countries. Good data on geographical differences in content moderation is scarce, but COVID-related misinformation appears to be worse outside the United States: a report from the activist group Avaaz, for example, found that it is far more prevalent in non-English content.

It would be shortsighted to think that the problem stops at the border. The fight against the pandemic is global and likely to last, and rampant disinformation in some countries will inevitably have repercussions in the rest of the world.

Arbitrating the global influence game

Since the beginning of the pandemic, platforms have been surprisingly willing to acknowledge the importance of their role in moderating content, whereas they have typically tried to play down the extent of their responsibilities.

This embrace is likely motivated by a desire to atone for previous failures to limit political disinformation. But what appeared, by comparison, to be an easier topic to moderate neutrally has turned out to be just as controversial.

Platforms rightly decided to redirect users toward statements from public health authorities. But with high scientific uncertainty and a lack of consensus among experts, it is difficult for platforms to maintain consistent guidelines about what COVID-19 information to push to users. The task is further complicated by major political leaders who politicize the issue and make statements that contradict the advice of public health authorities.

This has forced platforms to censor speech from public figures that would ordinarily have been given a pass. Twitter, for instance, decided to censor tweets from Brazilian President Jair Bolsonaro in which he touted the effectiveness of hydroxychloroquine as a treatment for COVID-19, a claim that medical experts seriously doubt. U.S. President Donald Trump, who has also promoted the unproven drug, has not yet received the same treatment.

As the debate over COVID-19 moves beyond medical issues, it is impossible for platforms to remain neutral. In the United States, Facebook banned pages owned by conservative political activists who organized anti-lockdown protests and tried to make them look like grassroots movements. Conspiracy theories about the origin of the virus are spreading, encouraged in part by American and Chinese state officials. The Chinese Communist Party, Russia, and even the Islamic State militant group are all attempting to take advantage of saturated content-review pipelines to promote propaganda on social media. Meanwhile, governments in Hungary, Turkey, and Egypt have passed special coronavirus measures to censor political opposition, and they are trying to force social media platforms to enforce them.

Biting off more than they can chew

In selecting which narratives are appropriate to be broadcast, social media platforms are acting as referees in a global struggle for influence, reinforcing concerns about the control that these companies have in defining our informational environments.

Even when the rules are legitimate, enforcing them at a global scale remains incredibly challenging. Facebook founder Mark Zuckerberg has argued that artificial intelligence will ultimately solve that problem, and his company’s reliance on AI moderation during the pandemic is now putting that claim to the test.

So far, algorithmic systems have proven they can provide valuable support in scaling up the enforcement of certain guidelines. But they have also revealed fundamental limits in their ability to capture nuance and context.

Our dependence on humans to moderate content is here to stay. By shifting to a more participatory and decentralized model, big tech platforms could make their content moderation pipelines more resilient and adaptable. Moderating the public forums of half the planet is simply too big a challenge for platforms to take on alone. Even with AI.

Marc Faddoul is a researcher at the UC Berkeley School of Information. He tweets at: @MarcFaddoul

Facebook provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
