COVID-19 misinformation is a crisis of content mediation

A newspaper stand worker wearing a protective mask is seen after Italy tightened lockdown measures to combat the coronavirus disease (COVID-19) outbreak in Venice, Italy, March 22, 2020. REUTERS/Manuel Silvestri

Amid a catastrophe, new information is often revealed at a faster pace than leaders can manage it, experts can analyze it, and the public can integrate it.

In the case of the COVID-19 pandemic, the resulting lag in making sense of the crisis has had a profound impact. Public health authorities have warned of the risk of COVID-19 misinformation and disinformation, while the World Health Organization has gone so far as to call the problem an “infodemic.” And while the current moment has certainly lent itself to ample conspiracy theories, scams, and rumors spreading quickly over social media, the information problem we are facing right now is less one of moderation (identifying and removing content that is demonstrably false and/or harmful) and more one of mediation (identifying what information is credible, when, and how to communicate these changes).

Moderation versus mediation

Over the last several years, platforms like Facebook, Google, and Twitter have been widely criticized for allowing false and harmful information to spread uncontested across their services. When it came to information that could be construed as political, they have been especially reluctant to take a position, handing over responsibility for decisions about truth or falsehood to fact-checking organizations.

But in the case of COVID-19, the platforms have been far more aggressive. Google quickly prioritized information from public authorities, including the World Health Organization and the Centers for Disease Control and Prevention. Facebook responded similarly, launching a “Coronavirus Information Center” that prioritized updates from similar sources. And Twitter has used an existing tool, the blue check mark, as a way to identify expert and authoritative sources of health information.

Where once the platforms were reluctant to make decisions about content, they are now taking a much firmer stand on what users see. According to Facebook CEO Mark Zuckerberg, this is because distinguishing good information from bad is easier in a medical crisis than in politics, making it easier to set policies that are “more black and white” and to “set a much harder line.”

But this claim about the “black and white” nature of medical information isn’t really true. Again and again, we have seen that medical knowledge of COVID-19 is limited, changing, and uncertain. Scientists have warned the public about this, noting that facts and predictions about the virus will change, as will the advice we receive to limit its spread. This lack of clarity—referred to as “epistemic uncertainty,” the idea that what we hold to be true can change over time as new information emerges—has been broadly accepted by health officials and other experts. Moreover, with a virus that scientists and medical professionals are learning about in real time, social and economic factors can be just as important in shaping the course of the disease in particular communities.

This can mean that what was accepted as fact about the disease and who is at risk can vary over time and place. For instance, young people are affected (contrary to earlier beliefs) and masks do limit spread (contrary to earlier American public health guidelines). And as new symptoms have emerged, such as loss of smell and taste, many more people realize they may have been infected.

A shifting understanding of the virus and changes in guidance from public health authorities represent a major challenge for the platforms. Differentiating between experts and non-experts, officials and non-officials, is one way platforms can guard against the seemingly inevitable “context collapse” that occurs on places like Facebook, where information from the CDC might be displayed alongside a relative’s gossip about possible COVID-19 treatments. In some cases, as with YouTube, platforms are combining this elevation of “authoritative” information (mediation) with the removal of content that may contradict it (moderation). Information from vetted sources can be an effective tool against things like “fake cures” that may be harmful, but platform policies must be both flexible enough to adapt to inevitable changes in scientific consensus and more transparent about their logic, as users try to make sense of constantly shifting rules.

By identifying good sources of information and highlighting them, platforms can reduce the need to address bad information that quickly gains visibility and engagement in algorithmically determined spaces. That is helpful, particularly because, in crises, rumors can spread quickly over social media (though evidence suggests denials of these rumors spread even more quickly), even as good information is what’s desperately needed. This kind of identification and separation between sources can also ease problems of content moderation—the policies and processes platforms use to decide what content to remove or deprioritize.

In choosing to prioritize and highlight some information over other information, platforms are having to mediate long-standing disputes between expert and non-expert knowledge, and between official and unofficial sources. It is not easier to do this because medical information is more black and white—it’s clearly not. Rather, platforms are making these content decisions because, perhaps for the first time, they perceive doing so as their public responsibility. But these decisions become complicated when the expert community’s understanding of the disease shifts, and when information provided by independent agencies, like the CDC, may be influenced by political considerations.

These changes in how leaders and even health officials have perceived the risk of COVID-19 affect other sources of information and their completeness—namely, the data and the models used to understand the spread of the disease. And with the platforms relying on expert opinion and public health authorities to supply quality information to their users, platforms should be asking themselves whether they have an obligation to scrutinize that information. (That, of course, raises even more questions about the platforms’ roles as quasi-media entities, questions no Silicon Valley executive really wants to touch. For now, that work remains outsourced to third-party fact-checkers and media organizations receiving financial support from Facebook.)

By embracing their role as gatekeepers for quality health information, platforms have placed themselves in the position of mediating between information sources. This has, of course, been the case for some time—done through algorithms and content policies, through financial partnerships or relationships with governments, or through prioritizing their own goods and services at the expense of others. In this case, the rhetoric of “public responsibility” is a shift away from what we’ve come to expect. Despite its power to personalize content, Facebook still maintains that it leaves questions of “validity” up to the “public to decide.”

What we must ask now is whether we trust tech companies to play this role of reconciling the user-generated internet with hierarchies of knowledge production. And we must also ask what the consequences are for the global internet, and for local communities, if we ask platforms to play this part.

Robyn Caplan is a researcher at Data & Society and a PhD candidate at Rutgers University. 

Facebook, Twitter, and Google, the parent company of YouTube, provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
