
Commentary

Disinformation is evolving to move under the radar

The word "plandemic" is painted on a wall in Barcelona, Spain, in protest against COVID-19 restrictions imposed by the Spanish government.

For disinformation, 2020 was a pivotal year. The novel coronavirus spread around the world accompanied by viral medical falsehoods. Political leaders became bolder in their use of disinformation to maintain power and sow discord. Online conspiracy movements grew rapidly, gaining untold adherents. And online platforms made unprecedented, though inconsistent, moves to moderate content on their services.

These developments have required groups that peddle disinformation to evolve. Malicious and sophisticated actors, confronted by this unprecedented response from platforms, have tried to find alternative strategies to spread disinformation, increasingly moving under the radar to evade detection by researchers, journalists, and fact-checkers. Meanwhile, online infrastructure continues to enable both widespread disinformation campaigns and the proliferation of misinformation.

Going under the radar

With platforms investing more resources in detecting information operations, sophisticated purveyors of disinformation are going to greater lengths to obscure their identities and the origin of their propaganda. In June 2020, the EU DisinfoLab exposed the link between Observateur Continental, a French-language website producing disinformation, and InfoRos, a Moscow-based company tied to Russian military intelligence (the GRU). Observateur Continental relied on content laundering—the use of proxies to endorse disinformation. Through partnerships with conspiracy-oriented websites on the right-wing fringe, Observateur Continental disseminated its material to specific audiences and attempted to obscure the Russian origin of the content. The PeaceData operation, exposed in September 2020 and linked to Russia's infamous Internet Research Agency, followed a similar pattern, hiring independent journalists in the United States to write stories around the U.S. election while obscuring the identity of those behind the operation.

These obfuscating strategies have proven successful and are growing in popularity. A 15-year-long disinformation operation exposed by the EU DisinfoLab in December relied on the fake media website EU Chronicle to discredit India's adversaries in South Asia and succeeded in rallying international policymakers to its campaign. These campaigns reveal the vulnerabilities of democratic systems that increasingly embrace transparency as an antidote to disinformation but are reluctant to take the tough decisions necessary to hold actors accountable by, for example, introducing sanctions. Both operations (Observateur Continental and Indian Chronicles) remain active today, even after their exposure.

Indeed, the Indian Chronicles and Observateur Continental operations share several similarities in design and outcome:

  • Both efforts lacked a sophisticated online presence;
  • Neither turned to “active” methods, such as online advertisement or amplification through the use of troll farms;
  • Both had an active offline partnership strategy contingent on finding allies in the real world: namely, partnering with established websites and soliciting content from external contributors, such as freelancers or policymakers, who could publish on their own;
  • Both used content laundering to broadcast their messages to targeted communities;
  • Both stayed under the radar and were only exposed after long investigations.

Uncovering the broader disinformation ecosystem

Online platforms have stepped up their efforts to limit the spread of disinformation, providing context to misinformation and developing new content moderation policies, including bans on prominent figures. But this whack-a-mole strategy has been insufficient to meet the challenge of widespread misinformation. Confronted with resilient malicious actors and recurring narratives, journalists, fact-checkers, and investigators have repeatedly debunked false content, only for the same actors to return again and again. Those fighting disinformation have felt like Sisyphus, pushing the same rock up the mountain every day.

Our assessment of who is accountable for the distribution of disinformation must be expanded. The year 2020 showed that the distribution of disinformation stems not only from a list of online assets readily available on large platforms, such as Facebook pages, Twitter accounts, and YouTube channels, but also relies on a combination of active measures and a passive ecosystem. Active measures include the creation, publication, and paid and/or coordinated amplification of disinformation, while the passive ecosystem includes the mechanisms that allow this content to be hosted and spread, and sometimes its ownership to be hidden, such as DNS infrastructure, adtech, and algorithmic recommendation. This passive ecosystem enables cross-platform strategies for spreading disinformation: hosting content on a less moderated platform, amplifying it on another, and monetizing it through crowdfunding.

It is this passive ecosystem that enables under-the-radar campaigns to flourish. If we want to tackle disinformation in the long term, our analysis needs to better account for the passive ecosystem that contributes to the distribution of disinformation. Such research would allow us to better describe the impact of malicious actors on our societies. It would also help us identify how to hold bad actors accountable and improve regulation efforts.

The increasing sophistication of disinformation actors requires those investigating disinformation to see beyond the usual scapegoats—most commonly Russia, troll farms, and Twitter bots. We also need to stop basing disinformation reporting on arbitrary and unverifiable performance indicators, such as the number of fake accounts in a campaign or the number of countries affected. Applying greater rigor to disinformation studies will only grow in importance as actors in the field evolve. In response to deplatforming and improved content moderation, disinformation actors are joining less moderated online spaces, such as Gab, Telegram, or Parler. Where they once monetized their work on YouTube, these actors are now leaving the streaming platform for crowdfunding services (Tipeee, Patreon, GoFundMe, etc.). As a direct consequence, these new infrastructures cannot be left out of the broader conversation about the online distribution of disinformation.

After years of a hands-off approach to moderating online content, profit-oriented platforms need to be held accountable for their role in actively supporting the spread of disinformation and harmful content. This is also why it is time to open the black box of the online distribution of information: the business models of algorithms, domain name registration, syndicated content, and programmatic advertising systems. In order to do so, civil society groups investigating disinformation need better access to data—access that social media companies have so far denied.

There is no accountability without transparency. Stemming the spread of online disinformation requires defining sanctions both for malicious actors and for the online architecture that allows disinformation to spread. Otherwise, the systemic causes of this issue will remain with us, and disinformation will continue to spread under the radar and work against democratic societies.

Alexandre Alaphilippe is the executive director of the EU DisinfoLab, an NGO focused on researching and tackling sophisticated disinformation campaigns.

Facebook, Google, and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. These organizations did not provide funding to support the research described here.  
