Are concerns about digital disinformation and elections overblown?
Commentary, August 7, 2024

Earlier this year, leaders from government, civil society, business, and international organizations met at the World Economic Forum in Davos, Switzerland, where they revealed results from a survey of 1,400 experts, policymakers, and private sector representatives that identified mis- and disinformation as the greatest short-term risk facing the world.
Part of the concern reflected in the survey results stems from three simultaneous developments: new tools for creating fake content, unprecedented opportunity, and an exceedingly motivated actor. Since the launch of ChatGPT in November 2022, generative AI has captured the public imagination and raised concerns about the deceptive potential of AI-generated images, video, text, and audio. In tandem, 2024 promised to be a marquee year for elections—with more than 60 countries and nearly half the world’s population heading to the polls to cast their votes. The final concern was that Russia—a prominent actor in the disinformation space—was particularly motivated to meddle this year. A sharp turn in the political trajectories of Western democracies backing Ukraine could be a boon for the Kremlin’s war goals.
An overhyped threat?
Despite these concerns, elections have come and gone around the world. While the information environment has been as crowded as ever, there is little evidence that AI-enabled foreign malign influence has had a meaningful effect on elections so far—notwithstanding sporadic examples and some coordinated efforts. In the European Parliament elections, where far-right populists made significant gains, no “major disinformation incidents” were identified. The U.K.’s elections, which were not particularly competitive, were also fairly quiet. In France, a snap election timeline may have made it more difficult for disinformation narratives to make much of a dent in an election ultimately lost by Putin’s preferred party, the National Rally. Where generative AI has played a role, it has either shown up in a small portion of content fact-checked as false or been used primarily by parties and candidates running for office. And while there were reports of Russian disinformation and some physical sabotage tied to the 2024 Paris Olympics, those have so far not disrupted the ongoing Games.
This raises the question: Are concerns about Russian efforts to sway elections throughout Europe, and around the world, overblown? The answer, as always, is complicated. While some of the discourse is a function of the hype cycle around AI more broadly, it is also possible that European government officials and civil society organizations have done an effective job of pushing back against disinformation. In addition, countries such as Russia are getting better at masking their role in fomenting discontent across the information environment.
Thus far, fears of generative AI-driven disinformation taking over the information environment have not materialized. Instead, AI-generated content has been used more frequently for spam and scams unrelated to political conversations. While we have seen generative AI deployed to sway voters, it makes up only a small portion of the contested information space. AI developers have documented the use of generative AI in influence operations for tasks such as creating fake user engagement, but these efforts have been sloppy and are unlikely to shift public opinion, which often depends on the information source. There have also been a number of high-profile and rapidly debunked cases involving prominent political figures and, most alarmingly, several highly targeted deepfake attacks on individuals, which have proven deeply damaging to reputations and are a cause for significant concern. The latter is where generative AI’s pernicious impacts are most likely to be felt.
Greater wariness of digital dangers
Beyond the limited uses of generative AI that we have seen, it is also likely that governments and civil society are better prepared to push back against and preemptively counter threats to the information environment this election year. Although many tech companies have, worryingly, scaled back content moderation and data-sharing efforts, the Digital Services Act in Europe aspires to build a degree of platform accountability by creating transparency requirements for “very large online platforms” such as Facebook, YouTube, and X. In Europe and elsewhere, fact-checking organizations have banded together to monitor the information space across multiple languages, collaborating to counter viral narratives and misleading claims. And thanks to widespread education efforts, as well as pervasive alarm and media coverage, citizens may be more aware of the potential threat and less likely to be deceived.
Another possibility is that where Russia can devote resources, it is doing so more strategically, with careful attention to masking its role. In an effort to reach new audiences, Russia-affiliated accounts have begun to spread across social media to platforms with less tested content moderation practices, such as TikTok. They also use content aggregators and fake domains to launder content more effectively. And according to the U.S. intelligence community, foreign actors such as Russia increasingly turn to commercial firms and domestic voices within the targeted country to spread their preferred narratives. This blurs the line between organic and inorganic political discourse and makes it more difficult to attribute influence activities to Russia directly.
Given these competing dynamics, it is unclear whether the relatively low prevalence of disinformation, and in particular AI-driven disinformation, in the 2024 election cycle is a function of an overhyped threat, societies hardening to the challenge, or more effective masking of efforts to stir the domestic discontents fueling polarization across democracies. It is likely a combination of these factors and many others.
Several researchers and commentators in this space have argued that it is critical not to overstate the disinformation challenge generally, because that feeds into the goals of hostile actors. The rise of anti-democratic subsets of the population cannot be attributed exclusively to disinformation. It is uncertain how persuasive disinformation campaigns are, and any threats to democratic processes are more likely to come from a domestic actor’s unwillingness to accept results, with or without Russian interference.
Yet Russian disinformation is also at its most effective when it feeds on and exacerbates existing fissures in political discourse, as evidenced recently by far-right riots in the U.K., where state actor involvement is under investigation. And partisan conflict is high in many democracies around the world.
Russia eyes the prize of the U.S. election
Despite the limited effectiveness of malign influence activities in elections so far, the U.S. election later this year represents the biggest prize for Russia as it seeks to shift the West’s policies away from support for Ukraine. Former U.S. President Donald Trump’s team has hinted at a “radical reorientation” of U.S. support for NATO, and Trump himself has encouraged Russia to “do whatever the hell [it] wants” to NATO member countries that don’t spend enough on defense. Trump’s vice-presidential candidate, J.D. Vance, has said he does not care what happens to Ukraine, and he has actively spread disinformation about a purported yacht purchase by Ukrainian President Volodymyr Zelenskyy’s minister, a lie that played a critical role in delaying U.S. assistance to the war-torn country earlier this year.
In a highly competitive race for the U.S. presidency, Russia is undoubtedly mobilized. Last month, for example, the U.S. Justice Department identified and disrupted an AI-enabled Russian bot farm of nearly 1,000 accounts on X. While this was a promising and proactive move, it is likely that this network is just the tip of the online iceberg. Even though the digital disinformation threat has yet to fully materialize, it remains critical not to weaken the collective defenses built up over time. Affective polarization—the phenomenon of people disliking and distrusting members of an opposing political party—remains exceedingly high, and the year of elections is still far from over.