
Silhouettes of mobile users are seen next to logos of social media apps Signal, Whatsapp and Telegram projected on a screen in this picture illustration taken March 28, 2018.  REUTERS/Dado Ruvic/Illustration

In recent years, propaganda campaigns utilizing disinformation and spread on encrypted messaging applications (EMAs) have contributed to rising levels of offline violence in a variety of countries worldwide: Brazil, India, Mexico, Myanmar, South Africa, Sri Lanka, the United States, and Venezuela. EMAs are quickly becoming the preferred medium for complex and covert propaganda campaigns in many countries around the world. Amid the COVID-19 pandemic, EMAs have become a key distribution channel for medical misinformation, hoaxes, and scams.

The rise of EMAs as a vector for propaganda—politically motivated, biased, or misleading communications—corresponds to their rapid rise in global popularity. Many of these applications are outstripping larger public social media platforms in user growth; WhatsApp and others long ago surpassed “traditional” social media networks such as Facebook, Instagram, Twitter, and LinkedIn in monthly added users. In contrast to public platforms like Facebook and Twitter, communication on EMAs is encrypted—concealed by code. Moreover, it most often occurs within groups, such as families and friends with close network connections, or interest-driven communities with moderately close network connections.

The public nature of correspondence on Facebook and Twitter has contributed to a large and growing body of research about how propaganda campaigns on those sites work. But because of the encrypted, private nature of communication on EMAs, we have an extremely limited understanding of the mechanics of propaganda campaigns on these applications, especially those focused on spreading misinformation (falsehoods spread unwittingly) and disinformation (falsehoods spread deliberately). It is worth noting that platforms including Facebook and Instagram have their own direct messaging functions, which are themselves private. These walled-off portions of more mainstream platforms are crucial to study because they provide alternate, and hidden, means of manipulative communication during political events. Beyond that, they serve tremendously large, global user bases.

Researchers have focused heavily on political manipulation campaigns on Facebook, Twitter, YouTube, and Reddit. And not without reason: These platforms were all used to spread disinformation during the last presidential election. But the people who make and launch computational propaganda campaigns are shifting their focus to other means of communication. The next phase of “information operations” is already being deployed on video platforms like TikTok, “peer to peer” texting apps, and—most elusively—over encrypted messaging applications like Telegram, WhatsApp, and Signal.

Propaganda goes private

For researchers and policy analysts, the shift to EMAs poses two important questions: Can we learn anything about how propaganda works on these platforms? And if so, what should we be studying?

The answer to the former is a resounding yes. The content shared on EMAs may not be as open as a Twitter profile or Facebook page, but that doesn’t mean it’s completely opaque. My team—the propaganda research group at the Center for Media Engagement at UT Austin—is actively working to remedy the informational deficit surrounding computational propaganda on EMAs by studying the people and groups who design, build, and launch coordinated political communication campaigns using these platforms. This actor group has in-depth knowledge of all stages of political campaigns on EMAs. To do research on them, we are using a combination of qualitative social scientific methods, including interview and open-source intelligence gathering, and quantitative data science methods, including network analysis. We are focusing on speaking to people in India, Mexico, and the United States.
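The network-analysis component mentioned above can be illustrated with a minimal sketch. The data and account names below are invented for illustration; in real EMA research, forwarding records would come from consenting group members or public tip-line submissions, never from breaking encryption. The idea is simply to tally which accounts forward content into many distinct groups, since high-reach "broadcast" accounts are a common signature of coordinated campaigns.

```python
# Illustrative sketch only: hypothetical forwarding log of
# (sender_account, destination_group) pairs.
forwards = [
    ("acct_A", "group_1"), ("acct_A", "group_2"), ("acct_A", "group_3"),
    ("acct_B", "group_1"),
    ("acct_C", "group_2"), ("acct_C", "group_3"),
]

# For each account, collect the distinct groups it forwards into.
reach = {}
for sender, group in forwards:
    reach.setdefault(sender, set()).add(group)

# Accounts reaching three or more groups look like coordination hubs.
hubs = sorted(acct for acct, groups in reach.items() if len(groups) >= 3)
print(hubs)  # ['acct_A']
```

A real analysis would use richer signals (message timing, identical content across groups, account creation patterns), but the out-degree measure sketched here is the basic building block.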

The second question is more complicated. While my group tends to focus on the supply side of digital propaganda, others study either content (quality and quantity of messages) or the demand-side (who is propaganda directed toward, who consumes it, and why it is effective). Our project is a corollary to ongoing “content” and “demand-side” work on EMA and political influence campaigns by groups at Harvard’s Shorenstein Center; UNC Chapel Hill’s Center for Information, Technology, and Public Life; and Columbia’s Tow Center.

For our project on the supply side of digital propaganda, we are focusing on the following question: Who makes and launches digital propaganda campaigns and why? Without a richer understanding of these groups and their intentions, we won’t have a full view into how digital propaganda works, much less be able to formulate effective policy responses.

A research agenda for the supply side of digital propaganda should have three goals:

  1. Assess the tactics and strategies of “encrypted propagandists.” How do they do what they do, and why do they believe it will succeed?
  2. Interrogate why encrypted propagandists target particular groups. Which groups are most susceptible, and why are social and issue-focused groups so often the targets of such campaigns?
  3. Generate methods for both studying and addressing the problems associated with EMA-driven propaganda. How can its more harmful effects be studied? How can they be mitigated? And how can those working for the purposes of democracy and human rights respond?

In our preliminary research, we have found that a number of different types of groups work to spread political messages and propaganda on EMAs in India, Mexico, and the United States. Some groups using EMAs sit firmly within the political establishment; others work on the fringes of politics. Digital political communication consulting firms, party communication teams, special interest groups, political extremists, and civil society groups all variously use EMAs as tools for spreading their content. Across all three countries, governmental entities—or government-supported and adjacent organizations—have begun to use a broad array of chat applications as part of their political messaging campaigns. In India, it appears that most of the so-called “IT Cells” (regional political communication groups) working on behalf of Prime Minister Narendra Modi’s Bharatiya Janata Party focus their attention primarily on WhatsApp. This platform, owned by Facebook, also plays a burgeoning role in propaganda and disinformation campaigns in Mexico. In the United States, private messaging features on Twitter and Instagram, as well as EMAs Facebook Messenger and Telegram, are increasingly important tools for all manner of political communication.

In the early days of my research into computational propaganda—the use of automation, algorithms, and social media to manipulate public opinion—my interview subjects consistently brought up the historical precursors to today’s disinformation campaigns. They pointed me to spam campaigns on Internet Relay Chat, conspiracy-laden email forwarding, and phishing scams on instant messaging platforms. Indeed, much of what we see online today, on both EMAs and other social media platforms, is a more amplified, more insidious version of what we’ve experienced since the web went public.

My interview subjects also told me that it’s important to always look out for emerging technology. Scammers, they argued, are always one step ahead of those working to stop them. I set out to study these technologies for my new book The Reality Game: How The Next Wave of Technology Will Break the Truth and found that several new digital tools, from EMAs to virtual reality to human-mimicking AI voice systems like Google Duplex, look to be at the forefront of novel digital manipulation campaigns.

While we still have a great deal of work to do in order to understand and combat propaganda and disinformation campaigns on sites like Facebook and YouTube, we must also begin to study the next wave of underhanded digital politicking. By addressing the rise of false and misleading political content on emergent technology systems, we can stop computational propaganda before it becomes the new normal.

Samuel Woolley is an assistant professor at the School of Journalism at the University of Texas, Austin. He is also the project director for propaganda research at the Center for Media Engagement at UT.