It is a time of great danger for American political campaigns and their staff as advances in digital technology have created extraordinary threats. There are deepfake videos that falsify information and distort reality, false news creation and dissemination, robotic tweets and social media posts that spread inaccurate narratives, and systematic disinformation campaigns from foreign and domestic adversaries. Left unchecked, these tactics can disrupt campaigns, encourage extremism, sow discord, and undermine democratic discourse, but there are steps campaigners can take to protect themselves.
Evidence of Russian interference in the 2016 elections shows how easy it is to manipulate social media platforms, push divisive content, and sway the overall electorate. Russian tactics were relatively simple: create content and then use bot networks (a series of accounts controlled by computers) and trolls (individuals operating accounts to achieve destructive goals) to deliver deceptive, misleading, and inaccurate content to as many voters as possible. When successful, voters accepted content as fact, the media reported on it, or both. It is now apparent that any foreign or domestic actor can employ the same tactics to abuse social media platforms for personal, political, or financial gain.
As digital director for the 2018 Angus King for U.S. Senate campaign, I helped prepare for a well-funded attack run by adversaries with experience interfering in previous elections. In our campaign, we took stock of what we knew, obtained training in areas where we had knowledge gaps, and created a strategy to protect the candidate. In this article, I distill what we learned about digital and social media threats and describe the methods we used to mitigate the risks of digital attacks. There is no “one-size-fits-all” solution, but our experience suggests there are a series of effective steps that campaigners, policymakers, technology companies, citizens, and the media can take to combat disinformation without sacrificing freedom of expression or civil liberties.
Digital and social media threats
In January 2018, when candidates began petitioning to get on the ballot, we evaluated our strengths and weaknesses against the digital attacks that occurred in the 2016 election. The formats of disinformation are familiar to social media users, but those who propagate them do so with malicious intent. When crafting our digital strategy, we knew to be on alert for the following:
- Memes, images with embedded text, often use humor or evoke an emotion. They are frequently shared among social media users and thus rewarded within Facebook’s algorithm and seen by more users.
- Deepfakes are videos altered by artificial intelligence tools to either misrepresent an event that occurred or manufacture an event that never occurred.
- Altered videos use traditional editing tools to misrepresent an event that occurred. Like a deepfake, they seek to alter the facts that voters use when deciding whom to support.
- False news pages and articles are created and run for either political or financial gain. They can be from foreign or domestic sources and typically post divisive or sensationalist content to sway social media users to believe their content and vote accordingly. They also increase website traffic, thereby increasing revenues from ad sales.
- False information spread from individual accounts seeks to undermine the campaign’s chances of success.
In July 2018, the King campaign hired digital consultants to further expand our knowledge as we prepared for potential malign activity. They taught us how information flows across platforms and how to recognize two specific types of tools used by an adversary, giving us what we needed to protect ourselves from a digital attack:
- A bot, short for robot, is a computer script that runs a social media account automatically. Relatively easy to spot on Twitter, bots retweet, share, or send content to increase its reach and boost a topic into the trending category. Bots often have high levels of activity (e.g., hundreds of tweets or posts per day since an account’s creation) and unhuman characteristics. They will typically share divisive content or content focused on a single topic, but rarely or never post original content (e.g., trip photos, recipes, personal updates); a rough heuristic for triaging suspected bots is sketched below.
- A troll is an account that undermines a candidate’s message through falsehoods or by introducing unrelated topics to an online conversation. Trolls are typically run by an individual, who can operate multiple inauthentic accounts at a time. Trolls can seek to manipulate social media algorithms, which determine what users see: by artificially increasing engagement on content, they boost the number of people who see it without paying to advertise.
These account types are fluid. An individual can log into a bot account and post original content to mask its nature, or reprogram it to share different content, and a troll account can be converted into a bot. A single bot or troll account acting alone will have a negligible impact on the social media algorithms. However, when multiple accounts work in coordination, they can disperse a false narrative or destructive message far and wide across social media platforms.
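The characteristics above lend themselves to simple triage heuristics. The sketch below is a minimal, illustrative scorer rather than a real detector; the field names and thresholds are assumptions, and a genuine determination requires far more signal and, ultimately, the platforms themselves.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AccountSnapshot:
    # Hypothetical fields pulled from a platform export or API; names are illustrative.
    created_at: datetime         # timezone-aware account creation time
    total_posts: int
    retweets_or_shares: int
    original_posts: int          # trip photos, recipes, personal updates, etc.
    distinct_topics: int         # rough count of subjects the account posts about


def bot_likelihood(acct: AccountSnapshot) -> float:
    """Return a rough 0-1 score based on the heuristics described in the text.

    High sustained volume, little or no original content, and a narrow topic
    focus each push the score up. Thresholds are assumptions, not tuned values.
    """
    days_active = max((datetime.now(timezone.utc) - acct.created_at).days, 1)
    posts_per_day = acct.total_posts / days_active
    share_ratio = acct.retweets_or_shares / max(acct.total_posts, 1)
    original_ratio = acct.original_posts / max(acct.total_posts, 1)

    score = 0.0
    if posts_per_day > 100:        # "hundreds of tweets or posts per day"
        score += 0.4
    if share_ratio > 0.9:          # almost everything is amplification
        score += 0.3
    if original_ratio < 0.05:      # rarely or never posts original content
        score += 0.2
    if acct.distinct_topics <= 2:  # single-topic focus
        score += 0.1
    return min(score, 1.0)
```

A score like this is only a starting point for deciding which accounts to document and flag for the platforms to investigate.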
How campaigns and campaign staff can protect themselves
As an independent campaign, our perspective was unusual for an American campaign. We did not coordinate with political parties, and we had to find cost-effective ways to protect our candidate, campaign, and team from cyber intrusions. From 2017 to 2018, the campaign spent $4.3 million and by election day had a team of 18 full-time staff and consultants. The digital team consisted of one, later two, full-time staff and, at its maximum, eight interns focused on disinformation and four interns focused on traditional content creation and implementation. Consulting services for training and on-call subject matter experts cost slightly over $25,000, a considerable amount but a small portion of a Senate campaign’s overall budget.
We saw that the newer challenge of disinformation can come from the left and the right, and the amount of information on the internet can make it difficult to separate noise from threats. To focus our efforts, we took an impact-based approach and considered whether or not voters believed disinformation instead of attempting to figure out its origin. Campaigns can reduce their risk by focusing on the following priorities:
Protect infrastructure
Multimillion-dollar campaigns contain highly sensitive and valuable information. They rely heavily on thousands of volunteers, who are largely unvetted, and the organization is always in the startup phase. Vulnerabilities are seemingly inherent, but they can be addressed and mitigated with low- to no-cost measures. Assume that the candidate, the campaign, and staff are targets in their professional and personal capacities. Harvard’s Belfer Center provides a guide for how past campaigns approached cybersecurity, which serves as a roadmap for campaigns going forward. Additionally, the following steps can be taken:
1. Create a honeypot, a way to counteract hack-and-leak attacks. Following the example of French President Emmanuel Macron’s campaign, in the fall we revived disabled intern email accounts and removed their two-factor authentication (2FA). To these accounts, we sent false polling information, fabricated office gossip, pleas for assistance with fictitious urgent tasks, and altered strategies. Should an adversary gain access and attempt to use the information against you or leak emails to the press, this creates confusion about which emails are real and which were intentionally planted. Start this exercise early for a more robust defense.
2. Film the candidate at any public speaking engagements that would not otherwise have a record, in order to guard against a deepfake or altered video. That way, the campaign has a record of the event and can turn over raw footage to the public to expose such practices, if needed. We learned this lesson after an altered video was released and moved through a series of authentic and inauthentic accounts; voters believed the video, and the press reported on it. A more famous example is the altered video of CNN’s Jim Acosta.
3. Replicate a classified environment, to the extent possible. Compartmentalize access to information by team, and create a need-to-know culture to limit risk, so that in the event one staffer’s email is compromised, the intruder cannot access all data. Consider also rewarding staff for raising cybersecurity concerns in real time to allow for timely investigation.
Develop a proactive outreach strategy
If a well-funded, experienced group launches a disinformation effort, a campaign’s advantage is its supporters: no matter how much social media data may or may not have been stolen, campaign staff will likely have an edge because of their deep understanding of who their likely voters are and the challenges they face.
In January 2018, the campaign developed a proactive outreach strategy based on storytelling that limited the susceptibility of our messaging to being co-opted in a disinformation campaign. We highlighted supporters who wanted to share their story about why they were supporting the senator on election day. In a state of 1.3 million people, if an imposter account surfaced to spread false information or organize an event, we would know. This strategy also identified supporters who might be comfortable pushing back on falsehoods online, should we need to activate our network.
Develop a defensive strategy
There are several steps that can be taken to seek out disinformation at its origin:
1. Create social media clips to understand the conversation taking place on social media that is pertinent to your race. We created a daily report within the digital team that highlighted mentions of candidates, issues in the news cycle, and traditionally divisive issues (e.g., immigration, guns); analyzed each post’s reach; and determined whether it was authored or amplified by likely inauthentic accounts.
2. Get to know how the social media algorithms work and understand how they could be manipulated against you. If a post gets significantly more engagement than normal in a coordinated manner (e.g., 40 out-of-state shares within one second of posting), or something otherwise looks out of the norm, it is worth noting and flagging for social media companies to investigate (a sketch of this kind of flagging appears below).
3. Pay attention to Facebook pages. Often, disinformation will take a more subtle and nuanced approach to influencing conversations around an election, without actually mentioning a candidate. Facebook currently has a transparency feature, accessible only from a desktop, that shows the location of page administrators. Facebook’s current policies do not consider foreign pages posting political content as grounds for removal unless the pages are paying to advertise. However, awareness can still help campaigns understand online conversations. The pages we tracked have since been removed, but they illustrated that this kind of content can come from both the left and the right. Document such pages and flag them to the platforms as you come across them.
4. Integrate operations with the field team, who are on the ground and always the first to hear from supporters, so they can flag any rumors. We adopted an operating procedure that required any falsehoods to be immediately messaged via Signal, an encrypted messaging platform, to the digital director for further investigation. Tips ranged from conspiracy theories to misremembered facts, but a “see something, say something” culture gives the campaign the advantage of time.
5. Have a plan for what you will do once you discover disinformation. The campaign decided, based on our race, to weigh two factors in determining a response: the likelihood that the information either originated from or was disseminated by an inauthentic source, and the likelihood that it would reach real voters and cost us votes. We developed and used a rubric built on these two factors to guide our response to social media threats and limit the likelihood of an unforced error (an illustrative sketch follows this list).
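The campaign’s actual rubric is not reproduced here; what follows is a minimal sketch of how the two factors described above could be encoded, with response categories that are illustrative assumptions rather than a reproduction of the campaign’s own choices.

```python
from enum import Enum


class Likelihood(Enum):
    LOW = 0
    HIGH = 1


def recommended_response(inauthentic_source: Likelihood,
                         reaches_real_voters: Likelihood) -> str:
    """Map the two factors described above to a first-cut response.

    Illustrative only: the actual rubric and response options used by the
    campaign are not reproduced here.
    """
    if reaches_real_voters is Likelihood.LOW:
        # Low reach: drawing attention could amplify the story, so monitor only.
        return "monitor; do not engage publicly"
    if inauthentic_source is Likelihood.HIGH:
        # High reach and likely inauthentic: document and report to the platforms.
        return "document, report the accounts to the platform, prepare corrective content"
    # High reach and likely authentic: correct the record with real voters.
    return "respond with accurate information through supporters and direct voter contact"
```

In a rubric like this, low-reach content maps to the monitor-only option, which is the counterintuitive choice discussed next.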
It may seem counterintuitive, but sometimes it is best to ignore disinformation. If a false news story is written but no one reads it, drawing attention to the story could only increase its reach.
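Returning to step 2 above, the sketch below shows one way a digital team might flag the kind of coordinated burst described there, working from a hypothetical export of share events. The data format, field names, and threshold are assumptions; a real workflow would pair this with manual review before anything is reported to a platform.

```python
from collections import Counter, defaultdict
from typing import Iterable, NamedTuple


class ShareEvent(NamedTuple):
    # Hypothetical export format; real platform data will differ.
    post_id: str
    shared_at_epoch: int   # share time, truncated to the second
    sharer_state: str      # self-reported or inferred location


def flag_coordinated_bursts(events: Iterable[ShareEvent],
                            home_state: str = "ME",
                            burst_threshold: int = 40) -> list[str]:
    """Return post IDs whose shares look coordinated rather than organic.

    A post is flagged when a large number of out-of-state shares land in the
    same one-second window -- the "40 out-of-state shares within one second"
    pattern described in the text.
    """
    per_second = defaultdict(Counter)   # post_id -> {second: out-of-state share count}
    for ev in events:
        if ev.sharer_state != home_state:
            per_second[ev.post_id][ev.shared_at_epoch] += 1

    flagged = []
    for post_id, seconds in per_second.items():
        _, peak = seconds.most_common(1)[0]   # busiest one-second window
        if peak >= burst_threshold:
            flagged.append(post_id)
    return flagged
```

Anything flagged this way is still only a lead: the platforms, not the campaign, make the final call on authenticity.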
Work with social media platforms
Social media platforms control the algorithms, store the data, and set the terms under which others access their platforms. We found that our points of contact at the platforms also wanted to stop them from being manipulated.
1. Develop a line of communication with social media platforms as soon as your digital efforts begin. We kicked off our digital program in earnest in June 2018, but it was not until September, after an altered video circulated via inauthentic and authentic accounts and was subsequently published by journalists, that we developed meaningful relationships with the social media platforms. We flagged accounts we thought could be acting in coordination and violating community standards, or being operated by a foreign country for malicious purposes.
2. Our relationships with the platforms, especially with Facebook, were honest. With the amount of information available to a campaign, there is always a level of uncertainty about an account’s origin and whether or not it is inauthentic. We erred on the side of caution when flagging accounts, and we did not always get it right, but many of the accounts we flagged were at a minimum “checkpointed” for operating in a gray area of the community standards, and others were removed.
3. Take advantage of enhanced security, when available. Facebook piloted campaign security features for the 2018 cycle that gave campaign staff additional protection. An unintended benefit of this program was that, in order to enroll, users must have two-factor authentication enabled, which helped us identify those not in compliance with our internal policy that 2FA be active on all accounts.
Recommendations for future elections
In the aftermath of the 2018 elections, it has become increasingly clear that the 2016 tactics have been adopted and implemented by other foreign and domestic actors. On this trajectory, the prevalence of disinformation will continue to grow unless multiple sectors start to act. In the lead-up to the 2020 campaigns, there are a number of actions that would help protect the integrity of American elections.
U.S. policymakers
- Designate a government office to curtail foreign influence efforts that seek to manipulate the information voters receive before they cast their ballots, and hold the office accountable to the public. The Department of Homeland Security, which is charged with protecting elections as part of our national critical infrastructure, is the logical choice for this mandate.
- Fund public education for social media literacy. We learn in public schools how to read for bias in formal media, but with an increasingly high percentage of Americans getting their news on social media, we need to learn how to spot biases there too. Empower voters with the information necessary to cast their votes by funding education.
- Clarify who owns user data. Consider following the European Union’s example of putting data back under the control of the user. With ambiguous guidelines, ownership defaults to the tech companies in a space where users have few, if any, alternatives.
Technology companies
Some transparency features are already available, predominantly on Facebook; the Political Ad Archive is a start. More should be done across platforms, and quickly, to give consumers the information they need to decide whether or not to trust a source.
Increase available information:
- Show the country from which page administrators most commonly access their accounts, and make that information part of the basic user experience by integrating country of origin into the post itself. Aggregating historical data to the country level would both increase accuracy and protect civil liberties;
- Make a version of algorithmic review tools available to the public, within reason. Users should be able to understand why they are seeing the content they see, regardless of whether a post is an advertisement, and whether artificial engagement likely caused them to view it.
Increase authenticity:
- Prevent page administrators from using a virtual private network when accessing their pages, so that they cannot misrepresent their country of origin;
- Require that 2FA be used on all accounts, and limit the number of accounts that can use the same phone number. This simple additional security measure would make it significantly harder to start a bot network and would provide an additional method of detection;
- Work toward verifying all page administrators. Currently, everyone who is authorized to create political advertisements has had their identity verified by Facebook. Require that all page administrators do the same in order to start mitigating the risk of artificial conversations.
Review community standards:
- Prevent abuse of the platform by adjusting the terms of engagement. Social media manipulation is not limited to foreign actors, and anyone with access to a bot network or trolls can game the algorithm through coordination.
- Close loopholes that allow foreign accounts or pages that are not paying to advertise to contribute to public discourse online. Currently, only accounts that are in violation of community standards and foreign accounts paying to advertise political content are subject to removal from social media. However, organic engagement on posts garners foreign actors significant reach. It also allows disinformation that starts with an American citizen to be amplified by nefarious actors.
General public
- Read social media critically and for bias. Every post on social media has an objective, whether it is to share personal news, obtain a “like” on a picture of a life milestone, get people to attend an event, or get them to vote on election day. Understand the motive behind the posts you read;
- Learn how the platforms work and how information is tailor-made to each user. The social media platforms determine the content each user sees based upon a complex algorithm that factors in which users, pages, and content the user engages with. Understand how this works for the platforms you use (a toy illustration follows this list);
- Use social media responsibly. Every time a user shares, likes, or comments on content, it increases the likelihood that others will see it. If a user shares a falsehood, be it a meme, a video, or an article, he or she is contributing to the information flow of false news. In this vein, users can help stop the spread of disinformation by reading critically and not engaging with knowingly false information.
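Platform ranking systems are proprietary and far more sophisticated than anything shown here, but the toy model below, an assumption-laden illustration only, shows why engagement-weighted ranking lets coordinated likes and shares push content into more feeds.

```python
from typing import NamedTuple


class Post(NamedTuple):
    post_id: str
    likes: int
    shares: int
    comments: int
    author_affinity: float   # 0-1: how often this user engages with the author


def toy_rank_score(post: Post) -> float:
    """Toy engagement-weighted score; real platform algorithms are far more
    complex and are not public. Shares weigh most because they re-broadcast content."""
    engagement = post.likes + 3 * post.shares + 2 * post.comments
    return engagement * (0.5 + post.author_affinity)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher score means the post is shown earlier in the feed.
    return sorted(posts, key=toy_rank_score, reverse=True)
```

In a model like this, a few hundred coordinated shares from inauthentic accounts raise a post’s score just as effectively as genuine interest, which is why declining to engage with known falsehoods helps limit their spread.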
News media
- Learn how to spot inauthentic behavior and consider how it could influence reporting. If a journalist files an article that draws, consciously or unconsciously, on something online that was pushed to him or her through bot networks, the journalist could be giving credence to a topic that does not deserve it. This kind of reporting can and should be avoided;
- Do not equate followers, likes, or shares with peer review. Followers can be bought. Social media strategists can attempt a follow-back strategy, in which they follow an account in the hope that it will follow them back, thereby increasing their followers and reach. Bots and inauthentic accounts can distort these metrics, and learning to spot them allows a journalist to analyze accounts accordingly.
Action on these recommendations can reduce the impact of disinformation and social media manipulation in elections moving forward. It is vital that we take these steps in order to safeguard our upcoming elections.