Amid a 2018 civil-rights audit of the company, Facebook came under pressure to consider a novel set of questions about its role in politics: What does voter suppression look like on social media? And, in the absence of U.S. legislation on the subject, should the company set the rules to ensure that voter suppression does not occur, in any form and at any level, in the digital world?
Voter suppression has been defined traditionally as efforts to discourage or prevent certain groups of people from voting. But in the digital world, it is much more complicated. On social media, the most obvious forms include posting false information about dates, locations, and voting procedures, and those are relatively easy to combat with the proper mix of machine learning and human review.
But what about the less clear-cut cases of voter suppression? Relentless messaging about voter fraud, mail-in ballots, and election rigging is meant to undermine trust in the electoral process or confuse voters about registration and voting procedures. Allegations of potential voter fraud and spurious claims that the elections will be rigged, which President Donald Trump has repeatedly made, threaten to confuse voters and discourage them from casting their ballots.
In addition to organic content, Facebook’s targeting tools offer political advertisers the means to target vulnerable populations with messages that the broader public won’t see and potentially dissuade that population from voting. These tools have become even more concerning since Facebook instituted its infamous policy stating that politicians would not be subject to Facebook fact-checking rules and their ads would be allowed to run even if they contained falsehoods.
Currently, the United States lacks laws governing social media’s responsibility to protect our elections. The recent debacle over Trump’s false claims in a series of tweets about the use of mail-in ballots leading to widespread voter fraud laid bare how far behind the U.S. is in updating elections rules to fit the current information ecosystem. While Twitter chose to enforce its civic integrity policy and add a label encouraging users to “get the facts about mail-in ballots,” Facebook chose not to enforce its voter suppression policy. CEO Mark Zuckerberg doubled down on his stance that the company will not fact check political candidates, making it clear that, despite the company’s many public promises to combat voter suppression, it will not do so if that involves enforcing its own policies against Trump.
As it stands, there is nothing compelling Facebook, or any internet company, to protect the bedrock of our democratic process: free and fair elections. We have ceded the decision-making about what rules to write and what to enforce to CEOs of for-profit internet companies. With five months to go until a presidential election that promises to be a major test of American democratic institutions, American laws desperately need updating to address digital forms of voter suppression and the movement of political debate and campaigning online.
Facebook’s empty promises to combat voter suppression
Understanding the necessity of reform to address digital voter suppression requires understanding the recent history of Facebook, which has been grappling with how to address voter suppression on the platform since at least 2018, when a civil rights audit forced it to more closely examine its role in politics.
During that audit, I was serving as the head of “Global Elections Integrity Ops” on the advertising side of the business. With the company gearing up for that year’s midterm elections, my team began drafting a strategy to ensure that political ads on the platform would not result in voter suppression.
Since the company was not scanning ads for misinformation about voting at the time, my team prioritized the issue. We worked across multiple offices and proposed a coordinated plan to scan political ads for false information about voting procedures. The proposal launched a fierce debate and was ultimately rejected, for what seemed at the time to be typical tech company operational priorities.
Opponents wanted to wait until they had more data on the prevalence of voter suppression on the platform and whether solutions to combat the problem would scale globally. Since every election has a different set of challenges and local realities, waiting for a scalable solution meant the company would never act at all. And with the midterms fast approaching, I emphasized that even one ad—that Facebook profits from—providing misinformation about voting could be extremely damaging, both to our elections and to the company.
Having already bumped up against management by exploring ways to combat misinformation in political ads, the debate left me further convinced I would not be empowered to perform my job effectively. I left the company in November 2018.
A year later, and four months after the civil rights audit went public, Vanita Gupta, the president and CEO of the Leadership Conference on Civil and Human Rights, issued the following statement: “For more than a year, we have worked in good faith with Facebook to develop robust policies to combat voter suppression. But Facebook’s policy exempting politicians’ content from the company’s Community Standards and its fact-checking program undermines all of that progress and will do irreparable damage to our democracy. While we can all agree that free expression is core to our democracy, fair elections must be as well.”
Four days later, Facebook announced new policies to fight voter suppression and said it would remove content misrepresenting details about how to vote, who can vote and what materials are required, as well as violent threats related to voting, “regardless of who it’s coming from.”
Yet when Trump stated that widespread use of mail-in ballots could result in voter fraud—a claim that may dissuade some from voting—Facebook chose not to enforce these very rules.
Speaking at a company town hall, Zuckerberg said Trump’s criticism of the governor of California’s efforts to distribute mail-in ballots amounted to “political debate” and “was not likely to encourage anyone basically to not register or not vote.” In effect, the company’s earlier decision to exempt politicians from its third-party fact-checking program trumped the voter suppression policy.
How Facebook writes and enforces policies to protect elections has a profound impact on democracy. As we have seen with Trump’s threats to strip Twitter of its liability protections in response to flagging his tweets, enforcing a broad voter suppression policy creates difficult decisions for any company, especially when the president himself has no qualms about spreading misinformation about voting. But if political considerations factor into Facebook’s decision-making about how to enforce its own policies, then the policy itself carries no weight.
So what should be done?
There is no black and white answer on how to handle content that intentionally sows distrust in the electoral process but that does not engage in clear, undeniable voter suppression. The business of what content to fact check, what to flag, what to downrank or remove is extremely complicated. And while Zuckerberg is right when he argues that Facebook should not be the arbiter of truth, the company should not tilt the scales by amplifying misinformation and providing political operatives with tools to microtarget voters with their most divisive content.
Revisiting the rules that govern online speech, including Section 230 of the Communications Decency Act, which exempts platforms from liability for the material posted by users on their platforms, is an absolute necessity and deserves a much more honest and nuanced debate. But Trump’s threats to punish Twitter by removing its Section 230 immunity are an absurd abuse of power and a blatant misrepresentation and weaponization of the First Amendment. And unfortunately, his threats make common-sense reform of Section 230 to fit our 2020 information environment less likely.
Protecting our democracy requires the government to step in and at least write the most basic rules, instead of leaving the platforms to govern themselves. While it will be impossible for Congress to pass legislation ahead of November to govern how platforms should address organic political speech—which many argue is more dangerous than online political advertising—this does not mean nothing should be done.
Facebook, meanwhile, is only making changes around the margins, announcing recently that users could opt out of seeing political ads. After this article was published, Zuckerberg announced on June 26 updated voter suppression and hate speech policies that came in response to growing pressure from advertisers, the civil-rights community, employees, and activists. Despite some good steps—most of which were recommended by both employees and the civil-rights community back in 2018—there was no mention of revising targeting rules for political ads, and it remains unclear whether Facebook will fact check or enforce its content-removal policies against posts and ads from political candidates that aim to sow distrust in the electoral process.
Several ideas for rules that government could enact to provide the necessary transparency to help ensure that voter suppression does not run unchecked online include:
1. Updating campaign finance laws to include digital advertising. In addition to setting enforceable rules for companies like Facebook and Google to follow, this would allow the Federal Election Commission to fulfill its mandate of tracking money in political advertising. One lingering piece of legislation that addresses some of this is the bipartisan Honest Ads Act, which seeks to ensure “that political ads sold online are covered by the same rules as ads sold on TV, radio, and satellite.”
2. Limiting the ability of advertisers to target users based on criteria—which the civil rights community could help define—that would allow campaigns to broadcast divisive ads about voting to those who are most receptive while reducing public scrutiny of those ads. Just as the U.S. government sued Facebook in 2019 for violating the Fair Housing Act by allowing housing ads to target users by race and gender, the U.S. government should ensure that the platforms do not allow political ads that violate the tenets of the Voting Rights Act.
3. Demanding transparency about how ads are targeted that goes beyond the basic details contained in the Facebook ads library, such as whether custom audiences and “look-alike tools” are used, whether the company algorithms amplify the ads, and whether the advertiser listed in the “paid for by” disclaimer is verified and matches the actual name of the authorized advertiser.
While these tools may help prevent voter suppression online, with less than five months until the election, I have no hope that the necessary laws or fiduciary requirements will be in place by the time Americans vote to prevent the major platforms from being used as vehicles of voter suppression. The only tools we, the public, have left are to educate users about what voter suppression can look like online, to expose when platforms do not apply their policies evenly, and to inspire more people—including advertisers, employees, lawmakers, and investors—to demand accountability.
A politician abusing social media platforms to intentionally spread false information about voting procedures and sow distrust in our electoral system is, in my view, one of the biggest threats to our November election. Facebook has gone a long way to solve for the threat of 2016—Russian manipulation of the platform to sway voters—but it has not solved for the threat of 2020.
Protecting the integrity of elections helps ensure democracy survives. I believe that if Facebook continues to allow blatant lies about our elections system to spread unabated, it will be on the wrong side of history. And when, in 2021, Facebook executives roll out the same excuses that “nobody could have seen this coming,” remember this moment.
This article has been updated to reflect voter suppression policies announced by Facebook on June 26.
Yaël Eisenstat is a visiting fellow at Cornell Tech’s Digital Life Initiative, a former elections integrity head at Facebook, and a former intelligence officer and White House advisor.
Facebook, Twitter, and Google provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.