Social media companies need better emergency protocols

January 12, 2021

Online vitriol, especially in the hands of widely followed, influential, and well-resourced politicians and governments, can have serious — and even deadly — consequences. On January 6, 2021, President Trump tweeted false claims of election fraud and seemingly justified the use of violence as his supporters stormed the U.S. Capitol. Although an in-person speech appeared to most directly trigger the violence, Trump’s social media presence played a large role in the mob’s actions. For weeks after losing the 2020 election, Trump had pushed the same false fraud claims on Twitter and encouraged supporters to descend on Washington, D.C. on January 6, to refuse to “take it anymore,” and to “be strong.” On the day of the assault, a tweet that Vice President Mike Pence “didn’t have the courage to do what should have been done” was followed by messages from Trump’s supporters on the social networking platform Gab calling for those in the Capitol to find the vice president, as well as in-person chants of “Where is Pence?” Leading up to and during the outbreak of violence, various social media platforms helped the mob assemble at the right place and time, coordinate their actions, and receive directions from the president and one another.
As we argued in a recent article for the journal Survival, abuse of social media is not confined to terrorist groups like the Islamic State. Social media companies should also develop emergency protocols to counter exploitation of their platforms by malign actors and states that seek to foment violence.
A worldwide problem
As horrific as the assault on the Capitol was, Trump’s abuse of social media is not unique. In 2020, against a backdrop of religious tensions and a history of communal violence, politicians of India’s Bharatiya Janata Party (BJP) posted incendiary content regarding Muslims, some even threatening vigilante violence. At least four officials maintained their Facebook accounts despite potential violations of the platform’s hate-speech rules, with only a few posts taken down after the Wall Street Journal inquired about them. Facebook’s content moderation team took no action on politicians’ claims that Muslims were intentionally spreading the coronavirus, and while former BJP lawmaker Kapil Mishra’s video threatening vigilante violence was eventually taken down, rioting and killings disproportionately affecting Muslims took place soon after it was uploaded.
Malicious state actors have long used communications technology to achieve violent or hateful goals. During the Rwandan genocide, for instance, Hutu extremists weaponized Radio Télévision Libre des Mille Collines broadcasts to inflame hatred against Tutsis and moderate Hutus, and to list the names, addresses, and even license plate numbers of those they wanted killed. Today, state leaders can similarly exacerbate tensions and incite violence on social media platforms, whether through explicit calls for violence or through content that more insidiously encourages hate.
New risks and opportunities for companies
Although states’ exploitation of communications technology is not new, social media presents new dangers. Given platforms’ reach, states can have a huge impact on their populations if they dominate the narrative on popular platforms like Facebook and Twitter. Additionally, social media facilitates “echo chambers”: feeds are personalized based on user data, so users’ pre-existing views are reinforced (possibly to the point of inciting action) rather than challenged. Lastly, most social media platforms have no gatekeepers and lack the editorial role of newspapers or television broadcasts, even if they usually enforce minimum community standards. There is little accountability, and a small number of individuals or state-hired troll farms can amplify a message and make fringe opinions appear mainstream. Widely followed politicians like President Trump can themselves communicate directly with huge swaths of the population. And because repeated exposure to a rumor strengthens its believability and the likelihood it will be recalled, rumors on social media become extremely powerful.
At the same time, the absence of gatekeepers and social media’s dominance of the information environment also provide a new opportunity. Social media can offer alternatives to a hateful government-controlled narrative on traditional media, connect communities in crises, and provide information for civil society members, foreign governments, and other concerned actors. Social media companies must therefore acknowledge their unique power when it comes to states and take action against potential violence.
Threading the needle
Although still imperfect, social media companies have made strides in addressing the nefarious online activities of jihadists and other violent sub-state groups. When it comes to moderating state content, however, companies grapple with thornier questions: how to protect their bottom line, apply technological tools accurately, and uphold their professed free-speech ideals while crossing these powerful actors. Leaders like Trump and BJP officials are also democratically elected, which gives them legitimacy and raises the question of whether a company’s standards, rather than elections, should determine who enjoys privileged access to these important platforms.
Different social media behemoths have balanced these varied motivations and concerns in different ways. Twitter and Facebook, for instance, responded differently to Trump’s incitement of violence on January 6 and to BJP politicians’ anti-Muslim rhetoric. These diverging policies suggest the difficulty of answering the underlying question: How can and should social media companies treat politicians and governments that foment hate online?
Although Facebook and other companies have devoted significant resources to the problem of bad content, technical tools and the available human moderators often fall short. Humans are necessary to train and refine technological tools, handle appeals, and review nuanced content that requires social, cultural, and political context to be understood. For instance, the uptick in automated content moderation during the COVID-19 pandemic caused essential information — such as Syrian journalist Mohammed Asakra’s Facebook posts documenting injustice in the country and shared with other activists — to be removed. Even in Myanmar, which became a focal point after social media helped orchestrate mass killings there in 2017, human content moderators often cannot live in the country itself and are frequently responsible for several countries at once.
Beyond these resource limitations, defying or acting against governments may harm social media companies’ commercial interests. To access key markets, companies rely on government approval, licensing, and a favorable regulatory environment. They may therefore be more willing to bend to a government’s will in order to avoid backlash.
In the face of these difficulties, one “easier” solution may be to take a blunter approach to content moderation, making certain content inaccessible or broadly limiting access to platforms. However, over-restriction can have equally devastating consequences. Repressive regimes often shut down the internet in the name of security while using the silence to harm dissenters or minority communities. Furthermore, limiting any content, especially government content, may be at odds with U.S.-based technology companies’ professed principles. Many companies claim to be committed to free speech for all their users and do not see themselves as arbiters of appropriate or inappropriate content. Making these judgments places social media companies in a role they neither should nor want to occupy. Yet, given the power these platforms wield, social media companies must find ways to prepare for this role and prevent the escalation of tensions in a crisis.
To balance their different commitments and respond appropriately to state-backed hate online, social media companies must be able to assess different situations and implement emergency protocols as necessary. Hateful rhetoric and borderline content become most dangerous when tensions in a society are already high. Companies can use indicators from scholars and institutions to recognize that an emergency may be approaching. Internally, an uptick in dangerous content detected by artificial intelligence systems or human analysts may trigger increased scrutiny. Companies must also recognize when an emergency is over, to ensure that limitations on social media do not pose undue burdens. Temporary victories, such as a number of days without violence, or the announcement of larger developments, such as peace negotiations or the return of refugees, could indicate an end to the crisis. Once an emergency is identified, companies must decide what action to take while it lasts.
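To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of trigger logic described above. Every name, metric, and threshold in it is an assumption for the sake of example; it does not represent any platform’s actual systems.

```python
# Hypothetical crisis-trigger sketch. All metrics, names, and thresholds are
# illustrative assumptions, not any platform's real signals or policy.
from dataclasses import dataclass

@dataclass
class RegionSignal:
    region: str
    flagged_rate: float   # flagged posts per 10,000 posts in the last hour (assumed metric)
    baseline_rate: float  # long-run average rate for the same region (assumed metric)
    quiet_hours: int      # hours since the last violence-linked report (assumed metric)

def assess_emergency(signal: RegionSignal,
                     spike_factor: float = 3.0,
                     stand_down_hours: int = 72) -> str:
    """Return 'enter_crisis_mode', 'exit_crisis_mode', or 'monitor'."""
    if signal.flagged_rate >= spike_factor * signal.baseline_rate:
        # An uptick well above baseline triggers increased scrutiny.
        return "enter_crisis_mode"
    if signal.quiet_hours >= stand_down_hours:
        # A sustained quiet period suggests the emergency may be over.
        return "exit_crisis_mode"
    return "monitor"

# Example: a spike to more than three times the baseline flags a region for review.
print(assess_emergency(RegionSignal("example-region", flagged_rate=12.0,
                                    baseline_rate=3.5, quiet_hours=6)))
```

In practice, any such trigger would combine many more signals with human judgment; the point is only that both the entry and exit conditions for an emergency need to be defined in advance.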
Companies have a plethora of tools at their disposal to limit the spread of dangerous content in a crisis. Platforms can apply warning labels to controversial posts, prompting users to question the content and seek additional information. Twitter used such a label on Trump’s “when the looting starts, the shooting starts” tweet during the 2020 Black Lives Matter protests. Social media companies can also ramp up existing efforts to limit the visibility of dangerous content in a crisis. Already, Facebook downranks posts identified as false, YouTube and Facebook apply more stringent guidelines when deciding whether to recommend content, and Twitter claims to remove trending topics that are misleading or disruptive to public conversation. Questionable posts are sometimes put in a “limited state,” with comments, likes, sharing, forwarding, and monetization through ads turned off or restricted. These efforts should be increased during times of crisis and expanded to apply temporarily to more borderline content. At the peak of a crisis, companies could even apply this limited state to posts containing certain keywords.
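As a rough illustration of how such a “limited state” might be tightened during an emergency, consider the following sketch. The classifier scores, thresholds, and keyword list are all hypothetical stand-ins; the example only shows how the same machinery could be applied more aggressively once a crisis is declared.

```python
# Hypothetical sketch of crisis-time "limited state" decisions. Scores,
# thresholds, and keywords are illustrative assumptions, not real policy.
BORDERLINE_THRESHOLD_NORMAL = 0.9   # assumed risk score needed to restrict a post normally
BORDERLINE_THRESHOLD_CRISIS = 0.6   # lower bar once an emergency is declared
PEAK_CRISIS_KEYWORDS = {"storm the", "hunt down"}  # placeholder terms for illustration

def moderation_actions(post_text: str, risk_score: float,
                       crisis_mode: bool, crisis_peak: bool) -> dict:
    """Decide which features to restrict for a single post."""
    threshold = BORDERLINE_THRESHOLD_CRISIS if crisis_mode else BORDERLINE_THRESHOLD_NORMAL
    limited = risk_score >= threshold
    # At the peak of a crisis, certain keywords alone could trigger the limited state.
    if crisis_peak and any(kw in post_text.lower() for kw in PEAK_CRISIS_KEYWORDS):
        limited = True
    return {
        "warning_label": limited,
        "downrank": limited,
        "disable_sharing": limited and crisis_mode,
        "disable_ads": limited,
    }

print(moderation_actions("they should storm the statehouse", risk_score=0.4,
                         crisis_mode=True, crisis_peak=True))
```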
Furthermore, more resources should be devoted to removing borderline content and suspending dangerous accounts during crisis situations. Both Twitter and Facebook, which had long subscribed to the idea of keeping public figures’ content up in the name of the public interest, recognized the danger of that approach as the January 6 protests in Washington, D.C. boiled over. Twitter first labeled Trump’s incendiary tweets with warnings, then temporarily locked the president out of his account before permanently suspending it. Facebook, meanwhile, suspended Trump’s account “at least” until President-elect Joe Biden’s inauguration. In apparent recognition that removing perpetrators at the source may ultimately protect public safety, social media companies lowered their thresholds for suspending accounts and deleting posts, even for accounts that had previously received preferential treatment. Lowering these thresholds for “regular” accounts passing along rumors in a crisis may similarly slow the spread of inciting rhetoric.
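The threshold-lowering idea can be illustrated in a few lines. The strike counts and limits below are invented for the example and reflect no actual enforcement rules; the point is simply that enforcement limits can be made explicit, and tightened, whenever an emergency protocol is active.

```python
# Hypothetical strike-threshold sketch. The limits are invented for illustration
# and do not reflect any platform's actual enforcement rules.
def should_suspend(strikes: int, is_public_figure: bool, crisis_mode: bool) -> bool:
    """Return True if an account's violation count meets the active threshold."""
    if crisis_mode:
        # During a declared emergency, preferential treatment for public figures narrows.
        limit = 2 if is_public_figure else 3
    else:
        limit = 5 if is_public_figure else 4
    return strikes >= limit

print(should_suspend(strikes=3, is_public_figure=True, crisis_mode=True))   # True
print(should_suspend(strikes=3, is_public_figure=True, crisis_mode=False))  # False
```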
At the same time, social media can also provide necessary information from apolitical arms of government, civil society, first responders, and loved ones. Such positive content should therefore be highlighted. Through partnerships with local civil society organizations and reputable international organizations, trusted sources should be identified and elevated in newsfeeds. Companies can follow YouTube’s model by creating a “top news” or “breaking news” tab at the top of homepages that makes these sources easily accessible. In certain situations, companies could even whitelist certain sources, allowing access to these pre-approved sources while restricting access to others.
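A final sketch shows how a feed could elevate vetted sources during a crisis, with an optional allow-list mode for the most acute phase. The source list, boost factor, and scoring function here are hypothetical; they stand in for whatever ranking machinery a platform actually uses.

```python
# Hypothetical trusted-source ranking sketch. The source list, boost factor, and
# scoring are assumptions for illustration only.
from typing import Optional

TRUSTED_SOURCES = {"local_red_cross", "city_emergency_mgmt", "un_ocha"}  # placeholder IDs

def feed_score(base_score: float, author: str,
               crisis_mode: bool, allow_list_only: bool) -> Optional[float]:
    """Return an adjusted feed score, or None to leave the post out of a crisis tab."""
    if allow_list_only and author not in TRUSTED_SOURCES:
        return None  # acute phase: only pre-approved sources surface in the crisis tab
    if crisis_mode and author in TRUSTED_SOURCES:
        return base_score * 2.0  # elevate trusted sources in newsfeeds
    return base_score

print(feed_score(1.0, "city_emergency_mgmt", crisis_mode=True, allow_list_only=False))
```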
Limiting content on social media, especially state content, carries risks for the victims of a crisis as well as for the companies themselves. The strength and duration of any steps that alter a platform’s functioning and purpose must be carefully weighed. Still, the risks of inaction are greater. Although the measures presented here cannot mend the underlying fissures in societies, they can blunt the impact of social media exploited to spread hate and violence.