Commentary

Why is Elon Musk’s Twitter takeover increasing hate speech?

Elon Musk's Twitter account displayed on a phone screen, with the Twitter logo displayed on a screen in the background, in an illustration photo taken in Krakow, Poland, on November 22, 2022. (Photo by Jakub Porzycki/NurPhoto)

“The bird is freed,” tweeted Elon Musk on October 27, 2022, as a celebratory endnote to his acquisition of Twitter. But this raises the question: free for which birds? Judging by his past political statements, current missteps, and future plans for the platform, this so-called freedom may simply galvanize extremists and further expose racial and religious minorities to hate speech and trauma. Twitter is now a major platform for proponents of hate speech.

Twitter saw a nearly 500% increase in use of the N-word in the 12-hour window immediately following the shift of ownership to Musk. Within the following week, tweets including the word “Jew” had increased fivefold compared to before the ownership transfer, and the tweets with the most engagement were overtly antisemitic. Likewise, there has been an uptick in misogynistic and transphobic language. This surge in hateful language has been attributed to various trolling campaigns organized on sites like 4chan and the pro-Trump forum “The Donald.”

In response to these reports, Yoel Roth (Twitter’s former head of Trust and Safety) posted a thread explaining that the majority of this derogatory language came from about 300 “inauthentic” accounts. Yet even if this hateful conduct is coming from a small number of troll accounts, the phenomenon speaks to how fringe, alt-right networks feel not only empowered by Musk’s takeover but protected as well.

Shortly after the acquisition, Musk laid off almost 50% of Twitter’s employees. He also fired several longtime executives, including CEO Parag Agrawal, CFO Ned Segal, board chairman Bret Taylor, General Counsel Sean Edgett, and the head of legal, policy, and trust, Vijaya Gadde. Gadde is especially notable because she was instrumental in banning former President Donald Trump from Twitter. After a Twitter poll with over 15 million votes (52% voting yes to reinstating Trump), Musk restored Trump to the platform. As a result, the team previously in place to monitor and censor hate speech is no longer at Twitter.

As the platform’s primary owner, Musk may follow through with loosening standards for harmful content and dissolving the so-called censorship he has criticized in the past. He has announced plans to form a “Content Moderation Council” and has already changed how users are verified on the platform, offering an $8-per-month subscription for the coveted blue check mark in lieu of the traditional merit-based process, which rewarded users based on number of followers and prominence in a particular field such as journalism, academia, or entertainment. This new verification process failed miserably and was pulled after users created fake accounts impersonating companies and political leaders. One fake account contributed to Eli Lilly’s stock dropping over 4%, costing investors billions of dollars.

While the verification process has been shelved for now, we are worried about potential changes to Twitter’s moderation process. When acquisitions of social media platforms occur, the new owner(s) should be obligated to ensure that hate speech is moderated. This is even more vital given the current political environment and the spread of misinformation. Some policymakers agree. In a bipartisan effort, Senator Amy Klobuchar (D-MN) and Senator Cynthia Lummis (R-WY) introduced the NUDGE Act (Nudging Users to Drive Good Experiences on Social Media) to fund a study by the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine on interventions that would reduce social media addiction and harmful language. In 2021, Democrats reintroduced the Protecting Americans from Dangerous Algorithms Act “to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

Two research studies on Twitter speak to the importance of these potential policies. First, researchers analyzed political content on Twitter from Canada, France, Germany, Spain, Japan, Britain, and the United States. In six of the seven countries (Germany being the exception), right-wing political content received higher algorithmic amplification than left-wing content. Second, a report by The Brookings Institution examined over two million tweets and posts from Twitter and Reddit. Researchers found that Twitter users were 2.5 times less likely than Reddit users to engage in bystander intervention against racist language. They attribute this difference to the divergent moderation policies of the two platforms: Reddit has a much more rigorous moderation policy for checking and addressing hate speech, whereas Twitter resembles the Wild West and seems to be moving in a direction that may further marginalize people on the platform. Increasingly polarized politics and the prevalence of misinformation make this trend even more disconcerting.

Some may claim that internet censorship and de-platforming have no effect, but this is simply not the case. Following Trump’s ban from Twitter and other social media sites, online engagement around Trump decreased by 95%, and online misinformation about election fraud plunged by 73%. The ban of Andrew Tate may follow a similar trend: his removal from TikTok, where the algorithm connected him to millions of impressionable young boys, has sharply curtailed his reach. It is of course naïve to believe that de-platforming will completely halt the efforts of those committed to bigotry; they are often pushed to more hidden and niche corners of the internet to spread their message. From Timothy McVeigh to Dylann Roof, we have evidence of what far-right extremists pushed further to the fringe can do.

Collectively, recent changes to and at Twitter disrupt the ability of marginalized people to find community, produce discourse that fosters equality, and protect themselves from hate speech and trauma. Musk’s acquisition of Twitter and his potential plans to loosen moderation guidelines will continue to increase the use of hate speech and will likely inhibit the ways marginalized groups have organized and mobilized on the platform to resist harmful language and discrimination in their everyday lives. Censoring hate speech and the users who deploy hateful rhetoric is a primary way to ensure that the bird is truly free.
