Over four billion people worldwide are estimated to use social media by 2025. Though most people use social media to engage with family and friends, many also rely on platforms and apps to obtain news and engage with communities on a range of issues. The polarization and sharing of news content in an era of “alternative facts” and misinformation exacerbate potential conflicts online and can reinforce false rhetoric about specific social issues and racial groups. As a result, social media provides a forum in which hate speech and cyberbullying flourish, with limited understanding of the tools or tactics available to counter these attacks. About 70 percent of people report doing something abusive to someone online, and a majority of them report having been cyberbullied themselves. Even more troubling, nearly 90 percent of teenagers report witnessing bullying online.
While false rhetoric, hate speech, and cyberbullying have many deleterious effects, there is a silver lining: over 80 percent of youth report seeing others stand up to cyberbullying and intervene as bystanders online. This high percentage underscores the power of bystander intervention—a strategy that has proven effective in promoting the spread of fact-based public health information—and holds much promise for addressing and curbing online interactions that reinforce systemic racism. Even more promising, a majority of youth report wanting to learn effective strategies for intervening in cyberbullying situations.
While existing studies mostly focus on cyberbullying related to gender and LGBTQIA issues, hate speech and cyberbullying related to race and racism remain less examined. Racism continues to be one of the most polarizing topics in America. Polarization on social media has helped re-open a Pandora’s box, allowing white supremacy and racism to wreak havoc on people’s lives. As previous research has shown, reactions to the #BlackLivesMatter movement have created echo chambers on social media that amplify hate speech and cyberbullying related to race and racism. Social media also allows people to mask their identities, much like the KKK hoods of the past.
This report aims to identify effective strategies to combat hate speech and misinformation online. By examining how people respond to cyberbullying, our goal is to highlight bystander intervention strategies that are effective at constructing healthy communication, calming anger and frustration, and changing attitudes. This research has broader implications for leveraging strategies, tools, and tactics, many of which have already helped address the spread of public health misinformation, and for the development and implementation of positive coping strategies for better mental and emotional health outcomes among marginalized communities.
Accordingly, we conceptualize effective bystander strategies as those that:
- are perceived by other social media users as favorable;
- alter the discussion in more positive, objective, and less antagonistic ways; and
- change the online behavior of the bully or agitator.
Through this effort, the team aims to answer the following questions: How do people combat misinformation online, particularly related to systemic racism, and, more specifically, how do people engage in bystander intervention on social media? What strategies do they use and how effective are people at changing attitudes? How do people encourage healthy coping strategies for better mental and emotional health outcomes?
Analyzing over two million tweets and posts scraped from Twitter and Reddit in 2020, we examined the effectiveness of bystander strategies used online to combat racism. These platforms were chosen because their built-in ranking systems allow us to examine which strategies are viewed as most effective: on Twitter, people like and retweet messages; on Reddit, people vote comments up or down, which determines how prominently they are displayed to others. Both platforms are also open, allowing most people to comment on most tweets or posts.
Methodologically, we conducted a quantitative analysis of tweets and posts and a content analysis of comments. The analysis focused on four domains related to racism (systemic racism, police brutality, education inequality, and employment and wealth), using synonyms for each term to search hashtags on Twitter and posts on Reddit.
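To make the term-expansion step concrete, the following is a minimal Python sketch. The four seed domains come from the report; the synonym lists, the hashtag-formatting convention, and the sample matching logic are illustrative assumptions, not the study’s actual search vocabulary or pipeline.

```python
# The four domains are from the report; the synonym lists below are
# hypothetical placeholders for illustration only.
DOMAINS = {
    "systemic racism": ["systemic racism", "institutional racism", "structural racism"],
    "police brutality": ["police brutality", "police violence", "excessive force"],
    "education inequality": ["education inequality", "school segregation", "achievement gap"],
    "employment and wealth": ["wealth gap", "employment discrimination", "pay gap"],
}

def build_hashtag_queries(domains):
    """Turn each synonym phrase into a hashtag-style query, e.g. '#PoliceBrutality'."""
    return {
        domain: ["#" + "".join(word.capitalize() for word in phrase.split())
                 for phrase in synonyms]
        for domain, synonyms in domains.items()
    }

def match_domains(text, domains):
    """Return the set of domains whose synonyms appear in a post's text."""
    lowered = text.lower()
    return {domain for domain, synonyms in domains.items()
            if any(phrase in lowered for phrase in synonyms)}
```

For example, `match_domains("New data on the achievement gap and school segregation", DOMAINS)` would classify that post under the education-inequality domain.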
We found four primary types of racist discourse: stereotyping, scapegoating, accusations of reverse racism, and echo chambers. We also found four types of bystander intervention strategies: call-outs, insults or mocking, attempts to educate or provide evidence, and content moderation. However, only one in six Twitter discussions and slightly less than 40 percent of Reddit discussions featured bystander action. Our findings contribute to research identifying patterns in online communication and effective strategies to combat hate speech and misinformation about systemic racism.
In this report, we provide an overview of academic research on cyberbullying and social support, a detailed methods section regarding our analytical approach, and quantitative and qualitative findings from our investigation into how discussions about systemic racism manifest on social media.