Commentary
How should social media platforms combat misinformation and hate speech?
April 9, 2019

Social media companies are under increased scrutiny for their mishandling of hateful speech and fake news. There are two ways to view a social media platform: on one hand, as a technology that merely enables individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything; on the other, as a platform that has evolved into a curator of content. I argue that these companies should take some responsibility for the content published on their platforms, and I suggest a set of strategies to help them deal with fake news and hate speech.
Artificial and human intelligence together
At the outset, social media companies positioned themselves as bearing no accountability for the content published on their platforms. In the intervening years, they have set up a mix of automated and human-driven editorial processes to promote or filter certain types of content. At the same time, their users increasingly rely on these platforms as their primary source of news. Twitter Moments, which offers a curated snapshot of the day's news, is a prime example of how Twitter is edging closer to becoming a news medium. As social media platforms effectively become news media, their responsibility for the content they distribute should increase accordingly.
While I believe it is naïve to regard social media platforms as merely neutral content-sharing technologies with no responsibility, I also do not believe we should hold them to the same editorial expectations as traditional news media.
The sheer volume of content shared on social media makes a comprehensive editorial system impossible. Take Twitter as an example: an estimated 500 million tweets are sent per day. Assuming each tweet contains 20 words on average, the content published on Twitter in a single day is roughly equivalent to what the New York Times publishes in 182 years. At the same time, the terminology and focus of hate speech change over time, and most fake news articles contain some element of truth. The volume rules out purely human review, while the shifting and nuanced nature of the content makes fully automated detection unreliable. Social media companies therefore cannot rely solely on artificial intelligence or on humans to monitor and edit their content; they should instead develop approaches that combine artificial and human intelligence.
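A rough back-of-envelope check of that comparison, assuming the New York Times publishes on the order of 150,000 words per day (a figure not given above and used here only as an illustrative assumption):

```python
# Back-of-envelope comparison of daily Twitter volume to New York Times output.
# The NYT words-per-day figure is an assumption for illustration, not a sourced statistic.

TWEETS_PER_DAY = 500_000_000   # estimated tweets sent per day
WORDS_PER_TWEET = 20           # assumed average words per tweet
NYT_WORDS_PER_DAY = 150_000    # assumed NYT daily output (hypothetical)

twitter_words_per_day = TWEETS_PER_DAY * WORDS_PER_TWEET   # 10 billion words
nyt_words_per_year = NYT_WORDS_PER_DAY * 365                # ~54.75 million words
years_of_nyt = twitter_words_per_day / nyt_words_per_year

print(f"One day of Twitter is about {years_of_nyt:.0f} years of New York Times output")
# prints roughly 183 years under these assumptions, in line with the 182-year figure above
```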
Finding the needle in a haystack
To overcome the editorial challenge posed by so much content, I suggest that companies focus on a limited number of topics where the stakes are highest. The anti-vaccination movement and flat-earth believers both spread anti-scientific, fake content. However, the consequences of believing that vaccines cause harm are far more dangerous than those of believing that the earth is flat: the former creates serious public health problems; the latter makes for a good laugh at a bar. Social media companies should convene groups of domain experts to continuously monitor the major topics in which fake news or hate speech can cause serious harm.
It is also important to consider how recommendation algorithms on social media platforms may inadvertently promote fake and hateful content. At their core, these recommendation systems group users by shared interests and then promote the same type of content to everyone within a group. If most of the users in a group are interested in, say, both flat-earth theory and anti-vaccination hoaxes, the algorithm will promote anti-vaccination content even to group members who are only interested in flat-earth theory. Over time, exposure to such promoted content could turn users who initially trusted vaccines into skeptics. Once the major areas of focus for combating fake and hateful content are determined, social media companies can tweak their recommendation systems fairly easily so that they do not nudge users toward harmful content, as sketched below.
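As a minimal sketch of that kind of tweak (not any platform's actual system), the following assumes a simple interest-group recommender and shows how an expert-maintained list of flagged topics could be excluded from promotion; the group, user names, and topic labels are all hypothetical:

```python
from collections import Counter

# Hypothetical interest-group recommender: promote the topics most common
# within a user's group, excluding topics flagged as harmful by expert panels.

GROUPS = {
    "group_1": {
        "alice": {"flat_earth", "astronomy"},
        "bob": {"flat_earth", "anti_vaccination"},
        "carol": {"anti_vaccination", "homeopathy"},
    }
}

FLAGGED_TOPICS = {"anti_vaccination"}  # maintained by domain experts

def recommend(group: dict, user: str, k: int = 2) -> list:
    """Suggest the k most popular topics in the user's group that the user
    does not already follow, skipping any expert-flagged topic."""
    counts = Counter()
    for member, interests in group.items():
        if member != user:
            counts.update(interests)
    candidates = [
        (topic, n) for topic, n in counts.most_common()
        if topic not in group[user] and topic not in FLAGGED_TOPICS
    ]
    return [topic for topic, _ in candidates[:k]]

# Without the FLAGGED_TOPICS filter, alice would be nudged toward
# "anti_vaccination" (the most common topic among her group); with it,
# she is only shown unflagged topics.
print(recommend(GROUPS["group_1"], "alice"))
```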
Once that limited set of topics is identified, social media companies should decide how to fight the spread of such content. In rare instances, the most appropriate response is to censor and ban the content without hesitation. Examples include posts that incite violence or invite others to commit crimes. The recent New Zealand attack, in which the shooter broadcast his heinous crimes live on Facebook, is a prime example of content that should never have been allowed to be posted or shared on the platform.
Facebook currently relies on its community of users to flag such content and then uses an army of human reviewers to assess flagged items within 24 hours and determine whether they actually violate its terms of use. Live content is monitored by humans once it reaches a certain level of popularity. While it is easier to use artificial intelligence to monitor textual content in real time, technologies for analyzing images and videos are advancing quickly. For example, Yahoo! has recently released its algorithms for detecting offensive and adult images to the public, and Facebook's AI algorithms are becoming capable of detecting and flagging non-consensual intimate images. A simple sketch of how such a hybrid pipeline might prioritize content for human review follows.
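As an illustration of the hybrid human-plus-AI approach described above (a sketch, not Facebook's actual pipeline), content could be routed to a human review queue when users flag it, when an image classifier's score crosses a threshold, or when a live stream passes a popularity cutoff; the classifier score, thresholds, and field names here are all assumptions made for the example:

```python
from dataclasses import dataclass

# Hypothetical triage rule for a hybrid AI + human moderation pipeline.
# Thresholds and the classifier score are illustrative assumptions.

@dataclass
class Post:
    user_flags: int       # number of community reports
    nsfw_score: float     # output of an image/video classifier in [0, 1]
    is_live: bool
    live_viewers: int

FLAG_THRESHOLD = 3            # reports before mandatory human review
SCORE_THRESHOLD = 0.8         # classifier confidence for automatic escalation
LIVE_VIEWER_THRESHOLD = 1000  # popularity cutoff for monitoring live streams

def needs_human_review(post: Post) -> bool:
    """Route a post to the human review queue if any signal fires."""
    return (
        post.user_flags >= FLAG_THRESHOLD
        or post.nsfw_score >= SCORE_THRESHOLD
        or (post.is_live and post.live_viewers >= LIVE_VIEWER_THRESHOLD)
    )

print(needs_human_review(Post(user_flags=0, nsfw_score=0.92, is_live=False, live_viewers=0)))   # True
print(needs_human_review(Post(user_flags=1, nsfw_score=0.10, is_live=True, live_viewers=250)))  # False
```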
Fight misinformation with information
Social media companies have so far adopted two approaches to fight misinformation. The first is to block such content outright: Pinterest bans anti-vaccination content, for example, and Facebook bans white supremacist content. The second is to present accurate information alongside the false content so that users are exposed to correct information. This approach, implemented by YouTube, encourages users to follow links to verified, vetted sources that debunk the misguided claims made in fake or hateful content. If you search "Vaccines cause autism" on YouTube, you can still view the videos posted by anti-vaxxers, but you will also be presented with a link to the Wikipedia page on the MMR vaccine, which debunks such beliefs. A minimal sketch of how such counter-information panels could be attached appears below.
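A minimal sketch of the counter-information idea, assuming a hand-curated mapping from expert-flagged topics to vetted sources; the topic list, keyword-matching rule, and panel format are hypothetical and not YouTube's actual mechanism:

```python
from typing import Optional

# Hypothetical mapping from expert-flagged topics to vetted counter-information.
# Keyword matching and the panel text are illustrative, not any platform's API.

COUNTER_INFO = {
    "vaccines cause autism": "https://en.wikipedia.org/wiki/MMR_vaccine",
    "flat earth": "https://en.wikipedia.org/wiki/Figure_of_the_Earth",
}

def info_panel(query: str) -> Optional[str]:
    """Return a vetted link to display alongside results for a flagged query."""
    q = query.lower()
    for topic, url in COUNTER_INFO.items():
        if topic in q:
            return f"Learn more from a vetted source: {url}"
    return None  # unflagged queries get no panel; search results are untouched

print(info_panel("Vaccines cause autism documentary"))  # shows the vetted link
print(info_panel("cute cat videos"))                    # None
```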
While we have yet to empirically examine and compare the effectiveness of these approaches, I prefer presenting users with accurate information, exposing them to reliable sources so that they can become informed and willingly abandon their misguided beliefs. Even if some ideas have only a fleeting impact, a diversity of ideas ultimately moves us forward by enriching our discussions. Social media companies may be able to censor content online, but they cannot control how ideas spread offline. Unless individuals are presented with counterarguments, falsehoods and hateful ideas will spread easily, just as they did before social media existed.