How do we solve social media’s eating disorder problem?

[Illustration: Cartoon people look from behind a phone screen]

Eating disorders have been on the rise over the course of the pandemic. In the last two years, the number of adolescents admitted to hospitals for eating disorders has skyrocketed, with medical experts citing increased time on social media as a contributing factor. As internal research leaked by Meta whistleblower Frances Haugen claims, “the tendency to share only the best moments [and] a pressure to look perfect” could leave many with a desire to look or act different. That pressure, coupled with a plethora of online content on dieting and what it means to have the perfect body, has further exacerbated users’ insecurities and led them down dangerous paths. At the extreme, many social media users have been drawn into dangerous pro-eating disorder communities, corners of the internet where users actively encourage and shame each other into unhealthy or even life-threatening weight loss.

Lawmakers have become increasingly aware of these dangers. For a hearing last September, Sen. Richard Blumenthal (D-CT) had his office create a fake Instagram account to understand the prevalence of pro-eating disorder content on the platform. As lawmakers work to hold tech companies to a higher standard in protecting users, this is an important aspect of user safety that cannot be overlooked.

Pro-eating disorder communities aren’t new

Pro-eating disorder communities have a long history on the internet. As early as 2001, Yahoo removed 113 pro-anorexia websites from its servers. After a Huffington Post exposé on “thinspiration” blogs on Tumblr, the platform took action against a cluster of pro-eating disorder blogs. Two decades after the problem first surfaced, social media platforms continue to struggle with it. Over the last few years, YouTube, Instagram, TikTok and others have faced criticism for failing to address pro-eating disorder content and search terms on their platforms, and pro-eating disorder communities have been found on Twitter, Discord, Snapchat and more.

All major social media networks explicitly state in their terms and conditions that users may not promote self-harm, including the glorification of eating disorders. Pinterest, Instagram, Snapchat, TikTok and other online platforms have either banned weight loss ads or placed restrictions on them. Across most platforms, search terms and hashtags such as “anorexia”, “bulimia”, and “thinspiration” have been rendered unsearchable. When users look up related terms, they are instead directed to a “need help” page, with resources such as the National Eating Disorder Association (NEDA) volunteer hotline.

Yet social media platforms’ eating disorder problem remains unresolved because, at the end of the day, it is only part of a larger, much more complex problem. While the leaked Facebook papers claimed that 1 in 3 teenage girls said their body image issues had been made worse by Meta’s Instagram platform, other researchers have questioned this conclusion, flagging that “disentangling cause and effect in correlational research that links experience and mental health is an enormous challenge”. It is also important to consider how users’ life experiences shape their social media experiences, as harmful messages about food and dieting do not exist solely in the vacuum of social media. Diet culture is everywhere in the modern world, rooted in the belief that being thin equates to being healthy and attractive, and manifesting in “guilt-free recipes” and New Year’s resolutions to lose weight. In turn, disordered eating habits are often normalized, with a 2008 survey sponsored by the University of North Carolina at Chapel Hill finding that 75% of women report disordered eating behaviors, across age groups and racial and ethnic lines. But women are not the only victims of disordered eating; the condition affects people across genders and sexual identities. Eating disorders are a serious problem, and have the highest mortality rate of all mental illnesses.

But while social media platforms are not solely responsible for causing eating disorders, they are responsible for amplifying harmful content to wider audiences. Increasingly, young people are using the internet as a tool to find answers, following misguided or even dangerous advice from influencers and peers. Platforms rely on machine learning algorithms to filter content based on user preferences and to seek out new audiences for it, in large part because doing so helps them sell more ads. For users with preexisting body image issues, seeking out one or two fitness or healthy recipe videos can fill their feeds with similar videos, and those who keep watching can easily be led to content explicitly promoting eating disorders. Such regular exposure has the potential to trigger or worsen disordered eating behaviors. On closed-network platforms such as Snapchat or Discord, the same features that allow people to connect with those they’ve never met before have also facilitated the formation of closed group chats, where users share how much they weigh and encourage others to fast.
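To make that dynamic concrete, the sketch below (an invented toy example, not any platform’s actual ranking system) shows how engagement-driven ranking can narrow a feed: a couple of views of fitness content quickly crowd out everything else. The topics, counts, and feed size are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not any platform's actual ranking system.
# Shows how engagement-driven, preference-based ranking narrows a feed:
# a few views of "fitness" content quickly crowd out everything else.
from collections import Counter
import random

POSTS = (
    [{"id": i, "topic": "fitness"} for i in range(50)]
    + [{"id": 50 + i, "topic": "travel"} for i in range(50)]
    + [{"id": 100 + i, "topic": "pets"} for i in range(50)]
)

def rank_feed(posts, preferences, size=10):
    """Score each post by how often the user has engaged with its topic."""
    scored = sorted(
        posts,
        key=lambda p: preferences[p["topic"]] + random.random(),  # noise breaks ties
        reverse=True,
    )
    return scored[:size]

preferences = Counter()      # engagement counts per topic
preferences["fitness"] += 2  # the user watches two fitness videos

for step in range(3):
    feed = rank_feed(POSTS, preferences)
    for post in feed:        # assume the user engages with whatever is shown
        preferences[post["topic"]] += 1
    print(f"step {step}: feed topics ->", Counter(p["topic"] for p in feed))
```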

Existing measures are inadequate

Existing takedown policies fall short on several counts. As with most other content moderation challenges, people and companies posting and promoting this content have outsmarted automated systems, using misspelled variants of banned search terms and hashtags to help users find each other. Many users also post untagged content, which can slip through existing systems unchecked.
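As a rough illustration of why exact-match blocklists are so easy to evade, the snippet below (a sketch, not any platform’s actual moderation pipeline) compares a literal hashtag match against a simple normalization-and-fuzzy-matching pass, which catches some, but not all, misspelled variants. The terms and threshold are invented for the example.

```python
# Minimal sketch of hashtag blocklist evasion and one common mitigation
# (normalization plus fuzzy matching). Not any platform's real system.
import difflib

BLOCKLIST = {"thinspiration", "anorexia"}

def exact_block(tag: str) -> bool:
    """Literal match: easily evaded by a single swapped character."""
    return tag.lower() in BLOCKLIST

def fuzzy_block(tag: str, threshold: float = 0.8) -> bool:
    """Undo common character substitutions, then compare against the blocklist."""
    normalized = tag.lower().translate(str.maketrans("013", "oie"))
    return any(
        difflib.SequenceMatcher(None, normalized, banned).ratio() >= threshold
        for banned in BLOCKLIST
    )

for tag in ["thinspiration", "th1nspiration", "thynspo", "recipes"]:
    print(tag, exact_block(tag), fuzzy_block(tag))
# "th1nspiration" slips past the exact match but is caught by the fuzzy pass;
# "thynspo" evades both, showing why keyword-only moderation keeps falling behind.
```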

Among platforms and other stakeholders, there is also the bigger question of what should and should not be taken down. When talking about user welfare, it is important to recognize that the users contributing to these communities, even those actively glorifying eating disorders, are themselves struggling with debilitating mental illness, and that taking down their accounts could cut them off from much-needed communities of support.

What can be done?

The good news is that these challenges are not insurmountable, and there are ways for social media companies to improve their responses to content promoting disordered eating. The problem with algorithms as they work today is that they cater to user preferences too well, bombarding vulnerable users with exactly the content that facilitates their self-harm. Recognizing these failings after the fallout over the Facebook papers, Meta announced the rollout of a “nudge” function for teens on Instagram, which can be activated when a user spends an extended amount of time watching workout videos or diet content; the algorithm then redirects them to unrelated content such as animal videos or travel pictures. Similar functions could be enabled across other platforms and for all users, as eating disorders affect a wide range of people beyond teenage girls. To encourage algorithmic accountability, researchers should also be granted access to platform data. This would allow those specializing in eating disorders and teen wellbeing to analyze how platform algorithms handle and remove this content, and the involvement of third-party standards heightens the pressure on platforms to be accountable for their users’ wellbeing. While platforms are unlikely to do this voluntarily, the EU’s Digital Services Act is likely to open up such avenues going forward. A similar framework in the US would be useful in encouraging further research and promoting accountability.
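Meta has not published how the nudge described above is implemented, but a minimal, hypothetical version of the trigger logic might look like the following: it watches for an uninterrupted streak of diet- or workout-related viewing within a session and, past a time threshold, suggests unrelated topics instead. All category names and thresholds here are assumptions.

```python
# Hypothetical sketch of a "nudge" trigger, assuming a session log of viewed
# posts with topic labels and watch durations. Meta has not published how its
# Instagram nudge works; topics and thresholds below are invented.
NUDGE_TOPICS = {"dieting", "workout", "weight_loss"}
REDIRECT_TOPICS = ["animals", "travel", "architecture"]
TIME_THRESHOLD_SECONDS = 600  # e.g. 10 minutes of continuous diet content

def should_nudge(session_views):
    """Return True if the most recent, uninterrupted run of nudge-topic viewing exceeds the threshold."""
    streak = 0
    for view in reversed(session_views):   # walk back from the most recent view
        if view["topic"] in NUDGE_TOPICS:
            streak += view["seconds"]
        else:
            break                          # streak broken by unrelated content
    return streak >= TIME_THRESHOLD_SECONDS

session = [
    {"topic": "travel", "seconds": 40},
    {"topic": "workout", "seconds": 320},
    {"topic": "dieting", "seconds": 310},
]
if should_nudge(session):
    print("Nudge: suggesting", REDIRECT_TOPICS)  # surface unrelated content instead
```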

Users should also be granted more agency to filter out content that could harm their wellbeing. A useful example is Twitter’s “mute” function, which allows users to avoid seeing tweets containing specified keywords. Expanding this across platforms could give users the tools they need to avoid dangerous content on their own terms. However, a “mute” function will still be inadequate for image- or video-heavy platforms such as Instagram, TikTok, or Pinterest. On such platforms in particular, users should be given expanded control over the categories of content they are shown. Facebook’s ad preferences page shows that platforms already collect and categorize users’ content preferences. There should be an option for users to opt out of entire categories of content, going beyond eating disorders and self-harm to gambling content for recovering addicts and pregnancy content for mothers who have miscarried. This could catch even untagged content, since such posts would often be grouped with other dieting or exercise material.
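As a sketch of what such user-facing controls could look like (the field names and categories are assumptions, not any platform’s actual settings), the snippet below combines a Twitter-style keyword mute list with category-level exclusions applied to the topic labels a platform already assigns to posts.

```python
# Illustrative sketch of user-side filtering: a keyword "mute" list plus
# category-level exclusions applied to platform-assigned topic labels.
# Field names and categories are invented for the example.
from dataclasses import dataclass, field

@dataclass
class FilterSettings:
    muted_keywords: set = field(default_factory=set)       # matched against post text
    excluded_categories: set = field(default_factory=set)  # matched against assigned topics

def filter_feed(posts, settings):
    """Drop posts whose text contains a muted keyword or whose category is excluded."""
    visible = []
    for post in posts:
        text = post.get("caption", "").lower()
        if any(kw in text for kw in settings.muted_keywords):
            continue
        if post.get("category") in settings.excluded_categories:
            continue  # also catches posts with no hashtags, if the platform classified them
        visible.append(post)
    return visible

settings = FilterSettings(
    muted_keywords={"calorie deficit", "thinspo"},
    excluded_categories={"dieting", "weight_loss"},
)
feed = [
    {"caption": "My calorie deficit week", "category": "dieting"},
    {"caption": "Hiking in Patagonia", "category": "travel"},
    {"caption": "What I eat in a day", "category": "weight_loss"},
]
print(filter_feed(feed, settings))  # only the travel post remains visible
```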

With existing technologies, categorical removals will likely be imprecise and overly broad: removing unhealthy food content would mean removing most food content, as artificial intelligence systems cannot reliably make the distinction. Even so, this could be useful, particularly for users in recovery whose only other options would be to risk exposure to such content or to leave social media entirely. In the long run, platforms could also explore ways of letting users choose which specific categories of content they want to reincorporate or avoid altogether. Approaches would differ based on how each platform’s algorithms prioritize and recommend content.

Instead of simply linking to the NEDA webpage, social media companies should take a more proactive and involved approach. Research on inoculation has shown that people become more resilient to political misinformation when they are prepared for it in advance. Similarly, social media companies could ensure that their users are better primed against harmful narratives surrounding diet culture and eating disorders by preemptively challenging those narratives. This could involve working with NEDA and other healthcare experts to create informational graphics, short videos, or easy-to-read Q&A resources, which could then be incorporated into the feeds of high-risk users as standalone posts. Platforms could also create an eating disorder resource center for users, similar to the Voting Information Center that appeared on Facebook feeds around the 2020 elections. Instead of simply referring users to a phone number where they could seek further information, platforms could compile useful resources that users can click through to educate themselves.

Comprehensive changes to algorithmic design, data controls, and user controls can make social media a safer space for all users. But at the end of the day, it is important to recognize that this ties back to a very human problem: a culture of fat-shaming and dieting built on the scientifically unsound idea that skinnier is superior. All of this will have to be accompanied by a larger movement for body positivity, in education, media, and beyond, grounded in the understanding that all bodies are good and worthy of love.


Alphabet and Meta are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.

“Social Media Surveillance” by Khahn Tran is licensed under CC BY 4.0
