
Commentary

Why mental health apps need to take privacy more seriously

Eugenie Park, Research Intern, Governance Studies, Center for Technology Innovation, The Brookings Institution
Darrell M. West, Senior Fellow, Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

November 30, 2023


  • As diagnosed mental health conditions become increasingly prevalent, the use of mental health applications has been on the rise, and some of these apps have brought in hundreds of millions of dollars in revenue.
  • These apps make mental health services more convenient, but they also generate massive amounts of sensitive personal data.
  • Mental health apps often lack adequate privacy protections; some have scraped user data to develop AI bots or disclosed it to other companies for advertising purposes.
  • Mental health apps must ensure their privacy policies are comprehensible to the average user, and they must be held to privacy standards similar to those applied to traditional health providers.
A Numii system by AIO, modules that collect data to improve worker health, is displayed at "CES Unveiled" during the 2019 CES in Las Vegas, Nevada, U.S., January 6, 2019. Credit: REUTERS/Steve Marcus

Over the past few years, the use of mental health apps has been on the rise, with a reported 54.6% growth between 2019 and 2021. This growth is likely correlated with the increased prevalence of diagnosed mental health conditions during the COVID-19 pandemic, along with the implementation of social distancing measures that made traditional in-person psychotherapeutic services less accessible.

Among apps that are broadly intended to improve mental health, several distinct types have emerged. The first comprises apps that guide the user through practices intended to relax one’s mental state, such as meditation or deep breathing. Examples include Calm, which brought in an estimated $355 million in revenue in 2022, and Headspace, which brought in an estimated $235 million. The second category, which includes apps like BetterHelp, consists of platforms that connect people with licensed therapists and facilitate their treatment (in the case of BetterHelp, clients and therapists use the app to have asynchronous text messaging sessions). Finally, a more recent category includes apps that employ AI tools to emulate mental health professionals. Examples are Elomia, Wysa, and Woebot, which take the form of automated chatbots that respond to patient comments.

Although these apps undoubtedly make mental health services more convenient, they also generate massive amounts of sensitive data and have therefore raised serious concerns over patient privacy. In May, Mozilla released a report analyzing the privacy policies of 32 mental health apps. Of those, 22 were marked with a “privacy not included” warning label, meaning they exhibited two or more of the following problems: problematic data use, unclear and/or unmanageable user control over data, a suspect track record of data protection, and failure to meet Mozilla’s Minimum Security Standards. Privacy concerns about these apps have also started to make their way to policymakers. In March 2023, the Federal Trade Commission (FTC) filed a complaint against BetterHelp for disclosing customers’ emails, IP addresses, and intake health questionnaire responses to Meta, Snapchat, Criteo, and Pinterest. This data was then used for advertising purposes, despite BetterHelp’s assurances that user data would remain confidential and be used only for a limited set of purposes related to the functioning of its services.

In addition to being used for advertising purposes, data generated by these apps are also scraped for insights to help improve the apps themselves. Talkspace, a platform through which individuals can text licensed therapists, “routinely reviewed and mined” the private conversations of users for business insights, as well as to develop AI bots to incorporate into its services. It is not clear exactly what the company does with the material it reviews or how it derives those insights.

Particularly sensitive data

There are several reasons why mental health data is especially sensitive. First, many of these companies use “intake questionnaires” to enroll individuals into their services. These questionnaires can include identifiers such as gender identity, sexual orientation, and details about the potential client’s history with their mental health. If this information were shared with advertisers, as it was when BetterHelp shared data with Meta, users may find extremely intimate details about their lives reflected in the advertisements they receive. This is especially concerning at a time when aspects of identity (such as gender and sexuality) are becoming increasingly politicized and, if placed in the hands of employers or other parties, could affect an individual’s quality of life. Additionally, mental health apps are unique in that data brokers can make inferences about an individual’s mental state simply by knowing that they use one of these apps, which makes the volume and accessibility of this sensitive information especially significant. Even when this data is de-identified, anonymization can easily be reversed when the data is integrated with other datasets, which is one of many reasons for comprehensive federal privacy legislation with mandated controls on data collection.
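The mechanics of that reversal are straightforward. The sketch below is a minimal, purely illustrative example in Python; all datasets, column names, and values are hypothetical and are not drawn from any actual app. It shows how a supposedly de-identified export can be re-identified by joining it with a separate, publicly available dataset on quasi-identifiers such as ZIP code, birth year, and gender.

```python
# Minimal sketch (hypothetical data) of re-identification via dataset linkage.
import pandas as pd

# An "anonymized" export from a hypothetical mental health app: no names,
# but quasi-identifiers remain alongside sensitive fields.
app_records = pd.DataFrame({
    "zip": ["20036", "60614", "94110"],
    "birth_year": [1991, 1985, 1978],
    "gender": ["F", "M", "F"],
    "intake_summary": ["anxiety, panic attacks", "depression", "PTSD"],
})

# A separate public dataset (e.g., a voter file or data-broker list) that
# links the same quasi-identifiers to real identities.
public_records = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["20036", "60614", "94110"],
    "birth_year": [1991, 1985, 1978],
    "gender": ["F", "M", "F"],
})

# Joining on quasi-identifiers re-attaches identities to the supposedly
# de-identified sensitive data.
reidentified = app_records.merge(public_records, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "intake_summary"]])
```

In practice, a small set of quasi-identifiers is often enough to single out a large share of individuals, which is why removing names alone provides little protection.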

What to do?

So, what is to be done about these concerns? First, mental health app companies must commit to ensuring that their privacy policies are comprehensible to the average user. A recent study found that, of 27 mental health apps examined, 24 had privacy policies that required a college-level education or higher to understand. Even when policies are written in plain language, they are often buried behind additional links or within long blocks of text, which makes them inaccessible to many people. Given the personal nature of the data these apps collect, and the intimacy users develop with them, it is of paramount importance that people understand from the start the conditions under which they are revealing this information and meaningfully consent to them.
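As an illustration of how such readability thresholds are typically estimated (the cited study’s exact methodology is not specified here), the snippet below applies the widely used Flesch-Kincaid grade-level formula, via the textstat library, to a sample passage of policy-style text. The sample text and the college-level cutoff are assumptions for illustration only.

```python
# Minimal sketch: estimating the reading grade level of a privacy-policy
# excerpt with the Flesch-Kincaid formula. The excerpt below is invented
# for illustration; a real policy would be loaded from a file or webpage.
import textstat

policy_excerpt = (
    "We may disclose aggregated or de-identified information derived from "
    "your use of the services to third parties, including advertising "
    "partners, for analytics, personalization, and related purposes."
)

grade = textstat.flesch_kincaid_grade(policy_excerpt)
print(f"Estimated reading grade level: {grade:.1f}")

# Grade 13 or higher roughly corresponds to college-level reading,
# the kind of threshold referenced in the study discussed above.
if grade >= 13:
    print("This passage likely requires college-level reading ability.")
```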

Second, policies need to be put in place that hold these digital health apps to privacy standards similar to those governing traditional health providers. Currently, this is not the case, which is a significant oversight: the information shared on these platforms is, in many instances, likely to carry the same gravity as the mental or physical health problems patients share with in-person providers.

Third, a larger conversation should be had about the implications of letting digital platforms into the most personal aspects of our lives. Having that data breached or sold, with the possibility of it being connected to a specific individual, can have serious consequences in a society that still stigmatizes people who seek help for their mental health. Although the benefits of these technologies are clear in terms of accessibility, users must stay cognizant of the fact that private technology companies, not licensed clinical facilities, are facilitating the services they use, and that these companies have a unique ability to surveil mental health and other data at massive scale in service of their commercial interests.

Digitization is creeping into every facet of lived experience, and that poses profound risks in the field of mental health. Today, personal pain and anguish can be exploited for commercial purposes, and without a national data privacy standard and guardrails on these companies, the most intimate parts of patients’ lives will remain out of their control and up for sale.


Acknowledgements and disclosures

Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.