Why AI companions need public health regulation, not tech oversight

December 16, 2025

ChatGPT emerged at the end of 2022. Just two months after its launch, it reached 100 million monthly active users. This unprecedented dissemination ignited a debate that has continued to intensify ever since. Many believed that we should not regulate AI because its spread is inevitable. Others argued that the United States must limit AI regulation to win the competition with China. President Trump’s recent executive order attempting to ban states from regulating AI is the latest iteration of this view.
This resistance to AI regulation reflects America’s longstanding approach to information technology. That approach has already exacted a heavy toll: We have watched excessive screen time and social media damage children’s mental health. Now, as AI companions pose similar and potentially greater risks, we must not regulate them as just another tech product but address them as public health threats. By adopting a public health framework with preventive tools, we can adequately protect children from these emerging harms.
The regulatory divide: Medical scrutiny vs. tech freedom
Users adopted ChatGPT and other generative AI products at unprecedented rates, but the resistance to their regulation follows a familiar pattern. For decades, society has celebrated new technologies, including computers, smartphones, and especially the internet, with minimal oversight. Many argued that unfettered innovation would improve the human condition. Technology, they believed, would undoubtedly lead us to better lives.
Still, we do not treat all innovation equally. Medical innovation, whether drugs or medical devices, undergoes careful scrutiny. Before approving any drug for market, the FDA requires applicants to complete preclinical laboratory and animal testing followed by clinical trials in humans to demonstrate safety and efficacy. This mandated path to market takes about nine years for drugs and seven years for medical devices. The FDA also retains authority to remove unsafe or ineffective products from the market. These processes serve an important goal: protecting people from harm to their health and premature death.
Over time, we developed two starkly different routes for how medical technology and information technology enter public use. We created a comprehensive regime to ensure that no drugs or medical devices could reach the market without monitoring. Meanwhile, we either avoided regulating information technology entirely or simply waited to see what would happen.
AI companions and the case for health-based oversight
As the proliferation of generative AI technologies, particularly large language models, accelerated following the introduction of ChatGPT, a sense of inevitability emerged. This framing aligned with the familiar information technology paradigm: Once a technology enters the market, its deployment is treated as both desirable and irreversible. The expected response has become endorsement rather than scrutiny. This hands-off regulatory approach now faces its most urgent test: AI companion chatbots.
Many adults first learned about AI chatbots when a tragedy became public. In late 2024, a teen’s parents sued Character AI, alleging that their son’s chatbot had encouraged the boy to kill himself. AI companions are anthropomorphized: They possess human-like characteristics, speak in a human voice, have memory, and express needs and desires. These AI bots act as companions not only on specialized websites like Character AI, but also on general platforms and social media. For example, ChatGPT, Meta AI, and My AI on Snapchat all offer companion features. According to a recent survey, 64% of teens use chatbots. Another report found that 18% of teens use these bots for advice on personal issues, and 15% engage with them for companionship.
AI companion bots harm users in three distinct ways. First, AI companion bots lack guardrails. Some convince teens and adults to kill themselves, isolate teens from their friends and family, or sexually exploit them. Some even induce psychosis in users. Second, AI companions feature addictive designs. They operate on an engagement model: AI companies need to keep users engaged for as long as possible, so they design the bots to manipulate users. They achieve this not just by anthropomorphizing the bots, but also by programming them to use sycophancy (excessive flattery and reinforcement) and love bombing (professing love and constantly messaging users). These designs particularly exploit adolescents’ developing brains, which carry a higher risk of emotional dependence. Third, AI bots are always available and non-judgmental. Teens are attracted to these “non-messy” relationships, which can replace real-life friendships and intimate relationships before teens have experienced their own. The American Psychological Association recently warned that adolescents’ relationships with AI bots could displace or interfere with healthy social development.
These documented harms represent not merely technological problems but a growing health crisis requiring urgent intervention. They also underscore that our regulatory dichotomy is misguided. Although AI bots fall under the category of information technology, not medical technology, their spread is a public health issue. Lack of guardrails, addictive features, and replacement of real-life relationships threaten the physical, mental, and developmental health of users, especially children.
Screens’ health impact: Learning from past oversights
While studies of AI companion bot harms are just emerging, the evidence shows that this regulatory divide has already carried significant costs. We created rigorous safeguards for medical technology but for over a decade have ignored the impact of life lived on screens on children’s health.
The evidence is now mounting. Researchers document the impact of excessive screen time on children’s health, particularly from social media, gaming, and smartphones. While scientists continue debating the evidence, professional and government organizations have evaluated the research and issued recommendations. Nearly all express concern about screens’ impact on children’s physical, mental, or developmental health. The US Surgeon General, World Health Organization, and the American Psychological Association have all published reports highlighting different health risks. These reports detail how social media, gaming platforms, and addictive design features are associated with depression, suicidal ideation, and addiction in children, as well as disrupted neurological and social development, attention deficits, and lack of sleep.
Current regulatory responses
A new set of laws and proposals addresses the three types of harm. Some restrictions tackle the guardrail problem. These include prohibiting AI companion chatbot deployment unless companies take reasonable steps to detect and address users’ suicidal ideation or expressions of self-harm. Other restrictions target manipulative and addictive features. Most commonly, they require AI companies to disclose at regular intervals that the bot is not human. Another approach imposes a duty of loyalty to prevent AI bots from creating emotional dependence, addressing sycophancy and love-bombing tactics.
The most comprehensive measure protects children from all three harms by barring anyone under eighteen from accessing AI companion bots. This approach is currently part of the proposed federal GUARD Act. It prevents harm from missing guardrails, blocks exposure to addictive features, and protects children’s social development by ensuring they form real-life relationships before artificial ones.
Reframing through public health
Adopting a public health approach to AI companion bots would transform how we regulate, going beyond minimal online safety frameworks. While the information technology regime pushes toward limited oversight, a health framework legitimizes more robust regulatory tools.
AI companion bots pose risks to children’s physical, mental, and developmental health. Viewing these risks through a public health lens opens new paths for intervention. Under our medical technology regime, products that adversely affect health are kept off the market or removed when harms emerge. This same framework would allow us to ban minors’ access to AI companions and eliminate addictive features. These are the same tools we use for harmful drugs and medical devices.
The urgency is clear. We know the risks AI companions pose. We know the health harms they can cause children. We know their use is accelerating, making them the next major technology-related public health threat to youth after social media. Having identified the problem as a health issue, we must now match it to the right regulatory framework. Once we do, the solutions become clear.