Should you have an AI companion?
September 3, 2025

I was shocked to read that 52% of teenagers report regularly using AI companions. In a Common Sense Media survey, more than half of teenage respondents said they use such tools a few times a month or more, and 72% reported using an AI companion at some point during the past year. Teens are more likely than adults to adopt an AI companion, though research shows a growing number of adults are doing the same. One study found that 55% of those over the age of 50 have used AI software for a variety of purposes, with 14% looking for health-related information and 12% seeking social connection.
These levels of use raise important questions about how technology is transforming personal relationships, about the quality of the responses these tools provide, and about the risks that arise when human contact grows scarce. Are we witnessing the emergence of digital relationships and a world devoid of personal interactions? Or are such tools merely complementing human relationships, serving as one more outlet among many? Looking at the evidence that bears on these questions can help us understand the new world we are entering.
What are AI companions?
AI companions are digital friends with whom you can talk or text, express feelings, share experiences, or get feedback. Many individuals use them as interactive tools for companionship, medical advice, or personal therapy, or as a sounding board for difficult experiences. As their capabilities grow, these tools can offer insight, advice, opinions, or a measure of empathy that teenagers and older Americans, in particular, might otherwise be missing.
How do young people use them?
In the survey of American teenagers, respondents reported using AI companions as a general tool (46%), for conversation (18%), for emotional or mental health support (12%), for role-playing (12%), as a friend (9%), or for romance (8%). Respondents said they find such tools entertaining, appreciate the advice provided, enjoy their perpetual availability, or find them easier to talk to than a real person. About one-third (31%) say that conversations with AI companions are as satisfying as, or more satisfying than, those with a human being.
However, not all AI interactions have ended well. In one widely publicized case, a young man hanged himself after engaging with an AI tool. Seeking to understand what had happened, his father discovered conversations about suicide with ChatGPT on the teenager’s phone. Following their son’s death, the parents sued OpenAI for wrongful death.
One proposal that would help address underage use of AI companions is parental controls, which would offer some transparency into troubling interactions that could lead to tragic outcomes. OpenAI recently announced that parents can now monitor their children’s queries and disable certain features when a child is in “acute distress.” That is a first step toward giving families some control over therapeutic applications.
Elder care assistants
Most countries lack the social workers, health assistants, and home care nurses they need to aid patients. For that reason, AI companions, robots, and digital platforms are stepping into the breach and providing friendship or health services. A Harvard Business School study found that talking with a digital friend reduced people’s loneliness and helped them feel more connected to others. Another research project claimed that “seniors who used AI-powered wearable devices experienced a 25% decrease in hospital admissions related to adverse health events.”
At the same time, there are substantial doubts about the utility of this technology. Nearly half (46%) of older individuals say they have little or no trust in the information provided by digital tools. That number suggests serious reservations in the consumer marketplace for digital friends, at least among older individuals.
Therapy assistants
AI companions are also becoming more prevalent in therapy. An experimental study of patients suffering from depression or anxiety found reductions in symptoms with the use of AI chatbots. Participants reported satisfaction with the tools and high utilization rates. Another project found improvements in mental health, especially among those experiencing mild to moderate difficulties. Several patients thought the therapy assistants helped them deal with their mental and emotional problems.
Yet owing to quality, security, and privacy concerns, the state of Illinois passed legislation that limits AI usage in therapy. Mario Treto Jr., secretary of the state’s Department of Financial and Professional Regulation, explained his skepticism about health-related assistants: “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients.”
Texas Attorney General Ken Paxton is investigating these tools as well. Citing concerns that chatbots “impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services,” he is looking at several firms to see if they are violating his state’s fair trade and marketing rules.
How do AI companions perform?
Digital companions cannot replicate the experience of talking to another human. For example, some research has found that such tools tend to tell people what they want to hear. In addition, interactions with an AI chatbot are generally devoid of the bodily and facial cues central to human connection, which can make it difficult for people to interpret reactions. Some individuals say they find interactions with a digital device scary or depressing. One analysis of survey results found that “companionship-oriented chatbot usage is consistently associated with lower well-being.”
Yet advances in generative AI are making these tools more interactive and helpful for certain individuals. Digital companions are moving beyond static chatbots that respond to specific questions toward full-fledged conversationalists that are personal, spontaneous, and informative. Some tools are said to provide access to a wider range of information than is available from human assistants and to improve people’s overall productivity and sense of fulfillment.
Impact on human interactions
The $64,000 question with AI companions is to what extent they are replacing or supplementing human experiences. Very few people would want a world devoid of human companionship and interaction, one that relies exclusively on digital companions. Even if some individuals benefit from these tools, that kind of world smacks of a dystopian future that many would find repellent.
But if AI companions were combined with adequate levels of human interaction, the results could be beneficial. People would gain the virtues of low-cost digital companions while still engaging with other people and drawing on a wide range of information sources and emotional support.
Privacy and security risks
In a digitally integrated world, we need to mitigate the personal privacy and security risks that come with AI companions. In the survey of teenagers, 24% said they share personal information with AI companions, raising worrisome questions about confidentiality. What if AI tools were subject to privacy violations or cybersecurity hacks that released personal information to the broader public?
Even worse, some industry leaders have warned that conversations with chatbots, AI tools, or digital assistants can be subpoenaed as part of legal proceedings. Online discussions do not have the same legal protections as in-person meetings with certified doctors or therapists. Interactions that people thought were private could be used against them.
In a world of legal threats, periodic hacks of personal information, and malicious use of people’s private material, we need much stronger privacy and cybersecurity protections for online information to ensure that people’s digital friends don’t pose unacceptable risks. Policymakers should act before AI companions become even more prevalent.