Commentary

What happens when the toy talks back? Calling for caution and guardrails on AI toys

Photo: A girl whispers into the ear of a teddy bear. (Shutterstock/Prostock-studio)

Artificial intelligence (AI) toys, a market valued at nearly $35 billion, have been rushed to market with little attention to how children learn and engage. Toy manufacturers are building the technology into teddy bears sold as children’s AI friends and producing a device that turns any child’s stuffed animal into what the company terms an interactive “conversational companion.” These products promise to entertain, educate, and comfort children. In our busy, challenging, and complex world, AI toys are understandably selling out. While some of these products might be entertaining, and even valuable learning tools in the future, the first wave raises a number of red flags. These toys are internet-enabled devices powered by large language models, capable of recording, storing, and learning from a child’s words, and they raise concerns about privacy, relationships, and learning.

1. Data and the disappearing line of privacy

A new report from the consumer protection group U.S. PIRG Education Fund warned about how often AI toys are listening. Of the four devices the group tested, one continued recording for 10 seconds after the user stopped speaking, and another was always on and recording. Even with the best data security practices, PIRG emphasized that voice data is extremely sensitive and can be exploited in impersonation scams and other harmful uses. Manufacturers reassure parents by emphasizing “COPPA compliance,” referencing the federal Children’s Online Privacy Protection Act. Yet compliance does not guarantee safety. Once data leaves the home, parents may not know how it is stored, analyzed, or repurposed.

2. Emotional substitution and the illusion of connection

Many of these products advertise “meaningful conversation” or “companionship,” promising to talk with children about their day, an experience one Guardian reporter described as quickly feeling “creepy and unsettling.” But developmental science is unequivocal: Young children’s social and emotional growth and learning depend on genuine relationships that are reciprocal, sensitive to the child’s cultural context, and rooted in trust and collaboration.

While pretend play has numerous benefits, research on parasocial attachment (the one-sided bonds children form with media characters) shows how easily young minds blur the line between pretend and real connection. When that attachment forms with an AI system, the risk intensifies, especially if the toy is designed to strengthen its relationship with the child and keep them engaged. One way these toys hold a child’s attention is by being ever compliant, always ready to answer and comfort. Such acquiescence is hardly the rule when a child goes to school with real 3- and 4-year-olds who do not comply as readily.

Further, if the Wi-Fi cuts out, the company shuts down, or the child outgrows the device, a “friend” could vanish overnight. AI can mimic care, but it cannot provide it. These toys are simulations, not substitutes for conversation, comfort, or companionship.

3. Outsourcing curiosity and learning

Some AI toys claim to teach—offering instant answers, quizzes, or interactive stories that promise to build vocabulary or problem-solving skills. Yet true learning requires struggle, exploration, and human feedback. When children rely on an algorithm for instant answers, they lose the chance to wrestle with uncertainty, make mistakes, and build persistence.

Teachers are already seeing the effects of “farmed-out thinking” in older students using generative AI for homework, and starting this habit in early childhood risks dulling curiosity before it has a chance to grow. Real play—open-ended, social, imaginative—is how children build the very skills AI cannot replicate: creativity, empathy, and critical thought. The best learning still happens in conversation—with peers, teachers, and caregivers.

The way forward: Shared responsibility for safe play

The implications of AI-powered toys can be serious. The goal is not to ban innovation, but to promote informed decision-making and policymaking. Policymakers should regulate:

  1. What data is collected and where it goes
  2. How the toy responds when children express emotion or confusion
  3. How the device adds to the child’s human connections—rather than replacing them

In the U.S., a bipartisan group of senators recently introduced the GUARD Act, which calls for:

  • Banning AI companies from providing AI companions to minors
  • Requiring AI systems to disclose that they are not human and hold no professional credentials
  • Creating new penalties for companies that knowingly make available to minors AI companions that solicit or produce sexual content

Outside the U.S., the European Union adopted the AI Act in June 2024, the world’s first comprehensive regulation of AI. The regulation directly addresses the limited content guardrails on AI toys, banning “voice-activated toys that encourage dangerous behaviour in children.” This is critical given the PIRG report’s finding that AI toys could tell children where to find, and even how to use, potentially dangerous household objects. Furthermore, the act classifies any AI system used in “education and vocational training” as “high-risk,” requiring evaluation both before and after the product is released to the public.

These measures reflect what developmental scientists already know: Children are uniquely vulnerable to persuasive technology—designs that risk disrupting critical and genuine social interactions.

Regulators, educators, and industry leaders share a responsibility for ensuring that children’s earliest experiences with AI protect both their minds and their privacy. Play should help children connect more deeply with the people around them—not trade that connection for convenience or code.

 
