The human brain is primed for social interaction. In the first few years of life, thousands of daily interactions shape lifelong social and cognitive systems that prepare us to live and work with other humans, and to engage in symbolic thinking through systems like numeracy and language. Decades of developmental cognitive and neuroscience research have made it clear that young humans cannot develop optimally without daily, real-time interaction with their caregivers, educators, and peers.
While there are ample warnings about how AI might stifle older students’ and workers’ critical thinking or allow our adult writing skills to atrophy, AI’s impact on infants and toddlers could be far more profound. A group of esteemed scientists has just released a statement explaining why this often-overlooked part of the AI conversation should be taken seriously.
Interactive AI bots and toys are rapidly entering the lives of our youngest
In the last few months, major brands have signaled a rapid acceleration toward engaging very young humans with this very new technology. OpenAI announced a deal with Mattel to bring “age-appropriate” AI children’s toys to market. Can “Nanny-AI” be far behind? And xAI announced the development of Baby Grok, a chatbot fashioned for 6-year-olds. AI-driven toddler “friends” for even younger children could well be next. The science of human development offers several cautionary lessons for anyone treading into this market.
Babies are born to bond with other humans—in our human complexity defined by warmth, caring, emotional alignment, and variability
Babies bond and interact with other humans holistically. In the caregiver-infant dyad—whether that’s mom, dad, or a childcare educator—infants develop in the cradle of complex, unique duets, linking touch, eye contact, words and coos, and beneath it all, neurons and oxytocin receptors firing and developing in concert.
AI interactions are humanoid, not human, and we have reason to believe that baby brains may not be able to tell the difference. All generative AI agents today are based on large language models—and while emotional content is embedded in these models, these agents lack many of the qualities that humans bring to their interactions. Claude doesn’t have touch receptors on its skin. Gemini doesn’t have oxytocin receptors throughout its body. While a model’s pitch variability may mimic ours—something we know that babies’ medial prefrontal cortices tune in to when conveyed by their parents—it’s faking it. ChatGPT can’t—yet—modulate its arousal to facilitate optimal engagement for an infant. Claude doesn’t smell like mommy. An interaction with a good fake could very well trick a baby. We simply do not know how children’s brains will be shaped by these language-rich, contingent, but emotionally hollow exchanges.
Timing is everything for young children’s development
Temporal contingency—the timing of back-and-forth conversations—is the throughline of a figurative dance between caregivers and babies. Grandma smiles, baby smiles. Dad plays peek-a-boo, baby gasps in rhythm. Back and forth, over and over, every day, gently evolving over time as children start to take the lead with their partners. This social contingency is how babies learn to navigate the entire world—how they learn language, how they develop emotionally, and how they grow up to engage every day as friends and citizens. Timing is everything. But it’s not the mechanical or perfect timing observed in the bot toys about to hit the market. While the human brain contains neurons that respond to precise, mechanical rhythms, the social brain responds to the messy, naturalistic, “just-right” timing of real human interactions. Babies need both predictability and variability to effectively learn from the social and environmental rhythms around them. These early interactions prepare young children to respond both to parents who will dote and modify their behavior to meet the child’s desires, and to older siblings who will not bend to their whims. Will children who are entrained to an optimized, robotic bot ever choose to leave their AI friend for a real one? And if they prefer their AI friend, will they be prepared to interact with the less predictable 2-year-olds in childcare?
Babies are active agents in their early human relationships. We respond and scaffold to guide them.
Human infants are sponges, not passive blobs. Years of research show that even newborns are active, innately curious social partners. They are born seeking connection with us that will form the basis of lifelong social relationships. By the third trimester, they recognize their mothers’ voices. Parenting itself is a sensitive period in human development; when we parent, our brains change right along with theirs. During these years, the cradle of development rests in real-time interactions, when both partners show up present, engaged, and curious. We know that existing forms of digital media can disrupt these critical interactions. Will AI be even more disruptive to the young brain than other kinds of media exposure, even more likely to interfere with the critical parent-child bond forming in those early years?
Currently, AI bots are well behaved. If we express irritation with them, they demur. They are endlessly patient. They show no sign of fatigue after hours of interaction (unlike their human counterparts). Young children need to regularly encounter difficult emotions, both their own and other people’s, to learn how to regulate their feelings. And their early emotional regulation is predictive of later regulation—a hallmark of how children will behave with others and learn in school. We simply do not know how perfect partners will change human brains and human interactions. But anything that we change at this early foundational period can have cascading consequences that unfold over a lifetime.
Our babies shouldn’t be guinea pigs for toy companies
Scientific knowledge generally advances because of carefully designed research.
But one of the most powerful examples of the importance of relationships for early development comes from a scientific study of a cruel natural experiment—a period in Romanian history where tens of thousands of young children were socially neglected in orphanages. Our colleagues developed a research program to empirically compare the cognitive and socioemotional development of children who were placed into foster care and those who were left in institutions, demonstrating to the world what developmental scientists have known for nearly a century: Social nurture—not just food, water, and shelter—is essential to human development.
We are on the brink of a massive social experiment, and we cannot put our youngest children at risk. We simply do not know how engagement with human-mimicking AI agents will shape the developing human brain. In that spirit, we and our colleagues have issued an urgent global warning about the potential of AI to disrupt the fundamental, innate social processes that enable us to grow up as well adjusted, thinking and creative humans.
We are not opposed to AI or technology in general. With the right regulation, AI can be used thoughtfully—by adults—to improve the lives of young children. AI tools have the potential to aid in early diagnosis of developmental delays or disorders, or in the development of personalized interventions. But leveraging these new tools to inform and empower parents and practitioners is fundamentally different from allowing our children to make friends with Baby Grok or the infant-directed equivalent.
Technology will always outpace research. The scientists who issued this warning have led many of the critical studies on early human interaction and its impact on brain development, language, and social skills. As a field, we are now committed to developing new research projects to inform parents, practitioners, and policymakers about the impacts of AI. Sadly, by the time our initial waves of research on baby-AI interaction are complete, the infants and toddlers of today will have sailed through several sensitive periods of development. The marketplace progresses full steam ahead, but the science is clear—the risks of disrupting human-to-human interaction are simply too great to proceed without serious guardrails.
We will leave the specific policy proposals to those who understand the policy mechanisms needed, though we stand ready and willing to share our deep knowledge of the mechanisms of optimal early human development. For now, caveat emptor. Let all of us beware.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
Commentary
Policy guardrails needed as babies around the world begin to interact with AI
September 19, 2025