Since the debut of ChatGPT and with the public’s growing familiarity with generative artificial intelligence (AI), the education community has been debating its promises and perils. Rather than wait a decade to conduct a postmortem on the failures and opportunities of AI, the Brookings Institution’s Center for Universal Education embarked on a yearlong global study—a premortem—to understand the risks that generative AI poses to students and what we can do now to prevent those risks, while maximizing AI’s potential benefits.
After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries; a close review of over 400 studies; and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits—that is, these risks undermine children’s foundational development—and may prevent the benefits from being realized.
It’s not too late to bend the arc on AI implementation
We find that AI has the potential to benefit or hinder students, depending on how it is used. We all have the agency, the capacity, and the imperative to help AI enrich, not diminish, students’ learning and development.
- AI-enriched learning. Well-designed AI tools and platforms can offer students a number of learning benefits if deployed as a part of an overall, pedagogically sound approach.
- AI-diminished learning. Overreliance on AI tools and platforms can put children’s and youth’s fundamental learning at risk. These risks extend to students’ capacity to learn, their social and emotional well-being, their trusting relationships with teachers and peers, and their safety and privacy.
To this end, we offer three pillars for action: Prosper, Prepare, and Protect. Under each pillar, we present actionable recommendations for governments, technology companies, education system leaders, families, and all others engaged with this issue. We urge all relevant actors to identify at least one recommendation to advance over the next three years.
Recommendations
- Shift educational experiences in school.
- Co-create educational AI tools with educators, students, parents, and communities.
- Use AI tools that teach, not tell.
- Conduct research on children’s learning and development in an AI world.
- Promote holistic AI literacy for students, teachers, parents, and education leaders.
- Prepare teachers to teach with and through AI.
- Provide a clear vision for ethical AI use that centers human agency.
- Employ innovative financing strategies to close the AI divide.
- Break the engagement addiction and design platforms centered on positive mental health for children and youth.
- Establish comprehensive regulatory frameworks for educational AI.
- Procure technology that protects students’ privacy, safety, and security.
- Support families in managing children’s AI use at home.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).