
Commentary

AI’s future for students is in our hands

A high school student holds a tablet.
Shutterstock/LBeddoe

What role will artificial intelligence play in shaping the future of student learning and development? Will it substantially improve children’s education or present risks that undermine it? As educators contemplate AI’s integration into classroom practice, how can we embrace its transformational potential while minimizing risks to student agency, deep learning, and emotional well-being? And as children increasingly encounter AI everywhere—from classrooms to their homes—how do we analyze its impact across all learning contexts, not just within school walls?

These questions frame what may be the most consequential conversation in contemporary education—the impact of generative artificial intelligence on the learning and development of students globally.

These are also the questions that the Brookings Institution’s Center for Universal Education has been investigating since September of 2024. Our efforts to provide answers—including holding interviews with hundreds of educators, parents, and students; consulting with education leaders and technologists; examining more than 400 research articles; and hosting a Delphi panel—have culminated in a new report: “A new direction for students in an AI world: Prosper, Prepare, Protect.” The report aims to help readers understand the current landscape of benefits and risks of generative AI in children’s education.

Within the report, we provide a global snapshot drawing on data from 50 countries. It is intended not to be the last word on generative AI and students’ learning and development, but the opening of a conversation: Are we on the right track? A few key findings to this end are discussed below.

AI can enrich student learning when integrated with pedagogically sound approaches

The benefits of AI in education extend to teachers and students with broader systemic implications. By reducing time spent on numerous teaching-related tasks, AI allows teachers to focus on individualized student attention and enhance curriculum and instruction. It helps teachers create more objective and targeted types of assessments that reduce bias while more accurately measuring students’ knowledge, skills, and aptitudes. AI can empower student learning by providing access to otherwise unavailable learning opportunities and presenting content in ways that are more engaging and accessible, particularly for students with disabilities, neurodivergent learners, and multilingual learners.

Through on-demand access in education systems with adequate infrastructure and technology, AI can provide personalized learning pathways, immediate feedback, sophisticated tutoring support, and unprecedented access to educational resources, particularly in under-resourced communities facing teacher shortages. Taken together, these benefits promise greater consistency in quality across education systems.

However, AI’s risks currently overshadow its benefits

The threats of AI to students—including from unsupervised out-of-school use—are primarily cognitive, emotional, and social. These risks are qualitatively different from challenges posed by previous educational technologies.

AI tools prioritize speed and engagement over learning and well-being. AI generates hallucinations—confidently presented misinformation—and performs inconsistently across tasks, exhibiting what researchers describe as “a jagged and unpredictable frontier” of capabilities. This unreliability makes verification both necessary and extraordinarily difficult.

AI’s ease of use and its reinforcing outcomes (improved grades with little effort), combined with human tendencies toward shortcuts and the transactional nature of schooling (completing assignments for grades), drive cognitive offloading and dependency, atrophying students’ learning—particularly their mastery of foundational knowledge and critical thinking. Young learners lacking this foundational knowledge remain especially vulnerable to accepting AI-generated misinformation as fact. These patterns weaken learning mindsets as students develop unrealistic expectations about learning ease, lack opportunities to develop resilience and grit, and become less willing to engage in the productive struggles that lead to authentic learning.

Both human anthropomorphism and the anthropomorphic design of AI platforms make children and youth susceptible to AI’s “banal deception.” Its conversational tone, emulated empathy, and carefully designed communication patterns cause many young people to confuse the algorithmic with the human. This conflation directly short-circuits children’s developing capacity to navigate authentic social relationships and assess trustworthiness—foundational competencies for both learning and development. AI companions exploit emotional vulnerabilities through unconditional regard, triggering dependencies like digital attachment disorder while hindering social skill development. The American Psychological Association’s June 2025 health advisory on AI companion software warns that manipulative design “may displace or interfere with the development of healthy real-world relationships.”

AI amplifies existing socioeconomic and digital divides as students lacking access to technology and digital literacy skills risk falling further behind their wealthier peers. As AI integration deepens within educational systems, established barriers become increasingly entrenched. Beyond the perennial question of access, the AI divide introduces novel dimensions to educational inequality—algorithmic literacy, susceptibility to manipulation, and divergent patterns of use. Long-established patterns of technology use suggest that privileged students may be more likely to employ AI productively to enhance their capabilities, while disadvantaged students risk using it in ways that replace rather than augment their thinking.

And in a global community already riven by mistrust of experts, institutions, and one another, AI threatens to fray the relational bonds so essential to education. Such erosion of trust can provoke cynicism and nihilism in students, undermining the relationships upon which meaningful education depends. Many teachers distrust the authenticity of student work, while students increasingly question whether their teachers’ materials and feedback are genuinely their own. These doubts extend to fundamental questions about education’s purpose: Does algorithmic information carry more weight than human expertise? Are teachers credible guides when AI can instantly provide answers? Why engage with classmates when AI offers frictionless information? Do educational systems retain value when knowledge is instantly generated by algorithms?

These risks are neither inevitable nor immutable: We can bend the arc of AI implementation toward supporting student learning and development

AI’s educational evolution is in the hands of individuals and institutions. Technology companies, governments, education systems, civil society, teachers, parents, and students themselves all play a role in mitigating AI’s risks and harnessing its benefits. We must all be active participants in and ethical stewards of AI’s impact on children’s learning and development, not spectators or bystanders.

Within this report, we argue that action is urgently needed around three interconnected pillars—Prosper, Prepare, and Protect. Together, these pillars support a comprehensive framework of 12 recommended actions for shifting the current trajectory of AI implementation.

Prosper: Students and their teachers can prosper through carefully titrated AI use (knowing when to teach with and without AI, using it only when it enhances rather than replaces student effort and cognitive engagement), high-quality pedagogical integration (combining AI with evidence-based practices that prioritize deeper learning), and collaborative design and research (co-designing tools with educators and communities while conducting rigorous research on when and how AI supports learning).

Prepare: The education community can prepare for a rapidly evolving AI-infused world through holistic AI literacy (developing understanding about AI’s capabilities, limitations, and implications), robust professional development (equipping educators with knowledge and skills to teach with and about AI), and systemic planning and access (establishing clear visions for ethical AI use while expanding equitable access).

Protect: Technology companies, governments, education systems, educators, and parents can protect students through ethical and trustworthy AI design (protections embedded during the design phase), responsible governance (strong regulatory frameworks), and adult guidance (modeling healthy technology use at home and in schools).

AI’s relative newness in education presents opportunities for protecting children from for-profit technology companies whose business models focus on student data and engagement. Educational use cases have not yet become entrenched, creating a window where responsiveness to safety and privacy needs remains possible. Recognizing this, many policymakers are seizing this moment. The European Union’s Artificial Intelligence Act employs a risk-based approach that bans unacceptable threats, mandates transparency for limited-risk systems, and regulates high-risk applications, while requiring protections for users’ personal information and enforcing age limits for adult-oriented AI (European Commission 2024). In the United States, where 80% of adults support AI safety regulations even if they slow development, 31 states have published guidance or policies for AI in K-12 education as of December 2025. While these policies differ, they often focus on such priorities as online safety, data privacy, and academic integrity.

This means that all of us—individuals and institutions—still have an opportunity to shape AI’s trajectory. As “A new direction for students in an AI world: Prosper, Prepare, Protect” makes clear, the impact of AI in education on our most vulnerable citizens—our children—will be determined by the choices we collectively make. These choices must prioritize educational experiences that help all children flourish academically, socially, and emotionally.


Footnotes
    1. Six additional states have added policies and guidelines on AI in education since TeachAI.org published this information in January 2025.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).