Brookings Global Task Force on AI in Education

The Brookings Global Task Force on AI in Education aims to ensure generative AI can be harnessed to transform education for the better by unlocking every young person’s full potential.

The Center for Universal Education at Brookings launched its Global Task Force on AI in Education in September 2024 during the U.N. General Assembly meetings in New York. The task force is made up of education leaders and artificial intelligence experts from across government, multilateral institutions, civil society, teacher organizations, philanthropy, business, and grassroots student and family networks. In January 2026, the task force’s work culminates in the launch of the report “A New Direction for Students in an AI World: Prosper, Prepare, Protect.”

Generative AI in education holds incredible potential to transform student learning and development by, among other things, unleashing creativity, enhancing individualized learning, improving access to educational resources, reaching the most marginalized students, and alleviating administrative burdens on teachers. However, every technology brings with it potential risks, often unanticipated ones. If generative AI is not used well in education—from early childhood to school and the transition to work—it could increase student disengagement, reduce critical thinking, expand inequities, and undermine learner resilience and agency.

The goal of the task force is to help ensure generative AI can be harnessed to transform education for the better by unlocking every young person’s full potential. To do this, the task force conducted a “premortem” on AI and students to anticipate the potential negative consequences of generative AI in education, mitigate risks, and optimize benefits. History shows this is a wise course of action: We need look no further than the current debate over social media’s impacts on the well-being of young people.

Premortem analysis on generative AI in education

A premortem is a forward-looking thought experiment conducted with a team. It starts with a simple future-oriented premise: “It’s 2035. Our innovation, product, or project has failed. Why?” Team members brainstorm potential causes of failure, discuss how and why these issues might arise, and group the causes into categories. From there, they work backward to the present, prioritizing the most critical risks and identifying actions they can take now to begin addressing these challenges.

For the task force’s convenings and research, the premortem is especially valuable: It helps systematically identify how generative AI might fail to meet its promise in education, and it surfaces the strategies that educators, parents, policymakers, the private sector, and young people themselves can adopt now to prevent failures and harness the benefits.

The task force focused on answering these two questions:

  • What are the potential risks that generative AI poses to the education of children and youth?
  • Assuming these potential risks, what can we begin to do now to prevent them while maximizing the potential benefits of AI?

The task force found that in communities with access to AI, the technology can either enrich or diminish learning depending on how it is deployed. Well-designed tools used within a pedagogically sound approach can benefit students, while overreliance on those same tools can put children and youth’s fundamental learning capacity at risk—affecting their capacity to learn, their social and emotional well-being, their trusting relationships with teachers and peers, and their safety and privacy. Indiscriminate AI implementation also risks exacerbating social divides.

Ultimately, the task force found that at this point in AI’s trajectory, the risks of using AI in education overshadow its benefits. This is largely because the risks differ in nature from the benefits—they undermine children’s foundational development—and may prevent the benefits from being realized. While the potential risks and the damage AI has already caused are daunting, they are fixable: Clear actions can be taken to help AI enrich, not diminish, students’ learning and development.

  • AI-enriched learning. Well-designed AI tools and platforms can offer students a number of learning benefits if deployed as a part of an overall, pedagogically sound approach.
  • AI-diminished learning. Overreliance on AI tools and platforms can put children and youth’s fundamental learning capacity at risk. These risks can impact students’ capacity to learn, their social and emotional well-being, their trusting relationships with teachers and peers, and their safety and privacy.

The task force report presents 12 recommendations for multiple stakeholders organized around three foundational pillars that together form a comprehensive framework for action: Prosper, Prepare, and Protect.

Task force timeline

The task force convened over 18 months, conducting research and consultations, and sharing insights for feedback along the way.

  • January – October 2025. Focus groups, interviews, and consultations with over 500 students, teachers, parents, education and education technology experts, education and policy leaders, technology developers, funders, and researchers; review of over 400 articles and studies
  • June – September 2025. In-depth analysis and development of key findings
  • October – December 2025. Draft final report
  • January 14, 2026. Task force final report launch