How to avoid past edtech pitfalls as we begin using AI to scale impact in education

October 21, 2025

In a way, we’ve been down this road before.
Digital revolution? The internet, social media, email, and personal computers have been with us for over a generation. How has that turned out? Their legacy is a mixed bag, we’d say.
Scaling edtech to improve education? Governments and organizations have attempted to scale technology to improve education (edtech) for decades. Time and again we’ve heard edtech presented as a surefire way to close education gaps in low- and middle-income countries (LMICs). Our own back-of-the-envelope evaluation, however, suggests that sustained impact from scaling edtech in LMICs is low. That’s partly about scaling being difficult in general and partly about edtech not fulfilling its promises.
Artificial intelligence, both reactive AI and generative AI (including large language models), may differ in kind, degree, and impact from prior digital revolutions, but we can still anticipate some implications. This blog sets the context and offers some lessons from the recent past.
Two categories of AI in scaling education impact
One category is scaling solutions that center AI. AI is now being integrated into education innovations such as children’s toys with AI inside them, adaptive learning apps like Duolingo, and student assessment software for teachers. While this category has the potential to bring new, even exponential, benefits, there are also serious concerns, based both on broader worries about AI and on what has been learned from past attempts to scale edtech.
A second category is using AI in the scaling process itself, where teams designing, testing, adapting, and scaling education innovations might use AI to save time, offload tedious tasks, draw on a stimulating AI thought-partner, and translate materials into other languages or personalize them for particular users.
Regardless of the category, our last several years of research suggest that scaling in education is more than a technical process. It’s also relational and occasionally improvisational. It requires moral decisions about trade-offs, understanding how an innovation operates differently in different places for different people, and continuous adaptation based on local evidence and changing circumstances.
Potential pitfalls of AI for scaling in education
Given these complexities, we share four potential pitfalls of over-relying on AI in scaling in education:
- There’s an assumption that edtech innovations quickly, cheaply, and almost magically solve complex education problems (the “leapfrogging” belief). In truth, digital innovations are tools used by humans, and their impact hinges on whether and how they’re used, what they contain, and whether there’s sufficient capacity for effective use.
- Offloading some (or much) of the cognitive work of scaling to machines can deskill humans and diminish their capacity to do the required intellectual and moral work in the future, or to engage in scaling efforts that lie outside the central tendencies of the bell curve of life (such as outliers, nonlinear progress, and alternative perspectives).
- Relying on AI means that realities that cannot easily be captured or summarized as digital information are lost. Happy accidents, hard-to-express values and dreams, indigenous knowledge, creativity, perseverance, and social learning likely won’t make it into an AI scaling future.
- Contextualizing an innovation—digital or not—to fit the contours, practices, equity considerations, and history of people in a location is nuanced. There’s a popular but incorrect presumption that a standardized digital device or practice fits everyone’s needs in the same way or is inherently adaptive. This promise of AI-based innovations risks falsely persuading decisionmakers and implementers that no or little contextualization for the specific location and population is required.
Lessons from scaling edtech can help address the pitfalls
Millions Learning’s decade of research on scaling the impact of innovations in education offers some lessons for scaling AI or using AI for scaling education improvement:
1. Decisionmakers shouldn’t let the strong pressure to adopt AI-centered innovations obscure crucial questions about scalability, sustainability, and equity.
It’s essential to carefully evaluate the motivation, viability, and sustainability behind innovations under consideration. Right now, there’s strong demand to adopt AI coming from constituencies, aggressive tech company marketing, retail politics, and pressure to emulate higher-income countries. It’s hard to swim against this current. But AI may not be the most strategic choice for expanding learning opportunities to all. For many LMIC locations (including rural areas with limited digital connectivity, electricity, or digital literacy), the notion of scaling a high-tech innovation for widespread impact may be overly optimistic. Even if it can be scaled, can it be maintained and upgraded over the coming decade? (How many once-promising digital innovations for schools now sit dormant on unused laptops or in closets?) Further, there are equity concerns in terms of who will benefit from scaling AI-enabled innovations and who will be left behind.
It may be that limited budgets and human resources for education improvement are better spent scaling empirically proven, analog solutions (such as establishing community-school partnerships, aligning government priorities, and increasing the social capital of teachers) or investing in less “shiny” but eminently useful digital innovations for record-keeping and planning.
2. Simpler and more flexible innovations tend to scale more easily and more successfully.
The more complex an innovation, the more expensive and difficult it is to scale. This is because it’s costly to put something new and complicated into classrooms and communities, and harder for users to master it, spread it, and adapt it for their own needs and contexts. How will AI-based innovations avoid the ignoble fate of many recent edtech innovations: failing to fit the culture of target locations, hardware falling into disrepair or going unused when tech problems arise, effective training remaining elusive, and little evidence of positive impact on student learning? Simple WhatsApp chat groups to facilitate teacher peer-exchange might work, but as Robert Palmer said to me recently, what will it take to get teachers to go to a chatbot and feel afterwards that it was worth it? And how will we ensure that the information inside the AI stream is good for teachers, policymakers, and children, not bad for them?
3. Rigorous evidence and research demonstrating the results of AI in education are sorely needed, and fast.
Our previous research on edtech found very little evidence useful for gauging the value of scaling specific edtech innovations for student learning improvements. Promotional materials, research conducted by companies’ own paid researchers, and peer-reviewed academic research that takes years to complete are not actionable for decisionmakers wishing to ascertain the cost-effectiveness of scaling a digital innovation quickly. Until rapid, careful, accessible evidence is available, governments and school leaders may do well to remain skeptical.
What’s the best way forward?
It may be that the real value of AI in scaling lies not in scaling AI-based innovations but in using AI to help education experts design, adapt, scale, and study non-AI innovations.
There’s promise for researchers, decisionmakers, funders, and scaling practitioners to use AI to improve their work. AI could assist in identifying education needs in specific locations (provided the information it delivers is trustworthy and equitable). Contextualizing externally developed innovations for scaling in new locations could perhaps be done better and more efficiently (and therefore more cheaply) by an AI that runs thousands of scaling scenarios in minutes to reveal the best strategies for a given situation. And AI can likely comb through large datasets for use in planning innovation rollouts or evaluating scaling impact across geographies. These scaling support practices might be conducted faster, more accurately, and less painfully with AI than by hand. But these AI supports cannot and should not replace the human judgment, social relationships, and experience-based intuition central to all scaling processes.
Perhaps—and this is still a big if, given that there’s not yet any systematic evidence that AI’s outputs are better than analog or traditional digital education practices—the best use of AI is to support the existing work of adhering to tried-and-true scaling principles, engaging collaborative coalitions of committed stakeholders, and supporting educators and communities to embrace innovation and work hard to improve their systems for all children.
Used with care and concern, AI could offer unique opportunities to accelerate progress toward making quality education accessible for all. Or it could be the same old story: promises unrealized and little lasting impact on education. Let’s not blow it.