This blog is the first in a series examining big research questions related to the scaling process in education.
When it comes to supporting innovations at large scale, governments play a central role. But nonstate actors, such as researchers or project implementers, are also essential. Often, they’re the ones who design, pilot, and promote the innovations—hoping one day to hand the initiative over to the government for collaborative, long-term adoption. As evidence: In the Center for Universal Education’s global catalog of nearly 3,000 education innovations, two-thirds of them were started in the nonprofit sector, while only 12 percent originated in government.
This means that nonstate education actors must become adept at presenting, proving, and pitching innovations to government. They need to learn how government decisionmakers identify and adopt innovations for scale, and the more the implementers and researchers know about the decisionmaking process, the more effective they can be.
Turns out, however, that there’s scant research on how government decisionmakers decide to support and scale an innovation—and even less in the field of education (most is focused on scaling in health care, agriculture, and poverty alleviation). We reviewed existing research across all four fields to look for consensus and divergence, as well as identify gaps to investigate in our own research.
What the research revealed is that governmental decisionmaking about scaling is neither linear nor purely rational. Instead, it’s a three-dimensional zigzag in which technical and financial features of an innovation interact with personal relationships, political incentives, and competing innovations. While there’s diversity across innovations and contexts, the literature suggests that, for government adoption to have a chance, the innovation and scaling strategy must be aligned with the decisionmaker’s needs. Additionally, to increase the probability of government taking up an innovation for large-scale implementation, it should respond to an urgent local problem, prove effective at different levels of scale, and be both cost-effective and politically appealing.
Respond to an urgent local need
Context matters. The essence of a demand-driven innovation is that it solves a pressing problem. If the decisionmaker doesn’t believe that it does, the innovation won’t be viewed favorably. This means that innovation implementers and researchers must identify and articulate what entrenched educational issue will be addressed by their innovation. Furthermore, one size does not fit all, and so the innovation should be tailored to the specifics of the setting. And, finally, in an environment of scarce financial and human resources, politicians prefer innovations aligned with the government’s own agenda.
Evidence matters, but how and how much?
Scientists often believe that data alone will convince policymakers to choose the right policies for scaling. However, experiences in agriculture, health, and poverty alleviation indicate that policymakers rely on statistical evidence only in part—additionally turning to their own intuitions, beliefs, trusted advisers, familiarity with the topic, and political realities. This is for several reasons. First, decisionmakers don't always have reliable data or the right kinds of data. Education management information systems (EMIS) in low- and middle-income countries vary from good to bad to nonexistent. Second, some data are more desirable than others, and policymakers need support in making use of data: One revealing assessment investigated whether investments in education data match data use. Custer et al. found that decisionmakers cited program evaluation data as the most desirable; however, decisionmakers also reported needing research assistance to interpret the data, communicate results to stakeholders, and craft evidence-based policies. And, third, data have known limitations: (1) The accuracy of test score data depends on how they're interpreted; (2) there are always factors that research cannot capture; (3) results from randomized control trials of a pilot may not generalize to implementation at scale; and (4) too much data can lead to decision paralysis.
Cost-effectiveness matters
Decisionmakers are constantly presented with new innovations that promise to improve school and student performance. Given limited education-sector resources, decisionmakers will want to know which innovations are cost-effective relative to other options. This is especially important now, given that two-thirds of low- and lower-middle-income countries have cut their education budgets since the COVID-19 pandemic began. Being convinced that a particular innovation's benefits outweigh its costs—or that it's more cost-effective at achieving intended outcomes than alternative options—will be a key factor. Sidestepping the finance question could prove fatal.
Our review found that government decisions on whether to adopt an innovation are only partly about the quality or potential of the innovation, and only partly about having data that demonstrate its effectiveness. They appear to be equally the result of aligning the innovation with the decisionmaker's own political needs, reputational desires, trusted advisers' opinions, and personal conversations with education experts. This suggests that nonstate innovators, implementers, and researchers must think broadly, deeply, and strategically as they scale their projects. It also suggests that more study is needed.
The Center for Universal Education (CUE) at Brookings has partnered with the Knowledge and Innovation Exchange (KIX), a joint endeavor of the Global Partnership for Education (GPE) and the International Development Research Centre (IDRC), to undertake research on the scaling process in education, including how decisionmakers identify and adopt innovations to scale, and the role that trade-offs and key scaling drivers play in the process. We look forward to sharing relevant knowledge and insights with the global education community as we learn them, beginning with this blog series on decisionmaking around scaling. In this inaugural post, we introduce the landscape; in future posts, we will discuss other scaling-related themes and findings.
This project is supported by the Global Partnership for Education Knowledge and Innovation Exchange (KIX), a joint partnership between the Global Partnership for Education (GPE) and the International Development Research Centre (IDRC). The views expressed herein do not necessarily represent those of GPE, IDRC, or its Board of Governors.
Brookings is committed to quality, independence, and impact in all of its work. Activities supported by its donors reflect this commitment and the analysis and recommendations are solely determined by the scholar.