From data to learning: the role of social accountability in education systems

Photo caption: A high school student tests a "Quipus" laptop during a presentation ceremony in El Alto, Bolivia, July 31, 2014. Some 15,000 public school students received laptops assembled by the Bolivian state-run company Quipus during the ceremony. According to local media, El Alto schools will receive some 140,000 laptops this year as part of a government initiative to provide final-year high school students with new laptops and to incorporate new technology into educational institutions. (REUTERS/David Mercado)

The call for more and better data is a popular refrain in development circles. The United Nations' Sustainable Development Goals, for example, call for nothing short of a data revolution, and innovations like results-based financing and adaptive management demand an ever-increasing body of data to work from. But as we proceed down the path of data growth and reform, there are important questions that must be asked.

The rallying cry for a data revolution stems from an awareness that the information required to support monitoring, accountability, and decisionmaking for the delivery of critical services is sorely lacking. This is as true in education as in other sectors: crucial data are often missing or difficult to use, complicating our ability to answer even the most basic questions about what children are learning.

Coupled with this focus on developing robust data systems are parallel efforts to make these data open and accessible to the public. Citizens and local providers are seen as the missing link—or “the shorter route”—for strengthening the quality of education services. As a result, the education space has seen many initiatives to provide local actors with school-level data on everything from teacher attendance and the number of textbooks per student, to test score results and school finances. This follows from the idea that, with the right information, citizens and frontline providers can tackle local issues even if political systems fall short at the national level.

For proponents, timely information is assumed to reduce corruption; facilitate evidence-based decisionmaking at all levels; enable monitoring to hold schools accountable for students’ learning; improve administrator, teacher, and parent effort; and make reforms and resources more equitable and efficient.

Underlying these assumptions are two actors: 1) the “superbureaucrat,” who has the time, the ability, and the incentive to analyze bulk datasets, optimally allocate and target scarce resources, and align policies and programs to evidence-informed, long-term strategies; and 2) the “supercitizen,” who conducts her own analysis of the data, is attuned to the deficiencies of existing practices, and has the means and wherewithal to hold service providers accountable to standards of delivery.

But, clearly, neither of these actors exists in the real world. Thus, while it is hard to disagree with demands for more timely and better data, it is important to ask for whom and to what end data are being collected to avoid simply adding to bytes of unused information.

At the Center for Universal Education at Brookings, we have recently embarked on a research project to explore this issue, looking first at how citizens respond to school-level information that is made open and available. To do so, we reviewed a vast literature on what are broadly defined as bottom-up efforts to improve service delivery, increase citizen engagement, and promote transparency. We also assessed and categorized 25 evaluations that looked at the impact of such efforts on school quality, ranging from passive information campaigns to more involved participatory governance efforts. The idea is not just to identify what works but also to clarify why these reforms work (or do not).

We find that while there have been a small number of successes (for instance, Reinikka and Svensson’s oft-cited findings that a newspaper campaign in Uganda substantially reduced the siphoning of school grants by corrupt officials), it is clear that the route between making information available and improving education outcomes is not as short as imagined—only 15 out of 31 “arms” of interventions that we analyzed demonstrated positive impacts on student learning or intermediate outcomes like teacher attendance and enrollment.

But does the elusiveness of success mean that we should throw the baby out with the bathwater?

Not necessarily. More likely, these failures signal a lack of understanding (or even an outright misunderstanding) of what these sorts of initiatives can do, or should do, and suggest that a more deliberate design for stakeholder engagement could be a useful complement to other efforts to use data to reform education systems.

As practitioners increasingly build transparency and social accountability processes into their strategy goals, there are several design aspects to consider: 1) clarifying a realistic end game (i.e., what success looks like); 2) identifying primary change agents (i.e., whether it’s parents, teachers, or administrators that are meant to modify their behavior); 3) determining what information can be used to effect change; and 4) building the capacity of users to understand that data and take action based on it.

In sum, a good design targeting a specific audience, measurable outcomes, and behavior change will require resolving some difficult questions:

1. How does one ensure data quality, make data easy to understand and use, and guard against simplistic interpretations and comparisons?

2. Do different types of school-level information (for example, data on inputs versus outputs, or data compared across schools versus benchmarked against national standards) spur different responses?

3. Does citizen response support collective action or simply serve the pursuit of individual benefits? Does response differ systematically depending on the nature of the education system (i.e., private versus public versus mixed systems)?

4. What is the role of “infomediaries”—civil society organizations, researchers, and the media—in translating data into actionable information?

5. What is the potential for adverse effects, such as burdening the marginalized, empowering the empowered, and promoting a culture of high-stakes testing?

6. What motivates or limits government response to citizen pressure?

In the next few weeks, we will share some of the initial findings of our work through a series of blog posts that explore these unanswered questions. We hope to offer a better understanding of the pathways of change and the incentives for citizen action that arise in response to more and better data. Stay tuned.