New research on public private partnerships in education in Liberia and Pakistan

If you want to use public money to fund private provision in education, make sure you get the details right. That is the message from the extensive literature on vouchers. Now, two new papers—a Center for Global Development (CGD) report focused on Liberia and a World Bank report on Pakistan—dramatically increase what we know about such publicly funded, privately provided schemes in education, how they can be structured, and what we can learn from them. These public-private partnership (PPP) schemes allow entrepreneurs to tailor their schools to the local environment, with considerable social benefit. But the “best” international school operators may be spending a lot of money for nothing: schemes that rely on local entrepreneurs do as well, if not better.

Mechanisms that allow for “fair-play” in the market for education are scarce, and evaluations of such mechanisms are scarcer still. Below, I’ll discuss why these studies are exceptional, how they examined fair play, and what the results were.

Rigorous data collection and analysis

The excellent data collection allows us to be less concerned with data issues and instead focus on the findings. As one example, in Liberia, Romero et al. tracked students to ensure that schools could not “game” the evaluation by sending weaker children home:

We took great care to avoid differential attrition: Enumerators conducting student assessments participated in extra training on tracking and its importance, and dedicated generous time to tracking. Students were tracked to their homes and tested there when not available at school. [A]ttrition from our original sample is balanced between treatment and control (and is below 4 percent overall).

Finding children who have left a school is like finding a needle in a haystack. In a country where only 42 percent of people have access to a cell phone, it’s heroism.

Similarly, in Pakistan, Barrera-Osorio et al. surveyed households and children, yielding exceptionally rich data with which to estimate the demand for schooling. Across 199 villages, they surveyed up to 42 households each, for a sample of 6,000 households. In the majority of villages, they surveyed every household: again, a heroic task.

Second, both teams analyzed their data beyond comparisons of (randomly allocated) “treatment” and “control” groups. Romero et al. strove to understand the role of teacher turnover in Liberia, while Barrera-Osorio et al. estimated a structural model of demand in Pakistan to evaluate whether an all-knowing, benevolent state could do better than local entrepreneurs in maximizing social value.

What the studies evaluated


In Liberia, the management of public schools was handed over to private providers. An open and competitive bidding process led to the selection of seven organizations, of which six passed financial due diligence.

Three important points emerged from the examination of this process.

First, one group—called Bridge International Academies—was contracted not through the competitive tender, but through a “separate agreement.”

Second, the schools remain public, and government spending is around $50 per student per year. Contractors receive additional funding of $50 per student (Bridge has a “separate agreement”). Most contractors appear to be international school chains, although one (Liyonet/YMCA) is a local entrepreneur.

Third, 185 schools were randomized into treatment and control groups, and treatment schools were then assigned to the contractors. That assignment to specific contractors was a complex, non-random process.


Pakistan asks entrepreneurs to set up schools in underserved areas of the province of Sindh. Local entrepreneurs applying to the program must submit written assents from the parents of at least 75 children, demonstrate a site for the school, and identify potential teachers with at least eight years of schooling.

Selected entrepreneurs are paid half the per-student cost in public schools (roughly $4 per month) to cover the setup and operational costs of running schools (see the paper for gender differences in the subsidies). The program also paid “in-kind” subsidies in terms of books and materials. Ultimately, the $4 per month and the in-kind subsidies add up to the operational per-student cost in public schools.

The study randomized 199 villages into treatment and control; control villages did not enter the scheme. In treatment villages, the responsibility of local entrepreneurs included building the school, finding teachers, and running the school.

The findings

In order to understand the findings, we first have to understand how test scores are measured. Tests can be either “criterion referenced” (did the test taker clear a certain threshold, such as passing the Grade IV exam?) or “norm referenced,” which compares students with each other using measures like percentiles. For instance, SAT scores in the U.S. are typically expressed both as a score (say, 700) and as a percentile, which tells the student what fraction of all children taking the test did worse. In the academic literature, instead of percentiles, we use “standard deviations,” which convey similar information. For instance (assuming test scores are normally distributed), an increase of 2 standard deviations is like moving from the 50th to roughly the 98th percentile.
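The link between standard deviations and percentiles can be made concrete with a few lines of code. This is a generic illustration that assumes normally distributed scores; it is not a calculation from either paper:

```python
import math

def percentile_from_z(z: float) -> float:
    """Percentile of a score that sits z standard deviations above the
    mean, assuming scores follow a standard normal distribution."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(percentile_from_z(0))  # 50.0: the median student
print(percentile_from_z(2))  # ~97.7: where a 2-SD gain moves the median student
```

Under normality, a 2 standard deviation gain takes a median student to about the 98th percentile; other distributions would give somewhat different numbers.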


In Liberia, after one year of the program, children in “control” schools, which did not receive any intervention, gained (roughly) 0.3 standard deviations. Among the schools managed by private providers, test scores were, on average, 0.18 standard deviations higher. Relative to the annual gain in control schools, this implies that treatment schools increased learning by 60 percent of a control-school year. (Caveat: This computation requires several assumptions.)
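The pro-rating above is just a ratio of the two rounded figures quoted in the text:

```python
control_gain_per_year = 0.30  # SD gained in control schools over one year
treatment_effect = 0.18       # additional SD in privately managed schools

# Express the treatment effect as a fraction of a control-school year of learning
extra_learning = treatment_effect / control_gain_per_year
print(f"{extra_learning:.0%}")  # 60%
```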

However, recall that there were multiple contractors, and each of these could have produced a different effect. Although the schools could not be randomly allocated among the contractors, careful modelling suggests substantial variation in performance across them. Four contractors generated effects of 0.27 standard deviations (the local entrepreneur was one of them), two had effects of 0.15 standard deviations, and two had zero effects. The zero effects are likely due to non-compliance.

These improvements came with a heavy price tag. The authors note, “Rather than $50 per pupil, contractors’ ex ante budgets ranged from $57 for Youth Movement for Collective Action to $1,050 for Bridge International Academies (later revised to $663).” Since Bridge’s learning gain of 0.27 standard deviations is roughly what control schools impart in a year, another way to look at it is that the Bridge program spends roughly 20 years’ worth of regular public schooling costs (or 13 years using the “revised” estimate) to impart an additional year of learning!
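The “years of public schooling” framing simply divides each Bridge budget figure from the quote above by the $50 annual public spend per student:

```python
public_spend_per_year = 50    # annual public spending per student, USD
bridge_budget_ex_ante = 1050  # Bridge's ex ante per-pupil budget, USD
bridge_budget_revised = 663   # later revised estimate, USD

years_ex_ante = bridge_budget_ex_ante / public_spend_per_year
years_revised = bridge_budget_revised / public_spend_per_year
print(round(years_ex_ante), round(years_revised))  # 21 13
```

So the ex ante budget is about 21 years of public spending (“roughly 20” in the text), and the revised figure is about 13 years.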

Finally, about half of the overall increase in learning can be attributed to changes in the composition of teachers. This is important: Bridge International Academies removed 50 percent of the existing teachers and brought in new ones, so that 71 percent of its teachers were new. Since the existing teachers are public employees who cannot be fired, the group appears to have displaced poorly performing teachers to other public schools.


In Pakistan, after a little less than two years of schooling, the program increased test scores by 0.63 standard deviations among children in program villages. But because not all children actually enrolled in the new schools, the “treatment on the treated” effect, that is, the effect on those children who actually took up the program, was 2 standard deviations.
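The gap between the village-level (intent-to-treat) effect and the treatment-on-the-treated effect implies a take-up rate, since ToT = ITT / take-up. This is a back-of-the-envelope calculation using the rounded numbers in the text, not a figure reported by the paper:

```python
itt = 0.63  # SD gain averaged over all children in program villages
tot = 2.0   # SD gain among children who actually enrolled

# Implied fraction of children who took up the program
implied_takeup = itt / tot
print(round(implied_takeup, 3))  # 0.315, i.e. roughly a third of children
```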

Entrepreneurs receive half the regular per-child schooling costs for each child who enrolls; after adding in the “in-kind” subsidies, the costs are equivalent to those of educating a child in a public school. Further, for the cost estimates, only the children who enroll matter, so the fact that not all children took up the program does not affect the cost-effectiveness. The authors also point out that they compare the establishment, operational, and administrative overhead costs of program schools to only the operational costs of government schools.

The local entrepreneurs came within 10 percent of the best possible (social) outcome that an all-knowing, benevolent government could achieve. To get at this, the authors estimate the demand for different school attributes. They then ask whether enrollment could have been even higher if the schools had chosen a different set of attributes from those observed. For instance, maybe female enrollment really increases when there is a computer. Knowing this, would an all-knowing government have made very different investments? The answer is no; the local entrepreneurs pretty much knew what to do, and came very close to doing the best possible for society. Their conclusion says it all:

It is remarkable and reassuring that program-school operators have proven so successful in selecting the most-essential inputs for their schools. The results suggest that, when the government provides adequate support, enormous potential exists for local actors to find appropriate solutions to their challenges.
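The counterfactual exercise behind that conclusion can be sketched in miniature: estimate how much each school attribute raises enrollment, then check whether any other bundle of attributes would have done better than what the entrepreneur actually chose. The attribute names and coefficients below are purely hypothetical illustrations, not estimates from the paper, and the logit demand form is a simplification of the authors' structural model:

```python
import math
from itertools import combinations

# Hypothetical taste parameters: how much each attribute raises the
# utility of enrolling (illustrative numbers only).
betas = {"toilet": 0.4, "computer": 0.1, "female_teacher": 0.5}

def enroll_share(bundle):
    """Logit enrollment share for a school offering a bundle of attributes."""
    v = sum(betas[a] for a in bundle)
    return math.exp(v) / (1 + math.exp(v))

# Every bundle an all-knowing planner could have chosen
bundles = [frozenset(c) for r in range(len(betas) + 1)
           for c in combinations(betas, r)]

chosen = frozenset({"toilet", "female_teacher"})  # what the entrepreneur picked
best = max(bundles, key=enroll_share)             # the planner's optimum

ratio = enroll_share(chosen) / enroll_share(best)
print(f"entrepreneur reaches {ratio:.0%} of the best achievable enrollment")
```

In this toy version, as in the paper's richer model, the question is how close the observed choices come to the enrollment-maximizing bundle; the paper's answer is within 10 percent.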


We can quibble. Yes, the programs were different and the contexts were different. Yes, one ran for two years and the other for one. But for me, there are two key takeaways.

First, allowing schools to tailor their inputs to the local environment is critical, but this flexibility is often overlooked in centralized programs that standardize investments in every school. Both papers show the very different inputs and strategies that the entrepreneurs used. Second, programs that decentralize provision to local entrepreneurs appear to achieve more at lower costs than programs that attract international providers. “Keep calm and trust the locals” seems like a good strategy to follow.

One more thing for donors: To keep things fair, please don’t fund groups that sidestep the procurement process. A funding pledge to that effect would be great. As Romero et al. document, the organization that did not participate in the open bidding in Liberia—Bridge International Academies, with its “separate agreement”—spent the most per student, removed the most teachers, and received a preferred allocation of schools as well as a lower class-size cap. That does not make for a level playing field.