A version of this op-ed was originally published on Real Clear Markets on October 23, 2019.
Last week the Royal Swedish Academy of Sciences announced the winners of the Nobel prize in economics, known formally as the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. The three winners were Abhijit Banerjee and Esther Duflo of M.I.T. and Michael Kremer of Harvard. Their distinctive contribution was to use experimental methods to learn about the effectiveness of small-scale policy interventions aimed at improving the lives of some of the world’s poorest citizens. The Nobel prize appropriately recognizes both the winners’ use of a powerful research technique and their application of that technique to some of the world’s most pressing policy problems.
The particular experimental method applied by the three winners is called a randomized controlled trial, or RCT. The crucial ingredient that distinguishes randomized trials from other research methods is the random assignment of treatments to the people, places, or other objects of study involved in a research project. In a typical observational study, analysts do not assign a treatment to the people whose behaviors or other characteristics are the topic of interest. Instead, the analyst collects information about a sample of people and attempts to draw conclusions about the effects of naturally occurring environmental or policy differences that are believed to affect the behaviors or outcomes of sample members.
In the simplest kind of RCT, a sample of people or villages is enrolled in a study. The analyst randomly assigns some of the enrolled sample to a special treatment, for example, eligibility for free mosquito nets. The remainder of the sample is enrolled in a control or “null treatment” group. People or villages in the control group are not eligible to receive the tested treatment. To determine whether and how much the free mosquito nets affected the well-being of people in the special treatment group, the analyst collects follow-up information about outcomes in the two groups and then estimates the difference in outcomes between the two groups. In the example just described, the analyst might want to determine the effect of the offer of free mosquito nets on the incidence of mosquito-borne illness in the year or two after the free nets are first offered.
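To make the arithmetic of that simple design concrete, here is a minimal sketch in Python of the difference-in-means calculation. The village count, incidence figures, and the assumed 15-case effect of the free-net offer are invented for illustration; they are not drawn from any actual trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example (all numbers invented for illustration):
# 200 villages, half randomly assigned an offer of free mosquito nets.
n = 200
treated = rng.permutation(np.repeat([1, 0], n // 2))

# Simulated outcome: annual cases of mosquito-borne illness per 1,000 residents,
# assuming the free-net offer lowers incidence by roughly 15 cases on average.
incidence = rng.normal(60, 10, size=n) - 15 * treated + rng.normal(0, 5, size=n)

# Because assignment was random, the difference in mean outcomes between the
# treatment and control groups estimates the effect of the free-net offer.
effect = incidence[treated == 1].mean() - incidence[treated == 0].mean()
print(f"Estimated effect of the free-net offer: {effect:.1f} cases per 1,000")
```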
In a typical observational study without random assignment, the researcher might find nearby villages that differ with respect to their use of mosquito nets. To determine the effect of mosquito nets, the analyst then estimates the impact of, say, a 10-percentage-point difference in the use of mosquito nets in high-utilization villages compared with low-utilization villages. For many purposes, this type of estimate can give useful information. However, it does not necessarily tell us whether higher utilization of nets caused the difference in mosquito-borne disease. Villages with residents who conscientiously use mosquito nets may also be meticulous in following other hygienic practices. In this case, the difference in the incidence of disease between high-utilization and low-utilization villages might be partly attributable to other differences between the two kinds of village. Random assignment of villages to the free-net treatment reduces the risk of drawing an erroneous conclusion about whether the increased use of nets has caused the reduction in disease. In addition, it can reduce our uncertainty about the precise impact of wider use of mosquito nets. Equally important, it can tell us the effect of making free nets available on the utilization of nets among villagers.
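A short simulation can also illustrate why the observational comparison may mislead. In the hypothetical sketch below, an unobserved "hygiene" factor both raises net use and lowers disease, so the naive comparison of high- and low-utilization villages overstates the effect of nets, while the randomized comparison recovers the assumed effect. Again, every number is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # hypothetical villages; all figures are illustrative assumptions

# Unobserved "hygiene" differs across villages and independently lowers disease.
hygiene = rng.normal(0, 1, size=n)

# Observational setting: villages with better hygiene also use nets more often.
nets_obs = (hygiene + rng.normal(0, 1, size=n)) > 0
# Randomized setting: net use is assigned by coin flip, unrelated to hygiene.
nets_rct = rng.integers(0, 2, size=n).astype(bool)

def disease(nets):
    # Assumed true effect of nets is -10 cases; hygiene independently removes 8 more.
    return 60 - 10 * nets - 8 * hygiene + rng.normal(0, 5, size=n)

for label, nets in [("observational", nets_obs), ("randomized", nets_rct)]:
    y = disease(nets)
    est = y[nets].mean() - y[~nets].mean()
    print(f"{label:>13} difference in means: {est:.1f} (assumed true effect: -10)")
```

The observational contrast mixes the effect of nets with the effect of hygiene; randomization breaks that link, which is the point of the design.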
Since the mid-1990s, Banerjee, Duflo, and Kremer have designed a number of RCTs aimed at discovering practical, cost-effective policies that can improve the health, boost the schooling, or increase the productivity of poor villagers in developing countries. In many cases the trials have a more sophisticated design than the simple RCT described above. In all cases, however, the experiments are designed to shed light on the behavioral choices of some of the world’s poorest citizens in order to learn how policies can be designed to improve educational, health, and other outcomes. Almost a decade before winning the Nobel prize, Esther Duflo offered a clear and appealing defense of her methods in a brief TED talk that is still worth seeing.
Before and after they were awarded the Nobel prize, Duflo and her colleagues were criticized for the supposed weaknesses of relying so heavily on RCTs in their research. A common complaint is that by relying on this research method they are compelled to focus on small-bore problems, such as teacher absence from the classroom or the correct pricing of mosquito nets. It would be better, some say, for good economists to address broader policy questions that affect a bigger slice of the population in developing countries. Indeed, on the day after the Royal Swedish Academy announced this year’s award, Hoover Institution economist David Henderson sniffed that by focusing on small-bore problems, the Nobelists were aiming too low.
The correct response to Henderson’s objection is that the answers to small-bore questions can nonetheless have important consequences for human well-being. In her TED talk, Duflo mentions a low-cost experimental intervention in Indian villages that boosted the child immunization rate from 6 percent to 38 percent. The fact that this result was obtained in a randomized trial means that the finding was treated with unusual respect by most social scientists, who found the estimates believable. The experimental design and results were also easy for most nonscientists to understand and believe. As a result, policymakers may have been more inclined to act on the findings from the RCT. This may in fact be the most important advantage of randomized trials from the point of view of policymaking.
Scientists unfamiliar with empirical economics may be under the impression that RCTs were unknown in economics before the 1990s, when Banerjee, Duflo, and Kremer started using this tool in their research program. In fact, the first large-scale RCT in economics was the New Jersey guaranteed income experiment, which began in the late 1960s. The New Jersey experiment was soon followed by three more guaranteed income experiments as well as other large-scale RCTs that tested housing subsidy plans, various health insurance schemes, prison rehabilitation programs, time-of-day electricity pricing, and worker training. By the mid-1990s, more than $1.5 billion (in today’s dollars) had been spent on economic experiments.
The results of some of the early large-scale experiments are still used in policy analysis. For example, the findings from the health insurance experiment remain useful for predicting the impact of higher or lower insurance co-payment rates on consumers’ demand for health services. If Congress ever seriously considers a universal basic income, such as the one proposed by presidential candidate Andrew Yang, budget analysts will almost certainly look to the guaranteed income experiments for help in predicting the effects of different basic income schemes on the work behavior of low-income Americans.
Critics of RCTs are correct to caution us about the limitations of this crucial evaluation tool. When a random-assignment experiment is infeasible or is likely to produce statistical results that are too imprecise or incomplete to yield useful answers to our policy questions, we should turn to a different and better research strategy (if one is available). The research findings of Banerjee, Duflo, and Kremer demonstrate, however, that there are many instances where the advantages of random-assignment experiments vastly outweigh their supposed disadvantages.