Building the Connection between Policy and Evidence: The Obama Evidence-based Initiatives

Jon Baron, Vice-President of Evidence-Based Policy, Laura and John Arnold Foundation
Ron Haskins, Senior Fellow Emeritus, Economic Studies

September 7, 2011


There is a growing belief in both the US and the UK that intervention programs aimed at domestic social problems can be greatly improved if policymakers and managers support programs shown by scientific evidence to produce impacts. Since his inauguration in 2009, President Barack Obama and his administration have developed and are now implementing the most extensive evidence-based initiatives in US history. The purpose of this paper is to trace the evolution of these initiatives and to examine both their promise and their problems.

Muddling through vs. rational policymaking
In 1971, Alice Rivlin published a seminal book on decision making entitled Systematic Thinking for Social Action.1 She identified four ‘propositions’ that can be taken as a reasonable summary of the basic elements of what is often referred to as rational decision making. They are:

  • Define the problem.
  • Figure out who would be helped by a specific program attacking the problem and by how much.
  • Systematically compare the benefits and costs of different possible programs.
  • Figure out how to produce more effective social programs.2

Rivlin believed that at the time she was writing, economists, statisticians, and other analysts had made good progress on most of the steps in this approach to rational decision making, but that little progress had been made in determining the benefits of particular programs.

A much more skeptical view of the potential for rational, evidence-based policymaking can be seen in the classic 1959 article by Charles Lindblom on making decisions by “muddling through.”3 Lindblom argued that no program administrator could actually follow the rational decision making model because the demands on knowledge required to compare all alternative programs are too large, the effects of most programs are not known with any confidence, and not enough time is usually available to perform elaborate analyses before a decision must be made. Thus the choice set faced by managers is limited to incremental adjustments in current policy and practice, and the most important factor in policy choice is usually reaching consensus on a particular alternative. Lindblom argued that this process of what he called “successive limited comparisons” among alternatives not radically different from the status quo – or more famously, “muddling through” – was both a better description of how policy actually is made and a more practical guide to action than the rational approach.

Our view is that the dichotomy between the rational decision making approach and the muddling through approach is a false one. Policymaking inevitably involves political constraints on choices as well as limitations on evidence and time. But that does not mean there is no evidence available, or that policymakers should ignore the evidence that does exist or fail to devote resources to obtain better evidence. Indeed, Rivlin argued that the case for “systematic analysis” was strong and had been well made, even by 1971, and that “hardly anyone explicitly favors a return to muddling through.”4 Rivlin also held that the key challenge is to recognize the limitations of analysis but to nonetheless employ a systematic approach whenever and wherever possible. Rivlin was especially forceful in calling for better evidence of program effects, perhaps the central feature of any systematic approach. Few would disagree that everyone from program managers to senior level policymakers could improve their decisions if they had reliable information about program impacts, or that developing programs with strong positive effects that can be widely replicated should be a fundamental objective in both policymaking and program evaluation.

Rivlin’s propositions today
Updated to 2011, the Rivlin view of rational policymaking is still central to improving policy decisions. Ironically, the proposition that now provides the strongest basis for expanding evidence-based policy is the one Rivlin thought weakest in 1971: the dramatic expansion of high-quality evidence on which programs work (and which do not). The most important contribution of social science to the public good is the use of scientific designs that allow definitive answers about whether specific intervention programs produce their intended impacts. Given this powerful tool, in a perfect world policymakers could follow a simple decision rule on program funding: if the program works, continue or even expand its funding; if it doesn’t work, reduce or end its funding or find ways to improve it.
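The simple decision rule described above can be made concrete in a few lines of code. This is purely an illustrative sketch: the program names, effect sizes, and significance flags below are invented for the example, not drawn from any actual evaluation.

```python
# Illustrative sketch of the funding decision rule: fund what works,
# cut or improve what doesn't. All data here is hypothetical.

def funding_decision(effect_size, significant):
    """Return a funding recommendation based on an evaluated impact."""
    if significant and effect_size > 0:
        return "continue or expand funding"
    elif significant and effect_size <= 0:
        return "reduce or end funding"
    else:
        # No reliable evidence of impact either way.
        return "improve the model and re-evaluate"

# Hypothetical evaluation results: (effect size, statistically significant?)
programs = {
    "Program A": (0.35, True),   # positive, significant impact
    "Program B": (0.02, False),  # no detectable impact
}

for name, (effect, sig) in programs.items():
    print(f"{name}: {funding_decision(effect, sig)}")
```

In practice, of course, the political factors discussed below mean no real funding process follows so mechanical a rule; the sketch simply shows how little logic the rule itself requires once credible impact estimates exist.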

Evidence from scientific designs is now available for a large and growing set of interventions in early childhood education, K-12 reading and math, treatment of families that abuse or neglect their children, preparation of high school students to enter the world of work, community-based programs for juvenile delinquents and their families, several program models that reduce teen pregnancy, ‘second chance’ programs for children who have dropped out of school, prison release programs, and many others.

To broaden the evidence-based approach and achieve greater impacts in attacking society’s social problems, government (and the private sector, especially foundations) can employ two approaches. First, as government provides money to establish new social programs, the money should be accompanied by a requirement that the specific programs implemented at the local level be supported by strong evidence from scientific evaluations. Indeed, government might even specify a set of evidence-based programs that can be funded in order to avoid conflicts over what constitutes strong evidence. As we will see, the Obama administration has pioneered methods of identifying evidence-based programs and of ensuring that only evidence-based programs are implemented with government dollars.

Of course, anyone who has watched policymakers in action knows that they will rarely allow evidence on program effectiveness to be the sole or even major factor driving the policy process. Politicians focus on costs, the needs and desires of their constituents, the position of their party leaders, public opinion, their own political philosophy, pressure from lobbyists, the position favored by people and groups that finance their campaigns, and a host of other factors in making decisions about how to vote on program proposals. Allowing an adequate range for all these factors, however, does not gainsay the possibility that in some circumstances evidence can have (and has had) a major impact on political decisions.

The second approach to employing evidence to improve social programs is to ensure that programs are implemented in a way that continuously generates reliable information about program impacts. One of the Achilles’ heels of social programs is diminishing effectiveness as program models are implemented in more locations. A leading example of this problem is Head Start in the US. Over the past four decades, numerous preschool programs have shown that they can have both immediate and lasting impacts on children’s learning and other behaviors.5 Yet a recent high-quality evaluation of Head Start, a program specifically designed to spread the benefits of preschool to a very large group of disadvantaged children (enrollment in 2010: 900,000), produced only modest impacts that were barely detectable at the end of first grade.6 To combat the problem of diminishing impacts as programs expand to new sites, program operators must be vigilant in following the program model, perhaps adapted in some ways to local conditions. The key to replicating effective program models is continuous generation of evidence on program effects on participants, with adjustments in implementation if a program is not achieving its expected effects. For this reason, enabling legislation should provide both a mandate for continuous evaluation and the funding to make it possible.
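The continuous-evaluation loop described above can be sketched in a very simplified form: compare each replication site’s measured impact against the benchmark from the original evaluation and flag sites that fall well short for implementation adjustments. The site names, benchmark effect size, and tolerance threshold below are all hypothetical.

```python
# Hypothetical sketch of monitoring program impacts across replication
# sites, flagging those that fall well below the original benchmark.

BENCHMARK_EFFECT = 0.30  # effect size from the original evaluation (invented)
TOLERANCE = 0.5          # flag sites achieving under 50% of the benchmark

def sites_needing_adjustment(site_effects, benchmark, tolerance):
    """Return the sites whose measured impact falls below tolerance * benchmark."""
    return [site for site, effect in site_effects.items()
            if effect < benchmark * tolerance]

# Hypothetical measured effect sizes at three replication sites.
site_effects = {
    "Site 1": 0.28,
    "Site 2": 0.10,
    "Site 3": 0.31,
}

print(sites_needing_adjustment(site_effects, BENCHMARK_EFFECT, TOLERANCE))
# → ['Site 2']
```

A real monitoring system would rest on far richer data and methods, but the sketch captures the core feedback loop the text calls for: measure, compare, and adjust implementation where impacts lag.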

1. Rivlin, A. (1971) ‘Systematic Thinking for Social Action.’ Washington, DC: Brookings.
2. Rivlin, pp. 6-8.
3. Lindblom, C.E. (1959) The Science of ‘Muddling Through’. ‘Public Administration Review’ 19(2): 79-88.
4. Rivlin, p. 3.
5. Ramey, C., Campbell, F., and Blair, C. (1998) Enhancing the Life Course for High-Risk Children: Results from the Abecedarian Project. In ‘Social Programs that Work.’ Ed., Jonathan Crane. New York: Russell Sage Foundation; Schweinhart, L.J. and others (2005) ‘Lifetime Effects: The High/Scope Perry Preschool Study through Age 40.’ Ypsilanti, MI: High/Scope Press; Reynolds, A.J. (2000) ‘Success in Early Intervention: The Chicago Child-Parent Centers.’ Lincoln, NE: University of Nebraska Press; Barnett, W.S. and others (2007) ‘Effects of Five State Pre-Kindergarten Programs on Early Learning.’ Rutgers University: National Institute for Early Education Research; Gormley, W.T. Jr., Phillips, D., and Gayer, T. (2008) Preschool Programs Can Boost School Readiness. ‘Science’ 320: 1723-1724.
6. Puma, M. and others (2010) ‘Head Start Impact Study: Final Report.’ Report prepared for the Office of Planning, Research and Evaluation Administration for Children and Families, US Department of Health and Human Services. Rockville, MD: Westat.