As Tom Kane recently wrote in this forum, we should be investing more in education research if we want to see more progress in education. We want education to embrace positive findings emerging from research. The same logic implies that education should move away from programs and policies when evidence shows they do not work. But the history of federal afterschool programs suggests that a program funded on its potential can continue to be funded through a kind of wishful thinking, in which evidence is viewed through rose-colored glasses. How much evidence, or what kind of evidence, is needed to offset wishful thinking is a question worth considering.
Afterschool programs, or out-of-school time programs, burst into view in the late 1990s. The federal government, flush with budget surpluses of hundreds of billions of dollars, began spending more on the 21st Century Community Learning Centers (CCLC) program. The program was created by the 1994 Improving America's Schools Act and had languished as an obscure provision to promote schools as community resources. It initially received no funding; Congress appropriated $40 million for it in 1998.
Spending exploded after the program pivoted to support afterschool programs. By 2002, the program's appropriation was $1 billion. Few federal programs grow from $40 million to $1 billion in a matter of years. The agency overseeing the program, the U.S. Department of Education, partnered with the Charles Stewart Mott Foundation to underwrite conferences and technical assistance for program providers, pumping millions more into the program.
In 1999, the Department of Education contracted with Mathematica Policy Research to evaluate the 21st Century program. The evaluation was both rigorous and representative. The elementary school part of the study was designed as an experiment; the middle school part drew a random sample of programs around the country, matching students participating in the program with students in neighboring schools (or the same school, in rural areas) who were not participating. The evaluation collected data on a wide range of outcomes, including grades, test scores, attendance, and behavior.
Ultimately, the evaluation reported on how the program affected outcomes. In a series of reports released between 2003 and 2005 (here, here, and here), the answers emerged: the program didn't affect student outcomes. Except for student behavior, which got worse. Nor were small samples the reason the findings were insignificant: the national evaluation included about 2,300 elementary school students and 4,400 middle school students. The results were insignificant because the estimates of program effects hovered around zero.
In the face of these results, one course of action would have been to at least reduce program spending, if not eliminate the program altogether. The Bush administration proposed a reduction of $400 million in the program budget, advocates rallied to the cause, Arnold Schwarzenegger got involved, and ultimately Congress left program spending unchanged. To this day, the program spends more than a billion dollars each year.
If the national evaluation was thought to be unreliable or errant, a sensible next step would have been to conduct another, possibly with a different focus or different features. That hasn't happened. Or perhaps the evaluation findings were dismissed because other research has shown that afterschool programs are effective. It hasn't. Echoing a 2006 review by Zief, Lauver, and Maynard, a 2015 review of dozens of studies published through 2014 concluded that "mean effects were small and non-significant for attendance and externalizing behaviors." (This is how researchers say the evidence shows that afterschool programs do not improve attendance or behavior.)
Two other pieces of evidence add to this picture. First, the U.S. Department of Education continues to collect and summarize the program's annual performance reports (each state reports on its programs to the Department). Its most recent summary noted that "nearly all of the performance targets for the 2009-2010 reporting period were not reached." Second, a recent federal study of supplemental services programs found no effects on academic outcomes. The study examined programs that schools failing to meet target levels of adequate yearly progress under No Child Left Behind are required to offer. These are tutoring and academic support programs offered outside the regular school day, with a stronger academic focus than the 21st Century programs (which can offer snacks, recreation, and youth development activities), and yet they still did not improve academic outcomes.
Other studies reach more positive conclusions. They do so by reporting findings such as this one: "A statewide evaluation of South Carolina's 21st CCLC programs found that 79 percent of students believed that the program had improved their academic skills." Asking students what they think happened to them is hardly a scientific basis for measuring program effects. There's a good reason why a new drug for, say, reducing blood pressure would not be approved simply because patients reported that they felt their blood pressure was lower. Studies need to take objective measurements of real outcomes.
One afterschool program that did affect academic outcomes perhaps shows why effects elsewhere are unlikely. The Higher Achievement afterschool and summer academic program in Washington, D.C. recruits students carefully, provides them with hundreds of hours of programming during school years and summers, and spends $4,500 per student each year (21st Century programs spend about $600 per student each year). A rigorous evaluation reported that the program improved math skills, which is good news. It did not improve reading comprehension, despite the program's explicit emphasis on reading comprehension. And once again, behavior was negatively affected. Even when all the conditions seem right, success is mixed.
To date, more than $12 billion of federal tax money has been spent on a program that a preponderance of evidence indicates doesn't help students. There are other beneficiaries of afterschool programs, however. Working parents may need inexpensive childcare, and having their children stay longer at school keeps them in a trusted setting. The national evaluation's finding that students attended only a couple of days a week on average is consistent with parents viewing programs as childcare.
But there is already a federal childcare program, much larger than the 21st Century program. The $5 billion Child Care and Development Block Grant program gives money to states to support childcare for low-income working families. If the basis of the 21st Century program is to provide childcare, folding its resources into the childcare program seems appropriate.
Recently, the White House called for agencies to conduct more evaluations and embrace evidence to support their budget requests. Yet the 2016 budget request for the 21st Century Community Learning Centers program is the same as in past years: $1.16 billion. Perhaps this is simply program inertia, like a cruise ship that keeps moving forward after its engines are cut. It is not indicative of an evidence-focused process.
The current House and the draft Senate bills reauthorizing ESEA eliminate the 21st Century program but allow states to support afterschool programs through other titles, if they want to. That seems like a prudent approach. Some states may want to go all in, as the Higher Achievement program does. Some may have scientific evidence that their local programs improve outcomes. And some may conclude that they want to use funds for other purposes. Whatever path a state takes, let’s look at the evidence without rose-colored glasses.
I was a researcher at Mathematica Policy Research at the time and directed the national evaluation.
See Kirstin Kremer et al., "Effects of After-School Programs With At-Risk Youth on Attendance and Externalizing Behaviors: A Systematic Review and Meta-Analysis," Journal of Youth and Adolescence 44 (2015): 616-636.