The watchword of this year’s Brown Center Report is caution: caution in linking state tests to international assessments (a practice known as “benchmarking”), caution in proceeding with a policy of “algebra for all eighth graders,” and caution in gleaning policy lessons from the recent progress made by urban schools. State and local budget woes will restrain policymakers from adopting costly education reforms, but even so, the three studies contained herein are a reminder that, in governing education well, restraint must be exercised in matters other than budgets. All too often, policy decisions are based on wishful thinking rather than careful analysis. As education evolves as a discipline, the careful analysis of high-quality data will provide the foundation for meaningful education reform.
The report consists of three sections, each discussing a separate study. The first section looks at international testing. Powerful groups, led by the National Governors Association, are urging the states to benchmark their state achievement tests to an international assessment, the Programme for International Student Assessment (PISA). After comparing PISA to the Trends in International Mathematics and Science Study (TIMSS), the other major international assessment in which the United States participates, the Brown Center analysis examines findings from a chapter of the 2006 PISA report that addresses student engagement. The chapter presents data on students’ attitudes, values, and beliefs regarding science.
Benchmarking proponents argue that PISA offers policy guidance to American school officials by identifying the characteristics of successful school systems around the world. The Brown Center analysis calls that claim into question. The PISA report makes causal claims from cross-sectional data that cannot support such inferences. The chapter on student engagement presents inferences based on selective treatment of data, with policy recommendations going beyond the evidence adduced to support them.
Moreover, PISA poses questions that contain ideological bias. To define scientific literacy as encompassing beliefs as well as knowledge—a definition also embraced by skeptics of evolution—is a dubious position for any science assessment to take. PISA wants to assess whether students are capable of applying science to public policy. Fair enough. That capacity can be evaluated, however, without making a judgment about students’ political beliefs. PISA asks students whether they support several environmental policies and then creates an index of “responsibility for sustainable development” from the responses. Responses in favor of the policies are responsible; those opposed are not. That kind of questioning is inappropriate on a science assessment. Without serious reform, PISA is inappropriate for benchmarking.
The second section tackles another hot topic in policy circles: whether all eighth graders should take an algebra course. California recently adopted a universal eighth grade algebra policy that will be implemented in 2011, joining Minnesota, which has a policy with the same objective and implementation date. Are all eighth graders prepared to take an algebra class? To answer that question, the study examines national data from eighth grade math classes in 2005.
Low achievers in mathematics, those scoring in the bottom tenth of all students, function several years below grade level. A shocking percentage of these low achievers, 28.6 percent, were enrolled in advanced math courses (Algebra I, Algebra II, or Geometry) in 2005. A policy of algebra for all eighth graders will dramatically increase the proportion of these misplaced math students. Sample math items are presented to illustrate the large gaps in the misplaced students’ mathematical knowledge, in particular, their poor grasp of fractions, decimals, and percentages. The misplaced students are described in terms of demographic characteristics, the schools they attend, and the teachers who are instructing their math classes. The portrait is deeply troubling. The misplaced students are some of the nation’s most vulnerable youngsters. The analysis raises questions about the feasibility of an “algebra for all” policy until we know how to reduce the number of underprepared students and how to effectively teach algebra to students who struggle with basic arithmetic.
The final section of the report is a good news story. The 2001 Brown Center Report presented an analysis of academic achievement in big city school districts. That study compared test scores for school districts serving the top fifty cities in the 2000 U.S. Census to the average test score in the cities’ respective states. Not surprisingly, the big city districts lagged far behind. This year’s report replicates that study using the most recent achievement data. Big city schools have made significant gains. While all school districts have notched achievement gains, the big city districts made even larger gains than other districts. They are closing the gap with suburban and rural districts, slowly, to be sure, but they are clearly making progress.
The analysis does not hazard a theory as to why big city achievement is rising. One possible catalyst is mayoral control, a popular urban reform in recent years. The data neither support nor refute the effectiveness of mayoral control. Another possible influence is No Child Left Behind. The law targets low-performing students, and studies of test scores at both state and national levels have shown greater progress at the bottom of the achievement distribution than at the top. Because big city schools serve a disproportionate share of low achievers, they benefit from that trend. As noted above, however, cross-sectional data are limited in what they can reveal about the causes of events, so whether NCLB has played a role in the progress of big city schools remains speculative. In addition, not all big city districts have made gains.
A daunting obstacle to determining the drivers of academic trends is that there is no authoritative source that documents the policies that local districts have adopted, along with such details as when particular policies were started, when they were modified, what policies they replaced, and how they were implemented. The Brown Center Report ends with a call for a periodic national inventory of district policies across the country. We are getting much better at determining how well students are learning and tracking trends in test scores as they unfold over time. But policy analysis lags behind. Explaining why students are learning more or less—and really pinpointing the causes of trends in achievement—will take much more information about the policies and practices of our schools.