SERIES: Brown Center Report on American Education | Number 12 of 15

The 2010 Brown Center Report on American Education

Introduction

This edition of the Brown Center Report marks the tenth issue of the series and the final issue of Volume II. The publication began in 2000 with Bill Clinton in the White House and the Bush-Gore presidential campaign building toward its dramatic conclusion. That first report was organized in a three-part structure that all subsequent Brown Center Reports followed. Part I presents the latest results from state, national, or international assessments and alerts readers to important trends in the data. Part II explores an education issue in depth, sometimes by investigating different sources of empirical evidence than previous research, sometimes by posing a conventional question in an unconventional way. Part III analyzes a current or impending question regarding education policy. In all three sections, the studies strive to ask clear questions, gather the best available evidence, and present findings in a nonpartisan, jargon-free manner.

Part I of this year’s Brown Center Report focuses on international assessments. The latest data from the Programme for International Student Assessment (PISA) were released in December 2010. The performance of the United States was mediocre: although the country notched gains in all three subjects, it scored near the international average in reading literacy and scientific literacy and below average in mathematical literacy. The term “literacy” signals that PISA covers different content than most achievement tests and, indeed, assesses different skills than are emphasized in the school curriculum. As the 2006 PISA Framework states, the knowledge and skills tested on PISA “are defined not primarily in terms of a common denominator of national school curricula but in terms of what skills are deemed to be essential for future life.”

Two myths of international assessments are debunked. The first is that the United States once led the world on international tests of achievement. It never has. The second is that Finland leads the world in education, with China and India coming on fast. Finland has a superb school system, but, significantly, it scores at the very top only on PISA, not on other international assessments. Finland also has a national curriculum more in sync with a “literacy” thrust, making PISA a friendly judge in comparing Finnish students with students from other countries. And what about India and China? Neither country has ever participated in an international assessment. How they would fare is unknown.

Part II of the report looks at state test scores on the National Assessment of Educational Progress (NAEP) in light of the recent Race to the Top competition. The federal program encouraged states to apply for $4.35 billion in new money by promising to pursue a reform agenda backed by the Obama administration. Twelve states (for this discussion, the District of Columbia will be called a state) won the grants. But are the states that won the grants the same states that have accomplished the greatest gains in student learning? Not necessarily.

Who’s winning the real race to the top? Both short- and long-term gains on NAEP are calculated with statistical controls for changes in the demographic characteristics of each state’s students. Eight states—Florida, Maryland, Massachusetts, District of Columbia, Kentucky, New Jersey, Hawaii, and Pennsylvania—stand out for making superior gains. At the other end of the distribution, Iowa, Nebraska, West Virginia, and Michigan stand out for underperforming. Five of the eight impressive states won grants, but three did not. And a few states won grants even though they are faring poorly in the race to boost student achievement. Some of the reasons why a program called Race to the Top could distribute grant money in this manner are discussed.
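To make the idea of statistical controls concrete, the sketch below shows one way demographically adjusted gains could be computed: regress each state’s raw NAEP gain on changes in student demographics and treat the residual as the adjusted gain. The state names, demographic variables, and regression form here are illustrative assumptions, not the report’s actual data or model.

    # Minimal sketch of demographically adjusted NAEP gains (hypothetical data).
    # Regress each state's raw score gain on changes in student demographics,
    # then rank states by the residual: the gain left over after demographic
    # shifts are accounted for. The report's actual models may differ.
    import numpy as np
    import pandas as pd

    # Hypothetical state-level inputs (not the report's figures).
    states = pd.DataFrame({
        "state": ["FL", "MD", "MA", "IA", "NE"],
        "raw_gain": [9.1, 8.4, 8.0, 1.2, 1.5],   # NAEP scale-score gain
        "d_pct_frl": [4.0, 2.5, 1.0, 3.5, 3.0],  # change in % free/reduced-price lunch
        "d_pct_ell": [2.0, 1.5, 0.5, 0.2, 0.4],  # change in % English language learners
    })

    # Design matrix: intercept plus demographic-change covariates.
    X = np.column_stack([
        np.ones(len(states)),
        states["d_pct_frl"],
        states["d_pct_ell"],
    ])
    y = states["raw_gain"].to_numpy()

    # Ordinary least squares; residuals serve as the adjusted gains.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    states["adjusted_gain"] = y - X @ beta

    print(states.sort_values("adjusted_gain", ascending=False))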

Part III looks at NAEP and how it relates to the Common Core. In June 2010, the Common Core State Standards Initiative released grade-by-grade standards for reading and mathematics. Two consortia were awarded $330 million to write tests aligned to the standards, and a total of 46 states have signed on with at least one consortium. As the only continuing assessment administered to representative samples of American students, NAEP has been called “the Nation’s Report Card” for decades.

How well does NAEP match up with the Common Core? We examined 171 publicly released items from the eighth-grade NAEP math test and coded each item by the grade level at which the Common Core recommends its content be taught. The items registered, on average, two to three years below the eighth-grade mathematics recommended by the Common Core. More than 90 percent of the items from the “number” strand (content area) cover material below the eighth grade. Almost 80 percent of the items assessing “algebra” address content taught before the eighth grade. With Common Core assessments on tap to begin in the 2014–2015 school year, policymakers and analysts alike need to start thinking now about how NAEP and the Common Core assessments can be reconciled so as to inform, not confuse, the public about student achievement.
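As an illustration of the coding exercise, the sketch below tallies a handful of hypothetical item codes: each released item is assigned the grade at which the Common Core first recommends teaching its content, and the codes are then summarized overall and by strand. The strands, grade assignments, and counts are invented for demonstration and are not the report’s data.

    # Minimal sketch of the item-coding tabulation (hypothetical codes).
    import pandas as pd

    # Each released NAEP item: content strand and the Common Core grade level
    # at which its content is first recommended to be taught.
    items = pd.DataFrame({
        "strand": ["number", "number", "algebra", "algebra", "geometry"],
        "cc_grade": [5, 6, 7, 8, 6],
    })

    # How far below grade 8 does each item sit, on average?
    items["years_below_8"] = 8 - items["cc_grade"]
    print("Average years below grade 8:", items["years_below_8"].mean())

    # Share of items covering content below grade 8, by strand.
    print(items.groupby("strand")["cc_grade"].apply(lambda g: (g < 8).mean()))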

An overarching theme of this year’s report is that events in the field of education are not always as they appear to be, and that is especially true of test scores. Whether it is commentators perpetuating myths of international testing, states winning races while showing only mediocre progress, or an eighth-grade test dominated by content below the eighth grade, the story is rarely as simple as it appears at first blush. This report tries to dig beneath the surface and uncover some of the complexities of these important issues.
