This year’s Brown Center Report contains studies taking a long view.
Part I examines national test data going back to 1971 from the National
Assessment of Educational Progress (NAEP). The study in Part II compares
the 1989 test scores of more than 1,000 schools to the same
schools’ scores in 2009. Part III compares the test scores of conversion
charter schools from 1986, when they operated as traditional public
schools, to those from 2008, when they operated as charter schools. The
studies tackle perennial questions that, as often happens in education,
manifest themselves as controversial topics on the contemporary scene:
how to interpret trends in test scores, the distribution of achievement,
school turnarounds, and charter schools.
Part I rejects the conventional reaction to the 2009 NAEP scores. Scores
in fourth-grade math were unchanged from 2007 to 2009, and eighth-grade scores rose only slightly. Press coverage featured expressions of disappointment and concern, primarily from advocates who used the flat scores to support policy arguments. Part I places the 2009 scores in the context
of the 19-year history of the main NAEP, and after comparing the latest
scores with results from other equally trustworthy tests of U.S. math
achievement, concludes that the hand-wringing is unwarranted.
So when is a purported NAEP trend really a trend? Part I continues by
examining achievement gaps, not between two racial, ethnic, or socioeconomic
groups, but between the nation’s highest- and lowest-achieving
students. It focuses on the distribution of academic achievement
instead of the direction of average achievement. The study is a follow-up to a 2009 Fordham Institute paper documenting that the gap between
high- and low-achieving students has been shrinking in recent years.
The data in Part I show that the trend, which began sometime around
1998 or 1999, is historically unprecedented and extends across subjects
(reading and math), grades (fourth and eighth), and tests (long-term
trend and main NAEP). It is also more pronounced in public schools
than in private schools. The two analyses in Part I highlight the contrast
between a trend indicated by data collected from several independent
sources over an extended period of time and speculative assertions
arising from “instant analysis” of a single set of test scores.
Part II asks a simple question: Do schools ever change? The sample consists of 1,156 California schools that offered an eighth grade in both 1989 and 2009. Test scores from 1989 are compared with scores from
2009. The scores are remarkably stable. Of schools in the bottom
quartile in 1989—the state’s lowest performers—nearly two-thirds
(63.4 percent) scored in the bottom quartile again in 2009. The odds
of a bottom quartile school’s rising to the top quartile were about one
in seventy (1.4 percent). The mirror image held at the top of the distribution, with similar percentages of top quartile schools staying among the top performers (63.0 percent) or falling to the bottom quartile (2.4 percent). Changes
in a school’s socioeconomic status had only a marginal statistical relationship
with test score changes.
The persistence of test scores has major implications for today’s push to
turn around failing schools. It can be done, but the odds are daunting.
California certainly cannot be accused of inactivity in education reform
from 1989 to 2009. Few states tried as many diverse, ambitious reforms
that targeted every aspect of the school system—finance, governance, curriculum, instruction, and assessment. Not only have these efforts
failed to elevate California from its low national ranking on key performance
measures, but they have also had little effect on the relative
ranking of schools within the state.
The study suggests that people who say we know how to make failing
schools into successful ones but merely lack the will to do so are selling
snake oil. In fact, successful turnaround stories are marked by idiosyncratic
circumstances. The science of turnarounds is weak and devoid of
practical, effective strategies for educators to employ. Examples of large-scale,
system-wide turnarounds are nonexistent. A lot of work needs to
be done before the odds of turning around failing schools begin to tip in
a favorable direction.
Part III looks at charter schools. Conversion charters are favored by the
Obama administration as a restructuring strategy. Most charter schools
are start-ups, begun from scratch by their founders. Conversion charters, by contrast, are existing traditional public schools that convert to charter status. They typically continue to rely on their home districts for several functions (e.g., building maintenance, pension obligations, transportation services) but are freed from regulations
pertaining to curriculum and instruction. The idea is that schools can be
more productive if they are allowed to tailor core educational operations
to the needs of their students.
California has the largest number of conversions, and the study was able
to collect data on two cohorts: 49 schools from 2004 and 60 schools
from 2008. For both cohorts, test score data were also available from
1986, allowing a comparison of scores before and after the schools converted. The analysis is exploratory and mainly descriptive. No causal
conclusions can be derived from the data.
What do we know about conversions? Test scores look similar before
and after conversion. The 2004 cohort shows a 2 to 3 percentile point advantage as charters, while the 2008 cohort's scores declined slightly, by less than 2 percentile points, from 1986 to 2008. On several key characteristics,
conversions look more like traditional public schools than start-up
charters. Compared with start-ups, conversions are more concentrated
in urban areas, have larger student enrollments, and serve greater numbers
of Hispanic and black students. Teachers at conversions are more
experienced and more likely to hold teaching certificates, particularly in
bilingual education. It is clear that future evaluations of charter schools
must differentiate between start-ups and conversions because of the significant
institutional differences between the two types of charters.
To sum up, the studies in this year’s Brown Center Report focus on
long-term changes. Part I analyzes NAEP data. Parts II and III examine
California test scores from the 1980s and compare them to scores from
recent years. Because of its long history of testing, California is currently
one of the few states able to provide assessment data for such long-term
comparisons. That will change as other states continue to test students annually, creating rich archives of student performance data, a development that bodes well for school reform. Improving schools requires patience and persistence,
what education professors Richard Elmore and Milbrey McLaughlin
call “steady work.” It also requires good information to verify whether
reforms have paid off, or, like many efforts in education, produced
hopeful signs that soon vanish. The future looks bright if analysts’
capacity to peer into the past continues to improve.