If there’s one thing Secretary of Education Betsy DeVos deserves credit for, it’s consistency in her public positions. The Secretary asserts that the traditional public school system in the U.S. is failing and alludes to stagnant test scores as evidence. With that predicate, she advocates for a shift in federal funding to alternative delivery systems, including charter, private, and parochial schools.
She took this position in a public conversation with me last March. I asked her what she would do if the choice-friendly policies she favors were realized but outcomes for students got worse. Her reply:
DeVos: Well, I’m not sure how they can get a lot worse on, you know, a nationwide basis than they are today. I mean, the fact that our PISA scores have continued to deteriorate as compared to the rest of the world and, you know, that we’ve seen stagnant, at best, results with the NAEP scores over the years, I’m not sure that we can deteriorate a whole lot.
That was a year ago. Here’s what she said this past Sunday (March 11, 2018) on 60 Minutes:
DeVos: We have invested billions and billions and billions of dollars from the federal level, and we have seen zero results.
Stahl: But that really isn’t true. Test scores have gone up over the last 25 years. So why do you keep saying nothing’s been accomplished?
DeVos: Well actually, test scores vis-a-vis the rest of the world have not gone up. And we have continued to be middle of the pack at best. That’s just not acceptable.
Stahl: No it’s not acceptable. But it’s better than it was. That’s the point. You don’t acknowledge that things have gotten better. You won’t acknowledge that, over the —
DeVos: But I don’t think they have for too many kids. We’ve stagnated.
What does the evidence say?
Is DeVos right? Have test scores for U.S. students deteriorated or remained static over the last quarter century?
There are five different tests (assessments) that are relevant, each of which is given periodically to representative samples of U.S. students, and three of which are also administered to samples of students from other countries. With each, we can track the direction of test scores for U.S. students over repeated administrations. For the international assessments, we can also ask how U.S. students score relative to students in other nations, but the answer is complicated by the changing list of participating countries in different assessment years. For that reason, I will present the comparative results only for the most recent assessment.
International Assessments
On the PIRLS (Progress in International Reading Literacy Study), though U.S. scores are stagnant, U.S. students are well above “middle of the pack.” PIRLS is a test of reading ability that has been administered every 5 years since 2001. There was no measurable change in the U.S. overall average reading scale score between 2001 (542) and 2016 (549). In each administration, U.S. students performed above the average of all participating countries. In the most recent administration, the U.S. overall average was lower than the averages for 12 education systems, higher than the averages for 30 education systems, and not significantly different from the averages for 15 education systems.
Same with PISA (Program for International Student Assessment): despite stagnant scores, U.S. students do comparably well. PISA has measured 15-year-old students’ reading, mathematics, and science literacy every three years since 2000, with the focus rotating among reading, math, and science in each assessment cycle. For science and reading, the average for U.S. students did not change significantly from the first administration to the latest in 2015. For mathematics, scores in 2015 were lower than in the first administration. In 2015, the U.S. average was lower than the averages for 18 education systems, higher than those for 39, and not measurably different from those for 12.
On the TIMSS (Trends in International Mathematics and Science Study) test, U.S. students did even better relative to other countries, and U.S. math scores are improving. TIMSS has assessed student achievement in mathematics and science in grades 4 and 8 roughly every four years since 1995. The most recent assessment was in 2015. In mathematics, U.S. students at both 4th and 8th grade scored appreciably higher in 2015 than in 1995. In science, scores did not change significantly for 4th graders between the first and the most recent administration of the test, while scores were up for 8th graders. In 2015, U.S. fourth-graders’ average score in mathematics was higher than the average scores of students in 34 education systems and lower than the average scores of students in 10 education systems. Likewise, U.S. students performed better than most of the other participating countries in 8th grade math and in science at both the 4th and 8th grades.
U.S. National Assessments
On the long-term trend NAEP (National Assessment of Educational Progress) evaluation, all age groups except 17-year-olds are seeing improvement in math and reading. This assessment has been administered periodically since 1971 in reading and 1973 in mathematics to a representative sample of U.S. 9-, 13-, and 17-year-olds. The most recent administration was in 2012. Compared to the first assessment, scores in both mathematics and reading were higher in 2012 for 9- and 13-year-olds and not significantly different for 17-year-olds.
On the main NAEP (National Assessment of Educational Progress) evaluation, scores for math and reading are going up. NAEP has been administered to 4th and 8th graders in many subjects over the years, but the primary focus has been reading and mathematics. The assessment in those subjects was first given in math in 1990 and in reading in 1992. The most recently released results are for 2015. In both grades and both subjects, scores were higher in 2015 than in the first year of assessment. The growth over time is most dramatic for 4th grade mathematics.
The bottom line
Secretary DeVos has cherry-picked her data. She is correct that U.S. test scores are stagnant on PIRLS and PISA. They are decidedly not stagnant on TIMSS, the long-term trend NAEP, and the main NAEP. She is incorrect when she asserts that on international tests “we have continued to be middle of the pack at best”: in each of the international assessments, many more countries score below the U.S. than above it. It is important for the Secretary to get the facts of these matters straight.
It is also important for everyone to understand that the progress and comparative advantage (or lack thereof) of U.S. students on standardized assessments is not in and of itself an argument for an expansion of school choice. Whether more school choice would raise student achievement and move the U.S. ahead in international assessment league tables is an open empirical question, with the devil very much in the details.