The introduction to the 2015 Brown Center Report on American Education appears below. Use the Table of Contents to navigate through the report online, or download a PDF of the full report.
TABLE OF CONTENTS
Part I: Girls, Boys, and Reading
Part II: Measuring Effects of the Common Core
Part III: Student Engagement
INTRODUCTION
The 2015 Brown Center Report (BCR) represents the 14th edition of the series since the first issue was published in 2000. It includes three studies. Like all previous BCRs, the studies explore independent topics but share two characteristics: they are empirical and based on the best evidence available. The studies in this edition are on the gender gap in reading, the impact of the Common Core State Standards for English Language Arts (CCSS-ELA) on reading achievement, and student engagement.
Part one examines the gender gap in reading. Girls outscore boys on practically every reading test given to a large population. And they have for a long time. A 1942 Iowa study found girls performing better than boys on tests of reading comprehension, vocabulary, and basic language skills. Girls have outscored boys at ages 9, 13, and 17 on every reading test ever given by the National Assessment of Educational Progress (NAEP), whose first long-term trend assessment was administered in 1971. The gap is not confined to the U.S. Reading tests administered as part of the Progress in International Reading Literacy Study (PIRLS) and the Program for International Student Assessment (PISA) reveal that the gender gap is a worldwide phenomenon. In more than sixty countries participating in the two assessments, girls are better readers than boys.
Perhaps the most surprising finding is that Finland, celebrated for its extraordinary performance on PISA for over a decade, can take pride in its high standing on the PISA reading test solely because of the performance of that nation’s young women. Girls in Finland score 556 and boys score 494 (the OECD average is 496, with a standard deviation of 94), a 62-point gap that is the largest of any PISA participant. If Finland were only a nation of young men, its PISA ranking would be mediocre.
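To give that gap a rough sense of scale, a back-of-the-envelope calculation (my own illustration, not the report’s method) converts it to standard deviation units using only the figures cited above:

```python
# Back-of-the-envelope effect size for Finland's PISA reading gender gap,
# using only the figures cited above.
girls, boys, sd = 556, 494, 94   # scale scores; sd is the OECD standard deviation

gap = girls - boys               # 62 scale points
effect_size = gap / sd           # roughly 0.66 standard deviations

print(f"Gap: {gap} points, or {effect_size:.2f} SD")
```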
Part two is about reading achievement, too. More specifically, it’s about reading and the English Language Arts standards of the Common Core (CCSS-ELA). It’s also about an important decision that policy analysts must make when evaluating public policies: determining when a policy actually begins. How, then, can CCSS be properly evaluated?
Two different indexes of CCSS-ELA implementation are presented, one based on 2011 data and the other on data collected in 2013. In both years, state education officials were surveyed about their Common Core implementation efforts. Because forty-six states originally signed on to the CCSS-ELA, and at least forty remain on track for full implementation by 2016, little variability exists among the states in terms of standards policy. Of course, the four states that never adopted CCSS-ELA can serve as a small control group. But variation can be found in how the states are implementing CCSS. Some states are pursuing an array of activities and aiming for full implementation sooner rather than later. Others have a narrow, targeted implementation strategy and are proceeding more slowly.
The analysis investigates whether CCSS-ELA implementation is related to 2009-2013 gains on the fourth grade NAEP reading test. The analysis cannot verify causal relationships between the two variables, only correlations. States that aggressively implemented CCSS-ELA (referred to as “strong” implementers in the study) show a gain that is one to one and one-half points larger on the NAEP scale than that of non-adopters of the standards. The association is similar in magnitude to an advantage found in a study of eighth grade math achievement in last year’s BCR. Although positive, these effects are quite small. When the 2015 NAEP results are released this winter, it will be important for the fate of the Common Core project to see whether strong implementers of the CCSS-ELA have maintained their momentum.
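The logic behind that estimate is a comparison of average score gains across groups of states. A minimal sketch of that bookkeeping, using entirely invented state names and NAEP-style scores (not the study’s data), looks like this:

```python
# Illustrative only: compare average 2009-2013 NAEP gains for "strong"
# CCSS-ELA implementers versus non-adopters. All states and scores are invented.
strong_implementers = {"State A": (220, 224), "State B": (218, 221)}  # (2009, 2013)
non_adopters        = {"State C": (221, 222), "State D": (219, 220)}

def mean_gain(states):
    gains = [later - earlier for earlier, later in states.values()]
    return sum(gains) / len(gains)

advantage = mean_gain(strong_implementers) - mean_gain(non_adopters)
print(f"Strong-implementer advantage: {advantage:.1f} NAEP scale points")  # 2.5 with these numbers
```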
Part three is on student engagement. PISA tests fifteen-year-olds on three subjects—reading, math, and science—every three years. It also collects a wealth of background information from students, including their attitudes toward school and learning. When the 2012 PISA results were released, PISA analysts published an accompanying volume, Ready to Learn: Students’ Engagement, Drive, and Self-Beliefs, exploring topics related to student engagement.
Part three provides secondary analysis of several dimensions of engagement found in the PISA report. Intrinsic motivation, the internal rewards that encourage students to learn, is an important component of student engagement. National scores on PISA’s index of intrinsic motivation to learn mathematics are compared to national PISA math scores. Surprisingly, the relationship is negative. Countries with highly motivated kids tend to score lower on the math test; conversely, higher-scoring nations tend to have less-motivated kids.
The same is true for responses to the statements, “I do mathematics because I enjoy it,” and “I look forward to my mathematics lessons.” Countries with students who say that they enjoy math or look forward to their math lessons tend to score lower on the PISA math test than countries where students respond negatively to the statements. These counterintuitive findings may be influenced by how terms such as “enjoy” and “looking forward” are interpreted in different cultures. Within-country analyses address that problem. Within countries, the student-level associations of achievement with the various components of engagement run in the anticipated direction: they are positive. But they are also modest in size, with correlation coefficients of 0.20 or less.
Policymakers are interested in questions requiring analysis of aggregated data; at the national level, that means between-country data. When countries increase their students’ intrinsic motivation to learn math, is there a concomitant increase in PISA math scores? Data from 2003 to 2012 are examined. Seventeen countries managed to increase student motivation, but their PISA math scores fell by an average of 3.7 scale score points. Fourteen countries showed no change on the index of intrinsic motivation, and their PISA scores also evidenced little change. Eight countries witnessed a decline in intrinsic motivation. Inexplicably, their PISA math scores increased by an average of 10.3 scale score points. Motivation down, achievement up.
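The between-country calculation behind those figures is straightforward once countries are grouped by the direction of change on the motivation index. A sketch with invented country names and values (not PISA data) might look like this:

```python
# Illustrative only: group countries by change in intrinsic motivation
# (2003 to 2012) and average their change in PISA math scores.
# Country names and values are invented.
changes = {
    # country: (motivation_change, math_score_change)
    "Country A": (+0.10, -5.0),
    "Country B": (+0.05, -2.0),
    "Country C": ( 0.00, +0.5),
    "Country D": (-0.08, +9.0),
}

groups = {
    "motivation up":   [s for m, s in changes.values() if m > 0],
    "no change":       [s for m, s in changes.values() if m == 0],
    "motivation down": [s for m, s in changes.values() if m < 0],
}

for label, score_changes in groups.items():
    if score_changes:
        avg = sum(score_changes) / len(score_changes)
        print(f"{label}: average math score change {avg:+.1f} points")
```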
Correlation is not causation. Moreover, the absence of a positive correlation (or, in this case, the presence of a negative correlation) does not refute a possible positive relationship. The lesson here is not that policymakers should adopt the most effective way of stamping out student motivation. The lesson is that the level of analysis matters when analyzing achievement data. Policy reports must be read warily, especially those freely offering policy recommendations. Beware of analyses that rely exclusively on within- or between-country test data without making any attempt to reconcile discrepancies at other levels of analysis. Those analysts could be cherry-picking the data. Also, consumers of education research should grant more credence to approaches modeling change over time (as in difference-in-differences models) than to cross-sectional analyses that only explore statistical relationships at a single point in time.
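The within/between distinction drawn here is essentially an aggregation effect: student-level correlations can be modestly positive inside every country while country averages run in the opposite direction. A small simulation (the numbers and structure are entirely illustrative, not the report’s analysis) shows how both patterns can hold at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three invented "countries" whose average motivation and average score move in
# opposite directions, while within each country more motivated students still
# score slightly higher.
country_means = [(0.8, 450.0), (0.5, 500.0), (0.2, 550.0)]  # (motivation, score)

samples, country_avgs = [], []
for mot_mean, score_mean in country_means:
    m = mot_mean + rng.normal(0, 0.1, 500)                         # student motivation
    s = score_mean + 20 * (m - mot_mean) + rng.normal(0, 10, 500)  # weak positive link
    samples.append((m, s))
    country_avgs.append((m.mean(), s.mean()))

# Within-country correlations are modestly positive (around 0.2)...
within = [np.corrcoef(m, s)[0, 1] for m, s in samples]
print("within-country r:", [round(r, 2) for r in within])

# ...while the country-level (between-country) correlation is strongly negative.
between = np.corrcoef(*zip(*country_avgs))[0, 1]
print("between-country r:", round(between, 2))
```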