Student test engagement and its impact on achievement gap estimates

(Photo: Ten-year-old pupils take part in a public examination at a school in Dhaka.)

Achievement gaps are one of education’s most important policy metrics. Gaps between boys and girls, and between white students and students from racial minority groups, are often used to measure the effectiveness and fairness of the education system at a given point in time, over the course of decades, and as children progress through school. Major policy initiatives related to accountability, assessment, and funding are partially motivated by a desire to close gaps.

As is so often the case, however, estimates of achievement gaps are not as straightforward as practitioners and policymakers might like. Gaps result from the sum total of students’ schooling, after-school activities, home life, and neighborhood experiences. Further, gaps are not measures of intelligence or ability, but of performance. Therefore, observed scores are impacted by factors that adults control (like what students are taught), and by factors that may be unrelated to achievement (like motivation to perform).

In a recent study that will soon be published in Teachers College Record, I examine whether motivation to perform on tests affects estimates of achievement gaps. I approached the question using a measure of test motivation called “response time effort” (RTE), which is supported by decades of research. This metric quantifies how often a student responded to questions on a test so quickly that the item’s content could not have been understood; such rapid responses often come in well under 10 seconds. If a student took a 100-item test and rapidly responded to 10 items, his or her RTE score would be 0.90. Researchers often call this behavior “rapid guessing” because items answered this way are correct at rates no better than chance. Studies also show that students are not simply looking at items quickly, deciding they’re too difficult, and moving on. Finally, this research shows that rapid guessing biases observed test scores downward relative to what students actually know, oftentimes by more than 0.2 standard deviations, a bigger effect than some highly successful programs have on student learning.
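
To make the calculation concrete, here is a minimal sketch in Python of how an RTE score might be computed from item response times. The 3-second rapid-guess cutoff and the data layout are illustrative assumptions of mine; operational systems typically derive a threshold for each item from its response time distribution rather than using a single fixed cutoff.

```python
# Minimal sketch of a response time effort (RTE) calculation.
# The 3-second threshold is an illustrative assumption; in practice,
# thresholds are usually derived per item from response time data.

RAPID_THRESHOLD_SECONDS = 3.0

def rte_score(response_times):
    """Return the fraction of items answered at effortful (non-rapid) speed.

    response_times: seconds spent on each item, one entry per item.
    """
    if not response_times:
        raise ValueError("response_times must be non-empty")
    effortful = sum(1 for t in response_times if t >= RAPID_THRESHOLD_SECONDS)
    return effortful / len(response_times)

# The example from the text: a 100-item test with 10 rapid responses.
times = [1.5] * 10 + [25.0] * 90
print(rte_score(times))  # 0.9
```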

What I noticed from this previous work and in my own data is that rates of rapid guessing differ by student subgroup. In my sample, African-American high school students are almost 10 percentage points more likely than white students to rapidly guess often enough that their scores might be affected. Male students are also 10 percentage points more likely than female students to do so in later grades. If rapid guessing is higher among minority and male students, and if rapid guessing understates what a student knows on achievement tests, could gap estimates be biased by differential test motivation across subgroups?

To answer that question, I estimated achievement gaps in ways that did, and did not, account for rapid guessing. Essentially, the gaps that accounted for motivation compared only students who rapidly guessed on the same proportion of items across the test. I found that, while most gap estimates changed only minimally, some changed substantively. In particular, male-female gaps in mathematics not only shifted when accounting for test motivation, but shifted enough to flip the gap from slightly favoring girls to favoring boys in middle school.
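
As a rough illustration of that comparison, the sketch below contrasts an unadjusted gap with one estimated only among students with comparable RTE scores. The column names, the binning scheme, and the weighting are simplifying assumptions for exposition, not the estimation strategy used in the study.

```python
import pandas as pd

def gap(df, group_col, score_col, a, b):
    """Unadjusted gap: difference in mean scores between subgroups a and b."""
    means = df.groupby(group_col)[score_col].mean()
    return means[a] - means[b]

def rte_adjusted_gap(df, group_col, score_col, a, b, bins=10):
    """Gap estimated within bins of similar RTE, then averaged.

    Assumes df has an 'rte' column holding each student's RTE score.
    Bins containing only one subgroup are dropped from the average.
    """
    df = df.assign(rte_bin=pd.cut(df["rte"], bins=bins))
    pieces = []
    for _, cell in df.groupby("rte_bin", observed=True):
        if cell[group_col].nunique() == 2:  # both subgroups present
            pieces.append((len(cell), gap(cell, group_col, score_col, a, b)))
    if not pieces:
        raise ValueError("no RTE bin contains both subgroups")
    total = sum(n for n, _ in pieces)
    return sum(n * g for n, g in pieces) / total
```

Running both functions on the same data shows how much of an observed gap moves together with test motivation.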

This result has important policy implications, given that the gap favoring males in math and science during the elementary grades has been suggested as one explanation for why men and women participate in quantitative college majors and careers at different rates. If the math achievement gap favoring boys has narrowed over the past decade partly because of differential test engagement rather than content mastery, then educational leaders trying to close gaps may not have a clear picture of just how big the gap is and, perhaps, how to reduce it. Not knowing the true magnitude of the gap, in turn, makes it hard for lawmakers and educators to judge whether policies designed to close gaps are working.

Fortunately, educators and policymakers have tools at their disposal to address the bias that disengagement can introduce into test scores. After a test is administered, statistical approaches can re-score it to account for the impact of rapid guessing, and those re-scored results could be used in policy analysis. Research also shows that rapid guessing can be preempted by monitoring the behavior in real time and gently encouraging students to re-engage. Based on this research, my organization, NWEA, has implemented software that alerts adults proctoring achievement tests when a student is rapidly guessing on the assessment. Both front-end and back-end options could be built into assessments used for evaluation and policy purposes.
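
Here is a simplified sketch of both ideas, under the same assumed 3-second threshold as above. The re-scoring function follows the spirit of what the measurement literature calls motivation filtering, treating rapid-guess responses as missing rather than wrong; operational re-scoring approaches are model-based and more sophisticated. The real-time alert rule and its thresholds are likewise hypothetical, not NWEA’s actual implementation.

```python
RAPID_THRESHOLD_SECONDS = 3.0  # illustrative cutoff, as above

def filtered_percent_correct(responses):
    """Back-end re-scoring sketch: score only effortful responses.

    responses: list of (seconds, correct) pairs, with correct coded 0/1.
    Returns None when no effortful responses remain to score.
    """
    effortful = [c for t, c in responses if t >= RAPID_THRESHOLD_SECONDS]
    if not effortful:
        return None
    return sum(effortful) / len(effortful)

def should_alert_proctor(recent_times, window=5, max_rapid=3):
    """Front-end sketch: flag a student when 3 of the last 5 responses
    were rapid. Both thresholds are made-up defaults for illustration."""
    rapid = sum(1 for t in recent_times[-window:] if t < RAPID_THRESHOLD_SECONDS)
    return rapid >= max_rapid
```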

My research may also have implications for understanding broader disengagement from school. Initial work shows that rapid guessing is strongly related to academic disengagement behaviors like chronic absenteeism, course failures, and suspensions—all of which are warning signs that a student may be at risk of dropping out. This finding could be helpful when trying to understand why high rates of test disengagement occur, and what they mean for academic engagement. For example, while black-white gaps did not shift much in my study as a result of accounting for differences in test motivation, over 10 percent of African-American middle school students in the sample rapidly guessed on 30 percent or more of the items on the test. Investigating why disengagement is so pervasive among certain subgroups may shed light on the student attitudes, beliefs, and aspirations related to schooling that often drive academic disengagement.

Put simply, my work suggests there is an engagement gap as well as an achievement gap, and that the two are linked. This finding could place an even greater premium on developing students’ social-emotional competencies, like academic self-belief, the lack of which is often a root cause of disengagement.

However, perhaps the most important implication of this research is that it invites us to rethink what achievement gaps mean. Achievement gaps are discussed so often in education policy, and have persisted for so long, that one could start to feel they are intractable. The link between test motivation and gap estimates should make us wonder what, exactly, the gaps are measuring, and why they occur. Sometimes, developing innovative, effective policies begins with seeing old problems through a new lens.
