Measuring inequality in a State Education Agency

In the past year the Wisconsin Department of Public Instruction (my employer) has shifted its focus to racial inequality in student outcomes, such as the state's black-white graduation gap, which is among the nation's worst. Like many other states, we want to offer schools a portfolio of strategies to tackle this challenge. This effort is called Promoting Excellence for All (PEFA), and its first phase focused on identifying strategies in use in Wisconsin classrooms today that show evidence of closing the minority-white achievement gap in reading and mathematics.

As a research analyst with the Department, I identified candidate schools to participate in a task force and share their strategies. Identifying which schools are closing the gap consistently over time requires constructing measures of school-level achievement growth for minority and non-minority student groups from annual assessments that are measured with noise. In addition, Wisconsin's student population is predominantly white, with minority students clustered in urban areas and sparsely dispersed in schools across the rest of the state. What I needed, then, was a measure of these school-level growth parameters that also appropriately accounted for the differing precision across schools of varying sizes and student compositions.

To do this, we developed a package for the R statistical computing language to simulate the expected rank of each school's random effect estimates from a longitudinal multilevel model.[1] This approach allows us to compare schools simultaneously on both the magnitude of their gap closure and the precision with which we can estimate it. We then use these expected ranks to identify candidate schools to invite: those for which there is good evidence of a school-level effect on closing achievement gaps.
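The core idea behind expected ranks, following Lingsma et al., is to repeatedly draw from each school's sampling distribution, rank the schools within each draw, and average those ranks. A minimal sketch of that simulation (in Python with purely illustrative numbers, not the package's actual R implementation) might look like this:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical gap-closure point estimates (school random effects) and
# standard errors; small schools have noisier estimates. Values are made up.
estimates = np.array([0.10, 0.05, 0.20, -0.02, 0.15])
std_errors = np.array([0.08, 0.02, 0.15, 0.05, 0.03])

n_sims = 10_000
# Draw from each school's approximate sampling distribution
draws = rng.normal(loc=estimates, scale=std_errors,
                   size=(n_sims, len(estimates)))

# Rank schools within each simulated draw (1 = smallest gap closure),
# then average across draws to get each school's expected rank
ranks = draws.argsort(axis=1).argsort(axis=1) + 1
expected_ranks = ranks.mean(axis=0)
```

A school with a large estimate but a wide standard error is pulled toward the middle of the expected-rank distribution, while a precisely estimated school keeps its rank, which is exactly the tradeoff between magnitude and precision described above.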

This is an example of what is being done in school districts and state education agencies across the country: advanced statistical techniques applied to an immediate practical problem. One promising result of the rise of longitudinal administrative data systems is the emergence of a community of practice among staff doing this kind of work. I co-founded one such group, called DATA-COPE, which encourages idea sharing and feedback on the methodological and measurement questions a project like this raises. There is also a wealth of internal studies conducted by staff in these agencies, now being gathered and made searchable through ARNIE Docs. These platforms help local and state education agencies (LEAs and SEAs) pool their analytical skills more efficiently.

Across the country, practitioners are eager to hear of success stories within their local context, and quantitative analysis can help identify which practitioners to bring into the room and learn from. The work in Wisconsin culminated in a set of strategies that the Department can point to as being in place in schools that empirically demonstrated progress in closing the achievement gap. This combination of data analysis in service of qualitative discussions with teachers and school officials can be a powerful tool for identifying promising strategies and practices for the challenges facing schools today, and for the research proposals of tomorrow.

[1] The package is known as merTools and was inspired in part by: Lingsma HF, Steyerberg EW, Eijkemans MJC, et al. Comparing and ranking hospitals based on outcome: results from The Netherlands Stroke Survey. QJM: An International Journal of Medicine. 2010; 103(2):99-108. DOI:10.1093/qjmed/hcp169