There have been substantial advances in the development of states’ education data systems over the past 20 years, supported by large investments from the federal government. However, the availability of modern data systems has not translated into meaningful improvements in how consequential state policies, such as funding and accountability policies, use data to identify students in need of additional resources and supports. Today, as has been the case for decades, states rely almost universally on blunt categorical indicators associated with disadvantage, such as free and reduced-price lunch enrollment, to identify these students. In other words, we are still using 20th-century technology to identify at-risk students in 21st-century schools.
In a recent article published in Educational Evaluation and Policy Analysis, we develop a new measure of student risk that can be used in consequential education policies. Our new measure leverages the rich information available in state data systems to better identify at-risk students, whom we define precisely as those at risk of poor academic performance. We refer to our new measure as “Predicted Academic Performance,” or PAP.
Academic performance can mean many things—achievement on standardized tests, on-time grade progression, school attendance, high-school graduation, college attendance, etc.—and in principle, PAP can be built around any of these outcomes. In our proof-of-concept application in Missouri, we measure student performance using state standardized tests. To construct PAP, we predict student performance using data from Missouri’s State Longitudinal Data System (SLDS). Specifically, we use information about student mobility across schools, family income, English language learner status, individualized education program (IEP) status, sex, and race/ethnicity. (The precise set of predictor variables is flexible, and in our academic article, we also consider versions of PAP that use subsets of this information.)
These and related variables are often included in the early warning systems (EWSs) emerging in some states and school districts. EWS indicators are diagnostic tools that flag students at risk of poor academic performance, with the goal of helping schools target proactive support toward these students. In fact, PAP can be viewed as a special case of an EWS indicator, with the added constraint that PAP predictors are not manipulable (course grades are an example of a manipulable predictor used in some EWSs). This added constraint makes PAP well suited for use in consequential education policies, such as funding and accountability policies. In contrast, EWS indicators are not designed for policy use and would create perverse incentives for schools and districts if used in this way.
We show that PAP is more effective than common alternatives at identifying students who are at risk of poor academic performance, as intended. Using a funding policy simulation, we further show that PAP can be used to target resources toward these students more effectively than alternative risk indicators. PAP also improves the targeting of resources toward students who belong to many other associated risk categories.
Why we need to think differently about risk measurement
The status quo approach to measuring student risk is limited in two ways. First, the use of basic risk categories is a dated technology. It made sense when states’ data systems were underdeveloped and it was difficult to assemble detailed information about students. But today, we have a plethora of information, most of which is ignored. For instance, many states allocate funding to school districts based on the number of students from low-income families. However, we are not aware of any state policy that acknowledges the difference between students identified as low-income in a single year and those persistently identified as low-income. This is despite clear evidence that the persistence of student circumstances matters, and despite readily available data that allow us to measure it.
The second limitation of the current approach is that common indicators used to identify at-risk students are both inaccurate and subject to change based on external policy decisions. The inaccuracy of common risk indicators has been shown most conclusively for free and reduced-price lunch (FRL) enrollment, which research shows is greatly oversubscribed (an outcome that is not surprising given program incentives for families and schools to increase enrollment but not to verify enrollment accuracy). For other risk indicators—e.g., indicators for English language learner status or special education status—much less is known about their accuracy, although there is cause for concern. Compounding this issue, policy changes outside the control of the education system can significantly alter the informational content of some risk indicators. This occurred, for example, for FRL enrollment with the implementation of the Community Eligibility Provision in the National School Lunch Program.
It is instructive to juxtapose the investments we make to measure student risk with the investments we make to measure student achievement. We view these two measures as the most important measures for informing education policies at all levels of government. States spend $1.7 billion annually to administer standardized tests alone, and this figure does not count the many other tests administered locally by school districts. Furthermore, an entire subfield of education scholarship is devoted to understanding test-measurement issues. In contrast, it is hard to identify any meaningful investments in the development of risk measures that are suitable for use in consequential education policies. Rather, it seems we have done the bare minimum, using convenient and readily available indicators that often have a different purpose (e.g., to allocate free or subsidized school meals to students) with no substantive efforts to verify the accuracy of the data.
How PAP could improve identification
The limitations of existing measurement practices motivate our development of PAP. PAP is a singular indicator of student risk that draws on numerous data elements available in state systems. In essence, PAP is a weighted average of many student attributes, where the weights are higher for attributes that are more strongly associated with how students perform in school.
The PAP framework is flexible and can incorporate many types of information into students’ total risk scores. It can measure this information contemporaneously (e.g., income and mobility status this year), while also accounting for individual persistence (e.g., income and mobility status over the past three years) and schooling context (e.g., average income and mobility status at the student’s school). PAP is limited by the same fundamental challenges as current categorical systems in that some of the underlying data elements are inaccurate and may be subject to change. However, by using a large number of predictors of student risk—rather than just one or two variables as is the current policy norm—the influence of inaccuracies in individual variables is reduced, as is the impact of policy-induced changes to the variables’ meanings.
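The construction described above—a weighted average of student attributes, with weights learned from how strongly each attribute predicts academic performance—can be sketched with a simple regression. The sketch below is an illustration only: the simulated data, variable names, and linear model are our assumptions, not the authors’ actual model or data.

```python
import numpy as np

# Illustrative PAP-style construction on simulated data (hypothetical
# variables; not the Missouri SLDS data or the authors' specification).
rng = np.random.default_rng(0)
n = 1000

# Non-manipulable indicator features, including a persistence measure
low_income_now = rng.integers(0, 2, n)     # low-income this year
years_low_income = rng.integers(0, 4, n)   # years low-income of past 3
moved_schools = rng.integers(0, 2, n)      # changed schools this year
ell = rng.integers(0, 2, n)                # English language learner
iep = rng.integers(0, 2, n)                # has an IEP

X = np.column_stack([np.ones(n), low_income_now, years_low_income,
                     moved_schools, ell, iep])

# Simulated standardized test score (lower = higher risk)
score = (0.5 - 0.3 * low_income_now - 0.1 * years_low_income
         - 0.2 * moved_schools - 0.25 * ell - 0.3 * iep
         + rng.normal(0, 0.5, n))

# Weights are estimated from the data, not assigned by hand
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Each student's PAP is their predicted academic performance
pap = X @ beta

# Flag (roughly) the bottom 20% of predicted performance as at risk
cutoff = np.quantile(pap, 0.20)
at_risk = pap <= cutoff
print(f"students flagged at risk: {at_risk.sum()} of {n}")
```

In this framing, the "weights" are simply regression coefficients, so attributes more strongly associated with performance contribute more to the risk score; in practice a state could use richer models and many more predictors, as the article's Missouri application does.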
In our proof-of-concept application of PAP using Missouri data—the details of which can be found in our academic article—we show that PAP is more effective than common alternatives at identifying students at risk of poor academic performance. This is not surprising because PAP is designed precisely to identify these students. Still, we believe this basic finding is important: It shows that states can do a better job of identifying these students—arguably those most in need of additional supports—than is currently the case. We further show that in the context of a school funding policy, PAP can be used to target resources toward these students more effectively.
In addition, we show that PAP is more effective than common alternatives at identifying—and targeting resources toward—other at-risk student groups. Examples include English language learners, special education students, and underrepresented minority students.
Moving forward
State longitudinal data systems contain rich information about students and permit the construction of risk indicators that are much more informative than in the past. However, earnest efforts to improve risk measurement in state education policies have largely failed to materialize. PAP is a step toward the development of more informative, modernized measures of student risk that put the information in states’ longitudinal data systems to work. PAP can be readily used for diagnostic purposes now—for instance, to help policymakers better understand the allocation of resources under current funding formulas and how effectively those resources reach students at risk of poor academic performance. In addition, with further testing and development, we are optimistic that PAP, or something like it, can be incorporated directly into consequential education policies.
PAP has limitations and is not a panacea. Given the inherent difficulty of measuring student risk, no measure will be. However, it is important to remember that while we wait for the perfect solution to arrive, we continue to use the same decades-old status quo approach to measuring student risk. And we can surely do better than that.