In late April, the U.S. Department of Education released the Civil Rights Data Collection (CRDC) for the 2015-16 school year, a congressionally mandated biennial survey that has collected and reported data on critical school indicators since 1968. It was the second time that all public schools in the U.S. were required to report the share of their students who were chronically absent, defined in this case as missing three or more weeks of school.
Under No Child Left Behind, schools reported average daily attendance, but not how many students were missing so much school that they were at risk academically. Until two years ago, the national scope of chronic absenteeism was unknown because most schools were not measuring it and few states were reporting it. Schools tracked who was absent without an excuse (truancy) without paying much attention to how much instructional time students were losing to absences of all kinds: excused absences due, for example, to illness; unexcused absences; and suspensions.
Editorial director of FutureEd - Georgetown University’s McCourt School of Public Policy
Executive Director - Attendance Works
Research Professor and Co-director of the Center for the Social Organization of Schools - Johns Hopkins University School of Education
Director of the Everyone Graduates Center - Johns Hopkins University School of Education
In contrast, measures of chronic absence track all missed days, because too much lost instructional time, regardless of the reason, can erode student achievement. As we discuss in this post, accuracy, standardization, transparency, and oversight are critical in measuring and reporting chronic absenteeism, which is becoming a high-stakes accountability measure in most of the country.
Improvements in tracking explain “surge” in chronic absenteeism
According to the latest data, 8 million students (more than 15 percent of all students nationwide) were chronically absent in 2015-16. This figure is an increase of more than 800,000 students in just two years since the inaugural reporting.
You might think that chronic absenteeism is surging nationwide, but that would be the wrong takeaway. Rather, data from the 2015-16 survey suggest that school districts and states have gotten better at tracking and reporting attendance. In other words, the apparent "surge" in chronic absenteeism largely reflects more accurate data, not a sudden increase in the number of students missing school.
The 2013-14 school year was the first for which all schools were required to submit information on chronic absence. The calculation includes all absences (excused, unexcused, and those due to disciplinary action), a novel concept for districts accustomed to reporting only truancy.
Not everyone got it right. About 12 percent of schools either had missing data or reported zero chronically absent students, and for many of the latter, zero was unlikely to be the true figure. To their credit, when alerted to issues with their initial submissions, some states (like Florida) and districts (from large districts like New York City and Atlanta to smaller districts like Columbia 93 in Missouri) moved to fix their data.
Figure 1 shows how the distribution of school rates of chronic absenteeism changed from the first year of reporting (2013-14) to the second (2015-16). Almost half of the increase in the total number of chronically absent students in the 2015-16 period (just over 380,000 students) came from the roughly 5,500 schools that had reported no chronically absent students in the first year of reporting. Nearly three out of five schools reported higher rates of absenteeism than in the earlier submission.
These upward "revisions" were smaller or nonexistent in states with a history of auditing attendance data. In Connecticut, for example, among schools appearing in both waves of the survey, no regular public school (i.e., one that does not serve only a particular population) had missing data in either year, and only two alternative schools did. The number of schools reporting zeros fell from 38 to 27, and the overall share of chronically absent students in Connecticut declined between 2013-14 and 2015-16, consistent with the progress the state has made in reducing chronic absence. In Florida, by contrast, the number of schools reporting no chronically absent students went from 236 in 2013-14 to 181 in 2015-16, and the share of students who were chronically absent rose by 1 percentage point. The Florida schools still reporting no chronic absenteeism are predominantly virtual, alternative, and charter schools.
Chronic absenteeism remains a pervasive problem
While better data reporting, not a spike in absences, explains the apparent "surge" in chronic absenteeism between the 2013-14 and 2015-16 reporting, the more accurate data from 2015-16 indicate that chronic absenteeism is indeed a pervasive problem. In 58 percent of schools, at least one in 10 students is chronically absent. That represents about 52,000 schools, 6.7 million chronically absent students, and 31.3 million enrolled students.
As shown in this video, evidence suggests that chronic absenteeism has a deleterious effect on the achievement of the whole class. In other words, more than 60 percent of public school students are affected either directly or indirectly by very high levels of chronic absenteeism in ways that we suspect may harm their learning.
Chronic absenteeism is a problem across the country. In every state, there are schools that report 10 percent or more chronically absent students. In eight states and the District of Columbia, more than 20 percent of students were chronically absent in 2015-16 (figure 3).
Chronic absenteeism as a high-stakes accountability measure
As part of the Every Student Succeeds Act (ESSA), states were required to add a measure of “school quality and student success” to their statewide accountability system. Thus far, 36 states and the District of Columbia plan to hold schools accountable for improving school rates of chronic absenteeism as part of high-stakes accountability under ESSA.
Because this is a relatively new metric, districts and states need to focus on setting consistent rules and promoting data accuracy to ensure fairness and to prevent schools from gaming the system to make their attendance rates look better.
The first step for states is to settle on common statewide definitions for attendance so that each state can make apples-to-apples comparisons of schools within its borders for accountability purposes. The Education Department's EdFacts initiative has set some consistent definitions for the chronic absence data it collects: students must attend at least half a day to be counted as present, and schools should count any student who has been on the rolls for at least 10 days.
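To make these definitions concrete, here is a minimal sketch (not an official CRDC or EdFacts tool; the function and field names are our own) of how a district might compute a school's chronic absence rate under the conventions just described: a day counts as present if a student attends at least half of it, students on the rolls fewer than 10 days are excluded, and a student missing three or more weeks (roughly 15 school days) is flagged as chronically absent.

```python
# Illustrative sketch only: field names, the 15-day threshold, and the
# 10-day enrollment minimum are drawn from the definitions discussed above.

def chronic_absence_rate(students, absent_threshold=15, min_enrolled_days=10):
    """students: list of dicts with 'days_enrolled' (int) and
    'daily_fraction_attended' (list of floats, 1.0 = full day)."""
    counted = 0
    chronic = 0
    for s in students:
        if s["days_enrolled"] < min_enrolled_days:
            continue  # enrolled too briefly to be counted at all
        counted += 1
        # A day counts as present only if at least half of it was attended.
        days_absent = sum(1 for f in s["daily_fraction_attended"] if f < 0.5)
        if days_absent >= absent_threshold:
            chronic += 1
    return chronic / counted if counted else 0.0
```

The point of the sketch is that two schools applying different conventions (say, counting a half-day as absent, or excluding mobile students differently) would report different rates for identical attendance patterns, which is exactly why common definitions matter.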
More clarification is still needed, however. Setting common definitions would help alleviate the following types of problems:
- In several states, school districts are allowed to set their own definition of a school day. In these states, schools cannot be fairly compared to each other for accountability purposes.
- Some states do not count a student’s absences until she has attended a school for 45 to 60 days while others allow schools to disenroll students if they don’t show up for a certain number of consecutive days. A highly mobile student could escape notice altogether, and schools could use withdrawals or disenrollment policies to fudge chronic absenteeism numbers.
- Some schools disenroll students taking extended vacations, only to re-enroll them when they return, leaving no trace of absences or the weeks of lost instructional time.
Beyond setting rules, states need to vet the data they receive from schools and districts to ensure their accuracy. In Connecticut and California, data systems are set up to signal state managers when attendance rates show sudden drops or gains. Unusual circumstances, like a group of students who were disenrolled and then re-enrolled, generate a call from the state office. Such approaches not only ensure quality data but also discourage any effort to game the system.
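The kind of automated check described above can be sketched very simply. The following is a hypothetical illustration, not any state's actual system, and the 10-percentage-point threshold is an assumption chosen for the example: it flags schools whose reported rate swings sharply between reporting years, or that report zero chronic absence, so that a state analyst can follow up by phone.

```python
# Hypothetical audit check: thresholds and data layout are assumptions.

def flag_suspicious_changes(rates_by_school, max_swing=0.10):
    """rates_by_school: dict mapping school name -> (prior_rate, current_rate),
    with rates expressed as fractions. Returns schools whose rate moved more
    than max_swing between years, or that report zero chronic absence; both
    patterns merit a manual review before the data are accepted."""
    flags = []
    for school, (prior, current) in rates_by_school.items():
        if abs(current - prior) > max_swing or current == 0.0:
            flags.append(school)
    return flags
```

A real state system would layer on more context (school type, enrollment churn, disenrollment records), but even a crude screen like this would have caught the thousands of schools that reported zero chronically absent students in the first CRDC wave.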
Credible data promotes more learning
With good data, reliably vetted, we can get down to the business of improving attendance and student achievement. Chronic absenteeism should be a vital sign for our schools—like a heartbeat or temperature. If it is irregular, i.e. if a student is missing too much school, it signifies the need to dig deeper to understand the underlying challenges.
Attendance Works, of which Chang is the Executive Director, offers a framework for improving student attendance. In this comprehensive tiered approach, educators play a critical role as a first line of prevention and early intervention. The key is starting with positive engagement and problem-solving to identify and address barriers to getting to school. This requires an intentional shift away from punitive action and blame, which have not been shown to yield sustained improvements in attendance.
Schools facing high levels of chronic absence can’t do this alone. As described in the Fall 2017 report, Portraits of Change, educational institutions need to forge partnerships with public and community agencies to ensure their students are in class so they can learn.
These efforts, in school and out, depend on accurate information. If absenteeism trends are skewed by faulty data or outright gaming, the metric will lose its power to identify the students and schools most in need of support. The good news is close scrutiny of publicly available data can be an essential tool for catching inaccuracies and improving the quality of what is reported.
The Brown Center Chalkboard launched in January 2013 as a weekly series of new analyses of policy, research, and practice relevant to U.S. education.
In July 2015, the Chalkboard was re-launched as a Brookings blog in order to offer more frequent, timely, and diverse content. Contributors to both the original paper series and current blog are committed to bringing evidence to bear on the debates around education policy in America.