Recently submitted state plans for implementing the “Every Student Succeeds Act” (ESSA) provide insight into whether research is making inroads into education policy at the state level. Based on my review of a sample of plans, a fair answer is that it is not. A previous post in this series by Martin West describes how ESSA created opportunities for states to use research and evidence in ways that improve student outcomes.1 Opportunities, yes, but most of what is in the plans could have been written fifteen years ago.
To date, most of the attention on submitted plans has focused on the accountability structures they propose. ESSA requires each state to specify how it will hold schools and districts responsible for meeting the state’s education goals, unlike No Child Left Behind, which specified an accountability structure that applied to all states. Other organizations are reviewing these aspects of plans.2 My focus emerges from another ESSA requirement: each state has to designate at least 5 percent of its schools, and high schools with graduation rates below 67 percent, as low-performing and use “evidence-based interventions” with them.
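To make that identification rule concrete, here is a minimal sketch of the logic, assuming a hypothetical data layout; the field names and the summative “index” are placeholders for whatever rating a state computes, not anything specified in the plans themselves.

```python
def identify_low_performing(schools, share=0.05, grad_floor=0.67):
    """Flag the bottom `share` of schools on a summative performance
    index, plus any high school whose four-year graduation rate falls
    below `grad_floor`, per the ESSA identification requirement.

    `schools` is a list of dicts; the keys used here ("name", "index",
    "is_high_school", "grad_rate") are illustrative placeholders.
    """
    ranked = sorted(schools, key=lambda s: s["index"])
    cutoff = max(1, round(len(ranked) * share))
    flagged = {s["name"] for s in ranked[:cutoff]}
    flagged |= {
        s["name"]
        for s in schools
        if s.get("is_high_school") and s.get("grad_rate", 1.0) < grad_floor
    }
    return flagged
```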
I looked at one other aspect of plans—how they proposed to “use data to help educators be more effective.” Intervening in schools falls under a different funding stream than using data to improve educator skills (Title I for the first and Title II for the second), but both clearly involve a role for research and evidence.
Research can enter in various ways, some of which I do not focus on. For example, some plans cited statistical research to support their n-size determination (the minimum number of students in a subgroup, such as English learners, above which schools are held accountable for outcomes for that group). Some plans cited research to support their choice of a “nonacademic indicator” in their accountability structure. As has been reported elsewhere, chronic absenteeism has been a favorite choice, and research on it is cited in many plans.3 Some plans cite research on early warning systems designed to flag students who may be in need of support to help them progress in school. Debates about n-size have been occurring at least since NCLB. Chronic absenteeism and early warning systems are relatively recent.4
what the plans say
Reviewing all 51 plans (each between 100 and 200 pages plus appendixes) would have been too extensive an undertaking. Instead, I sampled 10 states with probability proportional to their 2015 K-12 student enrollment. That sampling process yielded California, Texas, Florida, Illinois, Ohio, Michigan, Indiana, Arizona, Colorado, and Alabama. These 10 states account for about half of the country’s K-12 enrollment.
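For readers curious about the mechanics, below is a minimal sketch of a probability-proportional-to-size draw using systematic sampling on the cumulative enrollment scale. The enrollment figures are illustrative placeholders, not the actual 2015 counts, and this is one common way to do such a draw, not necessarily the exact procedure I used.

```python
import random

# Illustrative 2015 K-12 enrollments (placeholders, not actual data);
# a full run would list all 50 states plus the District of Columbia.
enrollments = {
    "California": 6_200_000,
    "Texas": 5_200_000,
    "Florida": 2_800_000,
    "New York": 2_600_000,
    "Illinois": 2_000_000,
    "Ohio": 1_700_000,
}

def pps_systematic_sample(sizes, n, seed=None):
    """Draw n units with probability proportional to size, by taking
    n equally spaced points along the cumulative size scale.

    Note: any unit larger than the sampling step would be selected
    with certainty; production code handles those units separately.
    """
    rng = random.Random(seed)
    units = list(sizes.items())
    total = sum(size for _, size in units)
    step = total / n
    start = rng.uniform(0, step)
    picks, cumulative, i = [], 0.0, 0
    for k in range(n):
        target = start + k * step
        # Advance to the unit whose cumulative interval contains target.
        while cumulative + units[i][1] <= target:
            cumulative += units[i][1]
            i += 1
        picks.append(units[i][0])
    return picks

print(pps_systematic_sample(enrollments, n=3, seed=1))
```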
Table 1 displays how state plans describe their approaches to using evidence to support low-performing schools and to using data to help educators be more effective. The text in the table is mostly taken from the plans themselves, edited to remove acronyms and make the wording more concise.
The variability in the table is noticeable. California’s plan for improving low-performing schools essentially amounts to “we’ve got this.” Ohio’s plan is much more detailed about the steps it will take to promote improvement. How long a school can underperform before improvement requirements take effect varies from two to four years, depending on the state. Some states require low-performing schools to partner with external entities. Some call for state longitudinal data to be a source that educators will use to improve their practices. Other plans simply say data will be used somehow.
Overall, it is hard not to reach the conclusion that plans mostly ignored research on what works and what does not to achieve particular outcomes (effectiveness research). The logic that is most evident in plans goes something like this: if a school does not improve after some number of years, the school and its host district will do a “needs assessment,” and a “root cause analysis,” which will support choosing appropriate evidence-based interventions. Not one of the ten plans offered an example of how that process might yield evidence-based interventions that schools could implement. Requiring a needs assessment and a root-cause analysis might fairly be interpreted as “intervening” with schools, but if your doctor tells you there’s something wrong, you would expect to hear a treatment plan. The Department of Education’s guidance to its peer reviewers who scrutinize the plans is unclear about what they should be looking for as treatments.5 There is a glaring missed opportunity to tie effectiveness research more closely to identified needs.
States understand that using the expression “evidence-based” in these plans, over and over, is a plus. In using the term, however, plans do not provide enough detail to allow a reader to assess the evidence base being referred to. For example, Arizona’s plan indicates it will work with low-performing schools to implement “evidence-based interventions which are bold and based on data.” Yes, well. It goes on to say it will work with districts to support selection of “innovative, locally-selected evidence-based interventions leading to dramatic increases in student achievement.” If there is substance underlying Arizona’s claim to be able to help districts achieve dramatic success with students, why wait?
It is at least a bit uncomfortable that, under ESSA, states do not require schools to undertake interventions until years have passed and millions of students have continued to perform at low levels (ESSA requires that states designate at least 5 percent of schools, plus low-graduation-rate high schools, as having to improve; together these schools enroll more than 2 million students). Why schools are not immediately identifying needs and root causes is unclear. Perhaps to conserve resources?
Another feature of some plans is that schools designated as needing to improve must work with external partners or entities. How working with external partners will improve schools is unclear; it seems unlikely that schools fail to improve chiefly because they lack partners, and partners do not hold the keys to unlocking achievement. It would have been useful to see at least citations to evidence that working with partners is a pathway to improvement.
Plans to use data to improve educator skills are even less clear than plans to implement evidence-based interventions. A number of plans appear to describe what states currently do with data, which means their “plan” is to keep doing it. That might be defensible if they had evidence that what they are doing is working, but none of the plans offers such evidence.
Several states note that their teacher evaluation systems will use value-added models and that those evaluations will point the way to improving teacher skills. Using value-added models to evaluate teachers has been contentious, but the models certainly are grounded in research. Research demonstrating how to use the outcomes of those evaluations to improve student learning is a different matter, however, as I have documented previously in this series.6
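For readers unfamiliar with the approach, a stylized value-added specification looks something like the following; this is a simplified textbook form, not the model any particular state’s plan proposes:

$$ y_{it} = \lambda\, y_{i,t-1} + X_{it}'\beta + \tau_{j(i,t)} + \varepsilon_{it} $$

Here $y_{it}$ is student $i$’s test score in year $t$, $y_{i,t-1}$ is the prior-year score, $X_{it}$ collects student and classroom characteristics, $\tau_{j(i,t)}$ is the estimated “value added” of the teacher who taught student $i$ in year $t$, and $\varepsilon_{it}$ is an error term. The teacher effects $\tau$ are what evaluations use; turning those estimates into better teaching is the step the research has yet to demonstrate.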
noteworthy aspects
Five states (Ohio, Indiana, Michigan, Alabama, and Illinois) indicated they will set up “clearinghouses” or listings of interventions that have been vetted for evidence of effectiveness. Clearinghouses seem like a reasonable idea; in fact, the Institute of Education Sciences has been operating one, the What Works Clearinghouse (WWC), for nearly 15 years, during which it has reviewed thousands of studies and released hundreds of reports. Reinventing the concept seems inefficient, though it should be noted that ESSA’s evidence standards and WWC evidence standards differ, and creating vetted lists with the WWC as a starting point will require some effort. Indiana proposes that its clearinghouse list only programs that have evidence of effectiveness in Indiana. This seems a bit strict, like asking my doctor to prescribe only medications that have been shown to work on patients living in New Jersey.
A number of plans mention “multi-tier systems of support.” The logic of these systems is that students, schools, or districts can be arrayed into tiers. The lowest tier applies to just about everybody; those in higher tiers need more support. Arraying individuals into tiers can be cost-effective to the extent that lower-cost forms of assistance are applied broadly and higher-cost forms are reserved for those who demonstrably need them. It is like triage in a hospital emergency room. However, what happens in the highest tier still needs to be identified. The notion of using tiers is simply structural; the tiers need to be filled with something.
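As a sketch of that structural logic, consider the toy function below; the indicators and cutoffs are invented for illustration, and a real system would use validated early-warning thresholds.

```python
def assign_tier(absence_rate: float, reading_gap_years: float) -> int:
    """Assign a student to a support tier (illustrative thresholds).

    Tier 1: universal, low-cost supports applied to everyone.
    Tier 2: targeted supports for students showing warning signs.
    Tier 3: intensive, high-cost supports for those with greatest need.
    """
    if absence_rate > 0.20 or reading_gap_years > 2.0:
        return 3
    if absence_rate > 0.10 or reading_gap_years > 1.0:
        return 2
    return 1
```

The sketch also makes the gap concrete: the function routes students into tiers, but it is silent about what the tier 3 intervention actually is.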
devolved authority has its costs
ESSA moved more authority for K-12 education back to the states, but there are sensible reasons the federal government should continue to invest in education research: all states share in the benefits of those investments, at no direct cost to themselves. It remains up to the states, however, to take advantage of them.
When I started reviewing plans, I expected to see more concrete ways effectiveness research was being used or would be used. What I found is closer to a leap of faith: needs will be assessed, causes will be identified, and then suitable interventions will be selected. That last step is a big one, though, and it is where most of the resources will be spent. Researchers and states will need to work together more closely than these plans suggest to make it happen.
The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. He is currently not an officer, director, or board member of any organization with an interest in this article.
Footnotes
1. https://www.brookings.edu/research/from-evidence-based-programs-to-an-evidence-based-system-opportunities-under-the-every-student-succeeds-act
2. https://bellwethereducation.org/publication/independent-review-essa-state-plans
3. http://www.npr.org/sections/ed/2017/09/26/550686419/majority-of-states-plan-to-use-chronic-absence-to-measure-schools-success
4. State plans can be downloaded from https://www2.ed.gov/admins/lead/account/stateplan17/statesubmission.html. Each state also posted its submitted plan on its own website.
5. https://www2.ed.gov/admins/lead/account/stateplan17/essastateplanpeerreviewcriteria.pdf
6. https://www.brookings.edu/research/teacher-observations-have-been-a-waste-of-time-and-money