Commentary

Getting beyond ‘did it work?’: Proposing a new approach to integrate research and policy

Every day, state and district leaders make decisions about policy design and implementation. These decisions range from the seemingly mundane—school bus route maps—to the highly consequential—the selection and design of indicators to be included in state accountability systems.

The research community has access to data, information, and even theories that could be germane to these decisions. Yet researchers are too often unwilling to answer questions that have not been settled by causal research, or, for that matter, to make policy recommendations at all (whether based on causal research or not). When researchers do respond, it is often on a timeline (months or years) and in a format (journal articles especially, but even many policy briefs) that do not directly address the questions at hand. This mismatch means that policymakers are usually making decisions without the full benefit of insight from expert researchers and their knowledge of the breadth of the existing research base.

Since the advent of No Child Left Behind, education policy has increasingly focused on finding “what works.” In general, this “what works” movement has privileged causal research—work that uses social experiments or advanced quantitative methods to carefully identify the causal effects of programs and policies—rather than descriptive, correlational, and qualitative research. While we applaud and support this causal revolution, it turns out that very few of the decisions policymakers need to make are related to questions that have existing causal evidence.

In their Every Student Succeeds Act (ESSA) school accountability plans alone, states had to make dozens of consequential choices about the design of their accountability systems. These included: how to construct proficiency and growth measures of performance; whether to combine ratings into a single index; which additional indicators of school performance to use beyond assessment results and graduation rates; and whether to test subjects other than math and English language arts (ELA) for accountability purposes.
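To make one of these choices concrete, consider the single-index question. A summative index is typically a weighted combination of component indicators, along the lines of the sketch below; the indicators and weights here are hypothetical, chosen purely for illustration rather than taken from any state’s plan.

```latex
% Hypothetical summative index for school s: a weighted average of a
% proficiency indicator, a growth indicator, and a graduation-rate
% indicator. The 0.4/0.4/0.2 weights are invented for illustration.
\[
  \mathrm{Index}_s \;=\; 0.4\,\mathrm{Proficiency}_s
                   \;+\; 0.4\,\mathrm{Growth}_s
                   \;+\; 0.2\,\mathrm{GradRate}_s
\]
```

Every element of this formula (which indicators to include, how to scale them, and how to weight them) is a consequential design decision in its own right.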

The existing causal research base has little to say about these design issues. For example, researchers have developed many kinds of growth models: one-step and two-step value-added models, student growth percentiles, simple subtraction-based or gain-score models, and others. While researchers may advocate for one model or another on conceptual grounds, we know of no research that says “choosing growth model X produces better achievement gains than choosing model Y.” Similarly, descriptive research has found that focusing on math and ELA in accountability systems has narrowed the curriculum both toward and within those subjects. Does this imply that a system that uses more subjects’ test results for accountability would lead to better outcomes? That question, too, cannot be answered by existing causal research.
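To make the contrast among model families concrete, here is a stylized sketch of two of them. The notation is ours and deliberately simplified (student i, year t, school s(i)); it is not drawn from any particular state’s system.

```latex
% Gain-score model: student i's growth is the raw change in test
% scores between year t-1 and year t.
\[
  g_i \;=\; y_{i,t} - y_{i,t-1}
\]

% One-step value-added model: school s(i)'s growth measure is its
% estimated effect \mu_{s(i)}, after conditioning on prior achievement
% and observed student characteristics X_i.
\[
  y_{i,t} \;=\; \beta\, y_{i,t-1} + X_i'\gamma + \mu_{s(i)} + \varepsilon_{i,t}
\]
```

Each model embodies a defensible design philosophy, but, as noted above, no causal study tells a state which choice produces better outcomes.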

Even where causal evidence exists, many policy questions will still fall into grey areas. For instance, we know of one study that compares accountability under an A-F regime to accountability under a regime where no summative grades were provided. That study found achievement was better under the former than the latter. Does this one study, from New York, provide sufficient causal evidence for all locales to adopt letter grades? We would say not. But even if it did provide sufficient evidence to warrant that decision, there is no evidence we are aware of that tests an A-F system against some other way of reporting overall school performance.

Research is just one factor that influences policymaking; values, professional judgment, legal frameworks, and other factors matter, too. But right now, the questions asked and answered by researchers are too disconnected from the needs of policymakers for research to have much influence at all. Policymaking suffers as a result.

Solving the problem through a new system of research and dissemination

We see a need for a new structure and a new kind of writing to inform the important decisions policymakers must make on a daily basis. In our view, this work would have several key features.

First, the questions answered through this system must be motivated by issues identified directly by practitioners. If the questions are not coming out of real-world problems, the answers will not be widely used. We envision two possible approaches. One option would be to assemble sitting panels of state and district policymakers chosen to represent diversity along several dimensions (position, district size/location, types of students served, etc.). These panels would be responsible for surfacing difficult decisions they and their colleagues need to make (or problems they need to solve) and sharing them to be addressed. Another option would be to use an element of crowdsourcing, perhaps making available a website where issues posed by policymakers could be upvoted or downvoted until they received sufficient support to be investigated. In either case, the questions or issues would be motivated by what policymakers were hearing or experiencing in the field, rather than by what researchers thought was interesting or relevant.

Second, the questions would be answered by informed experts (perhaps writing with policymakers or wonks deeply familiar with policy and practice), relying on their judgment and their analysis of the full scope of the existing literature. This is a very different approach from that of, for example, the What Works Clearinghouse, which uses a set of predetermined criteria for evaluating research, such that anything that does not meet those criteria is not reviewed. These experts would need to be carefully chosen. For instance, many strong researchers nevertheless hold obvious ideological stances that would make them poor candidates to serve as fair arbiters of the existing evidence. Furthermore, authors’ work would likely need to be reviewed by other academic and policy experts to ensure it is accurate, clearly written, and actionable. However, unleashing experts with deep content knowledge would result in a richer analysis of what is and is not known and would lead to greater trust in the recommendations.

Third, the answers to the questions would actually be … answers to the questions! Too often, the norms and structures of academia (e.g., the journal peer-review system) mean that academics prefer to pass the buck on difficult questions, going to great lengths to avoid making actual recommendations. It will be challenging to get researchers out of this trap. But we think there is growing interest, especially among younger academics, in influencing policy through public engagement, so we believe this challenge can be overcome. The work we propose, therefore, would be different: it would only employ experts who were willing to make informed recommendations based on their expertise. These experts need not provide a single answer for each question (it is perfectly reasonable to give multiple options and to include caveats), but specific and concrete suggestions are what will make the work useful.

Fourth, the work would need to be as contextually relevant as possible. (Here is where expert judgment would be even more essential.) The writing would clarify which dimensions of the evidence were most likely to be contextually specific and which were not. It would clarify which recommendations were based on strong evidence, which were based on weaker evidence, and which were based on informed hypotheses or theories. Where possible, it would include disaggregations of relevant data or evidence so that policymakers could put the findings into local context. Where the key data are unavailable nationally, it would include recommendations for questions policymakers could investigate with local data.

Finally, the work would be produced rapidly, made easily accessible, and updated frequently. We envision that the main form of dissemination would be an attractive website backed by an aggressive outreach strategy, though targeted mailings or even a magazine-like print or digital publication could also emerge. Written pieces would be approachable in length (perhaps two to three pages) and readability, such that a policymaker could read and digest one in a few minutes; these could be paired with supplementary material in a separate document. The work would be turned around quickly, ideally in a couple of weeks or less. Authors could update their work periodically as new research came to light, and they would be encouraged to do so (e.g., through annual reminders). And, of course, all of the work would be open access, ensuring that paywalls would not get in the way of policymakers’ use. It might also be beneficial to allow others to respond to these pieces: other researchers with their own ideas and recommendations, and policymakers and practitioners with their own experiences and additional questions.

One example of such writing (though it was originally written for a different purpose) is here. This piece, provided as an open letter comment during the U.S. Department of Education’s ESSA rulemaking period, made specific suggestions about the design of status-based performance measures under ESSA. Even though it was not aimed directly at states, the letter was cited in three states’ ESSA plans and “informed discussions” (as reported by state officials) in an additional four (as well as at the Department of Education). The letter’s success, even though it was neither prompted by policymakers’ requests nor written directly to them, suggests that future work targeted squarely at their questions might have even bigger effects on decisions.

Of course, there are many challenges to building and sustaining this kind of publishing model. For one, everyone involved in the operation would need to be paid: the authors for their time writing, the policymakers for their efforts in surfacing questions, and even the reviewers for providing feedback. We envision launching the effort with support from philanthropy and sustaining it either through continued philanthropy or through fee-for-service arrangements for more targeted questions. Further, dissemination could not be an afterthought; it would have to be carefully considered before launching the effort, since all this work would serve little purpose if no one saw it. Finding the right writers, willing to offer direct and reasoned recommendations, may also be a challenge, but we imagine this will become easier as the potential impact of the work becomes apparent.

Similar approaches exist, but they’re not solving the problem

Several existing resources address aspects of this need, but none fully bridges the gap. Journalistic outlets such as Education Week, Chalkbeat, and Education Next report on research in more accessible language. However, they typically prioritize new research findings from individual studies rather than synthesizing what is known generally about a policy area, and what they publish is not driven directly by policymakers’ questions. Academic journals do sometimes publish syntheses of streams of research, but they are usually written in academic jargon, focus on questions of academic rather than policy interest, and are published in outlets that are inaccessible to policymakers because of paywalls.

Similarly, the federal government sponsors the Regional Educational Laboratory (REL) program, which is intended to provide policymakers with access to research evidence and technical assistance, as well as to conduct applied research. But constraints in the legislation enabling the REL program, in the contracts for the individual regional centers, and in the nature of the federal bureaucracy mean that the RELs cannot move nimbly to address rapidly emerging policy questions and cannot make specific policy recommendations.

Membership organizations such as the Education Commission of the States directly respond to policymakers’ questions and may provide access to research findings on a faster timeline, but typically the work is done by staffers rather than national experts with deep understanding of the academic literature and research methodology.

After nearly two decades of federal investment in high-quality education research, we know more than we ever have about education processes and policies and how they affect outcomes for kids. But we still lack sufficient structures that connect policymakers and the research community in real time, and as a result, policy decisions are not as well informed as they could be. We suspect that a new resource for building these connections would be highly used and would yield an impressive policy return on investment.
