(re)Searching for impact

Researchers who receive funds from donors now have to undergo all kinds of contortions to demonstrate “impact.” Not only are we supposed to do the research and collect the data; in the search for impact we must also blog, tweet, post on Facebook, engage with the media, and strive for that ultimate seal of approval, “a seat at the table.” The U.K., as usual, is leading the charge. The 2014 Research Excellence Framework asked researchers to demonstrate how their work would lead to positive impacts for “the economy, society, culture, public policy or services, health, the environment, or quality of life, beyond academia.”

It has reached the point where researchers put out newsletters showing that their randomized trial had impact because someone, somewhere, started to “scale it up.” If those who believe so deeply in the importance of an experimental counterfactual are happy to jettison any counterfactual in search of demonstrable impact, we are indeed in serious trouble. (One such example is Chapter VI of this report, but there are many others.)

This constant desire for impact is distorting effort, decreasing the quality of research, and undermining democratic discussion and debate in the countries that need it most. Research funders have, at best, a poorly specified model of the production function of “impact,” let alone of the link between impact and social welfare.

The deeper problem is that this increasing micromanagement of research may reflect something else entirely: a fundamental breakdown in trust between funders and researchers. If this is indeed correct, replacing non-verifiable components of research with verifiable proxies will not address the underlying problem and, in the long run, will be bad for everyone. We need to talk.

To start that conversation, here are three key examples.

Example 1: In some cases, research may spur policy. But in many cases, its appropriate role is to hold back policy. Suppose a policymaker who just “wants to get on with it” (a likely donor favorite) receives a signal of what to do—say, “build roads.” Based on this one signal, she decides on the appropriate policy. But now suppose she commissions further research. It is almost certain that the new research will produce additional signals that are not consistent with the original signal received by the policymaker—perhaps “rail is better than roads.” Although in the long run this accumulation of signals will lead to better policy, in the short run it will make it harder for the policymaker to push through what she perceived as the best way forward. We see this in our own work: Top management often uses research to confirm “what they already know,” leading to uncomfortable conversations when the research shows otherwise. As with the drunk and the lamppost, this is research used for support rather than for illumination.

Example 2: Blogs, tweets, and other such activities compete directly for research time. I am neither sufficiently skilled nor sufficiently trained to produce these outputs, and having to do so directly impinges on my research. (Of course, if you are good at it and enjoy it, by all means do it!) I understand that technical papers may require translation for a wider audience, but that should be the job of someone trained to communicate complicated subjects. There may be a trade-off between comparative advantage and economies of scope, but if that is the model donors have in mind, it needs to be put on the table and discussed clearly. Again, I am not arguing that blogs are bad. All I am saying is that forcing researchers to write their own blogs, or to use their own research funds to pay for them, can be a costly diversion.

Example 3: It is not that hard for me to get a seat at the table. But my having a seat at the table can hurt the poor as much as help them, and we have to maintain some firewall between our research and policy formulation. My personal approach has been to decline such a seat when it is for agenda setting or policy formulation. My research can inform policy, but it cannot make policy, because policy must account for a myriad of issues that I have not studied. Particularly problematic for me are cases where a “policy champion” wants to scale up what our experiments have shown. In each of these cases, we must take care to ensure that all our advice is publicly available and open for debate—particularly in countries where governments are authoritarian and democratic traditions are weak. Counterintuitively, we must strive for less impact precisely in the places where our research has the highest ex ante chance of being scaled up.

Alternatively, where the seat at the table is for the implementation of a policy, I have participated actively. But serious policy implementation requires a time commitment that makes it hard to combine with research. In my only such experience, it was close to a full-time commitment for more than a year, and it could have continued indefinitely.

What is really going on?

Knowledge is a funny commodity because its value is notoriously hard to determine. That is why, for most research, where value is not linked to market demand, academia assigns a set of peers (conference participants, referees, and journal editors) to adjudicate the value of what we have produced. But what if the preferences of our peers do not lead to the research that would be most impactful for the poor in a given country?

Suppose we are trying to decrease infant mortality. We could set up an operational research team that works with large hospitals, tackling each problem as it comes and tracking progress with daily statistics on neonatal deaths. Dr. Armida Fernandez, the former head of neonatology at Sion Hospital in Mumbai (one of the largest public sector hospitals in the city), tells of how she cut neonatal deaths by 50 percent in the 1970s by (a) putting a wash basin inside the ward instead of outside; (b) getting rid of incubators that required warm water teeming with germs and replacing them with space heaters; and (c) rubbing babies’ bodies with oil to preserve body heat.

A research team could work closely with such dedicated physicians to provide feedback, monitoring, and even new ideas. But a much more publishable paper might be one that uses innovative financial incentives and prospect theory to decrease neonatal deaths by 5 percent. The second study would get the neurons buzzing among our peers; the first would earn accolades but would have less chance of a high-level publication—especially for a program that is not in the U.S.

This particular problem of whether academic preferences are sufficient for impact, how to identify when they are not, and how to fund research when there is a gap has not been adequately articulated or studied. Instead, donors have been too quick to transfer the responsibility for adjudication from our peers to policymakers. But policymakers may have their own biases, and, particularly in places with poor governance, their preferences may not adequately reflect the welfare of the poor.

So here is my plea: Let’s sit down, articulate the problem clearly, and discuss what can be done. And yes, there may be both theoretical and empirical research that can help us create better policies and institutions if a severe problem is indeed identified. But an inadequate and poorly theorized statement of discomfort should not distort research the way it has been doing over the last few years.

(P.S. I may be over-projecting my own frustrations. If you are similarly worried, or have examples you would like to share, please write a comment on the blog or send me an email, and we can compile the responses.)
