One month ago, I wrote on the link between policy and impact. I argued that funders have a mis-specified model of impact, which has distorted research efforts. I was humbled by the large number of comments and emails I received on this blog—both in support and in opposition.
Lead Economist, Development Research Group
I was also directed to discussions of research and impact, such as social anthropologist Marilyn Strathern’s talk on impact in research funding (hat tip to Andrew Brandel at the Institut für die Wissenschaften vom Menschen, Vienna) and the fascinating panel at the recent American Economic Association meetings on the problems with publishing in the economics profession (h/t my colleague Quy-Toan Do). Strathern’s discussion of the tension between research as a description of the present and the funder’s desire for prediction provides much food for thought on the fundamental role of research. Similarly, the AEA panel’s take on the growing obsession with publishing in the top-5 journals, and the damage it may cause to the economics profession, is a must-listen for anyone interested in how academia responds to the incentives generated within the profession.
Three further points arose in the discussion that followed.
First, the problem with funding and impact is not limited to economics. There is widespread agreement that the current notion of impact distorts efforts toward activities with low social returns. For instance, Avinash Kishore Shahi from the International Food Policy Research Institute writes:
“What qualifies as ‘impact’ in donor funded research sets up terrible incentives for researchers. What you need for impact, in most cases, is not a real engagement with policy formulation or implementation, but a friend or friendly figure in those circles who would cite you, acknowledge you, or praise you—in writing—and say somewhere that they are doing what they are doing because your research said so. Anyone who has worked with developing country governments knows that you achieve such ‘impact’ not by doing good research, but by wining and dining government officials, taking them to exotic locales (or fancy universities), and by hiring their retired senior colleagues on a retainer. It used to happen in BCGs and McKinseys. Now, research institutions are doing the same.”
Similarly, Professor Nicholas Baigent from the London School of Economics writes:
“I only just switched to working on the concept and measurement of ‘total/aggregate violence’ after retiring from doing very abstract social choice theory for just about all my working career. I have been shocked at the sorts of questions I get asked in seminars. ‘So how does this help reduce violence?’ My answer: ‘I hope it leads to people who work in the area reconsidering what they think they know about violence.’ It is a hard sell. While it will make little difference to me as I approach 72, my younger co-authors, Dzenana Pupic (Graz University) and Magda Osman (QMUL) will need to worry about not only the ‘impact,’ but the ‘immediate practical impact’ of whatever thoughtful work they do.”
And, in an email to me, Assistant Professor Manoj Mohanan from Sanford School of Public Policy and the Duke Global Health Institute writes:
“Here’s one source of the specific problem you point to—if one were to broadly classify research into two buckets (a) research targeted at policy and (b) academic research questions that (might or might not) have policy implications, we could say:
- “The first is more of a straight policy evaluation question—does intervention A achieve its goals, does it do better than alternative B? For research of this nature, the notion of impact based on policymakers’ subsequent decisions is relatively straightforward.
- “The second—where we conduct research to investigate an underlying mechanism (like your example) of responding to incentives—could have policy implications down the line. But our primary target for this type of research is our academic colleagues. It could well have a policy implication—but that is not the driving force in the short run.
“The problem—in my view—is that donors are tempted to put all of this in the same bucket and use metrics that are likely more appropriate for the former. If they could separate these two types of work, and have different expectations, it might help some of this issue.”
Second, and picking up on Manoj’s point on the very different purposes of research, Andrew Brandel takes issue with the idea that research can be commodified into project “nuggets,” each with a predictable impact:
“The problem of impact, as Jishnu suggests, is multiple and serious. For one, even if we acceded to the idea of ‘impact’ provisionally, as a barrier to funding, it assumes that not only the degree of value but the forms it might take are discernible ahead of time. And even barring all of these problems, again as Jishnu notes, it presumes that value judgments would be possible and measurable, allowing us to compare incommensurate kinds of interventions—like the money form/labor value does for commodities. The notion of impact simply does not stand extension to its logical conclusions. To take an absurd example, how might we compare the values of combating poverty versus cancer? By number impacted? By economic burden on society? How would we incorporate all the value contributed by the people saved or helped by those respective efforts? Such questions could only be asked, somewhat ironically, outside of science in the domain of ethics, by definition.
“But there is, as Jishnu says, also another fundamental and internal kind of problem in the question of impact, and it is one that pertains to the ways in which science itself is understood. Even if we leave off the question of ‘impact on society,’ I think the idea of discrete projects is anathema to scientific inquiry and pertains instead to the commodification of knowledge. This bourgeois scientific regime demands that products be clearly defined and predictable, and that the process of their production be clearly articulable and distinguishable from others (tacitly some kind of propriety). It cannot tolerate failures or adjustments or new directions. It cannot bear serendipity or spontaneous inspiration. These, any student of the history of science knows, are the very stuff of scientific advancement.”
Third, detractors argue that researchers who shy away from impact will not remain accountable and may come to sound like a lobby group. This is an important critique, and the language of transparency and accountability is often used to invoke centralized systems of decision-making. But it bears further analysis.
The key question is: to whom should researchers be accountable? Suppose we subscribe to the idea that each research “project” has benefits and costs, and that the value of the information generated from each specific project should be non-zero. I agree with cost accountability, and I don’t think that researchers are siphoning off millions to line their pockets. Most of us work in difficult conditions, trying hard to cut costs as much as we can (yes, even in the World Bank—my public health colleagues are constantly surprised when they learn just how little our research costs). So the accountability in question is about the benefits, and this is again where a system of adjudication and trust comes in. To begin with, as Professor Nayanika Mookherjee from Durham University reminds us (private communication):
“One thing to keep in mind: Impact was introduced in the U.K. by the Conservative government as a way of ‘accounting’ for social science disciplines, as we are otherwise seen to have no impact on society and to just be in our ‘ivory towers.’ So the issue of mistrust toward social science is central to this, and it also follows a natural science template. These days one has to write impact statements even for new projects. So research is no longer an intellectual exercise but first and foremost a functional one. Impact is about proving the usefulness of social sciences so that we get government funding.”
The alternative, which I have argued for, is that accountability remains within academia, so that a researcher’s peers judge the “impact” of research. This is by no means an efficient system (in the AEA panel referenced above, Akerlof makes the evolutionary point that if peers choose judges like themselves, there will be long cycles, fashions, and fads). But, like democracy, it is probably better than the alternative.
The alternative is that we are responsible to taxpayers (which we are) but without the intermediation of the academy. I retain strong reservations about moving to such a notion of research, where expert institutions are delegitimized in the name of populism. The academy may have problems that increase when it behaves as a monopoly (here is a great read on “incest on the Charles”), but as long as there are competing groups that are not hermetically sealed from each other, discussion and associational life engender progress.
And frankly, I don’t know why forming a lobby is so bad. In one country where I work, we were recently told that researchers are constantly harassed (arrested, hassled, not allowed to work, etc.) precisely because “they have no lobbying presence.” Given the increasing correlation of poverty, conflict, and violence, I find it incredible that we still don’t have something like Researchers without Borders that, at the very least, documents every case of harassment and intimidation that researchers face in the field. Lobbying has its problems, but surely it should not be so easy for us to be divided and conquered?