
Measuring Aid Effectiveness Effectively: A Quality of Official Development Assistance Index

Last year, rich countries spent more than $128 billion to improve development in poor countries. That is a substantial amount of money. But it is hard to say whether aid has really helped to raise growth, reduce poverty, or contribute to the Millennium Development Goals. The trouble is that aid is a drop in the bucket compared to the actual resource needs for making a major push in these areas. Adding to the complexity, so many non-aid factors affect these broad outcomes, like international food and energy prices, the global recession, world interest rates, and trade credits, that trying to identify the impact of aid becomes almost impossible.

So it is very difficult to say what aid money has done in aggregate terms; instead we tend to tell stories about individual aid projects. But there are thousands of those (around 80,000 new ones each year), so for every success story there is bound to be a story of a failure or a project where money has been stolen. Retelling these stories is what lies behind the sometimes vicious academic debate over whether aid works. Each side chooses its favorite examples and ignores the other side's.

Every aid agency faces this dilemma. It might evaluate at the project level to learn whether it is doing things right, but it has a great deal of difficulty knowing whether it is doing the right things to achieve the objectives it has set out for itself. Absent that, aid agencies cannot learn and improve, and for a long time most aid failures and successes were attributed to recipient countries, as if aid agency performance were a minor factor.

But donors want to know about the quality of aid agencies. Bureaucrats have to make decisions about which multilateral agencies to fund. Donor country citizens want to know whether their government is spending aid wisely and as effectively as possible. Aid agency managers want to know how to do better.

The Quality of Official Development Assistance (QuODA) assessment, which I developed at Brookings together with Nancy Birdsall at the Center for Global Development, is a tool and methodology designed to provide the information needed to answer these questions. Others have made similar attempts. But our work differs in five ways. First, we use as wide a range of data sources as possible and conduct our analysis both at the level of a donor country and at the level of each of its aid agencies. Second, we have developed a tool that can be updated every year so that improvements over time can be tracked; most academic studies are one-time assessments. Third, we have tried to bring the voices of developing countries themselves into our measures of aid effectiveness by drawing on various polls and surveys. Fourth, we use only cardinal quantitative measures, avoiding the qualitative judgments that are often used in expert assessments. Fifth, we benchmark agencies against each other, so that agency scores reflect comparative performance, rather than trying to separately identify a theoretical “best practice” goal.

So what is QuODA? It is a method to rank 31 donors – 23 countries and 8 multilaterals (we combined the UN agencies into a single category) – on four dimensions of aid effectiveness: maximizing efficiency, fostering institutions, reducing administrative burden on recipients, and transparency and learning.
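
To make the benchmarking idea concrete, here is a minimal sketch, not the published QuODA methodology, of how one might standardize indicators across a pool of donors and average them within each dimension, so that every score is relative to the other donors in the pool. The donor names and indicator values are purely illustrative.

```python
# A minimal sketch (not the published QuODA code) of benchmarking donors
# against one another: each indicator is standardized across the donor pool
# (z-scores), then averaged within its dimension, so a score only has
# meaning relative to the other donors. All names and values are made up.
import statistics

donors = ["Donor A", "Donor B", "Donor C"]

# Hypothetical indicators grouped by dimension:
# {dimension: {indicator: [value for each donor, in donor order]}}
raw = {
    "maximizing efficiency": {
        "share of allocation to poor countries": [0.55, 0.40, 0.70],
        "low administrative unit costs":         [0.90, 0.85, 0.60],
    },
    "transparency and learning": {
        "reporting to public aid registries":    [0.30, 0.80, 0.50],
    },
}

def zscores(values):
    """Standardize a list of indicator values across the donor pool."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd if sd else 0.0 for v in values]

# Average the standardized indicators within each dimension, per donor.
dimension_scores = {
    dim: [statistics.mean(per_donor)
          for per_donor in zip(*(zscores(vals) for vals in indicators.values()))]
    for dim, indicators in raw.items()
}

for dim, scores in dimension_scores.items():
    ranking = sorted(zip(donors, scores), key=lambda pair: -pair[1])
    print(dim, "->", [(name, round(score, 2)) for name, score in ranking])
```

Because the scores are standardized within the pool, adding or removing a donor shifts everyone's numbers, which is exactly the sense in which the benchmarking is comparative rather than absolute.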

Since aid quality is multidimensional, QuODA does not try to develop a single index, but provides information on each of the four dimensions. The purpose is to drive change based on specific elements of QuODA, not to produce “name-and-shame” rankings. As it happens, when we applied our assessment, we found little correlation between the four dimensions: some donors and agencies were stronger on some aspects, others on other aspects. This empirical finding reinforced our determination not to look for a good headline by combining the dimensions, but to stay true to the notion of using benchmarking for change.
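
As a rough illustration of that check, the sketch below, using made-up dimension scores rather than QuODA data, computes pairwise correlations between the four dimensions; low values are the empirical case against collapsing them into a single headline index.

```python
# A rough check (using made-up scores, not QuODA data) of whether the four
# dimensions move together: compute pairwise Pearson correlations between
# dimension scores across donors. Low correlations argue against a single
# composite index.
from itertools import combinations
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical dimension scores for five donors, in the same donor order.
scores = {
    "maximizing efficiency":          [ 0.8, -0.2,  0.1, -0.9,  0.3],
    "fostering institutions":         [-0.4,  0.6, -0.1,  0.2, -0.3],
    "reducing administrative burden": [ 0.1, -0.5,  0.7,  0.0, -0.2],
    "transparency and learning":      [-0.6,  0.3, -0.2,  0.5,  0.1],
}

for (dim_a, xs), (dim_b, ys) in combinations(scores.items(), 2):
    r = statistics.correlation(xs, ys)
    print(f"{dim_a} vs {dim_b}: r = {r:+.2f}")
```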

Indices like QuODA are useful only if they are used. In the year since we published QuODA, we've found that civil society organizations, aid agency managers, and bureaucrats have paid particular attention to the assessment. We've tried to provide all the data, along with a web-based tool that permits anyone to make their own comparisons and to focus on whichever indicator interests them most. By avoiding making our own judgments about agencies, we think we have increased the willingness of others to take a hard look at the data for themselves.

Any quantitative assessment depends on the actual indicators that are chosen. We were guided by three criteria: (i) indicators that had been suggested by the academic literature; (ii) indicators that had been agreed to in the context of the discussions on aid quality that had taken place in high-level international forums, like the Paris Declaration on Aid Effectiveness; and (iii) indicators that are being used in recipient country mutual accountability exercises on aid effectiveness. Of course, there is some overlap between these categories, but we tried to avoid introducing our own judgments about what we think aid quality or aid effectiveness should look like.

We developed thirty indicators of aid quality, grouped under our four headings. Choosing how many indicators to use is a key part of an exercise like this. If one chooses too few, the measures might be relevant, but it can be hard to know what actions are really required. For example, if someone says “be more transparent,” there are so many ways of doing this that it is not easy to know where to start.

So good indicators are those that are actionable in the sense that it is simple to know what to do to raise your score.

Actionable indicators tend to be detailed, so it is easy to end up with a long list; indeed, some indices have hundreds of sub-components. The drawback there is that there is no sense of priorities, and the weighting given to each indicator becomes more important. Twenty to forty turns out to be a happy balance, and that is why we settled on thirty indicators.
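
A quick back-of-the-envelope illustration of that trade-off, using plain arithmetic rather than QuODA figures: under equal weighting, each indicator's pull on the overall score shrinks as the list grows, so with hundreds of sub-components the (often arbitrary) weighting scheme matters more than any single measure.

```python
# Back-of-the-envelope arithmetic (not QuODA figures): under equal weighting,
# each indicator's share of the overall score shrinks as the list grows.
for n in (5, 30, 300):
    print(f"{n:>3} equally weighted indicators -> each carries {1 / n:.1%} of the score")
```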

There are two caveats to be made. First, there are areas where we simply could not get any data that captured the issue in a meaningful fashion. Evaluation and learning is an example. We wanted to ask which agencies were true learning organizations, but there is very little data that allows you to address this in a quantitative way.

Second, we lose quite a lot when we look only at quantitative indicators and ignore the nuanced issues of development. A good example is that some agencies work in fragile states, where it is harder but perhaps more important to do good development work, while other agencies work in easier environments. How should one score that? We did not arrive at a very good answer. Instead, we tried to include some indicators on which those who work in fragile states would do better (like poverty selectivity) and others on which those who work elsewhere would do well (like governance selectivity).

Indicators are just numbers. We hope that people will use them as the basis for an informed discussion, and not as the end result. If that happens, then we are confident that data like that included in QuODA can be a useful input into the much broader discussions on aid quality that are happening around the world.