Measuring the Quality of Aid: The Quality of Official Donor Assistance (QuODA) Second Edition

Wednesday, November 16, 2011

9:00 am - 10:30 am EST

The Brookings Institution
Kresge Room

1775 Massachusetts Ave., NW
Washington, DC

On November 16, the Global Economy and Development program at Brookings and the Center for Global Development (CGD) launched the second edition of the Quality of Official Development Assistance (QuODA) assessment before a select group of donor country and multilateral organization representatives during a private roundtable event. With the Fourth High Level Forum on Aid Effectiveness only a few weeks away, it is hoped that QuODA will help inform discussions within the international aid community. In Paris in 2005, and again in Accra three years later, the development community agreed on standards for good practice in aid delivery. QuODA provides an independent quantitative analysis of whether donors are meeting the standards they set for themselves.

The launch event began with a review of the updates to the assessment and a summary of the top findings in the latest edition. This year’s assessment analyzes the progress made by the donor community and ranks the “best-in-class” among donors. The full database is available at http://www.cgdev.org/section/topics/aid_effectiveness/quoda. As in the first edition, there is no single absolute ranking of donors, but rather rankings along four distinct dimensions. QuODA 2011 finds significant progress in three of these dimensions (Fostering Institutions, Reducing Burden, and Transparency and Learning) but no progress along the fourth, Maximizing Efficiency.

Three main themes arose from the discussion, echoing this year’s global debate on how best to help fragile states, manage for results, become more transparent and reduce fragmentation.

Several participants questioned the applicability of the indicators to fragile states, which tend to be poorly governed and are therefore harder environments in which to deliver aid effectively. Fragile states also lack sound country systems for donors to work with, requiring donors to find alternative modalities for aid delivery. While donors should be commended for tackling the hard problems associated with working in fragile states, they should not be rewarded simply for giving large amounts of aid to fragile states, nor should fragile states be ignored when analyzing aid effectiveness. Participants recognized the inherent difficulty of identifying an indicator that adequately rewards donors who work in fragile states.

The meeting’s participants also stressed the distinction between measuring actual development outcomes and measuring the processes used to deliver them. QuODA indicators are largely about process, partly because comparable outcome indicators do not exist and partly because attributing outcomes to individual donors is problematic. However, the best processes can also depend on the donor, and there is no unique set of indicators for identifying them. This is especially true for the Maximizing Efficiency dimension: views differ on the relevance and benefits of donor specialization versus responsiveness to country priorities, on working in fragile states, and on other modalities. Individual donor agencies also have quite different mandates and scopes of operation, making benchmarking—the heart of the QuODA assessment methodology—hard to interpret. Nevertheless, participants agreed that there is considerable value in a discussion of aid processes and that QuODA fills a gap by providing independent quantitative analysis of a donor’s commitment to commonly accepted standards for the delivery of development assistance.

Many participants commented that their respective organizations have taken steps to improve aid delivery, but the impact of these adjustments will not be reflected in QuODA for some time because of the lag in data collection. The main sources of data for QuODA are the OECD Creditor Reporting System (CRS) and Development Assistance Committee (DAC) databases; for QuODA 2011, the most recent complete data available from these portals are from 2009. Those disbursements, in turn, partly reflect projects designed and implemented several years beforehand. Any quantitative assessment has to manage this trade-off: the benefit of an objective measure of what is happening in practice versus the drawback that past practice may not be relevant for current decision-making if agencies have changed significantly.

Participants commended QuODA as a useful tool for determining how well donors are putting the Paris principles into action and for tracking their positions relative to one another over time. Several ideas were presented for involving stakeholders and advocacy groups outside the ‘inner circle’ and for promoting the use of QuODA’s quantitative strengths to guide agency policies and the broader aid discussion, in order to accelerate changes in the practice of aid delivery.

View the participant list »
View second edition assessment brief »
Learn more about QuODA »