Last week, the Commission on Growth and Development, chaired by Nobel laureate A. Michael Spence, issued a path-breaking report on strategies for sustained growth in developing countries. The report pushes the frontiers of our understanding of the complex process of growth. In a concise way, it offers an antidote to the “Washington Consensus” that stultified much development thinking over the past two decades.
The Growth Report covers a wide range of issues and offers sensible advice in many areas. It changes the thinking about development economics in ways that should have far-reaching practical implications.
The report underlines a fact that at first sight seems obvious, but is often forgotten in development debates—poor countries are not simply the same as rich, mature countries, only with less capital, technology and skills. In fact, poor countries lack many of the institutions that allow markets to function. That means the way an economy responds to policy change in a poor country may be very different from the response observed in mature economies. For a development policymaker, the challenge is to make decisions in an environment of great uncertainty. The new insight offered by The Growth Report is to recognize that the most sensible strategy in these circumstances is “learning-by-doing”: an experimental or pilot approach that tests how policies chosen from a menu of options effective elsewhere might work in a particular country context.
We would add to this the observation that an effective learning-by-doing approach needs to marry strong evaluation and implementation capabilities with an explicit focus on scaling up successes throughout the economy and, where feasible, beyond borders. That process of experimentation, followed by scaling up whatever works, is what allows developing countries to grow much more rapidly than advanced countries.
In the private sector, scaling up can happen quickly thanks to market forces. When a successful firm shows that profits can be made in some industry, it attracts many competitors, and a growth surge follows. But in the public sector, incentives and accountabilities are more blurred, and successful pilots do not always get scaled up. The most basic practical challenge posed by The Growth Report is how best to identify and replicate at scale successful public interventions.
Examples of successful scaling up exist, of course, such as the decades-long effort to fight river blindness in Africa, the micro-credit and community development schemes of two well-known Bangladeshi NGOs, and the successful large-scale anti-poverty programs in Mexico and Brazil that rely on conditional cash transfers to poor people. But much too often, governments and aid agencies simply do not focus on evaluating interventions, on taking successful pilots to scale, and on sustaining programs through the vagaries of governmental change. Indeed, there is frequently an excessive focus on seeking innovative solutions and starting new interventions, rather than on implementing the tried and true.
Take as examples three public interventions that The Growth Report describes as having very high rates of return: early childhood development, agricultural research, and urban development. In each area, the evidence from pilot programs is overwhelming that more investment would have a high payoff. Yet governments in developing countries (and aid agencies) have not focused on these areas, and the needs remain vast. One might argue that this is because alternative investments offer even higher rates of return. But this would be wrong. The real issue is the politics of public policy—the recognition that scaling up does not happen automatically when a pilot is tried and evaluated as a success; rather, it requires systematic attention to incentives, resources, and leadership.
Why don’t such high-return projects get implemented? One immediate practical problem is that few developing countries can take a long-term perspective on development, and they therefore underinvest in projects whose results take many years to mature. Early childhood development, agricultural research and well-managed cities are all areas where the real development benefits are measured over decades. Normally, long-gestation investments are unattractive because the benefits are discounted over a long time horizon, so they rightly receive lower priority. But in these examples, the benefits are so large that the returns remain attractive even after discounting.
Unfortunately, the development system is biased against long-term investments. Developing country policymakers rarely have the leadership and vision to focus on the long term, especially when confronted by volatile political cycles. Aid agencies increasingly insist on “measurable results”—which in practice means short-term indicators—as a condition for their aid. The International Development Association, a branch of the World Bank that extends soft credits with 40-year repayment periods, has a Results Measurement System that looks at just a few indicators to determine how much aid countries should receive. None of the three examples mentioned above and highlighted by the Growth Commission report is included among these indicators. Not surprisingly, these investments get crowded out.
So what can be done in practice to encourage countries to take the long view and replicate successful programs with well-established high returns?
A growth-oriented approach in these areas would require three things on the part of developing country policymakers. First, identify early childhood development, urbanization and agricultural research as priority areas for governmental intervention and aid support. Second, identify and bridge the prevailing gap between what is known to work from pilots and scaled-up practical implementation. Political obstacles from vested interests need to be analyzed, understood and, as far as possible, removed; leadership and institutional mandates need to be clarified and resources made available. Third, ensure that interventions are evaluated against realistic medium-term outcomes, and that those that work are replicated and scaled up.
Aid agencies could also help in several ways. They can do more to promote and disseminate the results of evaluation in these areas. They can make resources more readily available for these areas, perhaps by introducing a “replication fund” mechanism—a funding scheme that specifically supports the replication and scaling up of interventions that evaluations have shown to be successful. They should do more to introduce stakeholder analysis, political analysis, citizens’ report cards and client surveys as routine instruments for country strategies and project appraisal, to better identify and overcome political obstacles to scaling up. Finally, they could carry out institutional audits of themselves to learn how the scaling-up principle could be better supported by the agency’s institutional culture, policies and practices, and what needs to be done to create the internal incentives and accountabilities that ensure the principle is in fact acted upon systematically.
In sum, the Growth Commission pointed to a central requirement for success: learning-by-doing. We would only add that a key element of this process is a systematic focus on scaling up what works.
Homi Kharas is a member of the Working Group of the Growth Commission. For more analysis on the implications of the Growth Commission report for urbanization, see Johannes Linn, “Urbanization: Some Practical Implications of the Growth Commission’s Findings.” For more on the dimensions of and experience with scaling up development interventions, see Arna Hartmann and Johannes Linn, “Scaling Up: A Path to Effective Development.”