Gayle Smith—USAID has ‘gotten real’ about evaluation

USAID Administrator Gayle Smith addressed the state of USAID’s evaluation practice at a March 30 event hosted by the Brookings Institution and the Modernizing Foreign Assistance Network (MFAN).

Administrator Smith spoke candidly about how, over the past five years, USAID has inculcated evaluation into the DNA of the institution. This has meant moving away from a default approach of saying “this is our success” when describing projects and programs to instead explaining “this is what we know and how we have learned from trial and error.”

“Development is aspirational, but it’s also a discipline,” asserted Smith, who stressed that USAID is shifting away from checking boxes and measuring inputs toward scrutinizing outcomes, making mid-course corrections, and working with outside evaluation experts to double check their in-house assessments.

Taking a bird’s eye view, Smith said one should think of the agency as advancing its mission in three broad settings: first, in countries and on issues where peace and stability are a given; second, in situations where a crisis is driving action and focusing attention; and third, in the toughest circumstances, where national security is the raison d’être and staff cannot move or be out in the field.

According to Smith, the agency is performing strongly in the first setting, OK in the second, and is weak on accountability in the third.

The challenges she discussed included balancing evaluation’s competing mandates of accountability and learning, and the difficulty of conducting evaluations in the many complex and fragile environments in which USAID works. As a recent USAID report notes, other challenges the agency faced when the evaluation policy was launched in 2011 included a lack of staff experience with evaluation and government processes that interfere with timely decision-making and with managers’ ability to adjust activities. Smith also acknowledged that more can be done to ensure cross-sector learning.


The impact of evaluations

The panel segment of the event—featuring Ruth Levine, director of Global Development at the Hewlett Foundation, and Wade Warren, assistant to the administrator for the Bureau of Policy, Planning and Learning (PPL)—considered how USAID can add value to work being done by organizations like the International Initiative for Impact Evaluation. It also discussed the agency’s 2013 meta-analysis of evaluation quality, which revealed progress but also room for improvement.

The panelists noted that the field of evaluation is shifting in important ways, with performance evaluations—those that assess whether projects did what they set out to do and whether they contributed to positive change—now less “in vogue” than impact evaluations. The latter examine the fundamental assumptions that underlie programs. If an activity aims to raise yields for smallholder farmers, for example, did project designers consider the effects of distributing subsidized fertilizer for one crop year and whether it would remain affordable to poor farmers in subsequent years, after the benefits ended?

Entire programs can be built around assumed goals, but one cannot take for granted that those initial, well-intentioned goals are valid. When randomized controlled trials are built into the design of impact evaluations, they can reveal when a control area fares just as well as the group that received project support.

In responding to how evaluations are used for learning, Warren cited the example of a USAID-supported Mozambique Childhood Reading Program for second and third graders, in which evaluators assessed two slightly different intervention designs. The first involved working with teachers and students to improve reading skills; the second also included school administrators. The latter design produced much better outcomes. Informed by those evaluation results, the project team consulted with government counterparts, and a decision was made to scale up the second approach and take it nationwide.

Evaluators often complain that, with an evaluation under way, they find an important new thread to pursue, but lack the flexibility to change their approach. They would benefit from being able to revise the terms of reference and guidance so as to pursue what may be a relevant and important unexpected element of a project.

Levine noted that the creation of the PPL Bureau and the restoration of evaluation practice have given the agency back its cerebral cortex, which was lost in the 2000s with the elimination of PPL’s predecessor, the Bureau for Policy, Planning, and Coordination.


Is knowledge from evaluations translated into learning?

The fifth anniversary of USAID’s evaluation policy has been accompanied by a USAID report, Strengthening Evidence-Based Development, which identifies the key functions of evaluations: to “inform effective program management, demonstrate results, promote learning, support accountability and provide evidence for decision-making.” It also notes key challenges: balancing learning and accountability, the lack of staff expertise, the timeliness of evaluations, and the ability of managers to make program changes.

A recently released report on the use of USAID evaluations found that the agency rates well on putting its evaluations to use and compares favorably with other U.S. government agencies.

The same report found that evaluations are used primarily in project design, secondarily for modifying existing activities, and least frequently in policy formulation. A key determinant in whether an evaluation is utilized is the extent to which it is disseminated; evaluations are most often distributed to USAID staff, second to implementing partners, and least frequently to country partners. The vast majority of evaluations are of specific projects, a few are at the sector level, and even rarer are policy-level evaluations.

While the study reports that, overall, USAID staff have positive experiences with the evaluation process, some express concern over poorly crafted evaluation questions and the timeliness of evaluations.

A key issue is whether the information and knowledge garnered by evaluations is translated into learning and shared beyond a narrow audience of USAID operating units.

Playing off Smith’s earlier remark that evaluation “is not a sprint, it’s a marathon,” Levine suggested that evaluations and their challenges are more akin to a physical workout regimen: they require commitment and perseverance over a long period before you see results, and, as you sweat and strain, weaknesses are exposed and you often don’t look good.

In years to come, the agency will need to contribute in real time to a growing global evidence base that can be shared among partners.


Full video from the #AIDeval event is available here.