Foreign assistance accountability in fragile environments

Migrants gather as they try to get products from a truck at a makeshift camp on the Greek-Macedonian border near the village of Idomeni, Greece, March 10, 2016. REUTERS/Stoyan Nenov

The development community has made impressive gains over the past two decades on accountability for foreign assistance and development—monitoring, evaluation, use of data and evidence, and transparency. These instruments have helped to improve the effectiveness of foreign assistance and give U.S. stakeholders greater assurance that it is well used. These accountability gains are grounded in the recognition that solutions for development will not come from Washington, Brussels, or Beijing, but from the country and people who are supposed to benefit from international aid.

An initial milestone in accountability policy was the Bush administration’s establishment in 2004 of the Millennium Challenge Corporation (MCC) as an institution built on the use of specific good governance criteria to determine who could qualify as a recipient. Once selected, MCC co-designs programs with the recipient government, ensures they are grounded in rigorous assessments and evaluations, and makes criteria and findings publicly available. The establishment of the President’s Emergency Plan for AIDS Relief (PEPFAR) in 2003 and its rigorous use of data was another stepping stone.

The Obama administration continued this trend, especially on evaluation and transparency of aid information. The U.S. Agency for International Development (USAID) issued a first-class evaluation policy in 2011 and significantly ramped up the use of evaluations, with over 1,100 conducted since the policy's adoption. The administration introduced evaluation requirements for all foreign assistance agencies, most of which, USAID and MCC aside, had no established practice of using evaluations. The Obama administration also committed to aid transparency, joining the International Aid Transparency Initiative (IATI) and directing all agencies engaged in foreign assistance to make aid data publicly available.

In 2016, Congress solidified this trend by legislating evaluation, learning, and transparency across all agencies and departments carrying out foreign assistance programs through enactment of the Foreign Aid Transparency and Accountability Act.

Limitations on accountability in fragile environments

These accountability gains are grounded in several decades of experience operating programs in a particular type of developing country: countries that are moving steadily, if erratically, along the development continuum, with relatively stable environments amenable to basic development work charted along a somewhat predictable, if meandering, course. Even in these relatively stable contexts, development actors have increasingly realized that international development needs to be driven by, and regularly adapted to, the needs, preferences, and realities on the ground.

But the situation in fragile environments is even more uncertain. The context changes quickly, the politics are complicated, and the types of interventions are equally complex. Foreign assistance in these contexts aims to repair the relationship between state and society, rebuild institutions destroyed by war, prevent the emergence of violent extremism, and create inclusion and good governance where exclusion and poor governance prevail. Achieving these transformative aims in uncertain contexts requires not just that development actors adapt to a changing context, but that they learn what works in that particular context at that point in time. This type of programming demands that U.S. agencies, departments, and their implementing partners design and reassess projects while they implement them, learning as they go.

Traditional development planning under these conditions is difficult. With unreliable or nonexistent data, it is impossible to effectively monitor even medium-term goals, especially quantitative ones, and measuring accomplishments against those goals becomes irrelevant or even counterproductive. Traditional evaluation approaches falter when projects must be altered to meet changing circumstances, and traditional monthly or quarterly monitoring often comes too late to give implementers the information they need to learn and adjust.

Adaptive programming

So, if extensive planning and design, based on robust data and a proven track record, and core accountability mechanisms do not work, or are even counterproductive, in fragile environments, what is the alternative approach to providing assistance? To its credit, USAID is experimenting with an answer, adaptive management, and integrating it across its portfolio.

To see how this has functioned in the past, consider how USAID programmed assistance to the former Soviet Union in the early years after the fall of the Berlin Wall. USAID was honest in recognizing its lack of experience, knowledge, and expertise in assisting the transition from communism; its existing modalities had not dealt with such environments. So, at least for some programs, it identified an objective and a cursory idea of how to achieve it, and in short order invited proposals on how best to pursue that objective, without the step-by-step approach typically required in aid proposals. A team would be expeditiously placed in the field with the flexibility to work with the USAID country mission to develop the specific intervention and then adjust it as the project moved along.

This recognition of the importance of adaptable programming has led to the emergence of a robust field of practice. USAID's Learning Lab is at the forefront, providing examples and guidelines for how USAID and its implementing partners can better collaborate, learn, and adapt. But as with all efforts that require significant changes in organizational culture, many of these attempts at adaptive management have not been widely adopted and applied.

One reason is that adaptive management is difficult and time-intensive, requiring scarce staff time, resources, and attention.

Another is that foreign assistance donors usually want more certainty. To convince a donor to part with its money, implementing partners have to lay out a clear plan, goal, objectives, set of activities, outcomes, indicators, and intended impact. The donor then assesses whether the implementing partner did what it said it would do and achieved the impact it promised.

Another reason is that programming tools, even good ones, end up piling on top of one another. Staff working for the U.S. government or for implementing partners are often overworked and inundated with the newest tools, best practices, and checklists. Given competing priorities, learning and adaptation can become just another tool that stays in the toolbox.

A final reason is that the entire foreign assistance system is pushed to spend money in line with budgetary accounting requirements. Even monitoring and evaluation become about spending money—getting money out the door. The focus is often on monitoring how much money is spent and whether it is spent in the right places and in the right ways (inputs and outputs), with “right” being defined as what was promised in the original proposal rather than what and how the program evolved to fit the circumstances. It’s about monitoring compliance with fiscal rules and procedures (to avoid a bad audit), and commissioning evaluations that are generally completed too late for the results to apply to improving the initial programming or project.

So, how do we simplify the focus on adaptable programming? One way is to focus on learning during program implementation. This means that the organization is able to understand the context, understand the relationship between its intervention and that context, identify intermediary outcomes, and regularly adjust programs based on information about whether what it is doing is working.

Learning can happen with or without intricate tools and best practices. The key is to give staff operating within the developing country permission and the ability to learn.

Supporting learning in the field

To enable staff in the field to learn, DC-based staff need not only to reduce the constraints placed on country-based staff but also to find ways to help them learn and improve their programming. Rather than viewing accountability as a means of ensuring compliance with a preset development plan, headquarters-level management should aim to support risk-taking and learning by country-based staff. In the best cases, headquarters and the field mission engage in regular joint problem-solving, in which the country office and/or implementing partner identifies the problem and headquarters works with them to solve it. Monitoring and evaluation approaches can support joint problem-solving by providing valuable information and assessments, but they cannot substitute for it.

At its heart, accountability is about ensuring that development staff do their best to achieve effective development outcomes. Especially in fragile and conflict-affected states, the people closest to the problem—the staff of the country office and the implementing partners who are responsible for the activities—often have the best understanding of it. The solution is to give them the opportunity to learn what works and does not work in that particular context and to adjust and adapt the project in response to this learning. This means reducing prescriptions from headquarters about what should be done and, instead, increasing the freedom that staff have to use their own judgment and real-time feedback to learn. The implication of this approach is that the skills and knowledge of development staff in the field are the organization's greatest resource and that accountability for development in fragile and conflict-affected states rests on their knowledge and judgment.

Conundrum

While it is important to recognize that current accountability instruments are not well suited to fragile environments and that we need to trust the judgment of those in the field, there remains the problem that managers at headquarters and policymakers in the executive and legislative branches are also accountable, responsible for ensuring that policies and programs are appropriately implemented and that taxpayer dollars are well spent. Hence the big conundrum: how to give those in the field maximum flexibility to adapt programs to real-world conditions while allowing Washington-based stakeholders to carry out their responsibilities? The answers require stakeholders and field managers to engage in a serious dialogue, and they may rest in a number of areas: assigning the most experienced and knowledgeable experts to the field so they earn the confidence of headquarters, frequent visits to the field by executive and legislative stakeholders, and fuller, more timely availability of information paired with serious consultations on pending issues.

This is a conundrum that will take goodwill, time, and building trust to work through.