Commentary

Learning What Works: Evaluating Complex Social Interventions

A day’s thoughtful discussion among a group that approached the topic of “Evaluating Complex Social Interventions” from many varied perspectives produced a number of shared observations and insights.

Perhaps the most notable was the idea that while efforts to provide information about complex social interventions rarely produce certainty about “what works,” these efforts can and should be constructed to provide useful clues to what may work and what is promising. Both social scientists and practitioners involved with comprehensive community initiatives and other complex interventions may be well advised to avoid claiming “We know what works!” Rather, they should aspire to gathering and analyzing the information that would enable them to say, “We now have strong support for a number of informed hypotheses about what may work, which cumulatively, over time, can produce sturdy knowledge about what does work.”

Several participants pointed out in various contexts that where outcomes improve substantially in areas the public cares about (such as a significant increase in the number of inner-city children leaving school prepared for employment), issues of evaluation methodology tend to move into the background.

The group also seemed to agree on the need for greater recognition of the importance of values, norms, and rules of behavior, which have generally fallen outside the realm of economic models of evaluation.

Few challenged the contention, voiced in various forms throughout the day, that political considerations often trump rational analysis of effectiveness in making policy judgments. Despite this caveat, and although many participants expressed reservations about the current state of evaluation and how it is used in assessing complex interventions, there was strong support throughout the symposium for greater and more systematic investment in utilizing and integrating a variety of methods to compile useful, rigorous information about the operation and effectiveness of complex interventions. But such investment would be warranted, several participants cautioned, only when (1) there is clarity about expected outcomes and about the theories connecting interventions, interim markers, and outcomes, and (2) the interventions are functioning at a scale and level of intensity that make it reasonable to believe they may succeed.

There was also considerable sentiment in support of the idea that greater investment in bold interventions themselves was warranted, because “you have to try an awful lot of things to find out what does work,” or even what might work.

We trust that the symposium will stimulate participants and other colleagues to pursue further the provocative ideas that the discussion generated. To this end, we hope that the following summary of the proceedings—prepared by Kathleen Sylvester of the Social Policy Action Network, with final editing by the two of us—will prove useful.

Table of Contents

  • Report on the Symposium
  • How Values, Optimism, and (Mostly) Politics Compromise Honest Evaluation
  • Evaluation for What?
  • Is There Any Consensus?
  • Reports from the Field
  • The Tension between Rigor and Usefulness
  • The Relationship between Evaluation and Theories-of-Change
  • Iterating toward Change
  • Promising Practices and Next Steps
  • The Fuzzy Line between Design and Evaluation
  • Appendix A: Participants