USAID’s Evaluation Policy
March 7, 2012
The evaluation of government foreign aid programs is key to accountability for taxpayer dollars, imperative in the aim of “doing no harm,” and essential if development is to be an ameliorative science, rather than an ad hoc compilation of good intentions.
In light of this, USAID has recently released its Evaluation Policy, which I believe has both merits and letdowns. To be frank, the policy is largely underwhelming in the amount of detail it provides for what will be done “if and when opportunities exist” to evaluate impact.1 However, the nuts and bolts are there, so I’ll chalk the lack of detail up to a well-informed audience that needs no explanation, rather than apathy toward the detailed science of evaluation…but I think I’m being too lenient.
Here are my thoughts:
* I was pleased to see mention of integrating evaluation into the design stage of all programs, as well as a nod toward theories of change and a dedication to baseline data collection. However, I worried about the lack of explanation regarding how outcome measures will be selected, and I laughed at the judgment that using someone from another US government agency to lead a group of predominantly USAID evaluation staff will count as having an “outside expert” to “mitigate the potential for conflict of interest.” I understand the desire to have USAID staff learn, but at what cost to validity?
* The policy states that non-randomized designs “should” (i.e. not “will”) be utilized where randomization is deemed “infeasible.” Given that randomization is such an essential part of good evaluation, I’m interested to see if, when, and how often USAID claims that this step is “infeasible.” Randomization can be tricky in the circumstances under which USAID often works, but it should not be cast aside flippantly.
* I hope that USAID lives up to its goal of providing “evaluation findings that are based on facts, evidence and data…[rather than] exclusively upon anecdotes, hearsay and unverified opinions.” That’s good, because we’re all quite bored of reading reports that are fraught with platitudes and generalizations.
* All evaluations are now to be publicly pre-registered, and all final reports are to provide information on evaluation methodology2 and study limitations3. I’m glad to see greater emphasis on transparency, and interested to see what substantive results follow from these goals. I know that many government organizations are currently making a big push to put all relevant data online in a public forum, and I’d imagine that USAID is at the forefront of this effort.
* I would have liked to have seen some mention of how in-country nationals might participate in the design, implementation, and/or interpretation of evaluation findings (meaning lay and expert members of the community in which the intervention is based, not just USAID’s in-country leaders).
Overall, I was pleased with USAID’s Evaluation Policy and I truly hope that future evaluations live up to the ideals set forth therein. However, I also hope that future evaluations improve upon the Evaluation Policy’s lack of scientific maturity and detail.
1 What about making opportunities?
2 This includes “all tools used in conducting the evaluation such as questionnaires, checklists and discussion guides.”
3 This will supposedly give particular attention to “the limitations associated with the evaluation methodology (selection bias, recall bias, unobservable differences between comparator groups, etc.).”