Is your organization suffering from Evaluation As Usual (EAU)?  Symptoms of EAU include overly enthusiastic language about successes, wishy-washy discussion of potentially negative results, and a general lack of objective and critical data analysis and discussion.

Evaluation can be expensive, and billions of dollars are disbursed every year based on its results.  With this kind of money at stake, it’s essential that funders get the most mileage possible out of their evaluation dollars.  Unfortunately, however, much of this money is being directed to EAU.  This is, perhaps, unsurprising:  the central irony of program evaluation is that it is usually funded by the very organizations that sponsored the program under inspection, posing a natural conflict of interest.  With reputations and funding dollars at stake, challenging EAU is sometimes difficult.

Fortunately, more and more public and private organizations are recognizing the need for objective, evidence-based evaluation to accurately measure program impact, and in turn make the most effective use of limited resources in changing the world for the better.  With that in mind, here are some common indicators of EAU which may be compromising the power of your evaluation dollars.

Five Key Symptoms of Evaluation as Usual (EAU)

  1. Cheerleading language: Beware of sentences like this:  “The inspirational Kidz R Awesome program continues to drive amazing results for our nation’s neediest and most adorable young people!”  Evaluations should stick to balanced “journalistic”-style reporting and leave the colorful spin to the P.R. departments.
  2. Lack of negative findings. Are all of the results reported in the evaluation positive?  Are potentially negative results buried in the fine print and qualified with excuses?  Beware of evaluation reports that appear to be defensive or vague about results that may be less than stellar.
  3. Comparison of results to arbitrary program goals. Challenging and realistic goals are useful when designing programs and defining expectations.  But if goals are based on arbitrary or lowball targets, they are next to useless.   Well-defined goals have a meaningful context (such as national benchmark levels) and exceed what would be expected in the absence of the program.
  4. Lack of statistical comparisons when appropriate. The most effective means of interpreting program outcomes is comparison to a meaningful counterfactual (such as the progress of similar subjects not in the program).  For summative evaluations, this usually implies a rigorous evaluation design, such as a randomized controlled trial (RCT) or quasi-experimental design.  Appropriate statistical methods should be used to compare program effects to those of comparison groups (a minimal sketch of such a comparison follows this list).
  5. Lack of thoughtful analysis of evaluation results. All evaluations, including the most summative (such as external, large-scale evaluations of publicly-funded programs), should help inform future policy and funding decisions.  Useful evaluations need thoughtful, evidence-based discussion sections that reflect upon both successes and challenges in implementation of the program, and that even-handedly explore implications of results.
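
To make item 4 concrete, here is a minimal sketch of the kind of statistical comparison an evaluation report should include.  The data, group sizes, and the choice of a two-sample t-test are purely illustrative assumptions; a real evaluation would match the test to its design (e.g., regression adjustment for a quasi-experimental study).

```python
# Minimal sketch: comparing program-group outcomes to a comparison group.
# The outcome scores below are made-up illustrative data, not real results.
import numpy as np
from scipy import stats

# Hypothetical post-program outcome scores for participants vs. a comparison group
program_group = np.array([72, 68, 75, 80, 66, 74, 79, 71, 77, 70])
comparison_group = np.array([65, 70, 62, 68, 64, 69, 66, 63, 71, 67])

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(program_group, comparison_group, equal_var=False)

# Cohen's d as a simple standardized effect size
pooled_sd = np.sqrt((program_group.var(ddof=1) + comparison_group.var(ddof=1)) / 2)
cohens_d = (program_group.mean() - comparison_group.mean()) / pooled_sd

print(f"Program mean: {program_group.mean():.1f}, Comparison mean: {comparison_group.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```

Reporting the effect size alongside the significance test helps readers judge whether a "statistically significant" result is also practically meaningful.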