Practices and Methodology

There are no simple answers to questions about how well programs work, and no single analytic method can untangle the complexities inherent in the environments and effects of public programs. A range of analytic methods is needed, and it is often preferable to use several methods simultaneously. Methods that are helpful in the early developmental stages may no longer be appropriate once the program has become routinized and is being implemented regularly.

Sometimes information is needed quickly; at other times, more sophisticated long-term studies are needed to fully understand the dynamics of program administration and beneficiary behavior. Over the years, the evaluation field has developed an extensive array of methods that can be applied and adapted to various types of programs, depending on the circumstances and the stage of a program's implementation. Fundamentally, all evaluation methods should be context-sensitive, culturally relevant, and methodologically sound.

Common Evaluation Methods

  • Logic models and program theories
  • Needs assessments
  • Early implementation reviews
  • Sampling methodology
  • Compliance reviews
  • Performance reviews
  • Qualitative designs
  • Case studies
  • Quasi-experimental designs
  • Randomized field experiments
  • Special focus studies addressing emerging issues
  • Performance measurement systems
  • Cost-benefit and cost-effectiveness analysis
  • Meta-analysis
  • Client and participant satisfaction surveys

Effective Program Evaluation Practices

The field of evaluation has evolved and grown over the past quarter century, refining and improving its analytic methods, broadening the scope of its practice and relevance, clarifying its relations with clients and stakeholders, and adopting ethical standards for its members. A set of broad practice concepts has emerged that applies to almost all evaluation situations. These general practices provide a starting point, a framework, and a set of perspectives to guide evaluators through the often complex and changing environments encountered in evaluating public programs. The following practices help ensure that evaluations provide useful and reliable information to program managers, policymakers, and stakeholders:

  • Consultation. Consult with all major stakeholders in the design of evaluations.

  • Evaluation in Program Design. When feasible, use evaluation principles in the initial design of programs through program logic models and broader analysis of environmental systems, setting the stage for evaluating the program throughout its life cycle.

  • Life-Cycle Evaluation. Match the evaluation methodology to the stage of program development or evolution, gathering data via a range of methods over the life of the program, and providing ongoing feedback and insights throughout the program cycle.

  • Built-in Evaluation. Build evaluation components into the program itself, so that output and outcome information begins to flow from program operations as soon as possible and continues to do so (with appropriate adjustments) throughout the life of the program.

  • Multiple Methods. Use multiple methods whenever appropriate, offsetting the shortcomings of any one method with the strengths of another, and ensuring that the chosen methods are methodologically sound and contextually and culturally sensitive.

  • Evaluation Use. Identify stakeholder information needs and timelines, meet those needs via timed reporting cycles and clear reporting mechanisms, and build stakeholder capacity to understand and use evaluation results.

  • Collaborative Evaluation Teams. Promote the formation of evaluation teams that include representatives from allied fields, with a rich and appropriate mix of capabilities for following the emergence, implementation, and effectiveness of programs. Stress the importance of relevant education and experience in evaluation, while recognizing that evaluation is a complex, multi-disciplinary endeavor.

  • Culturally Competent Evaluation Practices. Use appropriate evaluation strategies and skills in working with culturally different groups. Seek self-awareness of culturally based assumptions and an understanding of the worldviews of culturally different participants and stakeholders. Diversity may be in terms of race, ethnicity, gender, religion, socioeconomic status, or other factors pertinent to the evaluation context.

  • Effective Reporting. Target reports to the needs of program decision-makers and stakeholders. Develop families of evaluation reports and related materials for each program, covering the entire spectrum of evaluative information, including accountability, efficiency, and effectiveness.

  • Independence. While seeking advice from all sides, retain control of the evaluation design, performance, and reporting. Evaluative independence is necessary for credibility and success.