Date: Tuesday, April 15, 2025
Hello! We are Megan López, Senior Research Associate, and Lyssa Wilson Becho, Principal Research Associate at The Evaluation Center at Western Michigan University. We work on EvaluATE, an NSF-funded grant through the Advanced Technological Education (ATE) Program. EvaluATE is a large evaluation capacity-building (ECB) initiative that aims to support grantees in leveraging evaluation to improve outcomes related to student learning and workforce preparation in STEM.
As evaluators doing ECB, we think about the evaluation of ECB on the regular. This is because we know the powerful role ECB can play in supporting important programs and initiatives that aim to understand, improve, or grow their impact.
Therefore, we embarked on a journey to learn from others who are also evaluating ECB to uncover promising practices, new resources, and opportunities to further this work. The full results of this exploration can be found in the Fall 2024 Issue of New Directions for Evaluation. While we focused on the evaluation of ECB, we learned a lot about ECB itself and wanted to share some key learnings to support your own ECB work.
ECB initiatives can often feel complex, involving a diverse set of activities aimed at building different capacities across multiple audiences. To untangle these complexities, start by clearly defining the intended outcomes of your ECB efforts, then work backward to ensure that each planned activity logically contributes to achieving those outcomes. It’s important to recognize that ECB can follow multiple pathways of impact—for instance, one pathway may strengthen the capacity to conduct evaluations, while another enhances the ability to engage with evaluation findings.
ECB initiatives in real-world settings often involve dynamic audiences—ECB recipients who come and go at different intervals and bring varying levels of prior capacity. Anticipating this fluidity creates opportunities to be more responsive to evolving needs and contexts. For example, regularly administering surveys or conducting interviews can help assess participants’ initial capacities, shifting interests, and changing contextual factors over time. This ongoing assessment allows ECB efforts to be tailored to emerging needs. However, as you adapt, be intentional about emphasizing audience strengths rather than focusing solely on deficits—listening to and learning from those engaging in and commissioning ECB are at the heart of growth and social betterment.
Reflecting on our ECB practice can lead to meaningful improvements that are responsive to changing contexts, audiences, and needs. Structured reflection tools can also be leveraged to assess the quality of ECB artifacts, elicit feedback, and identify opportunities for improvement in our ECB work. While the repository of tools to systematically assess our ECB work is still growing, some are available to get us started. For example, Urban and colleagues (2024) share rubrics to systematically assess and score ECB artifacts, including evaluation plans and theories of change.
Be sure to check out the full New Directions for Evaluation issue (it’s open access!). There are countless opportunities to advance this work (if you’re looking for inspiration, our Epilogue provides a few ideas!). So, whether you’re thinking about advancing meaningful ECB, already deep in this work, or hoping to collaborate, let’s keep the conversation going!
To learn more, see:
The American Evaluation Association is hosting Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to AEA365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.