Date: Saturday, April 12, 2025
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello! We are Emily Tolbert and Olivia Jones, internal evaluators at the American Heart Association. We work with teams across the Association to evaluate programs in clinical and community-based settings.
As we all know, program staff, defined as program implementers with decision-making authority over the future of the program, are crucial partners in evaluation and are often primary users of evaluation findings and recommendations. Program staff are not typically involved in conducting qualitative data collection for a variety of reasons. For instance, program staff are not usually trained in qualitative research methods, and depending on the context, their inclusion could introduce a power dynamic that threatens the objectivity of the evaluation and contributes to response bias from participants.
However, in our recent experience conducting two formative and process evaluations, we found that including program staff in the qualitative data collection process can strengthen relationships and improve the usability of evaluation findings. For Program A, which took place in a small rural community, the program staff person had an existing relationship with the community, and their presence during data collection served as a bridge between the evaluator and participants, allowing the evaluator to quickly establish a connection and build rapport. Having a program staff person serve as a co-facilitator in Program B allowed for more in-depth and relevant data collection through detailed probing and follow-up questions. Because the Program B staff person was deeply engaged in data collection, the evaluator was able to co-create recommendations while still ensuring there was reliable evidence to support conclusions. This facilitated buy-in from program staff on the evaluation findings and recommendations, increasing the likelihood that the evaluation would be used.
There were also some challenges. During data collection for Program B, there were times when participants confused the roles of the program staff and the evaluator, which could have influenced their responses to questions. There were also times when program staff became so immersed in the discussion that they contributed to data collection more than planned.
Based on our experience with these program evaluations, we identified several steps evaluators can take before, during, and after data collection to successfully include program staff in qualitative data collection if appropriate:
We hope you find these recommendations useful and consider using them if you are planning qualitative data collection with program staff in the future.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.