Date: Thursday, December 26, 2024
Hello, Lindsay Scanlon here, a lifelong learner with Guidehouse. Other hats I wear include evaluator and implementer; leadership, executive, and change coach; People, Organization, and Change Community of Practice leader; management consultant; facilitator; and writer. And I'm Cara McFadden, someone who loves providing high-quality consulting services to our public sector clients that meet their written and unwritten asks; when that doesn't happen, I'm passionate about making things right and learning how to avoid the same mistake in the future.
Our post shares a story about learning from failure after delivering a draft report to a client. The report didn’t quite hit the mark.
Years ago, our team was asked to evaluate an agency's process for planning, budgeting, implementing, and evaluating performance. We shared our findings, conclusions, and recommendations in a succinct presentation that accompanied our report. Our clients puzzled over some of the recommendations but agreed to read the report before rushing to judgment. A week later, the client called our project manager (PM). Not only did they believe our recommendations were not quite feasible, but they also thought that implementing them would set the organization back from the structure and process alignment they were trying to create.
Brené Brown teaches us that it takes vulnerability to rumble with failure and examine it rather than dismissing it as an anomaly or, worse, blaming external factors. Where did we fail in our agency evaluation? We didn't co-create recommendations. We tried to put ourselves in our client's shoes, but we didn't live what they did day in and day out. That failure turned into a learning experience that has changed our approach.
As a result, we collaborate with clients to determine how program strengths and successes can be expanded, replicated, or optimized—with the context and political will needed for implementation. We co-create recommendations, or co-refine draft recommendations, that are feasible and have tested buy-in and ownership—further increasing the likelihood of implementation.
We have made other improvements to increase the likelihood of implementation as well.
Given the success of this practice, we have expanded it to co-creating or co-refining draft evaluation questions. We sometimes find that evaluation questions should be more specific, measurable, achievable, relevant, and time-bound (SMART)—achieving the desired result with more rigor and a better basis for evidence-building. When people have an opportunity to contribute to evaluations, they feel heard and are more willing to make changes. As for the resisters—and there most certainly will be some—don't be afraid to employ a few nudges!
The American Evaluation Association is hosting Gov Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our Gov Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.