Date: Sunday, December 29, 2024
Hello – I’m Kyle Beaulieu and I’ve spent much of my career working for United States Government (USG) agencies in counterterrorism and violence prevention in conflict zones, particularly in Africa. Through my work in operations, planning, and policy, a recurring challenge I’ve encountered is the difficulty in assessing whether foreign assistance interventions are making a difference on the ground.
I have found that, when it comes to Monitoring and Evaluation (M&E) in conflict zones far from a capital, traditional performance monitoring is too linear for such a dynamic environment: we struggle to define the problem we are trying to solve or to identify a specific intended outcome; we select indicators that don’t provide a full picture; and our own bureaucracy’s procurement processes and politics can disincentivize us from recognizing when programming is actually having little impact.
Against this backdrop, I recently spent three months in East Africa on a peacebuilding fellowship where I was privileged to learn from several African civil society organizations (CSOs) working in difficult environments. I was struck by the flexibility these CSOs practiced, particularly given their limited resources and controlling governments. I learned that their Theories of Change were designed to pivot quickly, that data collection focused on just a few key indicators, and that they emphasized trust building, social cohesion, and realistic goals. This approach contrasted with my USG experience, where M&E frameworks required collecting dozens of performance indicators yet we were still often surprised by significant changes in the local context.
When I asked further questions, I learned the CSOs were using Complexity-Aware Monitoring, Evaluation, and Learning (CAMEL) approaches. CAMEL emphasizes the importance of systems and incorporates a robust framework of probing questions to identify uncertain, emergent, contested, and dynamic aspects of a local context: essentially, what could happen, how might we measure it, and how might the intervention and its partners respond? Interrogating these aspects contributes to a more nuanced understanding of the operating environment, as the following lessons, drawn from cases I witnessed in Africa, illustrate:
Ask probing questions to learn iteratively in order to Do No Harm to the target population; do not assume all stakeholders have the same goals.
Plan for continuous adaptation and anticipate potential disruptions; do not assume the intervention will go according to plan.
Look inwards to identify potential blind spots and uncomfortable truths; do not assume impartiality.
I highlight these cases from my fellowship experience in East Africa to illustrate some of the key lessons from CAMEL that I intend to apply in conflict zones going forward, with an emphasis on asking questions to interrogate assumptions, navigate uncertainty, and proactively identify and adapt to challenges.