Date: Monday, March 31, 2025
Hi, I’m Julie Lorah, PhD, an independent consultant at my company, Solarium Consulting. I apply quantitative methods to analyze and evaluate STEM educational programs. Using advanced techniques, I examine data with a social justice lens to assess program effectiveness and identify trends. My goal is to support evidence-based decisions that help improve STEM initiatives for diverse participants and stakeholders.
The history of the development of statistical methods is questionable at best. Early probability theory was developed by mathematicians interested in studying the properties of gambling. The t-test was developed to answer questions relating to the manufacture of beer. Many statistical procedures were developed in the context of questions of agriculture. And statistical methods have been used to “prove” all sorts of misleading findings and as evidence for the perpetuation of problematic power structures.
So why is it that these same statistical methods are unquestioningly used by evaluators today? We’ve lost sight of the history and context of many of our modeling and statistical choices and have instead come to rely on formulaic processes for data analysis. The problem is that we often miss big pieces of context in our rush to compute standard descriptive statistics (e.g., the sample mean) or standard inferential outcomes (e.g., the ubiquitous p-value). For example, a sample mean is a summary statistic measuring central tendency for a given variable, and many standard statistical procedures are simply more complex analyses of mean structure. Yet, when we’re examining outcomes (e.g., high school students’ degree of understanding of a biology concept), is the mean really what we want? Do we really want our statistical analyses to entirely overlook individual differences?
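To see how a mean can paper over individual differences, consider a minimal sketch with made-up scores (the class names and numbers are purely hypothetical, not drawn from any real evaluation): two classes with identical means on a biology concept check can describe very different student experiences.

```python
from statistics import mean, stdev

# Hypothetical scores on a biology concept check (0-100 scale)
class_a = [70, 71, 69, 70, 70, 71, 69, 70]    # everyone clusters near 70
class_b = [40, 100, 45, 95, 40, 100, 50, 90]  # sharply split outcomes

# Both classes have exactly the same mean...
print(mean(class_a), mean(class_b))    # 70.0 and 70.0

# ...but wildly different spreads, i.e., individual differences
print(round(stdev(class_a), 1))        # about 0.8
print(round(stdev(class_b), 1))        # about 28.4
```

An evaluation that reports only the mean would call these two classrooms equivalent, even though one curriculum serves all students similarly while the other leaves half of them behind.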
Fortunately, there is a solution. In her book, Escape from Model Land, Dr. Erica Thompson argues for engaging more diverse groups throughout the entire research process, an argument that applies equally to evaluation. Thompson contends that since most researchers come from Western and privileged backgrounds, their worldview is limited. Stated another way, Dr. Catherine D’Ignazio and Dr. Lauren F. Klein urge researchers to “Embrace Pluralism” as the fifth principle in their book, Data Feminism. So why not include more diverse perspectives within the entire evaluation process? For example, if an evaluator is working to assess the efficacy of a high school biology curriculum, why not start by gaining insight from the students and science/biology teachers on how the research design itself will proceed? Why not ask students and biology/science teachers from diverse schools across the country?
Moreover, why not find individuals from these groups to serve as paid consultants throughout the process, from study design and survey construction to reporting results? They may not have formal training in evaluation, research, or the quantitative methods we use, but they certainly have extensive domain expertise. This leads to a key issue that Dr. Thompson has identified: the role of context. Involving diverse voices and more varied domain experts can yield creative ways of obtaining data, measuring constructs, modeling phenomena, and interpreting results. In short, context must not be overlooked, and attending to it should lead us to question the overuse of the most common statistical procedures in the evaluation of science programs.
Learn more about issues related to social justice and statistics through our LibGuide (https://guides.libraries.indiana.edu/statisticsSJ) or the list of books I’m reading with my book club (https://www.solariumconsulting.com/book-club/).
The American Evaluation Association is hosting Extension Education Evaluation TIG Week with our colleagues in the STEM Education & Training Topical Interest Group. The contributions all this week to AEA365 come from our STEM Education & Training TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.