Admittedly, this post is just one more attempt to avoid the inevitable: my last evaluation paper of the semester. In the next 48 hours, I need to create a draft outcome evaluation plan for my dissertation study. The assignment requires a measurable outcome question, a related hypothesis that can be quantitatively assessed, a discussion of "effect size" to determine whether I will have enough statistical power to detect an effect, and the rationale for a research design that will actually collect the data needed for the analysis. And yet I find myself writing this blog post instead because I am wrestling with a huge quandary: this assignment is forcing me to quantitatively analyze my qualitative problem (though that may be the point). For right now, however, I would like to blame Edward Thorndike for my predicament.
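For readers curious what that "effect size and power" requirement amounts to in practice, here is a back-of-the-envelope sketch, using made-up numbers, of how one might estimate the sample size needed for a two-group comparison. The effect size (Cohen's d), alpha, and power values below are illustrative assumptions, not figures from my study, and the formula is the standard normal approximation rather than an exact t-based calculation.

```python
# Rough sample-size estimate for a two-group comparison, via the normal
# approximation. All numbers are hypothetical, chosen only for illustration.
import math
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample t-test, given Cohen's d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-tailed critical value
    z_beta = norm.ppf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) at alpha = .05 and 80% power:
print(sample_size_per_group(0.5))  # ~63 per group with this approximation
```

The exact t-based answer is slightly larger, which is why dedicated power software is preferred for a real plan; the point here is only that a smaller expected effect drives the required sample size up quickly.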
As a psychology professor at Teachers College, Columbia University, Thorndike argued for a scientific approach to education. A leading educational figure in the early 1900s, he pressed for empirical evidence by calling for student learning to be measured in quantifiable ways. Thorndike's legacy can be found not only in the accountability and testing movements present today but also in the quantitative focus of educational research - hence, perhaps, the source of my current assignment.
This is not to say that I do not see a place for quantitative research. In fact, it has the potential to provide substantiated evidence of a program's effect or to explicate a concept. Earlier this week, the Washington Post described a new debate about the value of quantitative research and the use of Randomized Controlled Trials (RCTs) in education. RCTs are considered the "gold standard" of research for two reasons. First, they significantly reduce sampling and selection bias by randomly choosing participants and assigning them to either treatment or control conditions. Second, they hold constant other variables that could affect the outcome. While this increases the internal validity of the research - meaning that the study actually measures what it intends to measure - the results are not always generalizable to other contexts precisely because of those controls. Just because something works under controlled conditions does not ensure success in the "real" world. Not having read the original research cited in the Washington Post article, I cannot comment on its methodology, analysis, or validity. However, I would raise one question, not only for this study but for all quantitative studies in education: how can we really measure change with only a number?
This brings me back to the challenge at hand: I need to reduce my hypothesis to something that can be measured with statistics. Technically, I could write that I will run pre- and post-tests with an online survey and then compare median scores to look for a statistically significant difference. What bothers me is knowing that the richest data - that from which we can truly learn, that which provides the stories and details to paint a fuller description of the situation - comes from qualitative sources: observations, interviews, stories, and focus groups.
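For what it's worth, the median-comparison analysis I just described could be sketched in a few lines. The paired scores below are invented purely for illustration; a Wilcoxon signed-rank test is one common nonparametric way to check whether paired pre/post medians differ, though it is only a sketch and not my actual analysis plan.

```python
# Sketch of a pre/post median comparison on paired survey scores.
# The data are made up for illustration only.
from scipy.stats import wilcoxon

pre  = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # hypothetical pre-test scores
post = [4, 5, 3, 5, 4, 5, 3, 3, 5, 4]   # hypothetical post-test scores

# Wilcoxon signed-rank test: a nonparametric check on whether the
# median of the paired differences departs from zero.
stat, p_value = wilcoxon(pre, post)
print(f"W = {stat}, p = {p_value:.4f}")
```

A small p-value would suggest a shift in scores, but, as the surrounding paragraphs argue, it would say nothing about why the shift occurred; that is what the qualitative data are for.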
For now, I need to meet the requirements of the assignment and focus on the quantitative analysis, though I have no intention of limiting myself to only one research philosophy in the future. For my dissertation, I plan to implement a mixed-methods study that collects both quantitative and qualitative data. Despite the growing acceptance of this approach, I still wonder how long it will be before we can completely shake the legacy of Thorndike.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.