I had a great time last week on RadioBoston chatting with Matthew Chingos about his study comparing an online statistics course with a face-to-face course. Matt and his colleagues wanted to know whether the online version of a course had the same effect on student achievement (as measured by passing rates, grades, and standardized test scores) as a fairly traditional intro stats class. To make this comparison, they used a research method called a randomized controlled trial, in which participants (students, in this case) volunteer to be randomly assigned to either the regular (control) class or the online (intervention) class. Most people are familiar with the idea of randomized controlled trials from medical research, where patients are randomly given either an experimental drug or a placebo.
The major advantage of this method is that, when executed correctly, only one thing differs between the control group and the intervention group. In this study, the only difference between the two groups of students was which course they took.
We could imagine more problematic research designs without random assignment. For instance, let's say that we had all the students at one college take regular intro stats and all the students at another college take online intro stats. We can compare outcomes between the two groups, but the problem is that the two groups differ in lots of ways: they go to different colleges, have different demographic backgrounds, experience different school cultures, and differ in all kinds of other unobservable ways. When we look at differences in scores, it is hard to tell what is due to the online intervention and what is due to all those other differences.
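To make that confounding problem concrete, here is a toy simulation of my own (not from Matt's study, and with made-up numbers): course format truly has zero effect on scores, yet because the college offering the online course happens to enroll higher-ability students, the non-randomized comparison shows a large spurious "online effect," while random assignment from a single pool recovers an effect near zero.

```python
import random

random.seed(0)

def exam_score(ability):
    # True model: course format has NO effect on scores; a student's
    # score depends only on prior ability plus random noise.
    return 70 + 10 * ability + random.gauss(0, 5)

N = 10_000

# Design 1 (no randomization): the college offering the online course
# happens to enroll higher-ability students (mean ability 1.0 vs 0.0),
# so ability is confounded with course format.
online_college = [exam_score(random.gauss(1.0, 1)) for _ in range(N)]
regular_college = [exam_score(random.gauss(0.0, 1)) for _ in range(N)]
confounded_gap = sum(online_college) / N - sum(regular_college) / N

# Design 2 (randomized trial): one pool of students, each assigned to
# a format by coin flip, so ability is balanced across the two groups.
online, regular = [], []
for _ in range(N):
    s = exam_score(random.gauss(0.5, 1))
    (online if random.random() < 0.5 else regular).append(s)
randomized_gap = sum(online) / len(online) - sum(regular) / len(regular)

print(f"confounded estimate of the 'online effect':  {confounded_gap:+.2f}")
print(f"randomized estimate of the 'online effect':  {randomized_gap:+.2f}")
```

The confounded design reports roughly a ten-point "effect" that is really just the ability difference between the two colleges; the randomized design's estimate hovers near the true value of zero.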
It's very hard work to set up randomized experiments, and Matt and his colleagues have my commendation not just for running the experiment, but also for offering quite a bit of experimental design detail and advice in their paper.
All that said, randomized controlled trials do not have a monopoly on "rigor." My one strong disagreement with Matt is his characterization, in the radio broadcast, of randomized trials as the only "rigorous" methods in education technology research. That's the kind of characterization from policy researchers that just drives the rest of us crazy, and it obscures more than it clarifies.
As I have said above, for ascertaining the efficacy of an intervention (like a new online course) at scale, randomized trials are the way to go. But we need to learn much more than the efficacy of interventions. For instance, at the end of this study, all we know is that the hybrid stats courses achieve the same outcomes as the traditional ones. This study, like other similar randomized trials, is nearly useless for telling us why hybrid courses work (indeed, Matt mentioned during the interview that he hadn't even examined the entire online course).
We need rigorous anthropological research to learn more about the study and learning habits of students in these online courses. We need rigorous design-based research to take those insights and develop ever more effective learning environments through iterative trial and design. We need rigorous survey research to track how universities are deploying these new online teaching strategies. We need rigorous learning analytics research to identify patterns in course-taking behaviors that can inform design.
It's silly to call randomized controlled trials the "gold standard" of educational research, when such studies are only one piece of the puzzle necessary to develop the potential for technology to enhance learning. Moreover, such language marginalizes the important work, done especially by qualitative researchers, that is vital to the field of education technology.
I'm happy to consider randomized trials the gold standard of comparative efficacy research. But comparative efficacy research is not the sum total of the work that is needed; we need diverse scholars examining diverse subjects through diverse methods, and as a field we need to make sure that all of it is rigorous.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.