I gave a talk last week at the Berkman Center, where I tried to summarize the state of MOOC research as it relates specifically to learning.
The short version is this: we have terabytes of data about what people clicked, and very little understanding about what changed in people’s heads. It’s hard to look at the science of learning in MOOCs, because we don’t know much about what learners are actually learning. The visual short version is this vizthink from the great Willow Bl00.
The lowest hanging research fruit tends to be more user experience research than learning research. There are lots of studies showing that people who do stuff in one part of a MOOC do more stuff in another part of the course, but very few studies that can characterize what kinds of competencies people came into a course with and how those competencies changed over time.
The main problem is that designing good assessments is really hard and time-consuming anywhere in education, and particularly difficult when you are trying to assess thousands of people instantly and without human experts. At the same time, hoovering up clickstream data and running it through clustering algorithms or regression models is relatively easy. So if you are a young scholar in a new field trying to make a name for yourself quickly (myself included), you grab the low-hanging fruit.
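To see just how easy that low-hanging fruit is to grab, here is a minimal sketch, with entirely hypothetical clickstream features and a hand-rolled k-means, of the kind of analysis the field leans on: clustering learners by raw activity counts. Note what it gives you: groups of learners who clicked a lot or a little, with no claim at all about what any of them learned.

```python
# A sketch of clickstream clustering (hypothetical data, tiny hand-rolled
# k-means). It groups learners by how much they did, not by what they learned.
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Tiny k-means over (video_clicks, forum_posts) tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each learner to the nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers as cluster means; keep old center if a cluster empties.
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical engagement features: (video_clicks, forum_posts) per learner.
learners = [(120, 15), (95, 9), (110, 12), (5, 0), (8, 1), (3, 0)]
centers, clusters = kmeans(learners)
```

A few dozen lines, no assessment design, no human graders: that asymmetry in effort is exactly why engagement analytics dominate the literature.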
So I have three suggestions for reshaping the field. First, we need to help course developers create better measures of people’s competencies, so we have outcome variables that are better than engagement metrics. Then, since learning is a measure of the change between two levels of competency, we need to at least be measuring competency at the beginning and end of courses, not just at the end. This means finding more ways to evaluate students throughout our courses. Finally, we need to do research that builds causal chains of reasoning to connect the moves we make inside courses with the learning measures that we can obtain. On the quantitative side, that means introducing experimental variation into courses so that we can better understand what kinds of instructional moves lead to what kinds of learning. On the qualitative side, that means doing more interviews and observations with MOOC learners, so we have a richer set of hypotheses about the mechanisms behind open online learning, especially the learning happening in the untrackable space beyond our platforms.
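The pre/post logic and the experimental variation argued for above can be sketched in a few lines. The scores, learner IDs, and function names here are all hypothetical; the point is the shape of the analysis: learning is estimated as change in measured competency (here, a normalized gain, the fraction of available headroom a learner gained), compared across randomly assigned course variants.

```python
# A sketch (hypothetical data) of pre/post learning measurement plus
# randomized assignment to course variants for an in-course experiment.
import random

def normalized_gain(pre, post, max_score=100):
    """Fraction of the available headroom (max - pre) a learner gained."""
    return (post - pre) / (max_score - pre)

def assign_variant(user_id, seed=42):
    """Deterministic random A/B assignment, keyed on the learner's ID."""
    return "A" if random.Random(f"{seed}:{user_id}").random() < 0.5 else "B"

# Hypothetical (pre, post) assessment scores for a handful of learners.
scores = {"u1": (40, 70), "u2": (60, 90), "u3": (20, 60), "u4": (50, 60)}
gains = {uid: normalized_gain(pre, post) for uid, (pre, post) in scores.items()}
```

Seeding the assignment on the learner ID keeps each person in the same condition across sessions, which matters when a course runs for weeks and learners log in from multiple devices.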
It has been totally fine and natural that in the early years of MOOC research, as we try to make sense of the data and the courses, people are aiming for the low-hanging fruit. But the critique that hangs over that work is that we are unearthing the obvious. The actionable advice we can offer to course developers thus far is “get your students to do stuff.” As we head into a new year, it’s a great opportunity to imagine what kinds of research would let us make claims not just about what students are doing, but about what they are learning.
The full video is below.
Many thanks to those who asked questions, shared their thoughts, tweeted and said hello afterwards.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.