From the recent MOOC Research Initiative Conference (#mri13) to the data-driven education workshop at NIPS, more research based on courses run by edX, Coursera and Udacity will be coming down the ‘Pike. (We say “coming down the ‘Pike” in Massachusetts, because we have this big turnpike that transects the state. Do other people say “coming down the ‘Pike”? Do you say “coming down the ‘Pike wicked fast?”) Here are four kinds of research to look out for:
- Fishing Expeditions in the Exhaust
- Experiments in the Periphery
- Anthropology in the Field
- Design Research in the Core
[[UPDATE (2/2/14): Here’s a graphic that I’ve been using to illustrate these ideas.]]
Fishing Expeditions in the Exhaust
Computer scientists sometimes refer to the tracking logs that come out of software as the “exhaust.”
Much of the early research that we are seeing on MOOCs (my own forthcoming work included) is simply the reporting of observations from these clickstream data. Institutions create a bunch of courses, educators run those courses, and researchers go in afterwards to see if anything interesting turns up. Since the educators and researchers are often not working closely in tandem, many courses are not built with carefully designed research questions, and methods for answering those questions, prepared in advance. So researchers are going fishing.
As an early-stage research approach, this seems perfectly fine. I think we may get some useful data from a policy perspective--for instance, examining how closely the rhetoric of "democratizing education" lines up with the demographic characteristics of actual MOOC participants, or getting a sense of how much interest there is in MOOCs among different populations of higher education's non-consumers.
I also think that we’re going to have some interesting things pop up in the data that inspire more carefully designed research experiments about teaching and learning. If we see interesting correlations between course structure or teaching approach and outcomes, we can start probing more intentionally in those spaces.
The great risk of this approach is that we're looking at what we have rather than at what we care most about. It's the drunkard looking for his keys under the lamppost because that's where the light is. We risk developing fixations on clickstream behavior, persistence, attrition, certification--which are all imperfect proxies for learning, which is what we actually care about.
So, this is good stuff for the early stages, but we’ll need to build beyond it.
Experiments in the Periphery
There are experiments happening throughout xMOOCs, but again, because of a disconnect between researchers and course teams, many of these experiments are happening at the periphery rather than within the instructional core. This disconnect is totally understandable; it's a massive effort to build any kind of course. But as a result, many experiments are being designed so that they can be implemented without any attention from the faculty or course teams. Look for priming statements in surveys and courseware, follow-up emails to randomly selected teams, and other kinds of experiments that don't actually get into the core of the course.
I suspect that we may find some nifty nudges, primes, and user experience enhancements here, and lots of small effect sizes can add up. But we shouldn’t expect big breakthroughs from innovations outside of the core.
Anthropology in the Field
So we have trillions of cells of data about what people are clicking, and not so much data about what’s changing in people’s heads. We have a zillion time stamped logs, and not a clear sense of what’s happening in between the timestamps.
I have heard very little about qualitative research in the MOOC field (I know of one interview study happening in one HarvardX course), and I think there is a wide-open space for people to do some really informative work here. Where are people taking courses? Who are they talking with about them? Are there typical patterns of studying or engaging? Are there important learning activities not being captured by the clickstream?
I’m a big believer in methodological pluralism, and for all the quantitative data these online platforms produce, making sense of that data is going to require lots of insights from human beings: through interviews, field observations, screencasts, and content analyses of forums and other peer-production environments. danah boyd and Kate Crawford’s Six Provocations for Big Data is helpful for thinking through this, and Mimi Ito et al.’s ethnography of teens and interest-based learning, Hanging Out, Messing Around, and Geeking Out, is a great model.
There are human learners taking these courses, and their voices matter; they are not just user accounts.
Design Research in the Core
The research that I’m most excited about, and the research that is hardest to pull together, will be design research, where faculty, course development teams, and researchers work closely together to design courses that have important and interesting questions baked into the architecture of the course, and where the courses are carefully instrumented to capture data that will answer those questions. These courses will be designed with multiple opportunities for assessment, so that we don’t have to wait an entire course run to start iterating on designs and refining hypotheses.
Research in these environments is a forethought, not an afterthought. Course faculty will have clearly defined learning goals and a theory of action for how their instructional methods will help students meet those goals. The key research efforts will take place in the heart of that theory of action. Rather than experiments that tinker with parts of the course that instructors aren’t paying attention to, they’ll be experiments that manipulate the parts that faculty care most about.
So as you see the research unfolding from these efforts, see if this taxonomy helps you think about the different approaches that researchers are taking. I think all of these modes of research have the potential to make important contributions. That said, I suspect the most impactful research will be anthropological work in the midst of courses and design work that begins long before courses launch, rather than approaches that comb through tracking logs after all has been said and done.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.