I’ve recently concluded two years as a research fellow at HarvardX. To bring things to a close, last week I held a workshop with course developers looking at the question: What have we learned from the last two years of MOOC research that could help improve the design of courses?
Over the next few days, I’ll release a series of short posts on seven general themes from MOOC research that could inform the design of large-scale learning environments in the years ahead.
- MOOC students are diverse, but trend towards auto-didacts (July 2)
- MOOC students value flexibility, but benefit when they engage frequently (July 6)
- The best predictor of persistence and completion is intention, though every activity predicts every other activity (July 12)
- MOOC students (tell us they) leave because they get busy with other things, but we may be able to help them stay on track (Today)
- Students learn more from doing than watching
- Lots of student learning activities are happening beyond our observation: including note-taking, socializing, and using other references
- Improving student learning outcomes will require measuring learning, experimenting with different approaches, and baking research into courses from the beginning
4. MOOC students (tell us they) leave because they get busy with other things, but we may be able to help them stay on track
In my last post, I mentioned that computer scientists have gotten quite good at using MOOC activity data to predict student persistence, but we hadn’t yet used those prediction technologies to improve student learning.
The most interesting application of dropout prediction has been using these algorithms to conduct “dynamic course evaluation.” Only about 3% of course registrants respond to a typical end-of-course MOOC survey, and these tend to be the most persistent, most satisfied students. They tell us something about how the happiest students experience a course, but virtually nothing about why people leave.
To help address this question, two groups, one from Harvard and one from Stanford, have now used machine learning to predict when students have dropped out (as opposed to just taking a break), and have then surveyed those students at the moment they left their courses. We still only get about 12% of participants (6% of registrants) to complete these surveys, but that is somewhat better than the response rate for end-of-course surveys.
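To make the idea concrete, here is a minimal sketch of the kind of pipeline involved. This is not the Harvard or Stanford teams' actual model; the features, labels, and data below are invented for illustration, assuming weekly activity counts per student and a binary "dropped out vs. paused" label.

```python
# Illustrative sketch (not the Harvard/Stanford pipeline): train a simple
# classifier on weekly activity features to flag students likely to have
# dropped out rather than merely paused.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-student features (all invented for illustration):
n = 1000
X = np.column_stack([
    rng.poisson(5, n),        # videos watched this week
    rng.poisson(3, n),        # problems attempted this week
    rng.poisson(1, n),        # forum posts this week
    rng.integers(0, 30, n),   # days since last activity
])
# Synthetic label: long inactivity plus low engagement -> "dropped out"
y = ((X[:, 3] > 14) & (X[:, 0] + X[:, 1] < 6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Students whose predicted dropout probability crosses a threshold could be
# sent an exit survey at that moment, rather than at the end of the course.
probs = model.predict_proba(X_te)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_te, probs):.2f}")
```

The design point is the trigger: instead of waiting for an end-of-course survey, the prediction fires the survey while the student's departure is still fresh.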
Both the Harvard and Stanford groups found that the main reason for not continuing in a course had to do with time. The people who respond to these surveys are very unlikely to be unhappy with the course; they simply got busy. In some of our qualitative responses, people mentioned work or health issues as reasons for stopping out.
These may be problems that we can address. At HarvardX, Michael Yeomans from the Harvard Economics Department is conducting a study of student plan-making behaviors. He’s asking students to describe some of their plan-making strategies, and he’ll then use computational methods of text analysis to determine which strategies best predict student success. If we can figure out what kinds of plan-making are effective for some students in MOOCs, we may be able to help others.
This intervention is similar to the belonging intervention being tested by René Kizilcec, which I mentioned in a previous post. Both draw upon recent research in cognitive psychology, behavioral economics, and decision science. From these fields we have learned that relatively simple nudges and interventions can have dramatic long-term impacts on student performance (I wrote a recent review of some of these kinds of studies for KQED MindShift). I’m intrigued by the promise of these approaches, which can be easily embedded in courses and have the potential to substantially improve student learning outcomes.
If we know that students intend to persist in online courses, but life gets in the way, we should do what we can to help fortify their commitments.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.