I have a piece in the New Yorker Elements blog this week, which reflects on the last half-century of progress with intelligent tutors. The thesis is that intelligent tutors do two things: they assess human capacity and they deliver content. The earliest intelligent tutors could only conduct assessments through pattern matching, either through multiple choice or by comparing words and numbers to an answer bank. Today’s intelligent tutors haven’t made much progress past that. Even when we claim we can do things like automated essay scoring, we really mean taking essays, smashing them with statistical hammers until they are numbers, and then doing pattern matching.
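To make that "statistical hammers" claim concrete, here is a deliberately crude sketch of how such systems work: reduce each essay to a handful of numbers, then pattern-match against a bank of already-graded essays. The feature choices and nearest-neighbor matching below are illustrative assumptions, not any real scoring engine's method.

```python
import math

def essay_features(text):
    """The 'statistical hammer' step: reduce an essay to a few crude
    numbers -- length, vocabulary size, and average word length."""
    words = text.lower().split()
    return (
        len(words),                                        # essay length
        len(set(words)),                                   # vocabulary size
        sum(len(w) for w in words) / max(len(words), 1),   # avg word length
    )

def predict_score(essay, graded_examples):
    """The pattern-matching step: assign the score of the graded essay
    whose feature vector is nearest to the new essay's."""
    fx = essay_features(essay)
    nearest = min(graded_examples,
                  key=lambda ex: math.dist(fx, essay_features(ex[0])))
    return nearest[1]

graded = [
    ("short weak essay", 1),
    ("a longer more developed essay with varied vocabulary throughout", 4),
]
print(predict_score("another brief essay", graded))  # matches the short essay: 1
```

Note what the computer never does here: it never reads the essay. It compares numbers to numbers, which is why a well-padded word salad can fool these systems.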
(I don’t get into it in the post, but the progress made in delivering content is fairly modest as well. In the 80s, ETS developed a statistical toolkit called Item Response Theory that we now use to classify items along a number of dimensions and figure out which ones might be appropriately difficult for particular students. Adaptive learning is driven primarily by tweaks on stats models from three decades ago.)
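For readers curious what those decades-old stats models look like, here is a minimal sketch of the two-parameter logistic (2PL) model at the core of Item Response Theory, plus a toy item-selection heuristic of the kind adaptive systems tweak. The parameter names and the "pick the item nearest 50% success" rule are standard textbook IRT, but the code is an illustration, not any vendor's implementation.

```python
import math

def irt_2pl(theta, a, b):
    """2PL item response function: probability that a student of ability
    `theta` answers correctly an item with discrimination `a` and
    difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pick_next_item(theta, items):
    """Toy adaptive-testing step: choose the (a, b) item whose success
    probability for this student is closest to 0.5, i.e. the most
    informative item at the current ability estimate."""
    return min(items, key=lambda item: abs(irt_2pl(theta, *item) - 0.5))

# A student at average ability facing an average-difficulty item: 50/50.
print(irt_2pl(0.0, 1.0, 0.0))                       # 0.5
# For a stronger student, the system serves a harder item.
print(pick_next_item(1.0, [(1, -2), (1, 1), (1, 3)]))  # (1, 1)
```

Adaptive learning products largely iterate on this loop: estimate ability, serve the most informative item, re-estimate, repeat.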
The problem with this slow growth is that while intelligent tutors have been inching along, the demands of the labor market have been growing wildly. As I conclude:
Perhaps the most concerning part of these developments is that our technology for high-stakes testing mirrors our technology for intelligent tutors. We use machine learning in a limited way for grading essays on tests, but for the most part those tests are dominated by assessment methods--multiple choice and quantitative input--in which computers can quickly compare student responses to an answer bank. We're pretty good at testing the kinds of things that intelligent tutors can teach, but we're not nearly as good at testing the kinds of things that the labor market increasingly rewards. In "Dancing with Robots," an excellent paper on contemporary education, Frank Levy and Richard Murnane argue that the pressing challenge of the educational system is to "educate many more young people for the jobs computers cannot do." Schooling that trains students to efficiently conduct routine tasks is training students for jobs that pay minimum wage--or jobs that simply no longer exist.
The example that I use of an early intelligent tutor is PLATO, a system developed at the University of Illinois, Urbana-Champaign. Upon posting the piece, Brian Dear, who runs the great website friendlyorangeglow.com, immediately raised a number of issues with my recounting of PLATO. His full post is here.
In response, The New Yorker issued two corrections to the piece, both of which are entirely my responsibility. I regret the errors, and I thank Mr. Dear for bringing them to my attention and to the editors.
I want to focus on one important critique that Mr. Dear raises. The conceit of my article is to describe the most elementary examples of intelligent tutors from nearly a half-century ago, and to suggest to readers that these examples might be all too familiar. Our contemporary intelligent tutors look an awful lot like what we first came up with.
But describing the simplest capacities of PLATO fails to account for the groundbreaking social computing that developed within the PLATO community. Audrey Watters gave a wonderful keynote address at a recent EdTechTeacher event, where she recounted what she found to be some of the important elements of PLATO’s history:
This networked system made PLATO a site for the development of a number of very important innovations in computing technology -- not to mention in ed-tech. Forums, message boards, chat rooms, instant messaging, screen sharing, multiplayer games, and emoticons. PLATO was, as author Brian Dear argues in his forthcoming book The Friendly Orange Glow, "the dawn of cyberculture."
I don’t get into any of this, because I’m telling a story about the limitations of the present. In doing so I definitely do not characterize the full contribution of PLATO to computing history.
Dear writes in his critique, “What Mr. Reich fails to explain is that his beloved MOOCs of today offer even less capability in evaluating student answers. But he can’t mention that because that would make PLATO look good compared to MOOCs and present-day ed-tech which he’s so wired into.”
On the general thrust of this point, I totally agree. MOOCs today, in terms of automated grading, have made little progress (especially paradigmatically) beyond PLATO. Moreover, they lack the social features and general sense of playfulness that make PLATO so striking. If we haven’t made much progress from PLATO with MOOCs in terms of assessment, MOOCs are far behind what PLATO accomplished in terms of social computing. Dear is absolutely right, and Watters makes the case nicely in her keynote as well, that many of the lessons of 50 years of online learning research and design have not been incorporated into MOOCs.
It’s a history we would be wise to study, and I look forward to Dear’s forthcoming book.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.