Many educators find value in essay-grading software.
English teacher Aleeta Johnson first saw an advertisement for essay-grading software while attending a Florida Educational Technology Conference in Orlando six years ago. Her initial reaction was skepticism, bordering on disbelief.
“I thought, oh, that’s too good to be true,” says Johnson, who works at Farragut High School in Knoxville, Tennessee. “How could a computer grade an essay?”
But since then, Johnson has become a true believer in the power of essay-grading technology—especially Educational Testing Service’s Criterion Online Writing Evaluation, which is used in her district. Now the 22-year teaching veteran can’t imagine life without Criterion, because it has freed up her time to assign an “astronomically” higher number of writing assignments than she did before she used the technology.
Many other teachers have also seen the wisdom of writing-evaluation software, fueling the proliferation of titles such as Writing Roadmap 2.0, a product of CTB/McGraw-Hill; IntelliMetric by Vantage Learning; WriteToLearn by Pearson Knowledge Technologies; and SAGrader by Idea Works Inc.

Each program works a little differently. Some, such as Criterion, assess the quality of sentence organization, grammar, usage, and style, but do not evaluate content. Others use artificial intelligence to evaluate the quality of an essay on a particular topic.
Shortly before Johnson attended that Florida technology conference, I wrote a story about essay-grading software. At the time, I too was skeptical that it would somehow revolutionize the teaching of writing. It seemed more hype than reality, and a debate was raging among researchers about the accuracy and effectiveness of the essay-grading engines. Plus, as a writer and editor, I cringed a bit at the thought that a machine could replace the craft of evaluating a piece of writing. Writing, to me, has always been a quintessentially human experience.
But I do not have to teach writing to more than 150 students in the course of a school year, as Johnson does. And that’s why her experience, not my perspective, carries more weight. In 2002-03, when Criterion was used to grade essays at all 12 of her district’s high schools, 11th graders’ scores on a subsequent standardized test of persuasive writing rose 8 points. At one high school where a writing-across-the-curriculum program was put in place in tandem with Criterion, 11th graders’ scores jumped 19 points that year.
Even so, Criterion and other essay-grading technologies have their limitations. They can’t judge the creativity of a writing style or the inventiveness of metaphors and symbolism. And I remain skeptical that artificial intelligence can effectively differentiate between a good essay and a truly excellent one.
Johnson acknowledges that Criterion is not a good tool for very sophisticated writers. It wouldn’t appreciate the skill and creativity of a budding Shakespeare, for example.
But the reality, Johnson says, is that budding Shakespeares make up less than 1 percent of her students. The other 99 percent, she says, are best served by the modern duo of teacher and machine.
Vol. 18, Issue 01, Page 47. Published in print: September 1, 2006, as “Scantronning Shakespeare.”