One of the major assumptions underlying the common assessments is that the writing portions will be computer-scored. That capability is pivotal to keeping the tests' costs in check and to returning results quickly enough to give teachers useful feedback.
The national association representing English/language arts teachers has come out against machine-scoring of student writing. Earlier this month, the National Council of Teachers of English issued a statement saying that machines just aren’t able to score the aspects of writing teachers prize most.
As we reported to you last month, some scholars are circulating a petition opposing machine-scoring of writing as well. In that post, we noted at least one study that has found that computers can rival humans in scoring student writing.
In its statement, the NCTE says that artificial intelligence assesses student writing by only “a few limited surface features,” ignoring important elements such as logic, clarity, accuracy, quality of evidence, and humor or irony.
Computers’ ability to judge student writing also declines as essays get longer, the NCTE says. The organization argues for consideration of other ways of judging student writing, such as portfolio assessment, teacher-assessment teams, and more localized classroom- or district-based assessments.
The viability of artificial-intelligence scoring on the common assessments is a key cost-containment factor for the two groups of states designing tests for the common standards. If they decide that humans must score the essays, the expense of the tests soars. And cost is, of course, high on states’ radar as they weigh their continued participation in the two groups.