So robo-readers can grade student essays faster and more “accurately” than actual humans--if you’re willing to overlook a few minor details, such as logic and coherence? I have to admit, being able to grade 16,000 essays in twenty seconds is pretty impressive. That’s about fifteen years’ worth of old-fashioned assessing, sitting at the kitchen table with a red pen and a pot of coffee, for the capable and conscientious HS English teacher who regularly assigns writing. And of course, you actually have to pay the teacher, a major drawback.
Educational Testing Service, its developer, calls the program an automated “reader.” A misnomer. Machines can scan documents, rate items using an algorithm, and even “grade,” but they can’t read, if we’re talking about making meaning with real words. Nor can they “give immediate feedback”--as claimed--unless that feedback is: write longer sentences, including the word “moreover” whenever possible.
The language matters here--let’s define the terms.
There’s assessment: evaluating students’ work and acquired knowledge, and providing feedback for pupils and guidance for the teacher, through multiple lenses. Assessment asks: Is the work of high quality? How does it compare to exemplars? Is knowledge understood well enough to be applied? What skills have been demonstrated? Does the work represent growth for this student? Is the evidence of learning convincing?
Assessment identifies strengths, diagnoses weaknesses, and informs further instruction.
Then--there’s grading. Built into its etymology: sorting, labeling, ranking. Grading is logging data, computing averages and following ranking protocols. It’s measuring compliance with the designated task.
There are different questions around grading: How does this achievement compare to others in the grading pool? Has content been accurately memorized and reproduced? Which assignments are completed and which are missing? What statistical weight do we give to the task being graded? How do I convert this percentage into a letter?
Some teachers include other judgments in a grade: perceived effort, neatness, timeliness, creativity, even things like “cooperation,” none of which measure actual learning. Some use grades as punishment or reward. Teachers in schools with online grading programs are often compelled to post a minimum number of grades each week, just to fill the boxes provided. Grading is what state departments do when they massage data to rank-order schools.
Grading is not assessment.
You would think that parents would prefer to be given complex information on their child’s progress--detailed assessments--but that isn’t necessarily true. One elementary school in my district changed its quarterly reporting system, moving from letter grades in subjects to narrative comments from teachers on a list of outcomes in the district curriculum. It was a lot more work for teachers, but they agreed to the shift, thinking parents would want to know, for example, that little Ashley could correctly count the value of coins but did not yet understand the principles of multiplication.
But no. The narrative grading system was short-lived. Many parents were blunt: they wanted to know how their child compared to other kids. They didn’t care about assessment information. They wanted letter grades--real, hierarchical letters, not wimpy S and U combos or checklists.
Why? Because they themselves got letter grades, back in the day. Because it took too long to read the extended report card. Because they couldn’t give their kids a dollar for every A.
Who taught them to value grading over assessment? We did.
A novice teacher I was mentoring told me about a mother who volunteered in her classroom daily. My mentee couldn’t round up enough jobs to keep her busy, so the mom offered to grade papers, record the data, and pass assignments back to the kids. The young teacher immediately got back an hour or more each day. But after a couple of weeks, she began to suspect that the mother was sharing information about the students whose work she graded with the other volunteer moms. Newbie Teacher wondered about the privacy ethics of outsourcing her grading.
My feedback? Yup--it’s ethically shaky. But what’s worse is that you’re no longer looking at your students’ work every day. Papers are graded, but you’re not assessing. You’re not identifying misconceptions or seeing growth.
I’m well aware of the time constraints teachers face. Sometimes, you need to shortcut the assessment function and stand in front of the Scantron machine, feeding in answer sheets and dreading the machine-gun rattle that happens when a student misses many of the multiple-choice questions. When that happens, it’s worth remembering that Scantron funds ALEC--buying those expensive machines contributes to de-skilling teaching in more ways than one.
In a beautifully written essay about robo-grading, Renee Moore says that writing is “an exchange of ideas”--something that doesn’t happen with a machine. I would extend that description to all good assessment practice. It’s not about rating. It’s about sharing ideas, and what happens next.
The opinions expressed in Teacher in a Strange Land are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.