Formative assessment is squishy, and squishy things don't easily yield to standardized measurement. That creates an awkward situation in an era of numbers-driven accountability.
That squishiness was on display yesterday during a panel discussion about formative assessment. (See my story for highlights of the discussion and a summary of the paper that inspired it.)
The panel's key messages were these: Don't let the push for new-age assessments mess with formative assessment, and don't forget what formative assessment really is.
And what is it, exactly? According to Margaret Heritage of the National Center for Research on Evaluation, Standards, and Student Testing, it's a reciprocal feedback loop between students and teachers, who figure out together whether deep learning has taken place. Her pointed message here was that you can't get there with a pop quiz.
The way to get there, according to Heritage and some of the folks on the panel, is not to design new “formative tests,” but to devote resources to teaching teachers how to master this feedback loop called formative assessment. That means professional development. Lots of it. And what that starts to sound like, at least to some ears, is that formative assessment isn’t so much assessment as it is instruction.
Achieve's Mike Cohen made that very point during the discussion, suggesting that we call formative assessment "formative instruction" instead.
As the question-and-answer session began, testing veteran Jon S. Twing addressed that idea as well. Standing up in the audience, he said that 30 years ago, “formative assessment” would have just been called good teaching. “When,” he wanted to know, with what sounded like mild exasperation, “did it become an assessment issue and not an instruction issue?”
Heritage said it is important to think of formative assessment as assessment, since many teachers need a “systematic, planned process” by which to figure out (assess) where their students are with their learning. “Yes, of course it’s good teaching,” she told Twing. “But it’s driven by a process of planful evidence-gathering with a purpose, for a reason.”
Can such a process be designed, formatted, and distributed widely? If not, how can it be employed, and should it be, in an era of numbers-driven accountability?