Special Report: Assessment Q&A

Q&A: Misconceptions About Formative Assessment

By Catherine Gewertz — November 9, 2015 | 5 min read

RICHARD J. STIGGINS is widely known as an advocate of classroom assessments in the service of student learning. His long career in testing brought him to that vantage point: He holds a doctorate in educational measurement, has worked as the director of test development at ACT, and served on the faculty at several education schools. As the president of the Assessment Training Institute in Portland, Ore., from 1992 to 2010, he helped teachers design classroom-assessment tools and strategies.

He joins Education Week Associate Editor Catherine Gewertz to define what formative assessment is—and isn’t—and explain its purposes, benefits, and how it’s distinguished from other types of assessment.

The interview has been edited for space and clarity.

So what do you think the biggest areas of misunderstanding are about formative assessment among educators?

Stiggins: I think one big misunderstanding is among policymakers at all levels. [It’s] the mistaken belief that somehow annual accountability standardized testing improves schools. It’s not that I’m opposed to assessment at that level, but the obsessive belief that somehow this is the application of assessment that will improve schools flies in the face of everything we know.

A second misunderstanding is that people are tending to think about formative assessment as an event, rather than a process. The way we have to think about it is that we engage in the ongoing, day-to-day classroom-assessment process to give teachers and their students the information they need to understand what comes next in the learning. It isn’t a one-time event.

There’s another misunderstanding—again, a lack of understanding—that may surprise most people when I mention it. That is our failure to understand the role of the emotional dynamics of being evaluated from the student’s point of view. For formative purposes, those dynamics have to center on keeping students believing in themselves. It isn’t merely about getting teachers more information so they can make better instructional decisions. Good formative assessment keeps students believing that success is within reach if they keep trying.

Tell us a little bit more about this idea about students being engaged in the process.

This idea arises from a researcher and assessment expert in Australia; his name is Royce Sadler. What he said to us is, we use formative assessment productively when we use it in the instructional context to do three things. One is, keep students understanding the achievement target they’re aspiring to. The second is, use the assessment process to help them understand where they are now in relation to that expectation. And the third is, use the assessment process to help students understand how to close the gap between the two. Do you see where the locus of control resides? It’s with the student.

Should formative assessments ever be graded? Because we certainly hear about them being graded.

Here’s how I think about it. Anything and everything that students do by way of their work, or their performance, needs to be evaluated, to be sure, in terms of very specific, preset performance criteria that are known to the teacher and the student. So for example, in diagnosis, the judgments about student performance in relation to those criteria help to identify students’ strengths and weaknesses. And, of course, diagnosis is, how do you rely on the strengths to overcome the weaknesses? To provide feedback, we need to help students know how to do better. Judgments about how they’re doing in relation to those performance criteria will reveal that to them and to their teacher.

We need to keep good records so that we can track student changes over time. But in the formative context, there's really never a need to assign a letter grade. My admonition to teachers is, while the learning is going on, and we're diagnosing and providing that good feedback, the grade book remains closed.

There is a variation on this theme that is important. That is, can formative evidence ever serve summative purposes? And the answer is clearly, “yes.” If I have information from the formative application of assessment in my classroom that reveals a higher level of achievement than was revealed by, for example, a unit final exam, then it’s my responsibility to use the best evidence I have to determine, for example, a student’s report-card grade. So yes, the barrier between the two can come down, but only under those very specific circumstances.

I’d love to hear an example or two of a formative assessment that you think was done really well.

Well, the classic example is a process that I was privileged to watch unfold over time in a high school English class. The assignment was to write a term paper: read three pieces of literature by the same author and [defend a] thesis statement. To begin with, the teacher distributed a copy of a term paper that was of outstanding quality. She asked students to read it as a homework assignment and try to make judgments about what it was about this paper that really made it outstanding. The next day in class, they brainstormed a list of all the attributes that, in the students' opinion, made it an outstanding piece of work.

Next, she distributed a copy of a term paper, one she had actually fabricated, that was of dismal quality. Once again, the assignment was to read the term paper and see if you could articulate what made it an ineffective piece of work. And they brainstormed again. And then she said, “OK, let’s talk about the differences between these two papers. What was it about the good paper that differentiates it from the bad paper?” They began to brainstorm that. They had a long list of differences between the two. Now, what I want people to understand is, as this was unfolding, ain’t nobody writing any term papers. They’re still working on how to think about this. What she got them to do was boil down that long list of differences into the four or five most essential ones, coalesce them, group them together. Then, working in teams, they wrote definitions of those key attributes.

She had them begin to create student versions of rating scales of quality—like, she said, for this particular attribute, take a few minutes to think about what it would look like when it’s outstanding. What would that attribute look like when it’s of dismal quality? And what would the midrange look like? What they were creating, in effect, under her leadership—and understand that this wasn’t being left completely to the students; she was leading them through this process to center on the key attributes—were essentially student-friendly versions of the learning targets they were expected to hit. When they were done with all of this, it came time to draft their papers. So ... what happens is, they zero in on the really key attributes of good work before they begin the work.

A version of this article appeared in the November 11, 2015 edition of Education Week as “Formative-Assessment Misconceptions.”
