There’s always a lot of interest among Education Week readers in the topic of “formative assessment.” And despite some confusion over the research, there is at least general recognition that this aspect of teaching and learning has a lot of promise. So how on earth do we get teachers to adopt the practice of formative assessment?
It’s a tough question that a few testing experts tried to tackle in a research forum held this afternoon in Washington by ETS, the nonprofit research and test-publishing organization. The forum focused on the obstacles standing in the way of preparing teachers to use formative-assessment techniques, especially with the Common Core State Standards train approaching.
Let’s back up a minute here to make sure we’re all on the same page regarding formative assessment, a term about which there’s a lot of debate. Formative assessment is probably better described as a cycle of instruction, immediate data-gathering to collect feedback that helps the teacher readjust instruction, and the sharing of that feedback so students themselves are engaged in the learning process.
It was clear from today’s forum that, according to the experts, you can’t do formative assessment on the fly: The technique has to be planned and executed purposefully as part of a lesson, using a variety of strategies (e.g., “entry tickets,” questioning). Nor is formative assessment the same as the “interim” or “benchmark” assessments that some districts give every few weeks.
But what are today’s teachers actually learning about the process? That was a question that Caroline Wylie, a research scientist in the R&D division of ETS, had on her mind. To find out, she and colleagues looked at online materials, such as course descriptions and syllabi, from 22 teacher education programs in New Jersey.
“We saw that it was a fairly uneven landscape,” Ms. Wylie said at the forum.
Classes taught ranged from “Assessment and Measurement for Teachers” to “Curriculum, Evaluation, and the Learner” to the seemingly overstuffed “Integrating Elementary Curriculum & Assessment for Equity & Diversity.”
Ms. Wylie said she found only three instances in which formative assessment was specifically mentioned. And she added that what was taught in these classes seemed to vary based on whether a student was taking an undergraduate or graduate education course. Some of the graduate courses, for instance, were much more about the technical nature of the assessments rather than their place in teaching and learning.
It is particularly interesting to consider this observation in light of the shift in teacher-training demographics. The production of education bachelor’s degrees in this country has fallen since the 1970s, but the number of master’s degrees has rapidly increased since then. And one of the criticisms of graduate programs, especially the Ed.D., has been a vague sense of purpose about whether they are supposed to produce researchers, practitioners, or administrators.
Second, Ms. Wylie described the challenges on the professional-development front, for practicing teachers. The best professional-development research shows that teachers need sustained contact hours of training (between 30 and 100) before altering their practices. So she did a back-of-the-envelope calculation of how much time it would take to implement 50 hours of formative-assessment training over the course of a school year.
Again, the results were not encouraging: Teachers would need about six hours a month, for eight months, which amounts to one early-close afternoon a month plus two additional hours. (Good luck with that in this economy.)
And finally, Ms. Wylie summarized, the assessment culture in the United States is not necessarily conducive to this process. In this day and age of high-stakes testing, teachers can feel stressed about the end-of-year tests, and a process that by its definition requires reteaching and altering plans may not work with tightly written pacing guidelines and so forth, she surmised.
From there, Margaret Heritage, assistant director for professional development at the National Center for Research on Evaluation, Standards, and Student Testing, at UCLA, outlined the content support teachers will need to put formative assessment into practice.
They’ll need to have much deeper content and pedagogical-content knowledge so as to understand how students think and develop their skills in each discipline and across disciplines; to understand common errors and misconceptions; and to be able to integrate formative assessment into the “rhythm” of teaching and learning, Ms. Heritage said.
And this kind of support is not something you can package up and give to a school district, she said.
“I think we spend too much time having teachers implement programs, and not enough time studying practices, about how teachers make and use judgments about learning,” she said.
For those involved in the common-core effort, the implications of these scholars’ cautions are, obviously, many. Many of them are summed up in a paper Ms. Heritage wrote not long ago warning about the possibility of the assessment consortia misunderstanding the concept and missing an opportunity to put this practice on the policy radar screen.
Of the two assessment consortia, the Partnership for the Assessment of Readiness for College and Careers, or PARCC, is not developing formative-assessment resources as part of its federal grant. The other consortium, known as SMARTER Balanced, is.
The speakers’ bottom line: If teachers are to seize the potential of formative assessment, it’s time to think about the core knowledge preservice teachers need, the format and structure of professional development that will help support interpretation and action, and how the CCSS plays into those discussions.
I’ll admit this is one heck of a lot to think about. Comments section is open for your ideas!
A version of this news article first appeared in the Teacher Beat blog.