A press release from Robert Marzano this week claims that his new teacher evaluation model “is a research-based system for ‘causing’ increases in student achievement through incremental improvements in teaching.”
That’s quite a bold claim, considering that there is no new research behind Marzano’s system. The system parallels Charlotte Danielson’s much more widely respected Enhancing Professional Practice framework: both consist of four domains with specific descriptions of teaching practice.
Marzano’s brand, of course, is all about research-based teaching practices, and I’ve found his books to be helpful for thinking about specific strategies and why some strategies are better than others. For example, there are good questioning techniques and not-so-good techniques. We can learn from this research.
But I am staggered by the hubris embodied in Marzano’s claims. The marketing materials for his evaluation system make liberal use of text formatting to drive home these claims (formatting in original):
The first of its kind, this causal teacher evaluation model is not only based on studies that correlate instructional strategies to student achievement, but is also grounded on experimental/control studies that establish a direct causal link between elements of the model and student results. Each domain builds on the previous with direct links to create a causal chain that results in the increased learning and achievement of all students. This direct causal effect between elements of the model and student achievement is validated by data analysis from experimental/control studies.
Of course, poke around a bit and you can see that Marzano is simply drawing on his previous meta-analyses of teaching techniques to throw more weight behind his framework. There is no specific research validating the framework itself, much less its godlike causal power. It’s probably a decent framework, but it has no more ability to improve student learning than any other framework for evaluating teaching practice.
Marzano’s offering reduces teaching to a set of “best practices” that, accumulated in one’s repertoire over time, not merely support but actually constitute good teaching, and it turns this definition of teaching into an evaluation system.
Pardon lowly me for saying so, but good teaching is not reducible to a stack of research-based practices. Good teachers use effective practices, but the claim that we can cause increases in student learning by mandating the use of specific practices is not as defensible as it might sound.
As I discussed a few months ago in my post The Halo Effect & the Quest for Easy Improvement, all of Marzano’s noise about effect sizes sends the message that we can simply implement all of the research-based practices and magically improve student learning without limit. While it’s a good idea to use better-validated practices whenever we can, I can’t buy into the logic that effective teaching is nothing more than a stack of strategies that can be implemented.
The silliness of this logic becomes apparent when we start adding up effect sizes: If I can markedly improve student learning with just one or two strategies, as Marzano’s “causal” language implies, won’t my test scores rocket to the top if I use several of these strategies? If we have this much power to improve student learning just by obeying Dr. Marzano, where are the so-called 90/90/90 schools? Effect sizes are a legitimate statistical tool, but Marzano is misconstruing their meaning to imply that whenever fewer than 100% of students meet the standard, the solution is to force teachers to implement strategy after strategy.
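To see why naively stacking effect sizes leads to absurd predictions, here is a small illustrative sketch (my arithmetic, not from Marzano’s materials; the d = 0.4 per-strategy effect size is a hypothetical figure chosen only for illustration). If each added strategy really contributed an independent 0.4 standard deviations, a handful of strategies would predict that the average treated student outscores nearly every student in an untreated comparison group:

```python
from statistics import NormalDist

def percentile_of_treated_mean(total_d: float) -> float:
    """Percentile of the treated group's mean student relative to an
    untreated comparison distribution, assuming normal test scores."""
    return NormalDist().cdf(total_d) * 100

# Hypothetical: each strategy adds d = 0.4 standard deviations,
# and effects are assumed to stack additively (the dubious premise).
for n_strategies in (1, 3, 5, 10):
    total_d = 0.4 * n_strategies
    print(f"{n_strategies} strategies -> "
          f"{percentile_of_treated_mean(total_d):.1f}th percentile")
```

Under these assumptions, ten stacked strategies would put the average student above the 99.99th percentile of the comparison group, which no school system has ever observed. The implausibility of the conclusion is evidence against the additive-stacking premise, not a recipe for improvement.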
Indeed, obedience is the dark side of this evaluation framework. When superintendents and principals become convinced that they can “cause” higher levels of learning by mandating Marzano’s favorite practices, they stop paying attention to the professional growth of teachers, and start policing. They stop looking for good teaching, and start looking for specific strategies.
Even in districts using Marzano’s system, let’s not do this to our profession.
The opinions expressed in On Performance are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.