Even in this age of political discord, most people would agree that the main purpose of newly adopted teacher-evaluation instruments is to help teachers improve their effectiveness. However, a policy disconnect stands in the way of using these new evaluation models to actually improve educator practices. To understand why, let’s take a look at the genesis of the recent teacher-evaluation movement.
When President Barack Obama signed the American Recovery and Reinvestment Act of 2009 into law, the federal government’s goals were to stimulate the economy, support job creation, and invest in critical sectors, including education. The law provided $4 billion for the Race to the Top competition, which rewarded states for certain education reforms. The first round of grants focused on making sure that states were serious about teacher accountability; to receive funding, state officials had to enact sweeping changes in how teachers were to be evaluated.
Race to the Top implied that we can no longer afford to retain ineffective teachers, and it clearly shifted the emphasis behind new teacher-evaluation models away from their traditional foundations in continuous professional growth and improvement and into the policy arena of accountability.
Certainly, accountability and continuous growth are not mutually exclusive, and the last thing I would want to convey is a resistance to accountability. Absolutely, teachers and leaders must be held accountable for the quality of education in their schools and districts. That said, how we use the new evaluation tools will determine whether we simply create the aura of accountability or actually help our teachers grow and improve their practice.
Earlier this year, I had this very conversation with a school superintendent who argued that his district’s new evaluation instrument would (by itself) improve teacher effectiveness by more clearly identifying educators’ strengths and weaknesses. He was so convinced of the power of the instrument to improve performance that he believed the only professional development his district would need involved training principals in how to use the new system.
Being fond of sports analogies, I compared that reasoning to that of a football coach who precisely evaluates and rates his players’ performance but does nothing more. Doesn’t the only logical means of improving the players’ performance lie in a coach’s ability to teach and coach, which involves modeling, demonstrating, and providing just-in-time feedback, reflective study, and practice?
In other words, rating performance (no matter how accurately) does not guarantee the improvement of performance. No logic chain supports the argument that it does.
This brings us back to the critical relationship between accountability and continuous growth. In effect, accountability cannot guarantee continuous growth and improvement. It can simply serve as a gatekeeper for rating teaching practice, perhaps more accurately than past evaluation models did, but no better at actually developing one's capacity to teach more effectively. Conversely, a focus on helping teachers continually grow and improve can yield true accountability. Yes, the two are inextricably related, but it's important to know what ultimately drives this train: a clear focus on growing teachers' practice.
Any effort to focus on teachers’ growth must ensure that our school leaders have the knowledge necessary to evaluate their teachers with fidelity. Further, leaders must engage teachers in collaborative cycles of reflective inquiry that use the evaluation criteria in an ongoing improvement process. It’s really a two-part equation. First, develop a deep and shared knowledge of high-quality instruction, and, second, seize on that knowledge to develop greater expertise in leading for instructional improvement.
Let’s start with part one of the equation, which includes the knowledge to use the evaluation instrument as designed. In fact, this is an often-overlooked aspect of the new teacher-evaluation instruments. Evaluators must have the instructional expertise necessary to render an accurate diagnosis of teaching along with concrete and useful next steps for the professional learning of the teacher. If our principals cannot do this well, then continuous growth and improvement is just a fantasy.
For example, imagine providing the latest medical-imaging technology to first-year medical students and asking those students to interpret complex images of anatomical systems prior to studying human anatomy. The likelihood of those students’ being able to interpret and diagnose with any kind of accuracy would be very small. We are doing this very same thing, however, in state after state. We are building new, sophisticated teacher-evaluation instruments, but only providing the most cursory training on using them.
There is a default assumption that our school leaders have already developed the expertise to use these instruments as designed. But this is a false assumption, according to instructional-expertise data gathered from assessments of nearly 3,000 school and district leaders using a proficiency-based instrument developed by University of Washington researchers and my colleagues at the Center for Educational Leadership. We found that the prevailing level of instructional expertise among school and district leaders nationwide was approximately 1.80 on a 4-point scale running from novice (1.0) to expert (4.0).
We must recognize that our school leaders need to engage in the same kind of deep study of instructional anatomy as medical doctors do for human anatomy. We’d best not assume that just because leaders have been teachers or principals, they have the knowledge of instruction necessary to use a new evaluation system effectively.
In our experience with our own university-developed teacher-evaluation rubric, we learned that we need to provide this important instructional-anatomy background knowledge before school leaders can learn to use the new evaluation tools. A note to policymakers: This adds time and cost to the process. If policymakers fail to invest adequately in this critical step, they may achieve the aura of accountability, but without building a durable foundation that results in, and sustains, continuous improvement.
Even with an ample investment in developing leaders’ instructional expertise, continuous growth and improvement will not occur without an investment in part two of the equation. We must equip leaders with the knowledge and skills necessary to grow teachers’ practice, such as:
• How to provide real-time, useful feedback to teachers.
• How to engage in difficult conversations.
• How to create a culture of collaboration and reflective practice.
• How to develop cycles of inquiry that result in teachers’ taking on the responsibility for their own (and others’) growth and learning.
The expertise necessary to grow teachers’ practice transcends any specific teacher-evaluation instrument. Regardless of the instrument being used, and one’s ability to use it, the knowledge and skills listed above are crucial to ensuring continuous growth and improvement. And a secondary note to policymakers: This, too, will require a substantial investment.
Just as General Motors couldn't produce a fundamentally different car without investing heavily in the redesign of its automobiles and the retooling of its factories, school districts will not produce a fundamentally improved teaching product without a commensurate investment of time and resources.
Let’s be clear, however. New investment must result in teaching practice that is continuously growing and improving. We must use this barometer to gauge our progress. If we do so with unrelenting discipline and focus, we can move beyond the aura of accountability to the dramatic improvement of student learning and achievement for all our students. This is what I call real accountability.
A version of this article appeared in the August 07, 2013 edition of Education Week