More and more states are enacting laws that require teachers to be evaluated on the basis of multiple measures. I support the trend as long as the part of the evaluation based on classroom observations stipulates that evaluators be certified in the subject being observed.
In this regard, New York City, home of the nation’s largest school district, is starting off on the right foot (“Observers Get Key Role in Teacher Evaluations,” The New York Times, Feb. 17). Under an agreement just concluded, the school district, with the consent of the teachers union, will contract with a company to provide observers who possess subject knowledge and experience.
If these evaluators are assigned to classes that match their expertise, the process will be a vast improvement over the past practice of putting administrators with no background in the subject being taught in the position of evaluating instruction. I have no doubt that those administrators were able to determine whether students were engaged, but pedagogy alone is not enough. What is being taught also has to be accurate.
It’s here that the Dr. Fox effect comes into play. It’s altogether possible to design a lesson that fools intelligent and sophisticated audiences. By employing enthusiasm, humor and warmth, an imposter can elicit overwhelmingly positive ratings (“Dr. Fox effect,” Wikipedia). It’s not that these factors should be ignored, but they alone are no substitute for subject competency.
The Dr. Fox effect takes its name from a 1970 experiment in which an audience of family doctors, general internists and psychiatrists listened to a lecture titled “Mathematical Game Theory as Applied to Physician Education.” The lecturer, an actor, delivered a script composed of contradictory assertions and invented words. Yet the feedback forms showed that the audience found the talk clear, interesting and thought-provoking. If this group of doctors could be so easily impressed, why wouldn’t administrators or teachers who are not certified in the material being taught in public schools be fooled as well?
Good teaching is an extremely complex undertaking. As a result, it’s impossible to design a perfect rubric. The closest we can come is peer review, which is how every other profession conducts its evaluations. The use of what New York City calls “independent validators” presumably goes a step further by relying on certified observers with no connection to the teachers they rate, eliminating any question of a quid pro quo.
I’ve written before that inspired teaching is an art rather than a science. Teachers in this category are virtuosos whose success defies easy explanation. But how many teachers rise to that lofty level? As a practical matter, good teaching is good enough, and by that standard what New York City is doing is encouraging.