Teaching Profession Opinion

John Thompson’s Book Review: “VAM in Education” -- Who has the Burden of Proof?

By Anthony Cody — October 01, 2012

Guest post by John Thompson.

Douglas Harris’ Value-Added Measures in Education is a masterpiece. Even in the places where I believe Harris is mistaken, he identifies the core issues involved in using value-added for evaluations.

My big complaint is Harris’ agnosticism about who carries the burden of proof. I always assumed, and I still believe, that it should be obvious that value-added advocates carry the burden of proving that their reforms are likely to produce more good than harm. And that raises a question. Would reformers have tried to apply value-added to evaluations if they had had to show a preponderance of evidence that it was good educational policy?

As a former legal historian, I was impressed with Harris’ discussion of the legal issues that value-added evaluations raise. In some cases, he noted, employers may have to show that value-added “fairly and accurately measures a teacher’s ability to perform essential functions of his or her job.” When that happens, advocates may wish that they had had the experience of debating value-added on its merits. Conversely, value-added opponents will have to prove there are “equally effective alternatives,” but that will be a slam dunk. And again, advocates for value-added may regret their failure to explore more effective and less risky methods of holding teachers accountable.

As a former inner city high school teacher, I was pleased with the “mixed picture” that Harris drew about the usefulness of value-added. He noted that studies supporting value-added “concern what happens on the average: Are some teachers more effective with certain students on average?” Then Harris concluded, “If these measures are used for accountability, they really need to work for all teachers and schools, not only average ones.” (Emphasis is Harris’.)

Harris also observed, “Almost all of the research is based on elementary school and, to a lesser extent, middle school programs.” I was dismayed, however, by the next sentence, “There is no obvious reason to think the statistical properties will be dramatically different at the high school level, but new rigorous studies must still be conducted.”

I suspect that his conclusion reveals an understandable lack of awareness of the concrete conditions in inner city high schools. For instance, on the previous page, Harris seemed ambivalent about the validity of using middle school general science scores when estimating the test score target for 9th grade Biology teachers. He doesn’t seem to recognize the size of the can of worms that such a policy would open. Are districts also going to fire Geometry teachers based on a test score growth target derived from scores from Algebra I and 8th grade “Math?” Are they going to fire a Chemistry teacher based on scores from Biology and middle school worksheet “Science?”

More importantly, Harris made a semi-throwaway statement about principals controlling their school’s curriculum and suspension decisions. No! That is true in some schools, but I bet my experience is pretty representative. I have never met a neighborhood school principal who was empowered to assess discipline according to his or her best judgment. All must guess at the unwritten quota of suspensions that they are allowed. In magnet schools, principals never have to use up their allotment. In neighborhood schools, they are only allowed enough suspensions to get them through October, or so.

And that gets to the fundamental flaw of attaching high stakes to value-added. The statistical model cannot determine whether a teacher was “ineffective” because of his own shortcomings, or because he taught in an ineffective school. Administrators who impose curriculum and policies that undermine effective instruction have a fundamental conflict of interest in determining who is to blame for ineffective classrooms and schools.

Harris, a social scientist, correctly asserted that there is a double standard when evaluating the quality of value-added as a scientific experiment, as opposed to its usefulness as policy. But a practitioner, looking at the same evidence, would likely conclude that Harris has it backwards. We have been far too easy in evaluating high-stakes value-added for real-world use. For instance, even if Harris believes that it might someday be possible to control for CONCENTRATIONS of poverty and trauma, surely he would agree that we are not there yet. How could current statistical models control for factors that disproportionately and systematically plague the inner city, such as gang wars, cutbacks to alternative schools that dump large numbers of students who are emotionally incapable of functioning in regular classrooms into schools that are already deeply troubled, or even the predictable deluge of funerals that students have to attend?

Harris acknowledged that value-added evaluations could accelerate the loss of teachers from the toughest schools, but he doubted that the drain of talent could get much worse. I’d say he ain’t seen nothing yet.

After value-added evaluations are in full swing, how do you prevent an exodus of the best teachers and principals from the schools with lower value-added? When, rightly and wrongly, suburban and selective schools exit their weakest performers, that will create demand for effective teachers in schools where value-added is not as much of an unfair threat. Will top teachers commit to neighborhood schools if there is a 5% or 10% or 15% chance, PER YEAR, that their career will be destroyed or damaged by circumstances beyond their control? How would Harris sketch out an alternative scenario to the obvious one where the only people left in the schools where it is hardest to raise scores are incompetents, 23-year-old idealists, adrenalin junkies, and mathematical illiterates?

But I do not want to end on a sour note, and I want to get back to a word I consciously used in the first paragraph. Teachers who are the target of value-added evaluations may not appreciate the word “masterpiece” being used for a book that does not condemn high-stakes uses of value-added. For every quarrel I have with Harris’ decision not to take a stand on an issue, however, there is an argument that his scrupulous neutrality has an overriding benefit.

If value-added advocates would agree to assume the burden of proof, then they would need to address all of our schools’ unique circumstances. And that would get us back to Harris’ observations on the weaknesses of value-added evaluations. I wish that value-added advocates had started with the question of whether it is possible for economists, who lack concrete knowledge of the details of our diverse nation’s schools, to create a model that does justice to all types of schools and teachers. If they would start to ask that question, Value-Added Measures in Education would provide the ideal structure for that conversation. If reasonable people ask the questions that Harris asks, I am confident about the conclusions that would be drawn.

What do you think? If value-added advocates would assume the burden of proof, could we deescalate the assaults on teachers? In the wake of the Chicago strike, will “reformers” start to pay more attention to evidence? Will they heed the wisdom of a neutral observer like Douglas Harris?

John Thompson was an award-winning historian, with a doctorate from Rutgers, and a legislative lobbyist when crack and gangs hit his neighborhood, and he became an inner city teacher. He blogs for This Week in Education, the Huffington Post and other sites. After 18 years in the classroom, he is writing his book, Getting Schooled: Battles Inside and Outside the Urban Classroom.

The opinions expressed in Living in Dialogue are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.