Last summer, the Los Angeles Times made a splash when it published the names of 6,000 teachers along with "value added" scores derived from an examination of test score data. This came under heavy fire from many critics, including me, when I found myself on a panel discussing the subject. The story took on tragic overtones when a dedicated veteran teacher, Rigoberto Ruelas, took his own life. His family reported he had been despondent since the LA Times labeled him as "less effective." In spite of serious questions about the reliability of its methods, the Times has stuck to its approach. This week, the National Education Policy Center in Colorado published a new analysis that thoroughly discredits the "value added" method by which the Times determined which teachers were more or less effective.
But the real story has become how reporter Jason Felch and the Los Angeles Times have responded to this challenge. Felch got a copy of the report prior to its official release, which was scheduled for today. Although the report was in draft form, Felch rushed to press with a rather ham-handed attempt to manipulate and spin the story to his advantage. His story was headlined "Separate study confirms many Los Angeles Times findings on teacher effectiveness." The story misrepresents the study's conclusions to such an extent that one of the authors, Dr. Derek Briggs, felt compelled to issue an immediate and emphatic point-by-point rebuttal. The study and the rebuttal to Felch are both available here.
A few choice quotes:
In yesterday’s article in the LA Times, Felch asserts:
A study to be released Monday confirms the broad conclusions of a Times' analysis of teacher effectiveness in the Los Angeles Unified School District while raising concerns about the precision of the ratings.
Derek Briggs replies:
I don't see how one can claim as a lead that our study "confirmed the broad conclusions"-- the only thing we confirmed is that when you use a value-added model to estimate teacher effects, there is significant variability in these effects. That's the one point of agreement. But where we raised major concerns was with both the validity ("accuracy") and reliability ("precision"), and our bigger focus was on the former rather than the latter. The research underlying the Times' reporting was not sufficiently accurate to allow for the ratings.
Felch later states:
The authors largely confirmed The Times' findings for the teachers classified as most and least effective
Dr. Briggs responds:
No, we did not, quite to the contrary. Mr. Felch seems to be again focused only on the precision issue and not on the accuracy problems that we primarily focus on in our report.
Dr. Briggs addresses further specific misrepresentations, and explains the flaws in the LA Times' methods much more thoroughly, in his rebuttal and in the original paper that was the subject of this dispute.
The entire episode raises once again the question of the fundamental integrity of this project on the part of the LA Times. If the paper is devoted to doing a public service by raising the legitimate issue of teacher evaluation, it must be willing to enter an honest discussion about the means by which this evaluation should be done. At the discussions at UC Berkeley in which I participated last fall, numerous scholars raised fundamental concerns similar to those expressed by Drs. Briggs and Domingue. Mr. Felch simply brushed them off, as can be seen in this video from the event.
In this most recent instance, he has more than brushed off the criticism. He has misrepresented a critique as agreement. This is media manipulation at its worst.
What do you think? Is the LA Times trying to manipulate this story?
The opinions expressed in Living in Dialogue are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.