The Los Angeles Times has now posted its controversial database of teachers’ value-added performance online. It’s extremely interesting if you are an LA teacher or have a child in that system, but not particularly informative otherwise compared to the original article that launched all the controversy.
But what strikes me is how small this database really is compared to the magnitude of the controversy it sparked. It covers value-added scores for 3rd, 4th, and 5th grade classroom teachers--less than a quarter of the teachers in the LAUSD system. Similarly, when DCPS Chancellor Michelle Rhee fired 165 teachers for poor performance earlier this year, the most striking thing to me about the story was that only 26 of them had value-added data.
The reality is that, even as value-added student test score data has emerged as the center of current debates over teacher evaluation, it’s available and relevant for only a fraction of the teachers in our public schools today. In most places, there is currently no value-added data for kindergarten and early elementary teachers, teachers in non-core subjects, or high school teachers. My brother-in-law, who teaches middle school band and drama, and my sister, who teaches high school composition and literature, have no value-added data.
Some critics see this as an argument against new teacher evaluation systems that incorporate data on student performance. I see it the opposite way: The way we currently evaluate teachers is deeply flawed, helpful to neither them nor their students, and there are lots of things we could do to move toward a more effective system of evaluating and developing teachers. Where we have value-added data as a source of information to inform teacher evaluations, we should use it. But since it’s available for only a subset of teachers, and is therefore only a small piece of any meaningful solution to teacher evaluation, we shouldn’t let debate over value-added or its various methodologies derail the broader effort to create better ways of evaluating teachers’ effectiveness and using that information to inform professional development and staffing decisions.

We also shouldn’t pretend--as I sometimes fear my reform colleagues do--that value-added data is some kind of panacea that provides perfect information about teacher effectiveness. And we should put a lot more effort into developing and using validated, reliable observational tools, such as the Classroom Assessment Scoring System (CLASS), that look at teachers’ classroom behaviors and measure the extent to which teachers are implementing practices linked to improved student outcomes. (In fact, I’m more concerned that the observational rubrics many districts and states will put in place under their proposed evaluation systems have not yet been validated than I am about any of the issues related to the use of value-added data.)