To the Editor:
Evaluating teachers and schools on the basis of the progress their students make during a stipulated time frame, rather than on the basis of their proficiency during the same period, makes eminently good sense (“Education Dept. Poised to Approve More States for Growth-Model Pilot,” Nov. 8, 2006). It is a realistic acknowledgment of the way public schools are required to operate in this country. Because they must enroll virtually everyone who shows up at their doors during the school year, their students cannot always be expected to achieve proficiency.
But even the progress made by students can too often be the result of factors beyond the control of the best teachers in the finest schools. Proponents of the value-added approach maintain that their metric takes such contaminating factors into account, yet they have failed to explain adequately how this is achieved. The best they have done is to point out that the value-added model measures students against themselves, rather than against a single external standard. Under this system, every student’s improvement counts the same toward the school’s overall rating.
That still doesn’t answer the technical questions surrounding the value-added model, such as how to quantify the effect on performance of events like a move to a new neighborhood, which is particularly common among disadvantaged students. Until more transparency is available, it’s important to proceed with extreme caution in evaluating the instructional effectiveness of teachers and schools involved in the U.S. Department of Education’s pilot program.
Los Angeles, Calif.
A version of this article appeared in the November 29, 2006 edition of Education Week as Proceed With Caution on Growth-Model Pilot