Michael Winerip at The New York Times has a detailed yet intelligible explanation of the failures of the value-added assessment model.
He uses a persuasive example: Stacey Isaacson, a University of Pennsylvania- and Columbia-educated third-year teacher who works 10½-hour school days and gets rave reviews from her principal, fellow teachers, and students. In her first year of teaching, 65 of Isaacson’s 66 students scored proficient on the state’s language arts test. And dozens of her students have gone on to New York City’s most competitive high schools.
But as Winerip explains, “According to the [value-added] formula, Ms. Isaacson ranks in the 7th percentile among her teaching peers—meaning 93 percent are better.” Her students did well on the test, but not as well as predicted by a complicated calculation with 32 variables and a wide margin of error.
Criticisms of value added are a dime a dozen these days, especially since the Los Angeles Times’ controversial teacher-rating project and the National Education Policy Center’s ensuing re-analysis, which concluded the paper’s method was invalid. That said, Winerip does a nice job homing in on the details. He does, however, offer the caveat that the value-added “process appears transparent, but it is clear as mud, even for smart lay people like teachers, principals and—I hesitate to say this—journalists.”
A version of this news article first appeared in the Teaching Now blog.