At the Center for American Progress, Raegen T. Miller, a policy expert (and former teachers'-union leader, natch), has an interesting paper up about "value-added" measures of teacher effectiveness.
He has two major points. First, the term value-added itself, which comes from economics, is objectionable to some teachers. It probably needs to be changed to reflect that teachers contribute to student learning in ways other than boosting test scores, and that the test scores themselves pick up factors outside of a teacher's control, he writes. He suggests the term "context adjusted achievement test effects" as an alternative.
Second, systems that seek to incorporate the data must address the legitimate concerns educators have about them.
I don’t know whether changing the term will do much to win over educators who object on principle to test scores being used as a measure of their performance. But the paper’s second point is germane, given that schools and districts are only at the very beginning of figuring out how to factor test-score data into judgments about teacher performance.
Miller puts forth a framework that could guide discussions between administrators and unions on how the data might be used. It suggests a sliding scale of sorts: the higher the stakes of a decision involving teachers, the more sources of data it would draw on, and the more trustworthy those sources would need to be.
For instance, granting tenure might be based on many years of value-added data and several other measures of performance. A lower-stakes decision, such as determining whom to offer professional development, could be based on fewer measures with a lower standard of trustworthiness, perhaps just two years of value-added data.
It’s a hard concept to describe, so take a look at the paper here and tell us what you think of it as a way of guiding discussions on value-added.