Sociologist Aaron M. Pallas says that, in the next year, we may begin seeing wide discrepancies in student test scores across districts as a result of the Common Core State Standards. The reason:
As states begin aligning their [own] assessments with the Common Core standards—which are, by all accounts, more challenging than the existing standards in much of the country—there is a high probability of uneven implementation of curriculum, professional development, and other supports within those states.
For Pallas, this possibility highlights a central flaw in efforts (such as value-added models) to pin responsibility for variation in student test scores principally on teachers.
If some districts are using an older curriculum not aligned with the new standards and assessments, while others are using a newer curriculum that is aligned, then there's a risk that differences in student performance on the new assessments will be improperly attributed to differences in the quality of the students' teachers, rather than differences in the curriculum to which students were exposed. That's the inference that would be drawn from a value-added model that doesn't take into account variations in curriculum.
It’s an interesting point. But if a particular district or school were suddenly found to be tanking on the revamped tests, wouldn’t the state be able to see that and realize that the problem lies beyond the teachers? Or am I being too optimistic (or clueless) about the way the data are analyzed and used?
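For readers who want to see the mechanics of Pallas's point, here is a minimal sketch in Python. The numbers are entirely made up and this is not any state's actual value-added model; it simply simulates teachers of identical average quality split across aligned and unaligned districts, then shows how a naive score-based estimate confounds curriculum with teaching:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: 200 teachers, half in districts whose curriculum is
# aligned with the new assessments, half not. True teaching quality is
# drawn from the same distribution in both groups.
n = 200
quality = rng.normal(0.0, 1.0, n)        # true teacher effect
aligned = np.repeat([1.0, 0.0], n // 2)  # 1 = aligned curriculum
noise = rng.normal(0.0, 0.5, n)

# Observed scores mix teaching quality with a curriculum effect
# (+2 points for aligned districts, a number chosen only for illustration).
scores = quality + 2.0 * aligned + noise

# A naive "value-added" reading that ignores curriculum treats the raw
# score gap as a teacher-quality gap:
print("apparent teacher gap:", scores[aligned == 1].mean() - scores[aligned == 0].mean())
print("true quality gap:   ", quality[aligned == 1].mean() - quality[aligned == 0].mean())

# Controlling for alignment (here, simply removing each group's mean
# curriculum advantage) erases the spurious gap:
gap = scores[aligned == 1].mean() - scores[aligned == 0].mean()
adjusted = scores - aligned * gap
print("adjusted gap:       ", adjusted[aligned == 1].mean() - adjusted[aligned == 0].mean())
```

The apparent gap of roughly two points is entirely a curriculum artifact; the teachers in unaligned districts are, by construction, just as good. A model that never records which curriculum a district uses has no way to make that correction.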