To the Editor:
Regarding your article on “value added” testing (“‘Value Added’ Models Gain in Popularity,” Nov. 17, 2004): Accurately assessing student learning growth is vital for enhancing educational quality and potentially helpful for improving accountability. However, the unanswered questions about value-added testing go far beyond whether the technology is ready for large-scale implementation. Most fundamental is: What “value” is actually being measured?
Value-added growth models currently being promoted rely almost entirely on student performance on multiple-choice test questions. Yet independent evaluations of state exams have repeatedly found that these items cover only narrow slices of state standards. Typically, what is tested are those facts and simple skills that are easiest and cheapest to measure. Under the high-stakes conditions of state exams and the federal No Child Left Behind law, too often all that is taught is that which is tested.
As a result, the “educational growth” that will be reported is equivalent to measuring the changing length of a child’s arms but not the rest of his or her body. Such a measure is close to valueless because it ignores so much that is important. And in the name of measuring “growth,” children’s learning will be stunted.
There are better ways to evaluate growth, using a richer set of measures and inputs. The Learning Record, for example, uses classroom data gathered by teachers and students as the basis for determining student progress on developmental reading scales. These results can be aggregated for public reporting.
Because Learning Record scales cover a wide range of growth over several years, a student could improve significantly yet remain at his or her original level, as is also the case with No Child Left Behind categories such as “basic” and “proficient.” This means that governments might have to give up often-spurious precision in order to implement systems that can better inform teachers, students, parents, and the larger community about real learning progress.
Whether policymakers decide to do so will depend on what they, and we, value.
National Center for Fair & Open Testing