To the Editor:
I read with interest the article “Adaptive Testing Gains Momentum, Prompts Worries” (July 10, 2013). Publishers of computer-adaptive assessments should applaud legislative efforts to include computer-adaptive testing, or CAT, in federal assessment programs. I wish, however, to address the concerns cited in the article about whether test items in such assessments should be constrained by grade level.
The article failed to highlight an essential point: Not all CATs are designed for the same educational purposes. A focus on grade level may be appropriate for federally mandated accountability testing (the summative tests discussed in the article). When the purpose is instead to discover the level at which a student is actually performing, potentially above or below grade level, and whether that student is growing academically, the test design must be substantially different.
To provide instructionally useful information to students, teachers, and administrators, an assessment must measure every student’s achievement with equivalent precision, wherever that student resides on the achievement continuum. Information from assessments designed to inform learning can be translated directly into differentiated instruction that gives each child the opportunity to succeed.
The vision sketched in the article invites two questions: whether instruction should be merely standards-based rather than student-centered, and whether the only metric that matters is grade-level proficiency rather than actual academic growth.
With classroom time at a premium, schools need an assessment program that balances teachers’ need for actionable information with federal accountability requirements. Computer-adaptive testing can play a pivotal role in striking that balance, and students and teachers across the country stand to benefit.
Raymond Yeagley
Vice President
Chief Academic Officer
Northwest Evaluation Association
Portland, Ore.