The federal officials who oversee the National Assessment of Educational Progress took a first look at a new idea for administering the exam, which would mark a major departure for the test known as “the nation’s report card.”
That method, which is only under discussion at this point, is known as “targeted,” or “adaptive,” testing. In basic terms, it involves tailoring tests so that students at different ability levels receive exams with different levels of difficulty, rather than giving all students tests at the same level of difficulty.
The panel that sets policy for the NAEP, the National Assessment Governing Board, asked to hear a presentation of how targeted testing works. Board members and staff wanted to see if the testing model could, potentially, help them reduce the numbers of students who are excluded from the NAEP, or who receive special accommodations on it, because of disabilities or because of a lack of English skill. The board has been grappling with that issue for years, amid complaints about big disparities in the percentages of students that state and local jurisdictions exclude or provide with accommodations on the test. Those uneven state and local policies raise questions about the value of NAEP scores, some critics say.
Targeted testing might, in theory, also help provide federal officials with more detailed information about the performance of students scoring at the lowest levels. When students from Puerto Rico took the 2003 and 2005 NAEP math tests, for instance, their scores were so low, and there was such a mismatch between expected and actual performance, that federal officials had difficulty interpreting the results.
On Thursday, John Mazzeo, an associate vice president at the Educational Testing Service, a research organization and test developer, spoke to a committee of the board about targeted or adaptive testing, at its invitation.
Right now, students who take part in any given NAEP are given exams at the same level of difficulty. Targeted testing would change that, and it could work in any number of ways, Mazzeo told the committee. One option—though a potentially costly one—would involve testing students by computer. A student who gets a question correct moves on to a more difficult one, or, if he or she trips up, to an easier one. By tailoring the test in this way, the thinking goes, administrators can get a more precise sense of where struggling students’ weaknesses are—as opposed to simply receiving a string of wrong answers.
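The step-up, step-down flow described above can be sketched in a few lines of code. Everything in this sketch—the item bank, the difficulty levels, the one-step adjustment rule—is an illustrative assumption, not NAEP's actual design.

```python
# Hypothetical sketch of computer-adaptive testing as described above:
# after each response, the next item comes from a harder pool (if the
# student answered correctly) or an easier pool (if not).
# Item pools, levels, and the step rule are illustrative assumptions.

def run_adaptive_test(items_by_level, answers_correct, start_level=2):
    """Administer items, stepping difficulty up after a correct answer
    and down after an incorrect one.

    items_by_level: dict mapping difficulty level -> list of item ids
    answers_correct: iterable of booleans, one per administered item
    Returns the list of (item_id, level) pairs administered.
    """
    min_level = min(items_by_level)
    max_level = max(items_by_level)
    level = start_level
    administered = []
    # Copy pools so the caller's item bank is not consumed.
    pools = {lvl: list(ids) for lvl, ids in items_by_level.items()}

    for correct in answers_correct:
        if not pools[level]:          # pool exhausted: stop early
            break
        item = pools[level].pop(0)    # take the next unused item
        administered.append((item, level))
        # Step difficulty up on a correct answer, down on a miss,
        # clamped to the available range of levels.
        level = min(max_level, level + 1) if correct else max(min_level, level - 1)
    return administered


# Example: a student who answers two items correctly, then misses one.
bank = {1: ["e1", "e2"], 2: ["m1", "m2"], 3: ["h1", "h2"]}
print(run_adaptive_test(bank, [True, True, False]))
# → [('m1', 2), ('h1', 3), ('h2', 3)]
```

The payoff of this design, as the paragraph above notes, is diagnostic: a struggling student ends up answering items near his or her actual level, rather than producing an uninformative string of wrong answers on uniformly hard questions.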
Under another targeted-testing option, Mazzeo explained, NAEP administrators could conceivably create test booklets at different difficulty levels, and distribute them according to the percentages at various participating schools of students performing at low achievement levels, or possibly according to their population of special-education students.
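The booklet option above amounts to an allocation rule: the higher a school's share of low-achieving students, the larger its share of easier booklets. A minimal sketch of such a rule follows; the thresholds and mixes are invented for illustration and are not an actual NAEP allocation scheme.

```python
# Hypothetical sketch of the booklet-distribution option: choose a mix
# of easier vs. standard booklets for a school based on the percentage
# of its students performing at low achievement levels.
# All cutoffs and proportions below are illustrative assumptions.

def booklet_mix(pct_low_achievers):
    """Return (easy_fraction, standard_fraction) of booklets to send
    to a school, given its percentage of low-achieving students."""
    if pct_low_achievers >= 50:
        return (0.6, 0.4)   # mostly easier booklets
    if pct_low_achievers >= 25:
        return (0.3, 0.7)   # a moderate share of easier booklets
    return (0.1, 0.9)       # mostly standard booklets

print(booklet_mix(60))  # → (0.6, 0.4)
```

A real allocation would presumably be driven by psychometric targets rather than fixed cutoffs, but the sketch shows the basic idea: difficulty is matched to a school's population in advance rather than adapted per student.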
Targeted testing has been around for a while, and it is used in a number of graduate school admissions tests, Mazzeo said.
The governing board’s exploration of targeted testing is very preliminary, and, in the end, it could amount to nothing. The board’s interim executive director, Mary Crovo, told me after the presentation that the panel would seek more research on targeted testing and its potential usefulness for NAEP.
The board is already considering a number of other options for dealing with exclusions and accommodations, such as setting uniform, national rules on exclusions and accommodations; using “full population estimates” to adjust for exclusions; and attaching “cautionary flags” to scores if exclusions rose above a certain level.
The board is considering holding a public hearing, and soliciting public comments, on the exclusion and accommodation options early next year, possibly in January.
A version of this news article first appeared in the Curriculum Matters blog.