Test Scores May Be Misleading, Experts Warn
A decade ago, policymakers set out to create testing systems that they said would be worth teaching to. But they haven't achieved their goal yet, according to testing experts attending a two-day federally sponsored conference here.
In states where test scores are rising, the improvements may have nothing to do with whether schools have upgraded their teaching and curricula, some experts said.
Instead, the increases may be the result of students' and teachers' increased familiarity with the state assessments and consequent changes in instruction, they said. The scores also could be climbing because students have improved test-taking skills unrelated to the curriculum. Or, the experts suggested, the results may be following a general pattern in which scores on a new test start low and then rise, before leveling off and later declining. ("Testing's Ups and Downs Predictable," Jan. 26, 2000.)
"You may be documenting a situation where scores go up and down, and the test isn't measuring what is to be taught," said Eva L. Baker, a co-director of the Center for Research on Evaluation of Standards and Student Testing, or CRESST, based at the University of California, Los Angeles.
Rather than focus on test scores, policymakers should consider other indicators of progress, such as increases in the number of students taking challenging courses and improvements in the quality of students' work, Ms. Baker argued at the March 24-25 gathering of state policymakers and school district personnel. The National Science Foundation and the RAND Corp., a Santa Monica, Calif.-based think tank, sponsored the event.
The problem with relying on test scores is that they don't necessarily reflect what happens in classrooms, Ms. Baker and other experts here said.
For example, they said, teachers discover what's on the test and narrow their instruction to match the content of the questions.
In extreme cases, teachers will begin to teach students how to solve a specific problem that appears on an exam every year, said Daniel Koretz, a senior social scientist at RAND. Such targeted preparation is especially common with performance-based tests in which the tasks are unusual, he said, because the tasks are easy to remember and there are only a few of them.
W. James Popham, a professor emeritus of education at UCLA, said, "You want to create teachers who are aiming their instruction—not at the test items—but on what the test represents."
While the testing experts told the gathering that the current generation of assessments isn't good enough, some predicted that technology would help solve the problems.
With a computerized battery of tests that is adapted to every student's ability, students can be presented with a wider variety of questions, said Allan Olson, the president of the Northwest Evaluation Association.
The Portland, Ore.-based nonprofit organization electronically transfers its tests via the Internet to the computers of 300 school districts that subscribe to its program. When students take the exams, their success determines the difficulty of the next round of questions. That means test-takers don't see the same questions, limiting the chance that scores will rise based solely on test familiarity.
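The adaptive approach described above can be sketched in a few lines of code. This is a deliberately simplified illustration of the general idea, that a correct answer raises the difficulty of the next question and an incorrect answer lowers it, not the Northwest Evaluation Association's actual algorithm; the function names, step size, and difficulty scale are all assumptions made for the example.

```python
def next_difficulty(current, answered_correctly, step=1, lo=1, hi=10):
    """Return the difficulty level for the next question.

    A correct answer moves difficulty up one step; an incorrect
    answer moves it down, clamped to the range [lo, hi].
    """
    if answered_correctly:
        return min(current + step, hi)
    return max(current - step, lo)


def run_adaptive_test(responses, start=5):
    """Simulate one test session.

    `responses` is a list of True/False answers; the return value is
    the sequence of difficulty levels the student was presented.
    """
    levels = [start]
    for correct in responses:
        levels.append(next_difficulty(levels[-1], correct))
    return levels


# Two students with different answer patterns follow different paths
# through the item pool, so they do not see the same questions --
# limiting score gains that come only from familiarity with a fixed form.
print(run_adaptive_test([True, True, False, True]))    # -> [5, 6, 7, 6, 7]
print(run_adaptive_test([False, False, True, False]))  # -> [5, 4, 3, 4, 3]
```

Because each student's path depends on his or her own answers, two students who start at the same level quickly diverge, which is the property the association credits with reducing test familiarity as a source of rising scores.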
In addition, the electronic distribution of the test means the evaluation group can add new questions overnight.
"It allows us to change the test progressively," said G. Gage Kingsbury, the group's research director. "We have the ability to phase in new items and new item styles faster than in a paper-and-pencil test."
What's more, he said, students receive their scores immediately, and the association delivers an analysis of a school's scores overnight—compared with a typical wait of up to four months.
Current computerized tests mostly involve questions adapted from traditional test booklets and put into an electronic format. But that may change soon, according to one researcher who spoke here, dramatically expanding the capabilities of assessments.
Technology will be able to deliver results "at any time, day or night," said Randy Elliot Bennett, a researcher for the Educational Testing Service, the Princeton, N.J., publisher of the SAT college-entrance exam and other tests.
"We're not there yet ... but the infrastructure is falling into place, and it's falling into place very quickly."
Eventually, Mr. Bennett predicted, assessments and curriculum will be so intertwined that teachers will be tracking students' progress moment by moment as they are learning what's prescribed in the lesson.
—David J. Hoff
Vol. 19, Issue 30, Page 10