Forces in education and technology have converged this year to vault computer-based testing into the headlines, raising important questions about whether this new mode of assessment is more useful than traditional paper-and-pencil exams.
To begin with, the increased testing requirements imposed by the “No Child Left Behind” Act of 2001—a far-reaching overhaul of federal education policy signed into law by President Bush in January 2002—have set schools scrambling to find more efficient ways to assess academic skills and get children ready for high-stakes state exams. Unlike traditional standardized tests on paper, which can take weeks or even months to score and return to schools, computer-based assessments can provide almost immediate feedback. That is arguably one of the biggest draws for educators.
Already, 12 states and the District of Columbia have a computerized exam or a pilot project under way to evaluate the effectiveness of computer-based testing, according to a new Education Week survey of state departments of education. All of these tests—except for one North Carolina exam and the District of Columbia exam—are administered via the Internet. In five states, officials report that computerized testing was designed partially to meet requirements under the new federal law.
Eventually, experts predict, technology could change the face of testing itself, enabling states to mesh the use of tests for instructional and accountability purposes.
“You’ve got the potential that technology could be a solution,” says Wesley D. Bruce, the director of school assessment for the Indiana Department of Education, “but there is, right now, just a huge set of issues.”
Chief among them is a simple question: Do schools have enough computers to test children in this new manner? The answer in many places is no. And with most states struggling with budget deficits, it’s unlikely they are going to use their limited resources to fill that void.
Yet researchers point out that computer-based testing has the potential to be far cheaper than its printed counterpart.
Richard Swartz, a senior research director at the Educational Testing Service, in Princeton, N.J., estimates that the actual costs of putting a test online and building a customized scoring model are comparable to those of developing a good paper-and-pencil exam. “Once the tests are implemented,” he adds, “the difference in scoring costs is enormously in favor of the computer.”
Still, other problems with computerized assessment have emerged.
One prickly issue involves the use of what is called adaptive testing, in which the computer adjusts the level of difficulty of questions based on how well a student is answering them. Proponents of this form of testing argue that it provides a more individualized and accurate assessment of a student’s ability.
But the No Child Left Behind law, a revision of the Elementary and Secondary Education Act that puts a higher premium than ever on schools’ accountability for student achievement, continues to mandate that states measure student performance against the expectations for a student’s grade level.
With adaptive testing, a 7th grader, for instance, might be bumped up to questions at the 8th grade level—or dropped down to the 6th grade level. As a consequence, debate is growing about whether adaptive testing can meet the purposes of the federal law, and if it doesn’t, how the technology should be modified to meet the requirements.
To give educators a head start on understanding computer-based testing, Technology Counts 2003—the sixth edition of Education Week’s annual report on educational technology in the 50 states and the District of Columbia—examines these new developments from a host of angles, beginning with an analysis of the impact of the No Child Left Behind law. Surprisingly, perhaps, the story points out that the law is having the effect of both encouraging and discouraging the use of computerized assessments.
As another part of this year’s focus on computer-based testing, Technology Counts 2003 takes a close look at adaptive testing, with analysis from proponents and critics, and a description of how it works. The upshot of the adaptive-testing debate is that developers of such assessments are worried that they may be left out of what could be the greatest precollegiate testing boom in history.
Computerized assessment may turn out to have its biggest impact in the area of online test preparation, observers of the field say. Last year, for instance, more than 200,000 students in 60 countries signed up for the Princeton Review’s online demonstrations of such tests as the SAT and state exit exams. Technology Counts 2003 tracks the online test prep trend.
As educators face the new federal requirement to test all 3rd through 8th graders annually in reading and mathematics, states are experimenting with new ways of using technology to evaluate the abilities of special education students. Testing experts say that what educators learn from tailoring assessments to the needs of special education students could also shape how they test other students, who may have more subtle individual needs. This year’s report examines those lessons.
Technology Counts 2003 also includes a story about teachers who are using computer-based testing to give classroom quizzes and tests, an examination of the benefits and drawbacks of essay-grading software, an analysis of the growing business of computer-based testing, and a look at national trends in educational technology.
Snapshots of the steps each state has taken to use computer-based testing—or simply to use educational technology more effectively—are also included in the report, as are data tables with state-by-state statistics on technology use in schools.
We hope you’ll find information here that will help you understand computer-based testing and its evolving role in education.
A version of this article appeared in the May 08, 2003 edition of Education Week