Computer Testing: As students’ computer skills improve, policymakers need to consider whether paper-and-pencil tests are the best way to measure achievement, the results of a continuing research project suggest.
In the second study comparing test scores of students on computer-based assessments and traditional printed tests, a Boston College researcher confirmed that computer-savvy students tend to score better on open-ended items if the test is administered on a computer.
While the impact of a test’s medium on scores varied in different academic subjects, highly skilled computer users consistently performed better on the open-ended questions in language arts and science if they entered their answers on keyboards rather than wrote them on paper. That didn’t hold true on open-ended math questions, however.
“If you want more accurate measures, then maybe you should allow these kids to take an open-ended section [of tests] on computers,” said Michael Russell, the study’s author and a research associate at Boston College’s Center for the Study of Testing, Evaluation, and Educational Policy.
But Mr. Russell acknowledges that giving exams in two mediums raises technical questions about a test’s accuracy and security. Schools probably don’t have enough computers to give tests simultaneously to every student who elects to use the technology.
And other research has found that test-graders tend to give higher marks to handwritten essays than to ones printed from a computer file, Mr. Russell said. “It just gets more and more complicated,” he said. “We may be mixing apples and oranges.”
Mr. Russell’s new research yielded results similar to those of a 1997 study he published with Walt Haney, a Boston College professor, but it goes into more detail than the earlier research.
For the new study, Mr. Russell categorized students by their computer abilities. He found that the better a student’s skills, the bigger the gap between his or her scores on the computer responses and the handwritten ones.
The one exception was in math. Using the computer lowered the scores of all participants, but computer-savvy students still outperformed their peers on those questions.
“The technology didn’t allow them to show their work the way they’re used to,” Mr. Russell said.
“Testing on Computers: A Follow-up Study Comparing Performance on Computer and on Paper” is available in the Education Policy Analysis Archives, an online academic journal. The article is on the World Wide Web at olam.ed.asu.edu/epaa/v7n20/.
--David J. Hoff dhoff@epe.org