Letters: Computerized Testing, Pro And Con

To the Editor:

The Commentary by Winton H. Manning on the computerized Graduate Record Examinations General Test contains several misleading statements about the participation of the graduate-education community, many inaccuracies about the test itself, and criticisms that contradict one another.

Mr. Manning makes the curious criticism of faulting a highly advanced 1995 assessment for lacking all the advancements it will eventually have. That would be like saying in 1950 that we should do away with television until we could have digital, high-definition TV. Unlike Athena in Greek mythology, technological advancements do not come into the world fully grown.

Computer-adaptive testing (C.A.T.) is arguably the most significant breakthrough in assessment in the last three decades. It not only makes possible an entirely new array of services to schools and students now, but also allows for a broader and more accurate picture of a student's capabilities. Many changes that educators and students have been clamoring for--and criticizing the paper-and-pencil test for not providing--can be addressed in computerized testing.

Mr. Manning claims that the computerized G.R.E. is the result of a "small group of psychometricians" at the Educational Testing Service. The fact is that graduate deans, graduate-admissions officers, education policymakers, teachers, testing specialists, and nearly 5,000 students--in field tests all over the country--were consulted at the various stages of development. The decision to have a computerized G.R.E. and the policy direction for such an assessment were made by the G.R.E. board, not the E.T.S. The board, which is composed of representatives of the graduate community and outside experts in educational measurement, has participated actively in this project since 1988, in order to provide a new type of assessment that would meet the changing needs of both graduate schools and students. Thus, the board saw to it that the adaptive G.R.E.--itself a first step in a longer process of transition--went through several stages of research, modification, and introduction.

For a "test no one needs," the computerized G.R.E. has been extremely popular with students. The demand for the computerized version has doubled this year over the requests from last year, which has led to a reduction in the administrations of the paper-and-pencil version. Research shows that students appreciate the flexibility of scheduling (several days a month as opposed to five times a year), the more intimate, personal testing environment (an individual computer carrel as compared to a large hall with several hundred test-takers), immediate scoring (as compared with four to six weeks), and the fact that in a computer-adaptive test students don't waste their time or energy on questions that are too hard or too easy for them. Students with little or no computer experience have discovered that the skills necessary to take the test can be easily learned in the five to 10 minutes before they take the test. Graduate schools appreciate the computerized G.R.E. because fewer students miss application deadlines and it is easier to process scores in smaller batches.

To correct some of the inaccuracies in the Commentary, please note:

• The lawsuit filed by the E.T.S., the G.R.E. board, and Sylvan Learning Centers against a coaching firm was launched because the firm broke copyright laws when it reproduced a portion of confidential, secure G.R.E. tests, not because representatives of the firm sat for the test for commercial reasons.
• The E.T.S. found no statistical evidence that any students gained an unfair advantage on the test.
• Saving an hour of testing time is "dramatically good." It will allow students to display additional talents while testing in the same amount of time.
• The adaptive G.R.E. requires that a set number of questions be answered within a generous time allotment. The author's description of a C.A.T. being "terminated" when an ability level has been determined for the candidate describes a "mastery" test and does not apply to the G.R.E. (a sketch after this list illustrates the difference).
• Computer-adaptive testing is not limited to multiple-choice questions. The Praxis Series: Professional Assessments for Beginning Teachers includes computer-based adaptive tests in reading, writing, and mathematics that go beyond the traditional multiple-choice format, making use of innovative question types that the computer scores immediately.
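
For readers unfamiliar with the mechanics, the following minimal sketch (in Python) illustrates the fixed-length versus "mastery" distinction made in the fourth point above. It is written purely for illustration: the one-parameter (Rasch) response model, the invented item bank, and the simple ability-update rule are assumptions of the sketch, not the G.R.E.'s actual algorithm. The key point is that the loop always administers the full, fixed number of questions, whereas a mastery test would check a stopping rule on each pass and could terminate early.

    import math
    import random

    def p_correct(theta, difficulty):
        # Rasch (one-parameter) model: chance of answering correctly.
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def run_fixed_length_cat(bank, true_theta, num_items=20, seed=0):
        rng = random.Random(seed)
        theta = 0.0                           # provisional ability estimate
        unused = set(range(len(bank)))
        for step in range(1, num_items + 1):  # always asks num_items questions
            # Pick the unused item closest in difficulty to the current
            # estimate; under the Rasch model it is the most informative one.
            item = min(unused, key=lambda i: abs(bank[i] - theta))
            unused.remove(item)
            correct = rng.random() < p_correct(true_theta, bank[item])
            # Shrinking-step update: early responses move the estimate most.
            theta += (1.0 if correct else -1.0) / step
            # A "mastery" test would check a stopping rule here and might
            # quit early; a fixed-length test like the adaptive G.R.E. does not.
        return theta

    bank = [d / 10.0 for d in range(-30, 31)]  # item difficulties -3.0 to 3.0
    print(round(run_fixed_length_cat(bank, true_theta=1.2), 2))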

In 1999, the G.R.E. program will add two new measures--one of writing ability and one of mathematical reasoning. This new version will also include a revision of all the test's existing measures to take advantage of computer scoring.

As part of its research-and-development program, the Graduate Record Examinations program has been exploring a measure of reasoning in context and other measures that take into account advances in cognitive science. Without computer delivery of assessment, with its attendant flexibility of medium (sound and video, as well as print) and customization, the economical development of truly new assessments is highly problematic.

We all know that there is no breakthrough in any field without some bumps along the way. There will always be some naysayers who will claim the sky is falling at each of those bumps--but the computer is now an essential element in many aspects of our culture. To back away from using it would be to disallow needed improvements to assessments of all kinds as well as to deny vital services to students and schools everywhere.

Charlotte V. Kuh
Executive Director
Graduate Record Examinations Program
Educational Testing Service
Princeton, N.J.

Debra W. Stewart
Chair, Graduate Record Examinations Board
Dean of the Graduate School
North Carolina State University
Raleigh, N.C.

To the Editor:

Winton Manning's Commentary is right on target. Indeed, "problems with the computerized G.R.E. should prompt caution about computer-adaptive testing" whether at the college, high school, or elementary level.

Simply put, computerizing low-quality, multiple-choice tests does not magically transform them into better assessment instruments. Some facets of the new technology may make current exams even worse. For example, academic studies by researchers independent of the test manufacturers indicate that gender, race, and income disparities may increase because of differential access to computers. Students are also much more constrained on computer-administered tests: they cannot omit items, mark them for later review, or change answers.

The Educational Testing Service and other promoters of computerized exams are adding to the confusion by failing to comply with "truth in testing" requirements that periodically have made exam items, answers, and backup data available to the public. If their new computerized tests are truly fair, accurate, and educationally sound, what do they have to hide?

Robert A. Schaeffer
Public Education Director
National Center for Fair & Open Testing
Cambridge, Mass.
