Questions About Normed Tests Spur Meeting

Washington--Education Department officials have summoned the heads of major testing firms and leading scholars in the assessment field to a meeting here this week on issues raised by a controversial report challenging the accuracy of nationally normed achievement tests.

The report, issued in December by Friends for Education, a West Virginia advocacy group, found that the overwhelming majority of elementary-school pupils scored above national averages on the major commercially available tests.

This phenomenon has occurred, it charged, because norms are artificially low and out of date, and because publishers do little to inform the public about how test results could be misinterpreted. (See Education Week, Dec. 9, 1987.)

On norm-referenced tests, students are measured not against other students taking the test, but against a national norming sample up to seven years old.

Participants at the meeting--which Secretary of Education William J. Bennett is expected to attend--will consider whether the report's charges are accurate, and if they are, what test companies and the federal government should do about it, according to department officials.

"We are serving as a medium-sized ombudsman for the public with respect to testing," said Chester E. Finn Jr., assistant secretary for educational research and improvement. "We want to find out what's going on here."

John Jacob Cannell, the Beckley, W.Va., physician who founded Friends for Education and will attend the meeting, said the department should also take the lead in reforming the tests, since Secretary Bennett has cited rising test scores as evidence that schools have improved during the Reagan Administration.

In fact, Dr. Cannell said, the results of his study show that "children don't know any more than they did when he came into office."

"Does he want to look good, or does he want accurate achievement tests?" he asked.

Referring to Mr. Bennett's theme for his final year in office, Dr. Cannell added: "People who talk about accountability ought to put their money where their mouth is."

Several of the publishers and scholars invited to the meeting suggested that it would contribute to improving test data by publicizing the findings of the Friends for Education report.

Secretary Bennett and Mr. Finn "are highly visible, and people might pay attention to what they say," said R. Bruce McGill, president of the Educational Records Bureau, a nonprofit firm in Wellesley, Mass., that administers achievement tests in independent schools and affluent suburban districts.

"If the findings are as legitimate as they appear from reading the report, they can bring attention to them," he said. "I hope the meeting will not only supply a little light, but a little heat as well."

But commercial test publishers, who have disputed the report's conclusions, said the meeting's chief contribution to public understanding would be giving test makers a chance to explain how normed tests are put together.

The methods publishers use in setting norms are seldom published, and thus little understood, according to John Kauffman, vice president for marketing of the Scholastic Testing Service Inc.

In addition, argued H.D. Hoover, director of the Iowa Basic Skills Testing Program, the meeting will allow test makers to explain the purpose of the norms. The Friends for Education report, he said, erroneously claimed that norms are used to compare student performance.

They are actually intended, he said, to help schools improve instruction and gauge students' progress over time.

"Norms serve an extremely useful purpose," Mr. Hoover said. "They enable us to compare things like reading and math. I can't compare reading and math [without using norms] any more than I can compare height and weight."

Reference Point

Other invited participants suggested that the meeting could result in concrete steps toward improving test data.

For example, suggested Denis P. Doyle, a senior research fellow at the Hudson Institute, test makers could agree to provide a more accurate method of comparing student achievement by developing a common standard for all tests. That way, he said, the scores of students who took one commercial test could be compared with those of students who took another.

"It would be constructive to find some common reference point," added Lyle V. Jones, alumni distinguished professor of psychology at the University of North Carolina, who noted that a similar effort was undertaken for reading tests in the early 1970's. "That would let the norms reflect the distribution of achievement at that particular time."

But Linda Darling-Hammond, director of the education and human-resources program at the RAND Corporation, called that proposal "wrong-headed."

"The tests measure different things," she said. "If test developers didn't have different products, there wouldn't be a market for them."

Moreover, said Paul D. Sandifer, director of the office of research for the South Carolina Department of Education, finding a common reference point would be costly and time-consuming, and would not yield much useful information.

"By the time you equated all the tests, you'd have to redo them," he said. "I don't think there is much promise in that practice."

'As Good as Can Be'

Instead, Mr. Sandifer suggested, the federal government should improve the norming process by coordinating data-collection efforts to ease the testing burden on schools.

This year, he noted, two commercial test publishers and the National Assessment of Educational Progress, as well as two federal surveys, are all testing national samples of students. Schools may be reluctant to participate in so many programs, which could skew the samples, he noted.

"The total impact of that on school districts is rather significant," Mr. Sandifer said. "The chances of getting representative data dim considerably when all those things are out there at one time."

But while the federal government can suggest improvements, it should be up to test makers themselves to make whatever changes are necessary to ensure that test data are accurate, said Gregory R. Anrig, president of the Educational Testing Service.

"The federal government has an appropriate leadership role, but you can't resolve this by law or regulation," he said. "Each organization has to do what it can to have the tests be as good as they can be."

"It is in the interest of all test publishers to have credible tests," he added. "If one is not credible, it will rub off on the others."
