Aiming for 'Definition of Literacy,' NAEP Considers 1992 Reading Test

Washington--Striving for a consensus on one of the most divisive issues in education, the National Assessment of Educational Progress is considering a blueprint for its 1992 reading test that would represent a dramatic shift from past assessments.

The proposal, presented to NAEP's governing board last week, is being closely watched by reading specialists and policymakers, who maintain that the assessment's framework would establish a "de facto national definition of literacy."

Moreover, the 1992 assessment is expected to have a strong influence on reading instruction, since it will be the first to allow state-by-state comparisons of student achievement in the subject.

In perhaps its most far-reaching recommendations, the proposal calls for a new test of students' fluency in oral reading and a pilot assessment to collect portfolios of students' work in the subject.

Barbara Kapinus, coordinator of the reading-consensus planning project for the Council of Chief State School Officers, said the blueprint reflects thinking that is close to a national agreement on "what ought to be the goals of reading instruction."

"This is not necessarily what is happening in the majority of classrooms," she said. "It represents a carrot a little bit in front of the horse, but not so far in front people can't reach it."

Under the plan, NAEP would create three separate scales, in place of the single scale it now uses, to measure performance on three types of reading--literature, informative texts, and documents.

In addition, the plan calls for extensive use of open-ended questions to gauge students' abilities to understand text and elaborate their understanding. In the past, NAEP has primarily relied on multiple-choice questions to measure reading skills.

Members of the governing board's reading committee, who met here last week, praised the plan but cautioned that it could be expensive to administer and confusing to the public. They noted that it likely will undergo changes before it is approved and sent next month to the Educational Testing Service, which operates NAEP.

But reading specialists warned that such changes could undermine the goals of the document. Rather than rush to adopt an imperfect proposal, suggested Richard M. Long, the Washington representative for the International Reading Association, the board should take the time to develop a test all experts can agree on.

"This is such an important statement," he said. "Our concern is that it's being done with more an eye on the calendar at this point" than on the content of the assessment.

Consensus Building

Although NAEP has in the past tried to elicit a national consensus in developing its test objectives, the consensus-building process for the 1992 reading test was particularly elaborate.

For one thing, the 1988 law that reauthorized the Congressionally mandated assessment called for "the active participation of teachers, curriculum specialists, subject-matter specialists, local school administrators, parents, and members of the general public."

In addition, that law authorized NAEP to conduct a pilot state-by-state assessment of 4th-grade reading in 1992. Such a test, NAEP officials pointed out, required greater involvement by state officials, who did not want to be embarrassed if their students performed poorly in comparison with those of other states.

Moreover, reading groups, such as the IRA, had expressed reservations about state-by-state comparisons. "We are concerned with the inappropriate use of comparisons of test scores," Mr. Long said.

The consensus-building process was further complicated, the report to the board states, by sharp divisions within the reading community that did not exist in mathematics, the other subject that will be tested on a statewide level.

"Experts, educators, and interest groups in reading often hold diverse and conflicting views that have not been completely illuminated, much less settled, by research in the field," it acknowledges.

Separate Scales

To attempt to reach an agreement on what should be tested, the governing board last fall held a series of public hearings on the issue.

In addition, the board contracted with the Council of Chief State School Officers to draw up a framework.

The council created a 17-member planning committee, led by Ms. Kapinus, a former state reading official in Maryland, that consisted of teachers, reading specialists, state testing officials, and business leaders.

The panel agreed that the test should employ new methodologies to reflect the most current knowledge about reading, the report says.

"Societal goals and values and a consensus of reading theory and reading instruction should drive the design of the assessment, not simply traditional psychometric practice," it argues.

To that end, the panel recommended developing three separate scales to examine reading in three categories: reading for literary experience, the type involved in reading poems, plays, and short stories; reading to be informed, such as reading textbooks and newspaper articles; and reading to perform a task, such as using office memoranda and telephone books.

The recommendation reflects the view among reading experts that NAEP should not "reduce reading proficiency to a single score," Ms. Kapinus said.

In addition, she said, the separate scales would also gauge student abilities on different types of tasks.

"Sometimes readers are comfortable and successful reading stories, but are nonplussed by tax forms," the report states. "Readers may have learned how to read and learn from textbooks but are less able to approach and appreciate a poem."

The differences in student performance on the tasks, Ms. Kapinus noted, may help curriculum specialists determine the strengths and weaknesses of their programs.

But Wilhemina F. Delco, vice chairman of NAEP's governing board, said policymakers want a single score to provide a "quick-and-dirty answer" to how students fared.

To satisfy such desires as well as provide greater information, Ms. Delco suggested, the test should include a single overall score in addition to the scores on the three scales.

Open-Ended Questions

In keeping with current trends in reading assessment, the CCSSO committee also recommended using complete texts, rather than passages created solely for the test, and asking a range of questions to assess several cognitive abilities.

The panel also recommended substantially increasing the test's emphasis on open-ended questions, which require students to write their own responses, rather than multiple-choice questions that ask them to choose the best response. Under the proposal, students would spend 60 percent of their test-taking time answering open-ended questions, and many of the multiple-choice items would have more than one correct answer.

"I think multiple-choice tests are still efficient and effective for certain things, but not for everything," said Ms. Kapinus. "If you assess only multiple-choice, some teachers and principals will say don't waste time [writing in classrooms]. We don't want to foster that."

But Ethel J. Lowry, director of Chapter 1 programs in the North Dakota Department of Public Instruction, warned that students with poor writing skills might be hampered by the format.

And, suggested governing-board members, the shift toward open-ended questions may not be feasible because of time and financial constraints. Such questions eat up large chunks of the 45-minute test-taking time, they noted, and they are considerably more expensive than multiple-choice items to score.

Oral Fluency

In addition to the reading-comprehension section, the proposal recommends that the test include a special study of students' oral fluency.

Under the plan, some 2,000 4th-grade students would read a text aloud to a teacher and answer questions on it. The text would be the same as one the student read as part of the regular assessment, which would permit NAEP to determine whether students' written answers reflected their writing ability or their comprehension of the passage.

The oral-fluency test would help officials get at the true nature of student abilities by measuring their ability to recognize and figure out words, not just their understanding of text, said Marilyn J. Adams, a member of the CCSSO planning committee and the author of a forthcoming study on beginning-reading instruction.

"Unless you measure basic skills, you run the risk of saying kids don't have the higher-order thinking skills to read a text correctly," she said. "They may have all the higher-order thinking skills in the world, but are struggling with the graphics."

But Constance Weaver, director of the commission on reading of the National Council of Teachers of English, said the inclusion of such a test might lead schools to "detract attention from what's most important" in reading instruction.

"They will make fluent reading the goal, rather than comprehension," she said, when in fact, many children who make errors in spoken reading "can comprehend quite well."

Student Portfolio

In addition to the oral-fluency test, the proposal also urges a pilot portfolio assessment, in which NAEP would collect examples of the best work in reading from a sample of about 2,000 students.

The portfolio would enable students "to demonstrate their ability to focus on reading in their daily work, as opposed to an assessment situation," Ms. Kapinus said.

In addition, she noted, it would permit assessments of students' responses to longer pieces of text, such as novels, and would provide a gauge of classroom work around the country.

Ms. Kapinus said the proposal has been the "most popular" of all the recommendations, since it is the most innovative. Although several testing programs--including the 1990 NAEP--are experimenting with portfolios in writing, she noted, the 1992 reading test would be the first large-scale assessment to use such a technique in that subject.

"Since this is the first state-by-state assessment, we have to indicate that we have the methodology to be at the cutting edge, ahead of the states," she said. "If we don't, states will take us to task, and say, 'You're comparing us with an assessment that's not as good as ours.'"
