Critics of American schools, and even President Clinton, often cite the National Assessment of Educational Progress’ finding that 40 percent of 4th graders can’t read at a basic level as proof that the schools are failing.
But a congressionally mandated report released last week questions the research and methods used to derive that statistic and others. Calling the NAEP process for measuring student achievement “fundamentally flawed,” the report says it should be overhauled by the board that oversees the federally financed sampling of student performance in key subjects.
For More Information
Copies of “Grading the Nation’s Report Card--Evaluating NAEP and Transforming the Assessment of Educational Progress,” are available for $47.95, plus shipping, from National Academy Press, 2101 Constitution Ave., NW, Washington, DC 20418, or by calling (800) 624-6242.
Or, read the report at the NAP’s Web site.
The achievement levels for the assessment are often too rigorous, are the product of a process riddled with inconsistencies, and don’t reflect the results of similar large-scale tests, according to the report from a 16-member panel of testing experts convened by the National Research Council.
But, unlike some previous critics, the panel recommends that the National Assessment Governing Board continue to rely on standards-based reporting.
“While we are critical of the process to establish the descriptions of the achievement levels, we are supportive of the concept of performance standards and reporting in the context of such standards,” said James W. Pellegrino, the panel’s chairman and a professor of cognitive studies at Vanderbilt University in Nashville, Tenn.
But the governing board, known as NAGB, needs to clarify that the standards are based on judgments by experts and don’t “reflect some deeper scientific truth,” says the report, “Grading the Nation’s Report Card--Evaluating NAEP and Transforming the Assessment of Educational Progress.”
Those concessions are a victory for supporters of achievement levels, according to Chester E. Finn Jr., who was the governing board’s chairman when it began the process of setting the standards.
“They’ve acknowledged that achievement levels are here to stay, and that achievement levels are a judgmental process,” said Mr. Finn, the president of the Washington-based Thomas B. Fordham Foundation and a former assistant secretary of education under President Reagan.
Other researchers have campaigned either to eliminate the achievement levels or to turn the process of setting them into a scientific exercise, Mr. Finn said.
The NRC panel suggests that the governing board revise the achievement levels when it redesigns the assessment after the 2002 exams. Until then, the board should use the existing standards but clearly explain their limitations, the panel says in the report Congress ordered when it reauthorized NAEP and other federal education research and improvement programs in 1994.
Gauging Achievement
The achievement levels, which describe what students need to know to be considered “advanced,” “proficient,” or “basic,” have been controversial since NAGB first set them in the early 1990s. Testing experts and the General Accounting Office, the congressional research arm, have criticized them as being too difficult and yielding results that differ from those of other large-scale assessments, such as the Advanced Placement exams high school seniors take to earn college credit. (“Yet Another Report Assails NAEP Assessment Methods,” Sept. 22, 1993.)
Because of such flaws, critics say, results based on the NAEP achievement levels overstate the academic shortcomings of U.S. students.
Despite such criticisms, the statistics are widely used. In several speeches last year, including his State of the Union Address, President Clinton cited the results from NAEP’s 1994 4th grade reading exam, for instance, to buttress his proposal to put volunteer tutors in K-3 classrooms.
The 1994 NAEP found that 40 percent of 4th graders were unable to read at the basic level, as defined by a group of teachers NAGB assembled to rate the difficulty of the exam questions. The group determined the degree of difficulty of each question and decided where a correct answer would place a child in one of the exam’s three achievement levels.
The governing board uses the same process to set achievement levels for every other subject it tests. This fall, raters will be setting standards for civics and writing exams given earlier this year.
But observers may be jumping to conclusions when they assume, for example, that students who score below “basic” in reading can’t read, say critics of the achievement levels. It may be that the students can read, but not as well as the achievement levels say they should, the critics say.
What’s more, Mr. Pellegrino argued in an interview, the NAEP standards aren’t specific enough to explain what skills those students lack. “What does it mean, they can’t read?” he said. “Can they decode? Do they have problems with reading comprehension?”
The flaws in the standards-setting process were particularly evident when the governing board set the achievement levels for the 1996 science exam, the NRC report contends. After reviewing the work of its raters, NAGB rejected their achievement levels and revised them. The decision altered the difficulty rating of as many as 40 of the exam’s approximately 190 questions, the report says.
“NAGB’s own examination...led it to the same conclusion that multiple evaluation panels had reached: that the results of the achievement-level-setting process were not believable,” it says.
Those flaws, however, should not stop NAGB from reporting scores based on achievement levels, according to the panel, which includes researchers such as Gail P. Baxter of the Educational Testing Service of Princeton, N.J., and Allan Collins of Bolt Beranek and Newman Inc. of Cambridge, Mass., a firm that designed a school reform model for the New American Schools reform project.
The structure is easier for policymakers, educators, and parents to understand, the panel says, than numbers based on average scores or a scale. But for the standards to be credible, it argues, NAGB needs to find a new way to set them.
The NRC experts suggest a new method. Instead of waiting for test questions to be written and then deciding on a difficulty rating based on educators’ perceptions, NAGB should write the standards and test questions simultaneously, the report says. That way, test items could be written to assess whether students have the skills needed to be classified at a specific achievement level.
After field-testing specific questions, NAGB could compare those results with scores on other large-scale assessments, such as the Third International Mathematics and Science Study. Then, the governing board could set final achievement levels, according to the report.
The board also would be ready to describe exactly what scoring in each performance level means, the panel experts say. That final step may be the most important, the report observes. Without specific explanations, NAEP results are subject to misinterpretation, it says.
The report stems from the 1994 reauthorization of NAEP, and it will play a role in shaping the changes Congress makes to the program when it comes up for renewal next year.
“It’s going to be very useful over time,” said Marshall S. Smith, the acting deputy secretary of education. “They’re trying to push us in a direction where we feel the achievement levels have some validity.”