Depending on how the numbers are analyzed, U.S. students have either shown steady progress on the National Assessment of Educational Progress tests over the past 25 years or been treading water.
The conflicting interpretations emerge from a novel analysis of NAEP data by a pair of researchers from the Educational Testing Service.
Traditionally, researchers and policymakers look at trends in test scores over time to figure out whether achievement is on the rise. But the ETS researchers--Paul E. Barton and Richard J. Coley--decided to take a “value added” approach to analyzing data from NAEP. The congressionally mandated program regularly assesses a representative sample of students in grades 4, 8, and 12.
For More Information
Read “Growth in School: Achievement Gains from the Fourth to the Eighth Grade.”
The authors looked at how much students had learned between the 4th and 8th grades, then compared that growth with gains by students over the same grade span in previous decades.
They found that despite overall rises in average test scores, American students’ learning growth in most subjects has remained basically flat since the 1970s. In mathematics, there was even a slight decline.
The reason for the discrepancy, Mr. Barton argues, is that most of the gain that students made on NAEP in recent decades showed up first when those students were 9 years old. “That just carried forward into the other grades,” said Mr. Barton, the director of the policy-information center at the Princeton, N.J.-based ETS.
Offering ‘Another Window’
But gains for that youngest testing population are as much a reflection of the richness of a child’s early home life as they are of schooling. To get a closer look at what actually happens in school, the researchers argue, it’s important to look at improvements between the 4th and 8th grades.
In both grades, for example, Maine was a top-scoring state on the NAEP math tests in 1992 and 1996. Arkansas students ranked near the bottom. Yet in each state, students improved by the same amount--52 percentile points--from 4th to 8th grade.
“Does that mean that Arkansas does as good a job as Maine?” Mr. Barton said. “Should students be scored on the basis of how much they learned or on the basis of how much they know?”
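The distinction Mr. Barton draws can be sketched in a few lines of arithmetic. In the snippet below, the 52-point 4th-to-8th grade gain comes from the article; the absolute score levels are hypothetical, chosen only to illustrate how the two views can rank the same states differently:

```python
# Level vs. value-added views of test scores.
# The 52-point gain is from the article; the absolute
# score levels below are hypothetical illustrations.
scores = {
    "Maine":    {"grade4": 232, "grade8": 284},  # top-scoring state
    "Arkansas": {"grade4": 210, "grade8": 262},  # near the bottom
}

for state, s in scores.items():
    growth = s["grade8"] - s["grade4"]  # value-added view
    print(f"{state}: 8th-grade level = {s['grade8']}, growth = {growth}")
```

Under the traditional view, Maine outranks Arkansas at both grades; under the value-added view, the two states show identical 52-point growth, which is the tension behind Mr. Barton’s question.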
Mr. Barton said federal and state officials should report test results both ways, using regular test scores as well as value-added data.
The ETS researchers are not the first to use a value-added approach to reporting test data. Tennessee, for example, focuses on gains in achievement to help judge the effectiveness of both schools and teachers. (“A Question of Value,” May 13, 1998.)
“It’s as important to look at [the data] this way as it is to look at it another way,” Mr. Barton said.
One drawback to the ETS approach, however, may be that researchers can’t be sure they are testing the same students four years later, said Roy E. Truby, the executive director of the National Assessment Governing Board, which oversees NAEP. “There’s so much mobility among students.”
Also, changes to NAEP may make future comparisons impossible. The governing board has begun to move away from score scales that allow all three grade levels to be grouped on the same index so that gains could be measured from one grade to another. The most recent science test uses separate scales for each grade level.
One reason: With separate scales, test developers can include more items geared to a specific grade level. By contrast, measuring all three grades on a single cross-grade scale, which allows comparisons over time, requires the tests to repeat some questions from grade to grade.
“It’s not a slam-dunk issue. The board thought long and hard about it,” Mr. Truby said of the shift to separate scales. “But this study does show that there are some good things you can do with cross-grade scales.”
A version of this article appeared in the June 17, 1998 edition of Education Week as ETS Study Takes ‘Value Added’ View of NAEP