Students have made slow but steady gains in math since the early 1970s, but their reading achievement has barely budged, says a report scheduled for release this week by the Brookings Institution.
The report, by the Washington-based research organization’s Brown Center on Education Policy, relies primarily on data from the National Assessment of Educational Progress, a federally financed program that gauges young people’s learning in key subjects. While gains in math since 1973 have been significant—roughly equal to a full year of additional learning for 9- and 13-year-olds—gains in reading have been exceedingly small, the study found.
The findings echo those reached in a new NAEP trend report, which concludes that students’ math scores in 1999 were at an all-time high.
But the Brown Center report suggests that most of the recent improvement has been in areas that have received a lot of attention during the 1990s, such as problem-solving and geometry, rather than basic arithmetic skills.
For More Information
Read the report, “How Well Are American Students Learning?” online, or read a description of the report.
In both mathematics and reading, achievement gains were greatest at age 9 and shrank for older students. Compared with their peers in 1973, for example, 17-year-olds have gained only a few months of math learning.
Tom Loveless, the report’s author and the director of the center, described this “middle-grade slump” as a “cultural problem.”
“The importance of achievement declines in adolescence among American children more so than in other societies and more so today than in the past in our own society,” he said last week.
The report also looks at the criteria used to identify exemplary schools under such programs as the federal Blue Ribbon Schools awards. It found that many of the award winners actually have worse academic performance than schools serving similar student populations, in part because the awards do not focus enough on what schools actually accomplish.
Two Different Stories?
The Brown Center plans to produce the national overview of student achievement on an annual basis to provide what Mr. Loveless describes as an “accurate, nonpartisan” picture of trends in student learning. “There are a lot of reports out there, but they tend to be put out by either advocacy groups or groups with a special interest, like teachers’ unions,” he said.
While this year’s report focuses on math achievement, the theme will vary each year.
The report found that while long-term trends in achievement are positive, the 1990s “do not stand out as a time of great strides forward in academic achievement.”
The degree to which that conclusion holds depends on which NAEP database is analyzed. The long-term-trend NAEP has tested samples of students in math under the same conditions and with the same test items since 1973, in order to monitor changes in achievement over time. The “main” NAEP test, first administered in math in 1990, is given to a separate random sample of students across the nation, and its content may be altered from time to time to reflect changes in curriculum and testing practices.
Mr. Loveless found that from 1990 to 1996, the most recent data he had available, the main NAEP test showed substantial gains in math scores at all three grades tested (4, 8, and 12). In contrast, the trend test showed very slight or no gains during the same period.
Those results diverge, he argues, because the tests emphasize different mathematical content. The main NAEP reflects the math standards created by the National Council of Teachers of Mathematics, which place a greater emphasis on such topics as problem-solving and geometry. The long-term-trend data focus on more traditional topics, such as arithmetic.
Based on his analysis of test items, Mr. Loveless concludes that “kids in the ‘90s probably made progress in areas like problem-solving and geometry, and those gains are being reflected in the main NAEP tests. In the areas of traditional mathematics—such as whole-number arithmetic, fractions, decimals, and percents—there’s either a very minor gain or no gain at all, in some cases.”
Indeed, while experts are calling for all 8th graders to take algebra, his analysis suggests that many students at this level still have not mastered the fundamentals of arithmetic. In particular, his study shows that in 1996, only 64 percent of 8th graders correctly answered items on whole-number arithmetic; for fractions, the figure was 54 percent; for integers, 44 percent; and for decimals, 43 percent.
‘Both Valid Measures’
Gary W. Phillips, the acting commissioner of the National Center for Education Statistics, which oversees NAEP, cautioned that there are too many differences between the two NAEP assessments for them to be meaningfully compared with each other. “We find the comparison confusing, which is why we don’t do it,” he said. “They’re both valid measures.
“What I think [the data] show is that, currently, in the United States, the focus in mathematics is on these more problem-solving, communication, reasoning, conceptual, development-type skills,” Mr. Phillips said. “And what you find on the current assessment is that students are doing better in those areas. And what you find on the long-term assessment is that they’re not doing any worse in the old computational skills. So they’re learning the new skills without reducing their skills in the old areas.”
He also cautioned against drawing conclusions about such specific skills as working with decimals, based on NAEP, because there are too few test questions on such topics to make the findings reliable. “We don’t have enough information to give a good national estimate for that,” he argued. “If we wanted to give information on decimals, we’d have a whole lot more items on decimals than what we have.”
“The report brings into high relief one of the abiding dilemmas for NAEP itself, which is the trade-off between trend data, on the one hand, and staying current with curricular developments on the other,” said Chester E. Finn Jr., the president of the Washington-based Thomas B. Fordham Foundation and a former assistant secretary of education for educational research and improvement under President Reagan.
“If you want unbroken trend information, you basically can’t change the test,” Mr. Finn added. “If, on the other hand, after 30 years of a math test, you discover that curricular practice and pedagogical practice in American schools have changed, and you want to capture that, you have to change the test, and then you pay a price, which is the trend information. I think NAEP has to do various versions of both.”
In future years, Mr. Loveless said, the Brookings report will focus more on analyzing data from state tests. Based on an initial analysis of the data available from 36 state Web sites, the report points out that more states reported gains than losses in both reading and math achievement from 1998 to 1999. But Mr. Loveless warned against drawing any conclusions from two years’ worth of data.
States using customized tests were more likely to report reading gains than those administering commercial, off-the-shelf tests, the report found. But that outcome did not hold true in mathematics. Although the data were too slim to reach any meaningful conclusions, Mr. Loveless speculated that the customized tests may be more closely aligned with a state’s curriculum, making it easier for schools to determine what they can do to improve low scores.
The state data also reflected the national trend, with more states reporting achievement gains for 4th graders than for 8th and 10th graders.
This year’s report also looks at the criteria used to identify exemplary schools under three different recognition programs, including the federal Blue Ribbon Schools Program.
Blue Ribbon Schools?

Below are the numbers of public elementary schools in individual states to receive Blue Ribbon designations last year from the U.S. Department of Education. The middle and right-hand columns show their achievement levels when their test scores were compared with those of other schools in their respective states with similar socioeconomic levels.

| State | Total | Top 10% | Bottom 50% |

Source: The Brookings Institution.
It found that many of the schools that receive such awards are not, in fact, high achievers compared with schools serving similar student populations.
An analysis of Blue Ribbon winners in seven states found that only 19 of 70 elementary schools scored in the top 10 percent of similar schools in their states, based on state math and reading tests. Seventeen schools scored in the bottom 50 percent, meaning their students scored lower on reading and math tests than those at the average school with a similar socioeconomic population. The remaining schools fell somewhere in between.
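The comparison underlying these figures, ranking a school only against peers with similar socioeconomic profiles, can be sketched as a simple within-group percentile computation. This is an illustrative sketch, not the report’s actual methodology; every school score and the SES grouping below are invented.

```python
def percentile_rank(score, peer_scores):
    """Percent of peer schools scoring at or below `score`.

    `peer_scores` is assumed to hold combined reading/math scores
    for schools in the same socioeconomic band (invented data here).
    """
    below_or_equal = sum(1 for s in peer_scores if s <= score)
    return 100.0 * below_or_equal / len(peer_scores)

# Hypothetical combined scores for ten schools in one SES band.
peer_scores = [612, 640, 655, 661, 672, 680, 688, 695, 703, 720]

# A school scoring 690 outperforms 7 of its 10 SES peers.
rank = percentile_rank(690, peer_scores)
print(f"Percentile among similar-SES schools: {rank:.0f}")
```

Under this kind of grouping, a school can post high raw scores yet still land in the bottom half of its band, which is how some award winners end up below the 50th percentile of comparable schools.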
“Achievement is just one of the many criteria on which these awards are being given,” said Mr. Loveless. “Some of these schools truly are high-achieving. Unfortunately, there’s a pretty big chunk—between 25 percent and one-third of the schools—that are not really high-achieving schools. And they’re being rewarded for all kinds of trendy, unproven practices.”
He suggested that in the future, such awards should focus more squarely on high academic achievement or be relabeled. He also recommended getting rid of any self-selection process that requires schools to submit voluminous application forms, and instead basing the awards on more objective criteria.
U.S. Department of Education officials did not return calls last week to respond to the report’s findings on Blue Ribbon schools.