Does Money Matter?
The panel of economists that put together the report based its conclusions in part on studies done in the 1980s by Eric Hanushek, the lead author of the new book. Hanushek, a professor of economics and political science at the University of Rochester, analyzed all existing studies that looked for relationships between additional resources and student learning. He determined that, in most cases, those resources had had no effect. Over the years, his findings have found champions in a number of conservative critics of schools, most notably former U.S. Secretary of Education William Bennett.
But in another study published last year, a different group of researchers looked at the same data and came to the opposite conclusion. Writing in the April 1994 issue of the journal Educational Researcher, Larry Hedges, Richard Laine, and Rob Greenwald said higher spending on schools had produced higher student achievement.
Who is right? The answer, a number of observers say, is that both may be.
The two studies took markedly different approaches to the same data. Hanushek, using a common method known as "vote counting," essentially calculated the proportion of studies that found significant correlations between increased spending and improved student achievement. Of the several dozen previous studies he examined, only 20 percent showed a strong positive effect.
The method used by Hedges, who is a statistician and education professor at the University of Chicago, is known as a "meta-analysis." It took into account the magnitude of the effects that were found. In other words, if students' standardized-test scores rose, Hedges wanted to know by how much. Employing that method, he and his colleagues found strong ties between student achievement and both per-pupil funding levels and teacher experience. Other factors--such as class size, teacher education, teacher salary, administrative staffing, and facilities--showed less connection to achievement.
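The difference between the two methods can be sketched in a few lines of code. The (effect, standard error) pairs below are invented for illustration; none of these numbers come from the studies Hanushek or Hedges actually reviewed.

```python
import math

# Invented (effect estimate, standard error) pairs standing in for the
# kinds of spending-achievement results the reviewed studies reported.
studies = [(0.10, 0.08), (0.05, 0.06), (0.20, 0.07),
           (0.02, 0.05), (0.12, 0.09), (-0.03, 0.06)]

# Vote counting (Hanushek-style): tally the share of studies that are
# individually significant and positive (two-sided z test at the 5% level).
positive_votes = sum(1 for est, se in studies if est / se > 1.96)
print(f"{positive_votes} of {len(studies)} studies significant and positive")

# Fixed-effect meta-analysis (Hedges-style): pool the magnitudes,
# weighting each study by its precision (1 / se squared).
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled effect {pooled:.3f}, z = {pooled / pooled_se:.2f}")
```

With these invented numbers, only one of the six studies clears significance on its own, so the vote count looks discouraging, yet the pooled estimate is positive and statistically significant. That gap between the two summaries is the crux of the disagreement.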
Richard Murnane, an economist and professor at the Harvard Graduate School of Education who contributed to the Brookings study, points out that even if the majority of school districts do not use money in ways that make a large difference in student learning, the minority of districts that do could compensate for all the others. Both of the studies "are telling you useful information," says Murnane, who is writing a book on the subject.
Even Hanushek, who in the May issue of Educational Researcher gave a pointed response to the Chicago researchers, now says the two studies are "in complete agreement."
"There are some places that use money ineffectively and some that use it effectively," he says. "If you throw money at schools, you get about the rough average." He points out that his study found no systematic links between money and results, not a complete absence of links. But the links he found, he says, are not enough to build policy on.
But neither are Hanushek's findings, Hedges argues. To the extent that the existing studies have found relationships between more money and performance, he asserts, "the relationships are positive and some are quite positive."
To some extent, the debate is as much about research methodology as it is about whether giving schools more money improves student achievement. Meta-analyses, like the one Hedges and his colleagues carried out, have been used increasingly over the past 15 years or so in psychology, medicine, and social science research. But Hedges points out that such studies are still relatively rare in the realm of economics, Hanushek's field.
Betsy Becker, a professor of statistics and quantitative analysis in Michigan State University's College of Education, says she prefers the meta-analytic approach. "Magnitudes are always more interesting," she says. "But if you don't believe in looking at magnitude of effects, you're not going to believe the numbers anyway."
Hedges says the vote-counting method is flawed, in part because errors in the studies can compound. "If the individual studies are relatively weak, which is the norm," he says, "then there is a good chance the results won't be there."
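Hedges' objection can be illustrated with a small simulation. Suppose a modest true effect really exists but each study measures it with a small sample; the effect size, sample sizes, and number of studies below are all assumptions chosen for illustration, not figures from the actual literature.

```python
import math
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.1   # assumed small true effect, in standard-deviation units
N_PER_GROUP = 50    # deliberately small samples -> individually weak studies
N_STUDIES = 100

def one_study():
    # Compare a hypothetical "higher-spending" sample with a control sample.
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / N_PER_GROUP
                   + statistics.variance(control) / N_PER_GROUP)
    return diff, se

results = [one_study() for _ in range(N_STUDIES)]

# Vote counting: most weak studies fail to reach significance on their own.
votes = sum(1 for d, se in results if d / se > 1.96)

# Pooling the same estimates (fixed-effect meta-analysis) recovers the
# effect clearly, because precision accumulates across studies.
weights = [1 / se ** 2 for _, se in results]
pooled = sum(w * d for w, (d, _) in zip(weights, results)) / sum(weights)
z = pooled * math.sqrt(sum(weights))
print(f"{votes}/{N_STUDIES} studies individually significant; pooled z = {z:.1f}")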
In truth, both studies leave something out. In Hanushek's work, for instance, studies that found small, positive--but statistically insignificant--effects were discounted. And in the Chicago study, the researchers had to discard studies that did not include enough information. Typically, those were studies in which researchers may have said there was "no significant effect" without indicating whether that tiny effect was positive or negative. In his written response to Hedges' study, Hanushek contends that practice reduced the pool of studies the Chicago researchers used by 20 percent to 30 percent.
According to Becker, the task now is to go beyond the question of whether money makes a difference. "If we didn't think it made a difference, we wouldn't have been spending it all these years," she says. "When you find an effect, the next question ought to be: