Does Money Matter? Both Sides in Debate Have a Point
When the Brookings Institution this month unveiled a book calling for schools to adopt more disciplined spending practices, it waded into an academic and political morass that has been growing in recent months.
The central question at issue is this: Does spending more money on schools improve students' academic achievement? The Brookings report, called Making Schools Work: Improving Performance and Controlling Costs, concludes that more money makes no difference--that is, at least, if it is spent in the ways schools have typically used it. (See Education Week, Oct. 12, 1994.)
The panel of economists that put together the report based its conclusions in part on studies done in the 1980s by Eric A. Hanushek, the lead author of the new book.
Mr. Hanushek, a professor of economics and political science at the University of Rochester, analyzed all existing studies that looked for relationships between additional resources and student learning. He determined that, in most cases, those resources had had no effect.
Conservative critics of schools, such as former U.S. Secretary of Education William J. Bennett, over the years have championed those findings.
But, in another study published earlier this year, a different group of researchers looked at the same data and came to the opposite conclusion. Writing in the April 1994 issue of the journal Educational Researcher, Larry V. Hedges, Richard D. Laine, and Rob Greenwald said higher spending on schools had produced higher student achievement. (See Education Week, March 23, 1994.)
Who is right?
The answer, according to some experts, is that maybe both sides are.
Counting and Synthesizing
The two studies used markedly different approaches to analyze the data. Mr. Hanushek, using a common method known as "vote counting," essentially calculated the proportion of studies that found significant correlations between increased spending and improved student achievement.
Of the several dozen previous studies he examined, only 20 percent showed a strong positive effect.
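Vote counting, in other words, reduces each study to a yes-or-no vote and tallies the yeses, ignoring how large any effect was. A minimal sketch of the method follows; the study results are invented for illustration and are not figures from Mr. Hanushek's analysis:

```python
def vote_count(studies):
    """Return the share of studies reporting a significant positive effect.

    Each study is an (effect_size, significant) pair; vote counting
    ignores the magnitude of effect_size entirely.
    """
    positive = sum(1 for effect, significant in studies
                   if significant and effect > 0)
    return positive / len(studies)

# Hypothetical pool: most effects are small and non-significant.
studies = [
    (0.30, True), (0.25, True),      # significant positives
    (0.05, False), (0.08, False),    # small, non-significant
    (-0.02, False), (0.04, False),
    (0.10, False), (-0.05, False),
    (0.06, False), (0.07, False),
]

print(vote_count(studies))  # 2 of 10 studies "vote" yes -> 0.2
```

Under this tally, the eight small or null results count as evidence against an effect, no matter how the non-significant effects lean.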
The method used by Mr. Hedges, who is a statistician and education professor at the University of Chicago, is known as a "meta-analysis." It took into account the magnitude of the effects that were found. In other words, if students' standardized-test scores rose, by how much?
By that method, he and his colleagues found strong ties between student achievement and both per-pupil-funding levels and teacher experience. Other factors, such as class size, teacher education, teacher salary, administrative staffing, and facilities, showed less connection to achievement.
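A meta-analysis of this kind can be sketched as an inverse-variance weighted average of effect sizes, a common fixed-effect formulation. The effect sizes and variances below are invented for illustration and do not come from the Chicago study:

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

# Hypothetical standardized effect sizes from several studies,
# each small and individually non-significant on its own.
effects = [0.05, 0.08, 0.04, 0.10, 0.06]
variances = [0.002, 0.003, 0.002, 0.004, 0.003]

mean, var = pooled_effect(effects, variances)
z = mean / var ** 0.5
print(round(mean, 3), round(z, 2))  # pooled effect and z-score
```

Because pooling shrinks the standard error, a set of consistently small positive effects can be significant in aggregate even when no single study clears the bar, which is the crux of the disagreement over method.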
"If in the majority of places money is not used wisely, and if in the minority of places where it is used in economic ways that money could make a large difference, that could compensate for the large number that aren't using it wisely," said Richard J. Murnane, an economist and professor at the Harvard Graduate School of Education who contributed to the Brookings study. He is also writing a book on the subject.
"They both are telling you useful information," he said.
Even Mr. Hanushek, who in the May Educational Researcher gave a pointed response to the Chicago researchers, now says the two studies are "in complete agreement."
"There are some places that use money ineffectively and some that use it effectively," he said. "If you throw money at schools, you get about the rough average."
He pointed out that his study found no systematic links between money and results--not a complete absence of links. But those links, he concluded, were not enough of a foundation on which to build policy.
But neither are Mr. Hanushek's findings, Mr. Hedges countered last week in an interview.
"To the extent that any of them are finding relationships, the relationships are positive and some are quite positive," he said.
A Matter of Method
To some extent, the debate is as much about research methodology as it is about whether giving schools more money improves student achievement.
Meta-analyses, like the one Mr. Hedges and his colleagues carried out, have been used increasingly over the past 15 years or so in psychology, medicine, and social-science research. But, said Mr. Hedges, they are still relatively rare in the realm of economics, Mr. Hanushek's world.
Betsy J. Becker, a professor of statistics and quantitative analysis in Michigan State University's college of education, said she prefers the meta-analysis.
"Magnitudes are always more interesting," she said. "But if you don't believe in looking at magnitude of effects you're not going to believe the numbers anyway."
Mr. Hedges said the vote-counting method is flawed in part because errors in the studies can compound.
"If the individual studies are relatively weak, which is the norm, there is a good chance the results won't be there," he said.
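Mr. Hedges's point about weak studies is, at bottom, a statistical-power argument: if every study estimates the same small true effect with a large standard error, most studies will individually fail to reach significance, and a vote count will read that as no effect. A rough sketch under a normal approximation, with hypothetical numbers:

```python
from math import erf, sqrt

def power(effect, se, alpha_z=1.96):
    """Probability that a single study finds a significant positive
    effect, under a normal approximation to the test statistic."""
    z = effect / se - alpha_z
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical: a true effect of 0.1 standard deviations, measured
# by studies whose standard error is 0.07.
print(round(power(0.1, 0.07), 2))  # roughly 0.3
```

Here only about 30 percent of studies would "vote" yes even though every one of them is measuring the same real positive effect, which is how a genuine relationship can disappear under vote counting.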
Discounted or Discarded
In truth, both studies leave something out. In Mr. Hanushek's work, for instance, studies that have small positive--but statistically insignificant--effects would have been discounted.
In the Chicago findings, the researchers had to discard studies that did not include enough information.
Typically, those were studies in which researchers may have said there was "no significant effect" without indicating whether that tiny effect was positive or negative.
Mr. Hanushek, in his written response to Mr. Hedges's study, contends that practice reduced the pool of studies the Chicago researchers used by 20 percent to 30 percent.
The researchers also differed over technical matters. For instance, if several findings from the same study were used in the larger analysis, could those findings have been influenced by some other factor that is unique to the study from which they were taken?
Ms. Becker said the task of researchers now is to go beyond the question of whether money makes a difference.
"If we didn't think it made a difference, we wouldn't have been spending it all these years," she said. "When you find an effect, the next question ought to be: Why?"
Vol. 14, Issue 07