The Bottom Line
School finance experts may have refined their models for determining how much it should cost to adequately educate students, but that doesn't mean they always agree on the results.
How much does it cost to provide students with a sound basic education? It depends on whom you ask.
For the past 15 years, school finance experts have been searching for research-based ways to quantify the costs of providing an education. Those experts have devised their own methods of estimating such costs, and the approaches they use and the assumptions they make sometimes yield drastically different dollar figures.
The price tag recommended by studies in Kentucky varied by as much as 40 percent, depending on the researchers’ methods and assumptions.
In New York state’s ongoing school finance case, studies cited by the state and the plaintiffs diverged dramatically. Researchers hired by the plaintiffs estimated that the state should spend an additional $8.4 billion a year to comply with a 2003 decision by the state’s highest court. The state’s study, by contrast, suggested New York could meet the court’s demands for providing adequate resources to New York City schools with an increase of between $2.5 billion and $5.6 billion. New York now spends about $30 billion a year on pre-K-12 education.
And in Maryland, two studies conducted by the same team of researchers, but using different methods, differed by nearly $1 billion in their estimates of an adequate funding level. One study said the state should raise spending by 34 percent; the other called for a 44 percent increase.
While the scholars who produce such reports acknowledge that they often provide different cost estimates, they say that their work can be a valuable starting point for political and judicial debates over how much money is needed to provide an adequate education.
But the conflicting numbers may simply lead policymakers and judges to question the validity of the studies, other observers say. “It begs some serious questions about which ones are accurate,” says Steve Smith, the former director of the National Conference of State Legislatures’ school finance project.
For more than 15 years, courts have been telling states that they aren’t adequately financing their schools. Starting with a Kentucky Supreme Court decision in 1989, jurists have outlined the outcomes of an “adequate” education. But they haven’t answered the central question: How much does it cost?
Rather, the courts have deferred to the political branches of government on that bottom-line issue. So, state officials, and the researchers working for them, have been searching for scientifically valid ways to put a dollar amount on adequacy.
“These studies will help us focus on what we ought to spend resources on and make a plan to get there,” says Lawrence O. Picus, a professor of education at the University of Southern California, who has conducted adequacy studies in Arkansas, Kentucky, and elsewhere.
“That’s what we need to be thinking about in school finance: What is it going to take and how are we going to get there?” he adds. “Adequacy studies are the way to lay out the beginning of those plans.”
According to the Education Week Research Center’s annual policy survey for Quality Counts 2005, 30 states have had adequacy studies conducted, with six of those still underway as of last fall. For this year’s edition of Quality Counts, Education Week examined eight adequacy studies in three states—Kentucky, Maryland, and New York—to figure out why they’ve come up with such different estimates, the assumptions behind the studies, and the strengths and weaknesses of each approach. In each state, policymakers and jurists have weighed, or are now weighing, the studies’ findings as they decide how to ensure an adequate education for every child.
School finance experts like Picus have been advising state legislatures and testifying in finance lawsuits for decades.
Starting in the late 1970s, Jay G. Chambers and Tom Parrish conducted initial studies estimating the costs of providing education in Illinois and Alabama.
But not until the court decisions in the 1990s started the debate over adequacy did such experts hone a scientific way to tally the educational costs of reaching student-achievement goals.
School finance litigation turned in a new direction in 1989, when Kentucky’s highest court handed down its decision in Rose v. Council for Better Education, declaring the state’s entire K-12 governance and finance system unconstitutional.
Kentucky’s funding approach failed to pass muster in large part because it didn’t provide adequate resources for the state’s schools. The supreme court detailed a set of skills and knowledge that students should acquire before the school system would be considered constitutional.
Until then, the legal debate in finance cases had focused on whether state funding was equitable across districts. To help answer that question, finance experts came up with statistical methods to evaluate the fairness of school funding formulas.
In the Kentucky case, John G. Augenblick, an independent consultant based in Denver, says he and other school finance experts provided the state with a general estimate of how much it might cost to achieve the court-mandated outcomes. “It was not [a number] that anybody felt particularly good about,” he says now.
The quality of adequacy estimates began to improve in 1995, when Wyoming hired a team of experts to help it respond to a state supreme court ruling there that deemed school funding inadequate.
“We didn’t have any alternative available to us but to invent the ‘professional judgment’ model,” says James W. Guthrie, a Vanderbilt University professor of public policy and education.
Guthrie and his team gathered groups of educators familiar with research on effective educational strategies. They gave the educators a list of programs with compelling research supporting the interventions, such as small class sizes in the early grades and preschool for disadvantaged children.
Then Guthrie and his team asked the focus groups to design model schools from the ground up based on that research—including, for example, the number of teachers and paraprofessionals required. Later, the research team estimated what it would cost to create such schools.
The team delivered its answer in 1997: Based on the advice, Wyoming needed to spend about $6,200 per elementary school pupil, $6,400 per middle schooler, and $6,800 for every high school student.
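The costing step in a professional-judgment study — translating a panel's model-school design into a dollar figure — amounts to staffing arithmetic. The sketch below illustrates the idea; every number in it (enrollment, class size, salaries, the non-staff allowance) is hypothetical and not drawn from the Wyoming study.

```python
# Hypothetical sketch of the professional-judgment costing step: once a
# panel specifies a model school's staffing, the research team prices it
# out per pupil. All figures here are illustrative only.

def per_pupil_cost(enrollment, class_size, teacher_salary,
                   aides, aide_salary, other_per_pupil):
    """Turn a panel's model-school design into a per-pupil dollar figure."""
    teachers = -(-enrollment // class_size)  # ceiling division: classrooms needed
    staff_cost = teachers * teacher_salary + aides * aide_salary
    return staff_cost / enrollment + other_per_pupil

# A hypothetical 400-student elementary school with 15-pupil classes:
cost = per_pupil_cost(enrollment=400, class_size=15, teacher_salary=45_000,
                      aides=6, aide_salary=20_000, other_per_pupil=2_500)
print(cost)
```

Because staff are the largest line item, small changes in the panel's class-size choice move the bottom line substantially — which is why, as the article notes later, critics say such panels "tend to overspecify."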
The Wyoming Supreme Court eventually endorsed the results, but asked Guthrie’s team to recalculate a few of its numbers.
Since then, experts have crafted three other methods for producing the cost estimates in addition to the “professional judgment” model.
Under the “successful schools” approach, experts examine the expenditures in a state’s most effective schools or districts, typically as defined by test scores. The assumption is that other places could achieve similar results for the same costs. Although the approach adjusts for differences in student needs, even its biggest advocates suggest those adjustments aren’t as reliable as they could be.
In the “evidence based” model, consultants identify practices verified as effective by research—such as small class sizes in the early grades—and tally the cost of using those strategies in all schools.
Finally, under the “cost function” approach, economists use complicated statistical analyses to examine the relationship between current spending and student achievement. They then determine what it would cost to bring all students to a particular level of performance, after accounting for differences in student and district characteristics, such as poverty.
While proponents of adequacy studies say they can help provide a rational basis for school spending decisions, critics worry that those studies will drive up the cost of public education and, potentially, lead policymakers to be even more specific about how schools must spend their money.
Experts point out that each of the approaches has its strengths and weaknesses. And different people prefer to use different methods. Cost-function analyses, for example, often narrow educational outcomes to what can be easily measured, such as test scores. Panels of education experts, meanwhile, often have a hard time agreeing on what it would take to reach a particular level of school performance.
Augenblick, who favors the successful-schools model, encourages states to conduct studies using at least two approaches, thus giving policymakers the ability to weigh the pluses and minuses of each one.
“You learn something from doing it multiple ways,” he says. “It gives people somewhat more flexibility” when making funding decisions, which ultimately are political.
The professional-judgment model is one of the most popular methods because it relies on experienced educators to determine what’s needed to provide a sound basic education, and the results are easy for legislators and others to understand.
“We prefer the professional-judgment approach, not because we believe it is more precise than statistical or inferential methods [it may not be more precise],” Guthrie and fellow economist Richard Rothstein write, “but rather because the imprecision is more transparent.”
But many experts suspect the method generates inflated cost estimates. Because the focus groups of educators are encouraged to disregard costs, they are prone to design a generous package of services for schools.
For example, in 2002, when Picus of USC and his team convened professional-judgment panels in Kentucky, the educators recommended that the state add five days to the school year and offer preschool to all 3- and 4-year-olds from families at 150 percent of the poverty level. They also wanted to limit class sizes to 15 pupils in K-3 classrooms and 20 students through the end of high school. The total bill for pre-K-12 schools: $6.2 billion a year—46 percent more than Kentucky was spending at the time.
A separate professional-judgment study conducted by Deborah Verstegen, a University of Virginia professor of education finance and policy, would have lengthened the Kentucky school year by 10 days and given teachers five additional professional-development days. It would have guaranteed class sizes of no more than 15 pupils up to grade 5. Kentucky fell almost $900 million shy of what it would take to provide those services, the report said.
When Picus’ team used the evidence-based approach, it proposed more modest strategies. The team suggested keeping the school year at its current length, but recommended class sizes of 25 in grades 4-12. His researchers also proposed the same preschool services as the professional educators. The total bill: $4.5 billion, or a 14 percent increase.
Educators “tend to overspecify,” Picus says. “They ask for very small classes and lots of support.”
And all of that pads the payroll—the biggest item in most schools’ budgets.
Guthrie, who pioneered the professional-judgment model, says the high cost estimates result because researchers who conduct such studies are not reining in the educators whose judgment they solicit.
In Wyoming and other places where he has used the method, Guthrie says he limits the array of options available for educators to choose from to those strategies that have been validated by research.
His panels, for instance, wouldn’t be allowed to demand reading specialists or after-school programs, as the Kentucky studies did, because those services lack such validation, Guthrie believes. He does, though, let his assembled educators choose small class sizes and preschools for disadvantaged students. “There are five to seven things you can rely on [in the research],” he says, “but there isn’t a whole long list.”
Instead of depending on educators to interpret research, Picus and his partners prefer to analyze it themselves.
For the evidence-based approach they used in Kentucky and Arkansas, they survey a state’s school services and then calculate the cost of upgrading them to meet best practices.
In Arkansas, their analysis proposed the same class sizes they had recommended for Kentucky, additional professional development for teachers, and $250 per student for technology.
After he completed his initial analysis for Arkansas, Picus says, he shared his findings with focus groups of educators.
“They said: ‘Wow, that’s a rich number of resources. That would really work,’ ” he says. “It turns out to be less than what professional-judgment panels would want.”
The weakness with the approach, Picus concedes, is that education research isn’t definitive in enough areas.
For example, he’s confident that research confirms the benefits of small class sizes, but no definitive research helps him estimate the cost of building and maintaining schools. “We don’t always have good evidence in every area,” says Picus. “But I don’t think that’s a reason to slow down the train and stop using our model.”
For the successful-schools model preferred by Augenblick, researchers analyze how money is spent by the best schools or districts in a state, typically defined by scores on state tests. They then calculate how much it would cost to spend that amount in every school.
When Augenblick and John Myers, both independent finance consultants, conducted a study for a blue-ribbon panel studying school finance in Maryland, they found that successful schools had lower enrollments of special education students and fewer disadvantaged youngsters. The researchers adjusted their estimates to account for the costs of reaching those students in other schools.
In the 2001 Maryland study, the team found that the state should spend $7.9 billion a year on precollegiate education—a 34 percent increase over what it was then spending. When the same team completed a professional-judgment estimate for the same state, it recommended that $8.8 billion be spent—10 percent more than under the successful-schools model.
Augenblick says the benefit of the successful-schools approach is that it is based on actual spending in schools that are already meeting the desired goals. But he acknowledges that the method doesn’t decipher exactly how much it costs to educate disadvantaged, special education, and language-minority students.
Skeptics counter that there’s no guarantee that successful schools are spending their dollars as efficiently as possible, and a lack of information on that score could drive up price estimates. Modified analyses based on the successful-schools model try to address such concerns by, for example, focusing on schools that meet the desired outcomes for the least cost or by identifying and excluding “outlier” schools and districts because they could skew results.
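The core successful-schools calculation — average per-pupil spending among districts that meet the test-score target, with some outlier screening of the kind described above — can be sketched as follows. The district data here are invented for illustration.

```python
# Minimal sketch of a successful-schools estimate: average spending in
# districts meeting the score target, trimming extreme spenders as a
# crude outlier screen. All data are synthetic.

def successful_schools_estimate(districts, score_target):
    """districts: list of (per_pupil_spending, test_score) tuples."""
    successful = sorted(s for s, score in districts if score >= score_target)
    # Drop the highest and lowest spender so outliers don't skew the base.
    trimmed = successful[1:-1] if len(successful) > 2 else successful
    return sum(trimmed) / len(trimmed)

districts = [(7200, 82), (6800, 79), (9500, 85), (6500, 81),
             (5900, 74), (7100, 80), (6300, 68)]
base_cost = successful_schools_estimate(districts, score_target=78)
print(base_cost)
```

Note that the $9,500 district clears the score target but is excluded by the trim — the kind of "outlier" screening the modified analyses use. Real studies then layer weights for special education and disadvantaged students onto this base figure.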
For the cost-function method, statisticians conduct a regression analysis, a statistical procedure used to examine the relationship between school spending, student achievement, and such data as student demographics and teacher salaries. Like the successful-schools model, cost-function analyses require policymakers to establish explicit, measurable outcome goals.
The advantage of the cost-function approach, according to Bruce D. Baker, a school finance expert at the University of Kansas, is that such studies can then look at how the costs of achieving those outcomes differ in districts with different characteristics, such as large concentrations of poor and minority students. If policymakers don’t like the price tag, they can adjust their achievement targets.
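In spirit, a cost-function study regresses district spending on achievement and district characteristics, then uses the fitted relationship to predict the spending associated with a target achievement level in each district. The sketch below shows that logic on synthetic, noise-free data; actual studies use far richer models, many more variables, and statistical corrections this toy omits.

```python
import numpy as np

# Illustrative-only cost-function sketch: regress per-pupil spending on
# test scores and poverty rates, then predict the spending associated
# with a target score in each district. Data are synthetic.

spending = np.array([8070., 7780., 7730., 8000., 7780., 7800.])
scores   = np.array([72.,   78.,   83.,   75.,   88.,   80.])
poverty  = np.array([0.35,  0.22,  0.15,  0.30,  0.10,  0.20])

# Fit spending = b0 + b1*score + b2*poverty by ordinary least squares.
X = np.column_stack([np.ones_like(scores), scores, poverty])
coef, *_ = np.linalg.lstsq(X, spending, rcond=None)

# Predicted per-pupil cost of reaching a score of 85 at each district's
# poverty level -- higher-poverty districts come out more expensive.
target = np.column_stack([np.ones_like(poverty),
                          np.full_like(poverty, 85.0), poverty])
predicted = target @ coef
print(np.round(predicted))
```

This is what lets the method answer Baker's question — how the cost of the same outcome differs across districts — and why raising or lowering the target score moves every district's predicted price tag.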
When school finance expert Andrew Reschovsky analyzed student-achievement data and district spending in Texas, he estimated that the state would need to double its current funding to boost student learning in the worst-performing districts. “The results ... seem to be pretty consistent in showing that high concentrations of poverty matter a lot, and substantially more resources will be needed to achieve whatever the standard in the state is,” Reschovsky, a professor of applied economics and public affairs at the University of Wisconsin-Madison, said in an interview.
Although the mathematical approach is scientifically valid, it too comes in for criticism. “It’s wildly subject to the assumptions you make,” Guthrie of Vanderbilt says. And it has the additional challenge of being hard to understand.
While scholars debate the pros and cons of each method, it will eventually be state legislatures and the courts that decide how any such research influences policy.
Alfred A. Lindseth, an Atlanta-based lawyer who helps states defend their school finance systems, suggests that “there are a hundred reasons why these studies can be challenged.”
For example, he says, a panel recommending a series of research-based programs doesn’t know whether those programs would work together to yield results. “These panel members are not experts on the cumulative effects of these programs,” Lindseth argues.
What’s more, no one has ever proved that schools will succeed based on the studies’ proposals. “There’s no empirical evidence because no school district or state has done what these studies purport to do,” Lindseth says. “None of [the studies] have ever been implemented and reached the results they said they were going to reach.”
But the studies will continue to play an important role in political debates over how much to spend on education, their purveyors say.
At the very least, they will be one of several factors that judges weigh. “They’ll look at them as a useful tool … and use them as a piece of evidence to base their decision on,” says Smith, formerly of the NCSL. “It won’t be the end-all and be-all, but it’ll be another piece of evidence they’re considering.”
And policymakers likely will continue looking to the studies for guidance, the experts predict.
“More than anything, what you’re trying to do is to get the legislature to be rational,” Augenblick says. “The message from the courts is: You better have a rational basis for your findings. It can’t be pure politics.”
For now, school finance experts say that their studies are the best estimates available. “The [studies] provide a process and a systemic structure,” says Jay G. Chambers, a senior research fellow for the American Institutes for Research, a Washington-based nonprofit organization.
“It’s a starting point for a debate,” he adds. “It’s not the final answer.”
Vol. 24, Issue 17, Pages 29-30, 32, 35-36. Published in print: January 6, 2005, as The Bottom Line.