After Congress created a hefty grant program in 1997 designed to encourage some of the nation’s poorest schools to adopt comprehensive, research-proven improvement programs, Robert E. Slavin expected to be deluged with requests.
After all, Success for All, the reform model the Johns Hopkins University researcher had helped develop, had accumulated a long record of studies pointing to its effectiveness. Mr. Slavin figured the program was a prime candidate for the grants.
But the deluge never came for Success for All. Nor did it come, for that matter, for Direct Instruction or the Comer School Development Program or many other national school improvement models that are widely considered to have solid evidence of success.
In fact, some experts and program developers say, such “research proven” programs are getting a smaller and smaller share of the pie under the 7-year-old initiative now called the Comprehensive School Reform program, as schools opt instead for home-grown and commercial programs with weaker research bases.
The trend is worrisome, those scholars contend, because the federal program adopted under President Clinton blazed the trail for the Bush administration’s ongoing campaign to prod schools to rely on “scientifically based research” for the educational decisions they make.
“If it turns out within a short period of time that their scientifically based research policies didn’t lead to any scientifically based research practices, the whole concept could be discredited for a generation,” Mr. Slavin said. “That would be an enormous tragedy.”
But federal education officials suggest the developers’ concerns may stem from their own declining market share in the face of rising competition.
“You see less use of brand-name models, yes,” said one Education Department official who asked not to be named. “But I don’t think you can jump from that to saying schools are using less research-based strategies.”
With its push for research-proven strategies and its insistence on moving away from piecemeal, hit-or-miss approaches, the Comprehensive School Reform Demonstration program broke new ground in the 1990s.
Under the No Child Left Behind Act, the program was renamed the Comprehensive School Reform program. The 2001 law bolstered the program’s commitment to research-proven improvement strategies by sweeping it under a wide umbrella of educational practices that the law says should be governed by “scientifically based research.”
To be sure, a majority of the $1.3 billion in grant money that has been disbursed under the program since its inception has gone to Title I-eligible schools that have adopted either research-tested or recognized national models, such as Success for All. A small amount of grant money, which is disbursed by the states, has been awarded to schools that are not eligible for Title I.
But a report released earlier this year by the Washington-based Center on Education Policy found that the percentage of those grants going to programs with some evidence of effectiveness has shrunk. For its yardstick on research quality, the center used a 1999 report by the American Institutes for Research, a Washington think tank, that ranked programs by their research bases.
The center found that the percentage of schools using programs deemed by the think tank to have “positive” effectiveness evidence declined from 20.2 percent in 1998 to 8.1 percent in 2002.
Over the same period, the proportion of schools using programs characterized as having “weak or mixed” bases, or no research at all, grew from 12.4 to 18.2 percent.
The numbers may be misleading, though, said Arthur W. Gosling, the director of the National Clearinghouse for Comprehensive School Reform, located at George Washington University in Washington, because some of the models rated as having weak evidence in 1999 have since built up their research bases.
For example, the AIR researchers rated the evidence for the Comer School Development Program, a program developed by Yale University professor James P. Comer, as merely promising in 1999. But a more recent—and more rigorous—research review of 39 improvement programs singled out the Comer model as one of just three with the “strongest evidence of effectiveness.”
The program’s credibility boost, however, didn’t win it many more takers.
“We were disappointed that we didn’t get the demand we thought was coming,” said Edward T. Joyner, the executive director of the Comer program. “It didn’t look to me that there was any kind of oversight to make sure schools made choices that were research-based.”
Mr. Gosling said several other factors might account for what appears to be a trend away from research-based models. For one, some popular programs may not be able to show they are improving student achievement because their models have a different focus: They seek to change the processes that go on within schools.
In addition, Mr. Gosling said: “A lot of schools might develop something with a local university that’s unique to their situation. It’s impossible to do a large-scale study of those because they’re idiosyncratic. That doesn’t mean they’re not effective, but it does mean it’s not a transportable model.”
Some federal officials agreed this month, noting that the shift could reflect local decisions to move to districtwide kinds of strategies, rather than schoolwide reform models.
Comprehensive School Reform statistics show that schools are cobbling together more strategies in order to address all the components the program requires.
Mr. Gosling said he has noticed that some schools are bending to commercial pressures. “Some schools, quite frankly, are subjected to heavy marketing from commercial organizations, so they take the path of least resistance,” he said.
Mr. Gosling declined to name those for-profit companies.
One commercial developer that has benefited under the federal program is Lightspan, Inc.
Recently merged with Plato Learning Inc., of Bloomington, Minn., Lightspan is the second-most-popular model schools are buying with the $50,000-plus annual grants they get through the program, according to the Southwest Educational Development Laboratory, which tracks those awards.
In its literature database, the National Clearinghouse for Comprehensive School Reform lists 14 documents for Lightspan, compared with 541 for Success for All. Most of the entries for Lightspan are conference papers or implementation studies. None was published in a peer-reviewed journal.
“We’re caught in a chicken-and-egg situation when it comes to publishing,” said Bernice Stafford, Plato’s vice president of school strategies and evaluation. “Because we’re associated with a commercial firm, we’re shut out” from publishing in education research journals.
Ms. Stafford said Lightspan’s research literature may be sparse because it doesn’t fit the conventional mold. It’s meant, instead, to complement efforts already under way in schools.
A major thrust of the program, for instance, is its computerized testing-and-evaluation products, which enable schools to use data to chart their own progress and to align their curricula with national and state standards.
The program is nonetheless scientifically based, Ms. Stafford added, because all of its parts, taken individually, are based on strategies that research has shown to be effective.
Donna Carozza, a former principal of Union Elementary School in Texarkana, Ark., said she relied on the company’s research in choosing Lightspan for her school.
She also liked the program’s out-of-school component. As part of the program, parents can come to the school to check out Sony PlayStation equipment for their children to use at home in practicing their academic skills.
“I did not want to put any more stress on teachers than they already had,” Ms. Carozza said, noting that the K-5 school was facing pressure to raise its students’ test scores. “And we really felt we had to bring parents in.”
In the final analysis, federal officials say, dictating the kinds of programs schools can use is beyond their purview. No oversight mechanisms exist at the federal level to gauge the degree to which states ensure that the programs their schools choose are research-proven.
“It’s also partly a reflection of the amount of scientifically based research in education that’s available,” said C. Todd Jones, the Education Department’s associate deputy secretary for budget and strategic accountability. He said too few education programs have the kind of rigorous scientific evidence that leaves no room for dispute.
What’s more, he added, programs that are scientifically proven to work in, say, a rural Midwestern district won’t necessarily pass muster in urban classrooms with high concentrations of English-language learners.
Coverage of research is underwritten in part by a grant from the Spencer Foundation.