Who's In, Who's Out
In the world of school reform, "the list" has caused quite a fuss. It's an honor roll of sorts, a roster of reform programs cited as models in recent federal legislation. And who's on the list--and who isn't--is the talk of education's research community. Robert Slavin's Success for All program made the cut, but E.D. Hirsch Jr.'s Core Knowledge did not. Ted Sizer's Coalition of Essential Schools is there, but not the famed Foxfire teacher network.
What's at stake? Egos and reputations, of course, but also cash. The list is part of a $150 million federal grant program designed to spur school reform, and critics charge that much of that money will flow to the programs that carry what they say is the government's seal of approval.
Touting model programs is something new for Congress. Federal lawmakers rarely recommend specific programs or curricular approaches by name; such decisions are generally left to local communities. But bipartisan 1997 legislation aimed at spurring "whole school" reform broke that tradition, naming 17 models that schools serving poor children can adopt to get a share of the federal money.
The two sponsors of the measure, Democratic Representative David Obey of Wisconsin and Republican Representative John Edward Porter of Illinois, say they were just trying to be helpful when they cited the reform models in their bill. They wanted to give concrete examples of what they meant by "successful, externally developed, comprehensive school-reform approaches."
But "the list," as it has come to be known, has created division and rancor. Researchers and program developers alike question why a number of unproven programs won congressional blessing while others with better track records did not. "It's a little bit analogous to the federal government saying we want you to use good cars, such as Fords and Plymouths," says Herbert Walberg, a research professor of education and psychology at the University of Illinois at Chicago. "In my view, it was a grave mistake to name them."
But supporters of the Obey-Porter legislation say the list is working exactly as it should. Educators, they point out, are asking tough questions--in some cases, for the first time--about popular reform models. And program developers are scrambling to prove that their models deserve a spot on future lists.
When the Obey-Porter list first surfaced, it struck many educators as a curious mix of models. Besides well-known programs such as Sizer's Coalition of Essential Schools, the roster also includes a number of relatively obscure programs, such as the Direct Instruction model at the University of Oregon.
How these programs made the cut--and why others did not--is not clear. Success for All and the Coalition of Essential Schools are immensely popular reform models, but both have their critics. Slavin, a Johns Hopkins University scholar, has been chastised for promoting Success for All with findings from his own research. And while the coalition has produced remarkable results in some schools, it has not generated the kind of test-score gains policymakers like to see.
Some models on the list are new and unstudied. A number of these were piloted by New American Schools, a private corporation that in 1991 began underwriting a variety of experimental reform projects. Although the Rand Corp. has evaluated the New American Schools models, the research looked only at whether schools were faithfully implementing the programs, not at whether they produced results.
Inexplicably left off the Obey-Porter roster was Core Knowledge, a nine-year-old program pioneered by Hirsch, the University of Virginia professor and cultural literacy guru. Studies of the popular program, used by teachers in some 800 elementary schools, show test-score hikes--particularly in schools with high percentages of low-income students. Also ignored were Different Ways of Knowing, a program used in 300 schools with good results, and the Child Development Project, a decade-old initiative that has a string of studies to back it up. Independent researchers have taken the measure of all three of these programs.
Part of the problem, Walberg contends, is that little research exists to back up the effectiveness of most popular reform models, including those on the Obey-Porter list. "Very few have any evidence at all, especially evidence that is independent of developers," he says. "This kind of screening would never be acceptable in medicine."
But federal lawmakers drafting the Obey-Porter law never meant to offer a definitive list of effective programs, according to Cheryl Smith of Obey's staff. "The intended purpose was to give people an idea of what we were talking about," she says.
William Kincaid, manager for the federal program at the Department of Education, agrees. To qualify for funding, programs must be research-based and must be suitable for a whole school, he says. But they must also provide professional development for teachers, set measurable benchmarks and goals for progress, and involve parents. What's more, programs must be rooted in an organization--a university or reform group--that can guide schools. "We've made clear that locally developed approaches are acceptable," Kincaid says, "as long as they address the criteria in the legislation."
But those criteria aren't universally embraced. Stanley Pogrow, an associate professor of education at the University of Arizona, argues that the legislation excludes narrowly focused models that aren't comprehensive, such as his own computer-based program for teaching higher-order thinking. "This became a windfall for a very few programs," Pogrow complains, "and endangered the existence of programs not on the list."
It's too early to tell whether Obey-Porter will winnow the field of school reformers, as Pogrow predicts. So far, only 231 schools have received the federal grants, and they have sunk the money into more than 60 different reform models, according to the Southwest Educational Development Laboratory, a federal research facility in Austin, Texas, that has begun compiling a database on the federal program. Department officials hope the program will be supporting innovation in some 2,500 schools by next year.
Success for All has won the most converts, with 30 schools, lab researchers say. But the next most-frequent choice is not even on the Obey-Porter list. It's a strategy developed by the DePaul University Center for Urban Education in Chicago. Twenty-two Chicago-area schools chose that program, which encourages teachers to reexamine the academic calendar and curriculum.
Phil Hansen, chief accountability officer for the Chicago district, says the schools chose the DePaul model because they had already invested time and money in it. In Chicago, schools placed on probationary status by the district because of consistently poor test scores must recruit outside partners and undertake comprehensive reforms, much like those required in Obey-Porter.
"It would've been foolish to say to schools that were already working successfully with partners, 'OK, stop what you're doing,' " Hansen says.
Although still in its infancy, the federal program may already have prompted more discussion about what works in school reform. At least three reports reviewing the costs and benefits of popular reform programs have been made public in the 14 months since Obey-Porter was signed into law.
Another potentially revealing study is in the works. Researcher Rebecca Herman is analyzing data on the various programs for the American Institutes for Research, a nonprofit organization in Washington, D.C. The report was commissioned by five national education groups--the American Association of School Administrators, the American Federation of Teachers, the National Education Association, the National Association of Elementary School Principals, and the National Association of Secondary School Principals. It will feature a table that gives a Consumer Reports-style effectiveness rating to 25 of the most widely known programs. Though AIR will not divulge Herman's findings until the report's release, she says only three programs get high marks for raising student achievement.
Such public scrutiny of reform programs is exactly what Obey-Porter intended, backers of the legislation contend. Schools receiving the grants are supposed to produce and pool data on the efficacy of the programs, they argue, and the result will be like pumping water to a statistical desert. Educators for the first time will have reliable data with which to judge a wide range of reform designs. "Over the next five or 10 years," predicts researcher Samuel Stringfield of Johns Hopkins, "we'll get a better idea of what works, when, and why."
Vol. 10, Issue 6, Pages 20-21