In the world of education reform, the list has caused quite a fuss. Success for All made it, but Core Knowledge did not. The Coalition of Essential Schools is in. Foxfire is out.
The roster isn’t a social registry, but to some critics it has conferred membership in an exclusive club. It is a list of suggested reform models that schools can adopt to qualify for their share of nearly $150 million in federal grants. The grant program seeks to raise performance at schools that serve large numbers of poor children through comprehensive--or “whole school”--reform.
Few federal education laws ever mention by name specific programs or curricular approaches. By law and tradition, such matters are left for local communities to decide. But the bipartisan legislation creating the Comprehensive School Reform Demonstration Program, passed in 1997, lists 17 reform models.
Reps. David R. Obey, D-Wis., and John Edward Porter, R-Ill., were trying to be helpful when they cited the models in their bill. Rather than set down vague criteria, they wanted to give concrete examples of what they meant by “successful, externally developed, comprehensive school-reform approaches” backed by rigorous research.
Since then, however, “the list,” as it has come to be known, has taken on a life of its own. Researchers and program developers question why some as-yet-unproven programs are on the list while others with better track records are not.
And states and districts are wondering whether they must use one of the “name brands” to qualify for the new grants.
“It’s a little bit analogous to the federal government saying we want you to use good cars, such as Fords and Plymouths,” said Herbert J. Walberg, a research professor of education and psychology at the University of Illinois at Chicago. “In my view, it was a grave mistake to name them.”
But supporters of the program point out that the list has stimulated some positive effects, too. Educators are having to ask tough questions--in some cases, for the first time--about the evidence justifying popular reform models. And program developers are scrambling for the proof they need to earn themselves a spot on future versions of “the list.”
Who’s In, Who’s Out
When it first surfaced, the roster of programs named in Obey-Porter, as the federal program is known, struck some researchers and program developers as a curious mix. Besides Success for All and the Coalition of Essential Schools, two of the most widely known reform models, the list also names: Accelerated Schools, ATLAS Communities, Audrey Cohen College, Community for Learning, Co-NECT, Direct Instruction, Expeditionary Learning Outward Bound, High Schools That Work, Modern Red Schoolhouse, the National Alliance for Restructuring Education (since renamed America’s Choice), Paideia, Roots & Wings, the School Development Program, the Talent Development High School, and the Urban Learning Centers.
Over its 13-year existence, Success for All has generated a long list of studies that for the most part show that test scores improve more in schools using the program than in demographically similar schools that do not. The Coalition of Essential Schools, a program based on the philosophy of noted educator and author Theodore R. Sizer, has also seen widespread use in schools.
But both programs also have their critics. Success for All has been criticized for relying too heavily on its own research, for example. And while the coalition’s strategies have produced some extraordinary successes, there are few hard data pointing to big test-score gains in member schools.
Some of the programs on the list are too new to have much of a history at all. Most of the younger programs are designs being piloted by New American Schools, a private corporation that in 1991 began underwriting a stable of reform models.
Two evaluations of the New American Schools models by the RAND Corp., a Santa Monica, Calif.-based research company, so far have looked only at whether schools were faithfully implementing the programs, not whether they work. Newer test-score data, however, suggest that some of those schools are beginning to see progress.
Out in the Cold?
Left off the list, meanwhile, were programs such as Core Knowledge, a 9-year-old cultural-literacy program pioneered by University of Virginia professor E.D. Hirsch Jr. Studies of the popular program, used by teachers in nearly 800 elementary schools, have also found improved test scores--particularly in schools with high proportions of poor students.
Likewise, the legislation made no mention of Different Ways of Knowing, a program used in 300 schools with similarly good results, or of the Child Development Project, a decade-old model developed in Oakland, Calif., that also has a string of studies to back it up. All three programs have been evaluated by independent researchers who compared students in those schools with matched groups of children in nonprogram schools.
Part of the problem, Mr. Walberg contends, is that very little research exists to back up the effectiveness of most of the popular reform models, including those on the Obey-Porter list.
“Very few have any evidence at all, and especially evidence that is independent of developers,” he said. “This kind of screening would never be acceptable in medicine.”
Not a Binding List
But Cheryl Smith, Mr. Obey’s appropriations assistant, said the federal lawmakers never meant to offer a definitive list.
“The intended purpose was to give people an idea of what we were talking about,” she said recently.
William R. Kincaid, the project manager for the program in the U.S. Department of Education, agreed. “In fact,” he noted, “we have said that states and districts should not assume that just because a program is on that list, it necessarily addresses all the criteria in the legislation.”
Besides having a research base, for example, the law requires that qualified programs be comprehensive, include professional development, and set measurable benchmarks and goals for progress. They must also have support from the faculty, involve parents in their efforts, and recruit an outside partner such as a university or a program developer to guide them. Plans for evaluating the program have to be in place, and educators must tie in other resources to bolster their reform efforts.
“We’ve also made clear that locally developed approaches are acceptable as long as they address the criteria in the legislation,” Mr. Kincaid added.
The problem, however, with the Obey-Porter criteria, argues Stanley Pogrow, an associate professor of education at the University of Arizona, is that they are tailor-made for programs such as Success for All. More narrowly focused approaches, such as Mr. Pogrow’s computer-based program for teaching higher-order thinking skills, might not by themselves qualify for funding. “This became a windfall for a very few programs and endangered the existence of programs not on the list,” he said.
Some program supporters characterize such comments as sour grapes. They note, for example, that schools may cobble together several models to broaden their reform efforts.
For now, it is too early to tell whether Mr. Pogrow is right.
The Southwest Educational Development Laboratory, a federally funded research laboratory in Austin, Texas, has begun compiling a database on the Obey-Porter program.
It shows that the 231 schools across the country that are already receiving the federal grants have put the money into more than 60 different programs. By next year, the grant program aims to support reforms in about 2,500 schools.
Success for All, developed at Johns Hopkins University in Baltimore, has won the most converts so far, with 30 schools. But the next-most-frequent choice was not on the federal list. It’s a home-grown strategy developed by the DePaul University Center for Urban Education in Chicago. Twenty-two Chicago-area schools chose that program, which is based on re-examining the way teachers think about the school calendar and the curriculum.
Phil J. Hansen, the chief accountability officer for the Chicago school district, said the schools chose that program because they had already invested time and money in it. In Chicago, schools placed on probationary status by the district because of consistently poor test scores must recruit outside partners and undertake comprehensive reforms, much like those required in Obey-Porter.
“It would’ve been foolish to say to schools that were already working successfully with partners, ‘OK, stop what you’re doing,’ ” Mr. Hansen said.
Seventeen other Chicago schools chose another DePaul program, known as School Achievement Structure, which is based on the effective-schools principles espoused by the late Ron Edmonds in the 1970s.
Moreover, some experts note that the new federal program may already be spurring better, more comprehensive, and more analytical information about what works in school reform. At least three overview reports reviewing the cost and effectiveness of popular reform programs have been made public in the 14 months since Obey-Porter passed; a fourth is due out this month.
The forthcoming publication, commissioned by five national education groups, may take the most critical look at reform models to date. Rebecca Herman, who is analyzing data on the various programs for the American Institutes for Research, a nonprofit organization based in Washington, said a key feature of the report is a table that gives Consumer Reports-style effectiveness ratings for 25 of the most widely known models. Only three, she said, get a rating indicating that their programs show “strong evidence of positive effects in student achievement.” She declined, however, to name them before the scheduled release of the AIR report late this month or early next month.
Robert E. Slavin, Success for All’s founder, praises the beneficial effects of including specific models in the legislation. “Just the process of tying federal funds to effectiveness had enormous benefits in terms of getting people to look at the evidence,” he said.
He and other experts pointed out that the legislation also calls for participating schools to produce and pool their own data on whether the programs they choose are producing results. The effect, they said, may be like pumping water into a statistical desert. Educators for the first time will begin to have good data on a wide range of reform designs.
“Different honest scholars would argue whether some programs should’ve been on the list, and the same honest scholars would argue that others should not have been on the list,” said Samuel C. Stringfield, a principal research scientist at the Center for the Study of the Organization of Schools at Johns Hopkins.
While he argued “there both should and should not have been a list,” he predicted the result will be that “over the next five or 10 years, we’ll get a better idea of what works, when, and why.”
A version of this article appeared in the January 20, 1999 edition of Education Week as Who’s In, Who’s Out