Will Success Spoil Success for All?

One of the primary limitations on Success for All's growth is the need to recruit and prepare enough trainers to keep pace. Three years ago, about 80 percent of the trainers were based in Baltimore. Today, most work out of their homes to be closer to participating schools around the country.

In addition, Success for All has contracted with three regional training centers that work with schools in their areas. These are the University of Memphis in Tennessee; WestEd, a federally funded education laboratory based in San Francisco; and Educational Partners, a for-profit consulting company also located in San Francisco.

"We truly put enormous efforts into making sure that if somebody calls themselves a Success for All school, they are, certainly at the outset," Slavin says. "We are sweating the details to a great degree. And, eventually, we will walk away from schools that are not implementing the key elements of the program."

But despite such precautions, Slavin readily admits that the program doesn't always work--especially in schools that are pressured to adopt it. Others, he adds, do not carry it out as planned.

An article in the December issue of Evaluation Review, written by three Maryland researchers who are not affiliated with Success for All, chronicles the difficulties of implementing the model in one school in Charleston, S.C., over a three-year period.

The school was required to participate in the program, rather than choosing it voluntarily. It did not implement the family-support component. And the regrouping of students based on assessments did not happen as often as the program calls for. In addition, interpersonal difficulties between the in-school facilitator and some of the teaching staff disrupted the implementation further. And when Hurricane Hugo struck Charleston in mid-September 1989, it threw all the district's schools into turmoil.

After three years, the researchers found, kindergartners in the Success for All school generally outperformed those in a control school on reading assessments. But they found no positive effects in the later grades, where many students continued to read below district standards.

Teachers' expectations for their students also declined over the three-year period. "Although lack of positive findings in one or two schools should not lead to the conclusion that a program does not work," the researchers write, "it does call into question whether it will always, without fail, succeed."

Slavin agrees. "Success for All does not always work," he says. "It has to be implemented."

In Louisiana, some teachers say they were pressured into adopting the program without knowing much about it.

Mary Jane Hollingsworth, the president of the St. Mary Parish Association of Educators, an affiliate of the National Education Association, says all elementary schools in her 12,000-student district are using the program in grades K-3. "There were some schools that voted no for the program--that did not have the 80 percent approval," she says. "And they kept going back and having to revote and revote until finally just everyone gave in and said yes."

"I don't think that's exactly correct," says Glenda Comeaux, the supervisor of elementary education for the St. Mary Parish district. "I think that some of them reacted at first with reluctance because of a lack of information. And then once everything was explained to them, and they understood the program, they did opt to go to the program."

But Hollingsworth says some teachers still complain that they are under tremendous stress and must work late into the night preparing for class. While some teachers are seeing improvements, she adds, others remain skeptical.

Such stories could become more common as states and districts try to mandate Success for All's adoption. In New Jersey, for example, Slavin has insisted that the program will not work with more than 50 schools in the first year, and that those schools must agree to participate.

"Where you have schools that basically are under compulsion to make a choice," he says, "this is going to be much, much more difficult."

Some educators, such as Allington, who chairs the reading department at SUNY-Albany, say Success for All invests too heavily in materials at the expense of teaching.

"I'm a reading person, and I don't think that the program actually reflects some of the best practices in reading," Allington says. "I don't think that it invests heavily enough in developing teacher expertise in reading."

Other researchers have begun to question the methods used to evaluate the program. Most of the studies have relied on individually administered reading tests chosen by Slavin and his colleagues, rather than standardized tests used by the districts.

Slavin explains that the researchers chose the tests--which measure word-attack skills, reading comprehension, oral reading, and letter-word identification--because group-administered, standardized tests are not valid measures for young children. In addition, he adds, the high-stakes tests used by districts are often subject to manipulation.

But whatever the virtues of such tests, argues Gary D. Gottfredson, one of the co-authors of the South Carolina study, eventually Success for All schools will have to perform on districts' own measures of student achievement. And, so far, it is not clear what proportion of students in Success for All schools are actually reading at grade level according to those measures.

In addition, Gottfredson and others contend, it is almost impossible to achieve a perfect match between a Success for All school and a control school, throwing the validity of such comparisons into question.

Slavin brushes off such criticisms. "There's a reason that, for 80 years, science has insisted on control groups," he says. "They're the best representation of what would have happened had you not implemented the program. Of course, there is a difficulty getting a perfect match between any pair of schools, but once you look at large numbers of comparisons, that's just not likely."

Susan Bodilly, a senior social scientist at the RAND Corp., a Santa Monica, Calif.-based research organization, says Slavin deserves credit for evaluating the program at all.

"So few education programs have been evaluated in any kind of fashion," she says. But, she adds, now that there are several hundred Success for All schools, "you can't get by much longer without a much more formal evaluation."

Her praise of the program is echoed by others in the research community.

"There are a lot of criticisms that one hears about it," the University of Chicago's Bryk says. "But in terms of the kind of organization it's developed ... it's much further along than any organization I know of. They do research, and they try to think about how to get these things to work in schools, and there's evidence that it actually works."

Success for All, he says, approaches school reform "as a kind of engineering problem."
