A team of Johns Hopkins University researchers, looking at studies on 29 popular schoolwide improvement programs, has concluded that the comprehensive models are better than the status quo when it comes to raising student achievement.
Read the 44-page report, "Comprehensive School Reform and Student Achievement: A Meta-Analysis," from the Center for Research on the Education of Students Placed at Risk.
The analysis, published online last week by the university's Center for Research on the Education of Students Placed at Risk, is the first to look across a wide range of programs to gauge whether such schoolwide strategies, overall, make a difference in student achievement. It found that students in schools taking part in such programs outperformed 55 percent of their counterparts in nonparticipating schools.
While that kind of performance edge may seem slight, the researchers said it beats more conventional approaches to educating students from poor families, such as the “pullout” programs schools traditionally provided with federal Title I grants.
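For readers curious how the 55 percent figure relates to the effect sizes discussed later in the study, meta-analyses conventionally translate a standardized mean difference (Cohen's d) into a percentile by assuming normally distributed scores. Below is a minimal sketch of that conversion in Python; the specific numbers are back-of-the-envelope illustrations implied by the article's percentile figure, not figures taken from the report itself.

```python
from scipy.stats import norm

# Meta-analyses report a standardized mean difference, Cohen's d.
# Under a normal-distribution assumption, the share of comparison
# students the average program student outperforms is norm.cdf(d).

# Illustrative only: the effect size implied by the article's
# 55th-percentile figure (not a number quoted from the report).
implied_d = norm.ppf(0.55)
print(f"implied effect size: d ~= {implied_d:.2f}")  # about 0.13 SD

# The reverse direction: a given d maps back to a percentile.
print(f"percentile for d = 0.13: {norm.cdf(0.13):.0%}")  # about 55%
```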
“It’s a big effect when you look at it in the context of other sorts of interventions that preceded comprehensive school reforms,” said Geoffrey D. Borman, the lead researcher on the two-year project. The other authors are Gina Hewes and Laura Overman, both researchers from the university in Baltimore, and Shelly Brown, who has since moved to the University of North Carolina at Greensboro.
Schoolwide approaches to improving student achievement got a boost from the federal government in 1998. That year, Congress enacted the Comprehensive School Reform Demonstration Program, which provides grants for schools to try out comprehensive improvement models. Now, researchers estimate that thousands of schools across the country are using such approaches, which include many popular, off-the-shelf programs such as Success for All and Core Knowledge.
The movement also spawned a handful of practical, Consumer Reports-style reviews that were intended to help educators sort out which programs had a strong research base and which didn’t.
Strong Evidence
Like earlier reports, the new study identifies three programs with the “strongest” evidence of effectiveness.
They are: Direct Instruction, an approach developed in the 1960s by University of Oregon professor Siegfried Engelmann; the School Development Program, a model created by Yale University's Dr. James P. Comer to address students' social and emotional as well as academic needs; and Success for All, a program pioneered by Robert Slavin and Nancy Madden, also from Johns Hopkins.
The Hopkins reviewers delved deeper, however, examining whether common elements of programs were linked to improved student achievement and whether the results varied when different methodologies were used or when the developers conducted the studies themselves.
One of the team’s findings is that results were stronger the longer a program was in place. The positive effects for schools with programs in place for five years, for example, were 2½ times larger than those for all programs combined.
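To make that multiplier concrete on the percentile scale used above, here is the same back-of-the-envelope arithmetic under the same normal-distribution assumption; again, this is an illustration, not a calculation from the report.

```python
from scipy.stats import norm

# Hypothetical arithmetic: scale the ~0.13 overall effect implied by
# the 55th-percentile figure by the reported factor of 2.5.
overall_d = norm.ppf(0.55)      # ~0.13 SD, implied overall effect
five_year_d = 2.5 * overall_d   # ~0.31 SD for five-year implementations
print(f"five-year percentile: {norm.cdf(five_year_d):.0%}")  # roughly 62%
```

In other words, under these assumptions a student in a five-year program would outperform roughly 62 percent of comparison students, rather than 55 percent.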
But the research team found no evidence linking particular elements of schoolwide programs, such as whether they require specific pedagogical practices or curricular materials, to improved academic performance by students.
Of 11 such characteristics analyzed, only one—whether a program involves parents and the community in running the school—seemed to affect achievement. That association, however, was negative. One reason for that finding, the report suggests, may be that such efforts sidetrack schools from the main purpose of improving student achievement.
By comparison, studies yielded much stronger positive effects when the results were measured through test-score gains of a single group of students, rather than by more experimental approaches comparing students in participating and nonparticipating schools.
Studies conducted by program developers also produced much bigger effect sizes than those by third-party evaluators.