Commentary

The School Improvement Gains Nobody's Talking About


Three recommendations for improving education research


In Houston, high doses of tutoring helped raise test scores at struggling secondary schools. In San Francisco, extended learning time for students and workshops for parents made a difference. And throughout Massachusetts, greater autonomy for schools and a collegial, collaborative, and professional culture for teachers were among the factors helping to turn around schools.

You won’t find these results in the federal study of the School Improvement Grant program, an Obama-administration initiative that eventually directed as much as $7 billion into low-performing schools across the country.

The 2017 study released by the U.S. Department of Education reported no significant improvement for schools receiving the grants, a finding that U.S. Secretary of Education Betsy DeVos has repeated when she talks about how pointless it is to spend more money on struggling public schools. But that’s not the whole story.

With states set to spend more than $1 billion in federal money in the coming year on school improvement, it’s important that we learn the full lessons of the SIG initiative—and consider revamping how the Education Department evaluates progress.

"A shift in the federal research mindset is critical so that we can produce more useful findings to support school improvement under the Every Student Succeeds Act."

In a report from FutureEd, an independent think tank based at Georgetown University, we looked beyond the federally funded study at every published state and local study of SIG programs. In 12 of the 17 studies we uncovered, covering about 450 grant-receiving schools, SIG schools made significant gains in student reading or math achievement compared to public schools that didn’t get SIG support. For a typical 3rd or 4th grader, these average gains represent 40 percent of the expected yearly increase in reading achievement and half the expected gain in math.

In California, a study found that a dollar spent on SIG grants was more cost-effective at improving student achievement than a dollar spent lowering class size. In an Ohio study released this year, SIG schools made significant gains in reading and math compared to schools not in the program. At the local level, SIG schools in San Francisco reduced the achievement gap with other schools in the district by 35 percent and reduced the odds of unexcused absences by 24 percent. And in Ohio and Texas, graduation rates increased by an additional 9 and 5 percentage points, respectively, in high schools that received the grants.

Why did the federal study show no significant gains? We believe that the study’s “statistical power” was so weak that the student performance gains in SIG schools “would have had to be unrealistically large for the study to have been able to detect them,” in the words of Brown University researcher Susanna Loeb. Moreover, the sample of about 190 schools was not nationally representative, so the results can’t be generalized to the nation. And the states in the study had widely varying processes for awarding and administering the grants, introducing statistical noise that can drown out positive findings.

The shortcomings of the SIG analysis warrant big changes in the way the federal Education Department and its research arm, the Institute of Education Sciences, conduct program research. A shift in the federal research mindset is critical so that we can produce more useful findings to support school improvement under the Every Student Succeeds Act. Of course, accountability is still important, but we believe studies have their greatest utility when they identify how to improve the neediest schools in each state.

Which brings us back to Houston, San Francisco, and other districts. In contrast to the national study, the state and local studies we reviewed yielded valuable insights into what works in turning around failing schools. And beyond specific strategies, they point to the importance of comprehensive reform work (composed of four or five components) that is supported externally and sustained over multiple years.

These efforts were not reinventing the wheel. The San Francisco work, for instance, was replicating five approaches identified by the University of Chicago Consortium on Chicago School Research, including an emphasis on strengthening school leadership and a student-centered learning climate.

In Houston, Denver, and Lawrence, Mass., SIG initiatives produced positive findings using the strategies that Harvard researcher Roland Fryer found in successful charter schools: increased instructional time, a rigorous approach to building human capital, high-dosage tutoring, frequent use of data to inform instruction, and a culture of high expectations. In fact, the 2014 Houston study found that adding small-group, high-dosage tutoring to the reforms boosted secondary students’ achievement gains by 200 percent relative to peers who did not receive tutoring.

In North Carolina, researchers learned that the required replacement of principals had no effect on teachers’ perceptions of the quality of leadership, perhaps because many of the new principals were inexperienced. Such findings point to the importance of understanding how interventions work within local contexts. It’s hard to learn these lessons from a national study.

Making matters worse, the national study didn’t publish outcomes until after the SIG work was completed. That is the Education Department’s standard practice, but it left policymakers and practitioners waiting to learn what worked under SIG and why.

All of this calls for a different approach to education research, one based on a federal-state partnership. Toward that end, we make three recommendations:

    1. The U.S. Department of Education should use the school outcome and other data it already collects from every public school in each state to generate annual or biennial state-specific studies of school improvement efforts.

    2. The department should help states conduct their own in-depth studies of school improvement using their greater access to school data and knowledge of educational contexts.

    3. The department should initiate a National Research Council committee to synthesize the research in the United States and internationally on turning around low-achieving schools.

This fundamentally different approach to studying program interventions would keep a single, nationwide analysis from being the last word on a program playing out differently in schools and districts across the country. And the new approach can inform how valuable education resources could be used more effectively to improve academic success for our most vulnerable students.

Web Only
