New federal guidance on using research to improve schools suggests that it’s not enough to find a study that supports a program—district leaders and researchers alike have to think more about who really benefits from an intervention and how.
If states and districts take up the guidance released this morning, it could deeply change how researchers and educators work together on education studies and could significantly broaden the array of students and schools that get studied. Experts also warn that without significant supports and training, it could be a high bar for most districts to clear.
First, a brief recap: The Every Student Succeeds Act gives states and districts much more room to be creative when it comes to school improvement than the narrow range of options available under the No Child Left Behind Act, but they must give evidence that their proposed interventions are likely to work. The law provides tiers of evidence, from strong (experimental trials) to moderate (quasi-experimental studies) to promising (studies that don't meet the higher standards of rigor but still statistically control for differences between the students using an intervention and those in a control group). In areas other than school turnaround where no rigorous research exists, states and districts can test an intervention while conducting their own study.
The U.S. Department of Education’s new guidance, released on Friday morning, is intended to flesh out that tiered system of evidence. It’s not legally binding in the way regulations are, but it encourages state, district, and school leaders to do more than just check a box for finding a good study that supports their chosen intervention. (For more on the likely accountability impact of the guidance, check out my colleague Alyson Klein’s coverage at Politics K-12.)
For example, to show strong evidence, the guidance calls for at least one randomized controlled trial that meets the standards of the federal What Works Clearinghouse or is of otherwise equal quality. That study has to show statistically significant benefits for the students on a relevant outcome, without being overshadowed by negative findings from other high-quality studies, and it has to be based on a large, multisite sample that includes children and settings similar to those where leaders hope to use the intervention.
Moderate evidence calls for the same standards, but for quasi-experimental studies. The guidance does not call for large or multisite studies for promising evidence, but it does tell education leaders to avoid programs with very mixed results across studies and to make sure the results are relevant to what the school wants to improve.
Providing More Context for Education Findings
Let’s stop and break that down a little, because it could spur a lot more nuance in education research. To meet the strongest level of evidence for a program in this guidance, it’s not enough to have a big, well-implemented randomized controlled trial. You have to make sure that you are measuring the thing that actually needs to change.
For example, a school with a high absenteeism problem may look at a program found to improve school climate. A closer look could reveal that students report “feeling more connected” to school, but no change in the actual number of students who miss school. That could mean the district should look at another climate program, or pair the program with ones designed to address other potential reasons students miss school, and then study the programs’ effects.
“What we’ve learned in the last eight years is really thinking about what’s the data we’re trying to collect—not just for compliance, which we were really good at, but about what we are trying to prove,” said Sonal Shah Beeck, the director of the Center for Social Innovation and Impact at Georgetown University, during a White House symposium Thursday on using social sciences in policy.
Who Improves in School Improvement?
Moreover, the guidance suggests that school improvement research needs to study the kind of children and schools that match the places where the intervention actually will be used. Critics have pointed out for years that there are an awful lot of studies that use middle-income white students from around college towns, and an awful lot that focus on poor minority students in urban schools. Those can be very high-quality studies, but the narrower populations are one reason it can be hard to scale a program that succeeded with poor urban black students to an equally poor suburban school with high numbers of Vietnamese English-learners.
“Just as one size doesn’t fit all when it comes to clothes or education initiatives, one study doesn’t fit all district and school contexts,” said Ash Vasudeva, the vice president for strategic initiatives at the Carnegie Foundation for the Advancement of Teaching, who was pleased with the new guidance. “When educators examine the quality of research, they should be looking at whether the studies were conducted with populations that reflect their own.”
The Institute of Education Sciences provided support for a more contextual approach earlier this week, with a new What Works Clearinghouse website that allows users to find research based on specific types of students, like English-learners, or school locations, like rural or urban schools.
Cycle of Improvement
The guidance also calls for states, districts, and schools to use research as part of an ongoing cycle of improving their own practice. As the chart below shows, the Education Department suggests an improvement science approach, in which districts build evaluation into planning and implementing interventions, and then use their results to improve the programs or change them going forward.
“It’s a great climate right now for an academic to be generating quality research about what policies work,” said Ben Castleman, an assistant professor of education and public policy at the University of Virginia, at the White House symposium. “As we continue to innovate, we need to rigorously evaluate whether these strategies work before we put a lot of money in.”
But at the symposium, Georgetown University’s Shah Beeck also warned that there must be more outreach and support from the research community itself to help districts and schools think about and use evidence in education differently.
“We have to train people on how to do this in communities that are not next to the University of Texas or the University of Oklahoma, but they are in communities where there are a lot of problems,” she said. “We need to make sure when we do evidence-based policy we aren’t leaving out large swaths of communities who don’t have access to [big university researchers]. We need to think about this tension that exists.”
Chart Source: U.S. Education Department
A version of this news article first appeared in the Inside School Research blog.