The Institute of Education Sciences, the U.S. Department of Education’s primary research arm, today launched a $7 million project to identify and quickly scale up effective practices to help students recover academically from pandemic disruptions.
The LEARN network, for Leveraging Evidence to Accelerate Recovery Nationwide, is one of three new research initiatives geared to pandemic recovery in schools, with others focused on supporting interventions in community colleges and helping state, regional, and district staff implement promising practices. But IES Director Mark Schneider believes it will take a widescale overhaul of education research and data to accelerate progress for the students who have fallen furthest behind.
Schneider spoke to Education Week about what’s needed to help students recover academically from the pandemic. This interview has been edited for length and clarity.
We’ve been trying to find effective ways to help struggling learners catch up for decades. What’s different about how you will be using networks like LEARN?
Mark Schneider: I’m concerned that the pace of traditional educational research is just too slow. My analogy is that when COVID came, we did Operation Warp Speed [to develop a pandemic vaccine]. The federal government invested across several vaccine producers: They preordered millions of doses from all of them and promised distribution—and the reason was that they were covering their bets.
What if instead we said, hey, why don’t you do pharmaceutical research like we do education research? Let’s give, you know, a couple million dollars to Moderna and then three, four, five years later it didn’t work—because most of [the attempted vaccines] don’t work. Then we’ll give money to Johnson & Johnson for a couple years; then if that doesn’t work, we’ll give it to Pfizer, etcetera. We’d have died using serial long-term investments, and with the stakes so high in COVID, [vaccine research] was never going to be like that. But that’s pretty much the way it is in education research.
If we are facing a crisis of the size that we’re facing, we can’t run serial five-year contracts. We have to fail fast. Run experiments fast; replicate the few things that work in different geographies and in different demographic groups. Rinse and repeat. That’s the model we have to be pursuing.
What do you think needs to change in our approach to understanding struggling students?
Schneider: Since No Child Left Behind in 2002, by law and by practice, we’ve focused on proficiency, because the goal under NCLB was to wipe out “below basic.” [That’s the lowest possible score on the National Assessment of Educational Progress.] [The goal was] to turn everybody into a proficient reader, writer, science, math [student]. Obviously, that didn’t happen, but because we were focused on getting everybody past the proficiency mark, we paid less attention than we should have to what was going on below basic. And the decline in below-basic [students’ achievement] is a trend that has only gotten worse.
First of all, I think that NAEP has to change. Everything about NAEP is slow and cumbersome. And I don’t think there’s any disagreement in the NAEP world that paying more attention to below basic is essential. There aren’t enough questions at the bottom of the distribution. And everybody knows that changing that is like redirecting a very big ship.
Many approaches to academic recovery rely on data use, and the pandemic caused a lot of disruption in state and federal data. How can we fix that?
Schneider: Look, we spent over $900 million building [State Longitudinal Data Systems], version one, right? And half of the money was spent in two years, 12 years ago, when these systems were built. I was commissioner of [the National Center for Education Statistics] in 2005 and signed the first two or three rounds of SLDS grants. So we’re talking ancient history. We need to think about a modern infrastructure for these incredibly important data systems and we need to think about how to integrate data across systems.
There are worlds of information buried in data streams all over the place. We have to get much more sophisticated about using data. If we don’t figure out how to merge data more effectively and protect privacy, we’re leaving lots of chips on the table. At the same time, I’ve been struggling with how to build a standard for ethical AI [artificial intelligence], because ... machine data can easily fall into all kinds of traps that build prejudice and discrimination into our models.