As education research comes to rely on ever larger data sets and more complex statistical analyses, it can become less transparent and harder for teachers and policymakers to understand.
That’s why Larry Hedges, Northwestern University education professor and winner of the latest $3.9 million Yidan Prize, plans to put the bulk of the world’s largest education research prize toward building better statistical and methodological bridges between researchers and educators.
Hedges and fellow statistician Elizabeth Tipton are launching a new Center on Statistics in Evidence-Based Policy at Northwestern’s Institute for Policy Research, intended to improve the quality and transparency of studies for educators and policymakers. The center will take on a lot of methodological grunt work: improving guidelines for research clearinghouses and standards for research meta-analyses, for example, as well as the problem of how to replicate and scale up studies successfully.
He spoke with Education Week about what he sees for the future of education research and the role of the Institute of Education Sciences, where he chairs the advisory National Board for Education Sciences.
Education is entering a lot of new territory, with massive data sets and machine learning driving new interventions. Do you see that as a benefit, or a distraction?
“Clearly, data is being collected on a larger scale and in more varied forms than before. Those are things that are going to have important implications for education, and they will allow us to improve education research. But I don’t think for a minute they’re going to make obsolete the kind of principled data collection (surveys, experiments) that we do today.
“But that doesn’t mean it isn’t valuable to think about wholly new kinds of data and using them in different ways. When I come to the office, I sit down in front of my computer, and except for meetings where I’m talking to somebody, I’m working on the computer all day. That is at least one kind of modern work life: People will probably spend more time in front of their computers, and it’s how a lot of productive work gets done in the world today. Well, eventually schools are going to look a lot like that, if what we’re doing in schools is preparing people to be productive citizens. At some point, school is going to look a lot more like the way we do things in the workplace. And I’m not saying that because I’m somebody who believes that all education is just vocational training. I’m saying it because it’s the way we work in society, and it’s going to increasingly be the way we teach.”
What would that look like in practice?
“Embedded assessment is probably something that will be ubiquitous in the near future. And once you have embedded assessment, as well as some instruction that’s digital, it opens the opportunity for doing all kinds of experiments, all kinds of studies, that don’t involve disrupting the whole classroom the way our studies have to today. We frequently do cluster randomized trials, where everybody in the same school or classroom gets the same variation of a treatment. And we do that largely for practical reasons: You can’t give five different curriculums in the same classroom without the teacher having to pull their hair out. But that might work in a classroom of the future.
“In the 1980s, I was involved in the evaluation component of a Chicago school mathematics project. ... We spent $1 million doing randomized trials ... several rounds of studies. ... Now we can be in 30 or 40 schools around the country, and you can imagine that in the future we could test dozens of little questions that emerge: Should I teach this before that? How much emphasis should I put on this skill before I go to the next skill? There are tons of little questions that come up in development, and they also come up in instruction. ... You could imagine a situation in which you could field very complicated, very complex designs to evaluate a large number of design questions about how to put a curriculum together, and even begin to think about things like personalized education.”
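For readers unfamiliar with the design Hedges mentions, a cluster randomized trial assigns whole schools or classrooms, rather than individual students, to a condition. The short Python sketch below, using entirely hypothetical school and student identifiers, shows the basic mechanics of that assignment: the randomization happens at the school level, and every student simply inherits their school’s condition.

```python
import random

random.seed(0)

# Hypothetical identifiers; in a real trial these would come from district rosters.
schools = [f"school_{i}" for i in range(8)]
students = {s: [f"{s}_student_{j}" for j in range(25)] for s in schools}

# Cluster randomization: shuffle the schools and split them in half,
# so every student in a given school receives the same curriculum variant.
shuffled = random.sample(schools, k=len(schools))
assignment = {school: ("treatment" if i < len(shuffled) // 2 else "control")
              for i, school in enumerate(shuffled)}

# Each student record carries its school's condition, not an individual draw.
records = [(student, school, assignment[school])
           for school in schools
           for student in students[school]]

print(assignment)
print(records[:3])
```

The embedded-assessment future Hedges describes would relax exactly this constraint, letting different students in the same classroom see different variations without upending the teacher’s day.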
What would the new center do?
“We’re going to be worried about the methodology for doing those studies, because I’m a methodologist. ... We’re going to be trying to contribute to better practices for synthesizing research and ... disseminating that evidence to communities that can use it.
“I’ll give you an example. Clearinghouses [like the What Works Clearinghouse] often divide outcomes into groups called domains ... like reading and math, for example. So if a study finds a significant treatment effect for at least one outcome in a domain and no negative effects on outcomes in that domain, then it becomes a promising program. But then the question is, well, is that the right thing to do? Suppose there are a hundred different outcomes measured in a given domain; if one out of those 100 outcomes has a significant effect, is that promising? ... Almost everybody would say, well, that doesn’t seem quite right.
“We understand that there’s a problem there, that we ought to do something. But the question is, well, what should you do? What we wind up doing is making up rules of thumb, but what we don’t do is spend a lot of time evaluating those rules of thumb to make sure we understand their properties as decision procedures. So that’s one example [of what the center will do].”
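Hedges’ point about “properties as decision procedures” can be made concrete with a small simulation. The sketch below is an illustration under simplifying assumptions (independent outcomes, two-sided tests at the 5 percent significance level), not the What Works Clearinghouse’s actual review procedure: it estimates how often a program with no true effects on any outcome would still be labeled “promising” under a rule like the one he describes, as the number of outcomes in a domain grows.

```python
import random

def promising_under_null(n_outcomes):
    """One simulated study in which the true treatment effect is zero on every outcome.

    Under that null, each outcome's test statistic is (approximately) a standard
    normal draw; outcomes are treated as independent for simplicity.
    """
    z_stats = [random.gauss(0, 1) for _ in range(n_outcomes)]
    any_positive = any(z > 1.96 for z in z_stats)    # significant positive effect
    any_negative = any(z < -1.96 for z in z_stats)   # significant negative effect
    return any_positive and not any_negative         # the rule-of-thumb verdict

random.seed(1)
reps = 20_000
for n_outcomes in (1, 5, 20, 100):
    rate = sum(promising_under_null(n_outcomes) for _ in range(reps)) / reps
    print(f"{n_outcomes:>3} outcomes: labeled 'promising' in {rate:.1%} of null studies")
```

Even this toy version shows why the question matters: the chance that a no-effect program clears the bar depends heavily, and not in an obvious way, on how many outcomes happen to fall in the domain, which is exactly the kind of property Hedges says these rules are rarely checked for.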
What do you see as the Education Department’s role?
“I think the last 20 years have been the first serious effort to try to [improve rigor in research]. And although I would not say IES has done everything right, I will say that IES at least had a comprehensive strategy for how to build an education science that was meaningful and efficacious in changing American education. I think we’re 20 years into an effort that’s going to last much longer.”