The Institute of Education Sciences is getting a lot of support for its proposal to go beyond research on “what works” in education and explore how schools in different contexts can continue to improve over time.
Back in October, I reported that the Education Department’s research arm was asking for input about a proposed new education research program covering “continuous improvement research in education.” It’s obvious IES really wants to make this new topic a centerpiece in the coming year. In addition to the standard requests for comment, IES Director John Q. Easton personally reached out to top researchers in the learning sciences field—Chris Dede, a professor of learning technologies at Harvard Graduate School of Education, Stephen Raudenbush, the chairman of the University of Chicago’s committee on education, Douglas Fuchs, a special education professor at Vanderbilt University, and Bror Saxberg, the chief learning officer of Kaplan, Inc., among them—all of whom seemed enthusiastic about the new topic.
“If taken to the extreme, school improvement is reduced to regarding schools as purchasers of outside interventions validated by research done by an ‘FDA’ of education,” Raudenbush writes. “While I do believe we can and should learn an enormous amount from interventions ... This won’t produce the broad changes we need and could even distract. More systemic change is needed within schools and districts. Good information can play a central role.”
Likewise, writing on behalf of the Knowledge Alliance, which represents research organizations, President Michele McLaughlin says the focus on implementation research is needed because “even the most cutting-edge practices, built on high-quality research and proven through rigorous testing, will have little measurable impact on teaching and learning if not properly implemented.”
However, several researchers ask for more guidance on how to show evidence of improvement in a system when working outside the confines of a tightly controlled experimental design. “In this [proposal], interventions, programs, practices, etc., will be developed, in vivo, in the existing system, with less control over fidelity,” says Deanne Crone, a co-principal investigator of the Middle School Intervention Project at the University of Oregon. “It is likely the impact on outcomes of interest will be muted under those circumstances.”
Saxberg, however, argues that IES’s proposed method for study, including frequent cycles of short-term testing and tweaking, will actually give much better insight into the effectiveness of programs and interventions in real classrooms. “One of the problems now is that it is quite hard to learn from either success or failure at scale in systems—even if a study of a major instructional change is well-enough designed to see if it is successful or not, that’s not enough,” he says. “We want to know WHY something well-founded worked—and especially, why something well-founded did NOT work, in order to carry on improving. Doing smaller changes, more quickly, allows a build-up of principles that work—and also a much clearer appreciation of interaction effects between elements....”
While commenters generally approve of the new research area, IES’s proposed funding was another matter. The almost-universal consensus, particularly among researchers already working with interagency partnerships, was that the proposed $1.5 million over four years for each grant was “far too low.” Patrice Iatarola, an associate professor of education policy and evaluation at Florida State University, breaks it down: “$1.5 million over four years isn’t going to effectively cover cross-institutional partnerships, especially when overhead costs are taken into account—at a modest estimate of 40 percent, this leaves just $900,000, or $225,000 per year. How many investigators, research assistants, etc., will be covered by this? Perhaps not enough to really fund the ‘plan, do, study, act’ cycle that is the goal of the program.”
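For readers checking the math, here is a quick back-of-the-envelope version of Iatarola’s figures, assuming (as her numbers imply) that the 40 percent overhead is taken as a share of the total award rather than added on top of direct costs:

$$
\$1{,}500{,}000 \times (1 - 0.40) = \$900{,}000, \qquad \frac{\$900{,}000}{4\ \text{years}} = \$225{,}000\ \text{per year}.
$$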
William Penuel, an educational psychology and learning sciences professor at the University of Colorado at Boulder, goes even further, calling for IES to support the new grants at $7 million to $15 million each, about the same level at which the research agency funds evaluations of signature policy initiatives such as Reading First.
But Jon Baron, president of the Washington-based Coalition for Evidence-Based Policy, notes that it is possible to arrange low-cost experiments to evaluate district reforms: under a separate IES grant, for example, a team at the University of Wisconsin-Madison was able to evaluate the effectiveness of quarterly benchmark assessments in 59 districts for less than $100,000.
Many in the field ask that IES broaden the topics available for study, with several arguing in favor of including districts’ implementation of the Common Core State Standards. For example, Kenji Hakuta, a co-chair of Stanford University’s Understanding Language Steering Committee, writes on behalf of the committee that efforts to support English-language learners during implementation of the common core would be a prime opportunity for districts and researchers to work together.
A detailed online analysis by Michael Goldstein, founder of the MATCH Charter School in Boston, praises IES for including a focus on “the single biggest issue facing high-poverty schools: Creating a safe, orderly and supportive learning climate for students....” However, he and other commenters warn that IES will have to pay careful attention to whether researchers and practitioners are really working together closely, rather than engaging in the “much more typical ... ‘fake’ collaboration where the researchers know what they want to do and just want a ‘practitioner sign-off.’”
Penuel agrees, recommending that IES require not just letters of commitment from researchers, district leaders and other stakeholders, but also evidence of governance processes for the collaboration, to ensure all sides continue to work together throughout the project.
Increasing the speed at which studies turn around results could also help draw new researchers, notes Kaplan’s Saxberg; testing and tweaking small details multiple times a year might improve the “papers-per-year productivity” many young researchers must show, he writes.