Nearly three years after winning millions of federal dollars to help grow their ideas for improving education, the early recipients of Investing in Innovation grants appear to be largely fulfilling an important mission of one of the Obama administration’s signature programs: conducting solid research that shows whether those ideas ultimately work.
Among the 72 winners from the 2010 and 2011 seasons of the i3 program, 59 are on track to meet the rigorous research standards of the What Works Clearinghouse, according to the U.S. Department of Education. That includes all of the “scale up” and “validation” winners, which brought the most experience and research to the table and won the largest grants.
The i3 contribution to the research base could be considerable, given that the What Works Clearinghouse, which vets evidence on educational effectiveness on behalf of the Education Department’s Institute of Education Sciences, has previously identified only 30 single-study reviews that meet its standards.
An increase of nearly 60 studies receiving the clearinghouse’s top billing from a single Education Department program would be “particularly meaningful,” said Nadya Chinoy Dabby, the acting assistant deputy secretary for the department’s office of innovation and improvement.
“We are starting in a universe where there is not a super-rigorous research base,” she said. “This is a chance to figure out what is working and what is not working.”
The Investing in Innovation program, launched as part of the 2009 economic-stimulus package passed by Congress, was designed to find innovative ideas to improve education and scale them up. But there was another goal as well: require the winners to conduct independent, rigorous evaluations of their projects that could be shared with the public.
The original 49 winners of the Obama administration’s Investing in Innovation competition are about halfway through their grants and in the middle of evaluating how well their ideas are working to improve student achievement.
“What’s game-changing here is you’re providing an incentive for them to evaluate what they are doing in a way that’s rigorous and meaningful,” said Michele McLaughlin, the president of the Washington-based Knowledge Alliance, which represents education research organizations.
Since the inception of the i3 grants, nearly $1 billion has been awarded to 92 districts, schools, and their nonprofit partners during three rounds of competition. A fourth round is ongoing, for another $135 million in awards.
In each round, the largest grants—which in 2010 were $50 million each—go to the projects with the most evidence of past success, with small “development” grants going to those with promising ideas that haven’t been tested.
The winners spend from less than 10 percent to more than 30 percent of their awards on evaluations, which can be expensive and labor-intensive, especially when randomized controlled trials are being conducted.
Among the 2010 winners, about a dozen randomized controlled trials—the gold standard in research—are being done by independent evaluation firms. Those “experiments” involve randomly assigning students, schools, or other subjects to the “treatment,” while another group serves as the “control.” Such trials are widely used in fields such as medicine, but less so in education.
The remaining 2010 winners are evaluating their work with other methods, such as quasi-experimental statistical-matching studies.
The questions being asked in all of the i3 research—whatever the method—are more than just “Does the program improve test scores?”
For instance, the Harvard Graduate School of Education also wants to know which version of its summer reading program is most cost-effective. TNTP, formerly called The New Teacher Project, wants to know the effect of its teacher preparation program on content knowledge. And the WestEd professional-development program is examining whether that effort changes high school students’ reading behavior.
The Success for All Foundation, which provides intensive turnaround efforts in high-needs elementary schools, is studying the effects of those efforts on reading outcomes and technology use. The foundation is using its $49 million grant to add 65 schools to its portfolio every year for five years.
The Baltimore-based Success for All recruited 40 schools across the country for the randomized part of its program: 20 that receive the services, and 20 that serve in the control group. For the 25-year-old organization, the i3 grant was a chance to update a randomized controlled trial that ended in 2006, said Robert E. Slavin, the director of the Center for Research and Reform in Education at the Johns Hopkins University School of Education and the co-founder and chairman of the Success for All Foundation.
“The program itself has evolved,” he said. “And perhaps the rest of the world has evolved.”
For example, when Success for All began, it used systematic phonics to teach reading, though most other educators did not, Mr. Slavin said. Now, he said, almost everyone is using phonics.
For the eMINTS National Center at the University of Missouri, landing an i3 grant meant it could conduct a randomized controlled trial of its professional-development program—something the center couldn’t afford before. The “enhancing Missouri’s Instructional Networked Teaching Strategies” program, or eMINTS, uses interactive group sessions and in-classroom coaching and mentoring to help teachers integrate technology into their teaching.
“For us, that’s the whole driving factor behind why we wanted to apply,” said Lorie Kaplan, the executive director of the center, which is using about 15 percent of its $12.3 million grant to pay for its research trial, conducted by the American Institutes for Research.
“We’ve seen results, but this allows us to say we have unbiased results. It’s very much defensible,” she said.
Now, 58 schools in rural Missouri are split into three groups: a control group, a treatment group that’s receiving two years of eMINTS professional development and new technology, and a second treatment group that’s getting all of that plus an extra year of training through the Intel Teach program.
As an incentive to participate, the control group will get the same professional development and technology as the first treatment group, but not until data collection ends in 2014.
More Time Needed
Barely three years after the first i3 grants were awarded, it’s too early to see whether any of the winners are showing statistically significant effects on student achievement. It usually takes at least two years—more often, three to five years—to see real change, researchers and grantees say.
But early findings of the eMINTS program are promising, according to research from the AIR evaluators presented at the 2013 American Educational Research Association conference in San Francisco.
Even after five years—the typical length of an i3 grant—it may be too early to determine a program’s effect. The Niswonger Foundation is using dual-enrollment, online, and Advanced Placement courses to try to improve college and career readiness in 29 rural Tennessee high schools.
“We would like more time,” said Laura Holian, a senior research scientist at CNA, a not-for-profit research and analysis organization in Alexandria, Va., and the project director for the Niswonger research. “After five years we would hope there are changes in college readiness. We will know if there are differences in college enrollment. But how are those students performing in college? It will be too soon.”
Coverage of entrepreneurship and innovation in education and school design is supported in part by a grant from the Carnegie Corporation of New York. Education Week retains sole editorial control over the content of this coverage.
A version of this article appeared in the June 05, 2013 edition of Education Week as ‘i3’ Winners on Track Toward Meeting Goal in Research Arena