
Investing in Innovation ‘Gems’ Show Tricky Path for Districts Using Evidence Under ESSA

By Sarah D. Sparks — June 11, 2018

Few of the hundreds of programs proposed under the federal Investing in Innovation, or i3, program ultimately produced interventions with significant positive effects, but those that did offer a window into what it will take for school districts to really use evidence to improve student learning.

That’s the main takeaway from the Institute of Education Sciences’ final evaluation report on the $1.4 billion i3 program—the only Obama-era competitive grant to be codified into the Every Student Succeeds Act. Of the 67 grants with evaluations completed by last May, nine, or 13 percent, had both tight implementation and strong positive effects.

“I think this is a highly credible evaluation,” said Patrick Lester, the director of the Social Innovation Research Center, who has conducted separate studies of the i3 program. “This report shows how difficult it is to do good work. There are absolutely gems in there, so it can be done well. The gems are where the progress lies.”

That’s a small number of successful programs, but still slightly better than the success rate for most rigorously evaluated education interventions, according to a study by the Coalition for Evidence-Based Policy. Moreover, the program’s tiered model, which awarded more money to programs with a larger evidence base, seemed to be effective:

  • The $2 million to $5 million development grants, awarded to interventions that seemed promising but had little evidence, showed significant positive effects only 8 percent of the time;
  • Thirty-three percent of the midrange validation grants, provided at up to $30 million each, showed benefits; and
  • Half of the $50 million scale-up grants, which are given to projects with the most solid research track records, showed significant benefits.

Abt Associates conducted the evaluation and had previously provided technical assistance to some of the i3 grantees. “They had the advantage of having gone over every single one of these evaluations with a fine-toothed comb so if there were any warts, they knew about them,” Lester said.

While both i3 and ESSA encourage districts to use administrative data to study interventions and make changes quickly, the evaluation found administrative data didn’t help much for many programs, because some states did not test annually in subjects such as science, and it was difficult to gauge the effects of interventions when states or districts changed their testing systems.

Even among the programs that monitored how closely schools were implementing interventions, the study found only six projects that evaluated the intervention being used in a sample that fully represented the students who would typically be served. “This means that three of the four impact evaluations supported by i3 scale-up grants were not able to test the effectiveness of the i3-funded intervention at scale...” the study found.

Lester was not surprised. “It’s one thing to get into whether something has a genuine causal impact and quite another to go, OK, why did you have such a good impact?” he said. “If something works well and you want to replicate it—which of course we’re trying to do under ESSA—fidelity measures matter because ... you better know what it is you’re trying to actually replicate.”

The evaluation called for better technical assistance for those trying to develop interventions under the Education Innovation and Research grants, the program that replaced i3 under ESSA.

A version of this news article first appeared in the Inside School Research blog.