Hill Study Hints at ESEA Evidence Definition
Lawmakers Suggest Taking a Lesson From 'i3' in Weighing Effectiveness
Language buried in a report on a Senate appropriations bill may provide a glimpse of the bar Congress will set for judging the effectiveness of school improvement interventions in the next iteration of the Elementary and Secondary Education Act.
Yet education watchers worry Congress won’t back up its call for rigor with the cash to pay for the research.
In its report on the education budget for the 2011 fiscal year, the Senate Appropriations Committee calls on the U.S. Department of Education to “encourage and support” states and districts to use their Title I school improvement money only for interventions that would meet the evidence required for the two most stringent evidence standards in the federal Investing in Innovation, or i3, research grants—the “validation” and “scale-up” categories.
The language applies both to the 4 percent of Title I, Part A, money that is set aside for reform aimed at schools that repeatedly miss making adequate yearly progress under the law and to the rapidly expanding School Improvement Fund targeted at the lowest-performing 5 percent of schools.
“While the committee acknowledges that the state of research on school improvement and school turnaround is not as strong as it needs to be, every effort should be made to utilize the knowledge base that does exist while additional research is conducted that will inform future activities,” Senate appropriators wrote in the report. They added the Education Department would have to show in its fiscal 2012 budget request that it had made progress in helping states and districts beef up evidence for any education reform strategies paid for with Title I school improvement money, including U.S. Secretary of Education Arne Duncan’s four recommended turnaround options.
While the language is far from having the force of law, it expands the influence of the $650 million i3 program to potentially more than $1 billion in Title I school improvement money, and it could become the first spark to rekindle a long-running debate over what evidence and effectiveness mean in the context of school reform.
Officials at the Education Department said they could not comment on the language, but former Institute of Education Sciences director Grover J. “Russ” Whitehurst said the i3 criteria mirror “gold standard” research as defined by the department’s What Works Clearinghouse.
“There are no functional evidence requirements in place presently for these programs,” said Mr. Whitehurst, the director of the Brown Center on Education Policy at the Washington-based Brookings Institution. “Thus, the i3 requirements would be a big upgrade.”
The appropriations language comes as districts struggle to find the best use for the money flooding in from the American Recovery and Reinvestment Act, the federal economic-stimulus program.
Need for Proof
“It’s like trying to get into a Starbucks; there are people lined up everywhere looking for somebody to do an evaluation or have some line of scientific reasoning on which to base a program,” said Gerald E. Sroufe, the director of government relations for the Washington-based American Educational Research Association.
At a May hearing of the House Education and Labor Committee, lawmakers and witnesses said they doubted Mr. Duncan could show proof that his turnaround, transformation, restart, and closure models are effective at improving student achievement. Even Democratic lawmakers voiced skepticism, and House education committee Chairman George Miller, D-Calif., stated after the hearing that the ESEA reauthorization would focus on “research-based, proven, core elements of successful turnaround.”
The Senate language, if adopted, would represent the most detailed and rigorous definition to date for what constitutes scientifically based research in school improvement.
The No Child Left Behind Act has always called for administrators to base school improvement plans on scientifically based research, which it defined as applying “rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs.” That definition never translated, though, into regulations for how a district could prove its reforms were scientifically based, and enforcement of that requirement has been spotty as a result.
By contrast, under i3-style school improvement criteria, administrators looking to offer proof for a mathematics tutoring program in their school improvement plan would have to show, using evidence that met the “strong” or “moderate” bar laid out in i3, that the program significantly improved the performance of students like those being targeted in the district improvement plan.
“I am glad that the Senate is taking a practical—rather than ideological—approach to building a strong knowledge base,” said James W. Kohlmoos, the president of the Knowledge Alliance, a Washington-based group that represents research organizations. He called the Senate’s decision to include detailed evidence criteria in its report on two Title I programs “significant.”
“It acknowledges the importance of evidence in developing turnaround solutions and the current shallowness of the knowledge base in this arena,” Mr. Kohlmoos said. “It also suggests a new standard for moving forward not just with the school improvement money but also in other programs.”
Yet the first round of the i3 competition itself shows the potential pitfalls of more-detailed evidence requirements for school and district improvement plans. Many districts that had hoped to expand promising education practices became confused by the i3 requirements, according to Steve Fleischman, director of the Portland, Ore.-based REL Northwest, which provides research and technical assistance to districts going through the improvement process.
“Educators have strong beliefs about what is effective, without research,” Mr. Fleischman said, “and it’s often a situation of people finding research to justify your decisions rather than being guided by the research.”
House appropriators still must agree to the language in the conference report, and Mr. Sroufe and Mr. Whitehurst noted that the committee did not accompany its definition for intervention research with additional money to pay for that research.
“The Appropriations Committee conference report is not law,” Mr. Whitehurst said. “Unless the department provides a financial incentive for states and [local education agencies] to use evidence-based programs, the pace at which they begin to do so systematically will be glacial.”
Vol. 30, Issue 02, Pages 14, 17