Senate Report Hints at a Definition for What Works
Language buried in a report on a Senate appropriations bill may provide a glimpse of the bar Congress will set for judging the effectiveness of school improvement interventions in the next iteration of the Elementary and Secondary Education Act.
Yet education watchers worry Congress won’t back up its call for rigor with the cash to pay for the research.
In its report on the education budget for the 2011 fiscal year, the Senate Appropriations Committee calls on the U.S. Department of Education to “encourage and support” states and districts to use their Title I school improvement grants only for interventions that would meet the two most stringent evidence standards in the federal Investing in Innovation, or i3, research grant program—the “validation” and “scale-up” categories.
The language applies to both the 4 percent of Title I, Part A, that is set aside for school reform aimed at schools that repeatedly miss making adequate yearly progress under the law and the rapidly expanding 1003(g) School Improvement Fund targeted to the lowest-performing 5 percent of schools.
“While the Committee acknowledges that the state of research on school improvement and school turnaround is not as strong as it needs to be, every effort should be made to utilize the knowledge base that does exist while additional research is conducted that will inform future activities,” Senate appropriators wrote in the report. They added that the Education Department would have to show in its FY2012 budget request that it had made progress in beefing up state and district evidence for any education reform strategies paid for with Title I school improvement money, including U.S. Secretary of Education Arne Duncan’s four recommended turnaround options.
While the language is far from having the force of law, it extends the influence of the $650 million i3 program to potentially more than $1 billion in Title I school improvement money, and it could become the first spark to rekindle a long-running debate over what “evidence” and “effectiveness” mean in the context of school reform.
Officials at the Education Department said they could not comment on the language, but former Institute of Education Sciences director Grover J. “Russ” Whitehurst said the i3 criteria mirror “gold standard” research as defined by the department’s What Works Clearinghouse.
“There are no functional evidence requirements in place presently for these programs,” said Mr. Whitehurst, director of the Brown Center on Education Policy at the Washington-based Brookings Institution. “Thus, the i3 requirements would be a big upgrade.”
The appropriations language comes as districts struggle to find the best use for the money flooding in from the American Recovery and Reinvestment Act, the federal economic-stimulus program.
“It’s like trying to get into a Starbucks; there are people lined up everywhere looking for somebody to do an evaluation or have some line of scientific reasoning on which to base a program,” said Gerald E. Sroufe, director of government relations for the Washington-based American Educational Research Association. “This is kind of a reflection of the concern that we are raising the stakes very high and we’re not sure we have the kind of evidentiary base to do that.”
At a May hearing of the House Education and Labor Committee, lawmakers and witnesses said they doubted Mr. Duncan could show proof that his turnaround, transformation, restart, and closure models are effective at improving student achievement. Even Democratic lawmakers voiced skepticism, and House Education Committee Chairman George Miller, D-Calif., stated after the hearing that the ESEA re-authorization would focus on “research-based, proven, core elements of successful turnaround.”
The Senate language, if adopted, would represent the most detailed and rigorous definition to date for what constitutes scientifically based research in school improvement.
The No Child Left Behind law has always called for administrators to base school improvement plans on scientifically based research, which it defined as applying “rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs.” That definition never translated, though, into regulations for how a district could prove its reforms were scientifically based, and as a result enforcement of how districts justify the interventions they choose has been spotty.
By contrast, under i3-style school improvement criteria, administrators looking to offer proof for a mathematics tutoring program in their school improvement plan would have to show, using evidence that met the “strong” or “moderate” bar laid out in i3, that the program significantly improved the performance of students like those being targeted in the district improvement plan.
“I am glad that the Senate is taking a practical—rather than ideological—approach to building a strong knowledge base,” said James W. Kohlmoos, the president of the Knowledge Alliance, a Washington-based group that represents research organizations. He called the Senate’s decision to include detailed evidence criteria in two Title I programs “significant.”
“It acknowledges the importance of evidence in developing turnaround solutions and the current shallowness of the knowledge base in this arena,” Mr. Kohlmoos said. “It also suggests a new standard for moving forward not just with the school improvement money but also in other programs.”
Yet the first round of the i3 competition itself shows the potential pitfalls of more-detailed evidence requirements for school and district improvement plans. Many districts that had hoped to expand promising education practices became confused by the i3 requirements, according to Steve Fleischman, director of the Portland, Ore.-based REL Northwest, which provides research and technical assistance to districts going through the improvement process.
“Educators have strong beliefs about what is effective, without research,” Mr. Fleischman said, “and it’s often a situation of people finding research to justify your decisions rather than being guided by the research.”
Without training districts in how to identify high-quality research, Mr. Fleischman noted, the Senate could set up a fight over who approves or rejects a given research base—a debate like the one that ultimately scuttled the Reading First program.
The force behind the Senate request remains tenuous, as House appropriators still must agree to the language in the conference report. Mr. Sroufe and Mr. Whitehurst noted that the committee did not accompany its definition for intervention research with additional money to pay for that research.
“The Appropriations Committee conference report is not law, and … unless the department provides a financial incentive for states and local education agencies to use evidence-based programs, the pace at which they begin to do so systematically will be glacial,” Mr. Whitehurst said.
Vol. 30, Issue 02