The mood was contemplative and the talk was candid at yesterday’s meeting of the National Board for Education Sciences. The board meets three times a year, as you know, to share its collected wisdom with the Institute of Education Sciences, which is the U.S. Department of Education’s key research arm.
At board Chairman Eric Hanushek’s invitation, members mused on what’s going right—or wrong—with the 7-year-old agency. While members agreed that the institute has made enormous strides in improving the quality of the education research that the department underwrites, they also seemed to think that more could be done to get the word out on the fruits of that research.
“Many of the reports done by IES never get out there,” said member Sally E. Shaywitz, a noted Yale University researcher on reading disabilities. “People aren’t aware of them.”
The long string of “no effects” studies coming out of IES was on board Vice Chairman Jon Baron’s mind. (See my story on this topic in EdWeek.)
“If all IES produces is null findings from here on out, the enterprise is not going to be long for this world,” warned Baron, who, ironically, is also the executive director of the Coalition for Evidence-Based Policy, a Washington-based group that has promoted randomized controlled studies.
He suggested the agency might get better results by being more strategic in choosing the interventions it evaluates and focusing on those with stronger research bases.
That assertion got a rise out of Hanushek, a fellow at Stanford’s Hoover Institution. “Do you think IES has purposely chosen to evaluate policies they know are going to fail?” he said. “Directing them to do good doesn’t seem to provide any guidance.”
Newcomer David C. Geary, a psychologist from the University of Missouri in Columbia, suggested that closer analysis of the “null findings themselves” may also be in order to find out why some interventions worked on a small scale but not in the larger IES evaluations.
But John Q. Easton, who is nine weeks into the job as IES director, offered yet another perspective. “The school-improvement process isn’t always intervention-based,” he said. “Achievement is affected by multiple factors...so the question might be what is it about schools that allow them to self-evaluate, monitor, and make improvements?”
“Agreed,” said Hanushek. “But is that researchable?”
It sounds like the times they are a-changin’ at the agency. If you’re interested in reading more of this interesting conversation, keep checking IES’s Web site for minutes of the July 27 board meeting.