Education programs are generating and reporting unprecedented amounts of data in the wake of new federal requirements, but federal agencies are only just beginning to build the processes needed to ensure those data are timely and accurate enough to use for measuring program performance, according to experts at a research forum in Washington, D.C., last week.
“One thing we are learning from the experience of ARRA”—the American Recovery and Reinvestment Act, which authorized the fiscal stimulus in 2009—“is it’s raising the bar on what the public is expecting from reporting,” said Elizabeth Curda, the assistant director of strategic issues at the Government Accountability Office, speaking at the forum held by Mathematica Policy Research’s Center for Improving Research Evidence. “There is a level of transparency there that is unprecedented.”
At every level, from federal program administrators to individual school districts, the stimulus law required both more frequent and higher-visibility reporting on the use of the money, according to Robert Shea, a partner with Grant Thornton and a former associate director at the federal Office of Management and Budget. Shea said that the Recovery Accountability and Transparency Board, the independent board created to oversee ARRA spending, “credits low fraud or abuse of recovery [act] spending with the frequency of the reporting. They are so pleased with it, I expect that will trickle down to all federal grantees soon.”
This past winter, Congress and the president overhauled the way federal agencies monitor, report, and evaluate their programs. As a result, the Education Department must review the data it collects on grant programs to determine:
• Are the measures valid ways to gauge the program’s performance?
• Are the data accurate?
• Are the data reported and turned around quickly enough to inform decisions?
• Can the data be verified?
• Are the data reliable enough to be used for policy and program decisions?
The GAO is expected to report on the quality of all federal reporting and planning data by June 2013. So far, Curda said, the agency has found that many grant programs have widely disparate reporting procedures, often leading to duplication and incomplete reporting.
William Borden, director of Mathematica’s performance management group, said that’s because there are no government-wide standards for reporting the mountains of new data being produced; every agency—and often each individual program—has different processes. Education in particular is at “higher risk” of having inaccurate data, because each state and sometimes each district can report information differently.
“People are really reinventing the wheel and learning the hard way the same things over and over again on their own,” Borden said. “Even states may not get consistent data, much less at the federal level.” He called for uniform, government-wide standards for how programs collect information and report on their progress.
“There’s resistance to standards. Every program thinks it’s unique and doesn’t want to be compared to anybody, but ... that lack of comparability provides protection from effective oversight,” Borden said.