A new report calls for fixing—but not abandoning—the Department of Education’s process for designating “exemplary” and “promising” educational programs.
Through that effort, which began with a 1994 Congressional mandate, the department has so far conferred its seal of approval on programs in a handful of areas, including mathematics, drug education, and educational technology.
The program drew heated complaints five years after its creation with the publication of its list of 10 recommended programs for teaching math to students in kindergarten through 12th grade.
Opponents of those programs, who took out full-page advertisements in national newspapers to publicize their stance, called the list an “abomination” and accused the federal government of taking sides in the national debate raging over the best way to teach math. (“Ed. Dept. to Release List of Recommended Math Programs,” Oct. 6, 1999.)
Part of the problem with the entire review process, says the report by Caliber Associates, a Fairfax, Va., consulting group, is that neither federal law nor the Education Department has given reviewers clear guidelines on how to judge program effectiveness. As a result, standards have varied from one review panel to another.
The department could strengthen the process, the report says, by offering more specific effectiveness criteria, linking the topics studied to existing federal research priorities, keeping in closer touch with program applicants, and doing a better job of screening and training the experts recruited for the panels that review programs.
In addition, the report calls for setting up separate reviews for “exemplary” and “promising” programs and, in the case of the “exemplary” reviews, casting a wider net for good programs.
“The exemplary program should eventually be a designation of the best in the field, rather than depend upon an applicant-driven process,” says the report, which was presented Nov. 30 to the National Educational Research Policy and Priorities Board. The 15-member board, which advises the department on its research operations, commissioned the $264,000 evaluation.
‘Long-Standing Concern’
“This has been a long-standing concern of the board,” said Kenji Hakuta, a Stanford University education professor and the chairman of the policy board, referring to the system of reviewing and highlighting programs. “Our idea is that peer review and expert panels are the appropriate way to judge programs, but there is a need for a better mix of expertise.”
But some board members suggested discontinuing the program altogether, in part because federal law prohibits the federal government from imposing curricula on schools.
“I raised the question of whether this wasn’t awfully close to that,” said Williamson M. Evers, a newly appointed board member and a research fellow at the Hoover Institution, a public-policy think tank housed at Stanford. He was one of the dozens of signers of the newspaper ads criticizing the 1999 list of recommended math programs.
“I thought maybe we should have the research base in place first before we did this,” he added.
Mr. Evers voted with other board members to recommend improving the program. It remains unclear, however, what the program’s fate will be under President Bush’s administration.
In discussions with various groups, Grover “Russ” Whitehurst, the new assistant secretary in charge of the department’s office of educational research and improvement, has talked about setting up a clearinghouse on “what works,” an entity that would presumably serve a function similar to that of the expert panels that review promising and exemplary programs.
Department officials declined to comment last week on the recommendations in the Caliber Associates report.