A new analysis argues that the Race to the Top competition used flawed and inconsistent scoring, and that federal reviewers should have applied more uniform standards in judging the applications for federal cash.
The review by The New Teacher Project also argues that U.S. Secretary of Education Arne Duncan should have intervened to look at and possibly overrule individual states’ scores from independent reviewers—even though the organization acknowledges such a move might have led to charges of political favoritism.
The New Teacher Project, a nonprofit that works to improve instruction for disadvantaged students, advised both winning and losing states in the $4 billion competition. A number of advocates were surprised to see states such as Louisiana and Colorado, both of which TNTP helped, among the losers, despite those states' going further than many of their rivals in approving new laws aimed at improving teacher evaluation, a priority in the federal competition.
“Difficulties with the scoring process were most evident in the losses of Colorado and Louisiana, states that have made concrete commitments where many other applicants have only made promises,” the analysis states. “But evidence of scoring problems and inconsistencies stretched across many applications.”
The report was released on the heels of the Department of Education's announcement that it is conducting a “lessons learned” review of its competitive grant programs, including Race to the Top, a coincidence that will no doubt fuel speculation about the timing of the agency's review. A department spokesman, however, said the agency's examination was planned in advance of the TNTP report, with the goal of improving a host of competitions.
The New Teacher Project's analysis focuses on round two of the competition. The organization conducted an earlier review of round one and found that many problems spanned both rounds.
One of The New Teacher Project’s central complaints is that the scoring process gave reviewers too much freedom to assign or deduct points subjectively, resulting in different states being held to different standards.
One state the report focuses on is Illinois, which received 35 out of 45 points in the category of local-education-agency commitment, four fewer than it received in round one. The lower score came despite Illinois having built substantially more support from districts and teachers' unions for its state plan between rounds, the authors say. Other states, such as California, Ohio, and Maryland, appeared to be treated more leniently by their reviewers in that category, despite varying degrees of buy-in.
Another criticism is that states that were able to muster “massive political will” and firm commitments behind their proposals weren’t sufficiently rewarded by reviewers, while states that left some of the “thorniest questions unresolved,” with tenuous state and local agreements, fared pretty well.
The report notes that while Duncan had the power to overrule states' review scores, he chose not to do so. That's understandable, the authors say, because a “hands-off process insulates the contest from politics.” But going forward, they argue, the department must “more actively manage the work of reviewers” to ensure consistent scoring by comparing how applications were judged.
Justin Hamilton, a spokesman for the department, declined to comment in detail on the report's conclusions, but said the agency considers it a resource as it conducts its own internal review.
“We appreciate their thoughtful analysis,” Hamilton said, “and we’ll definitely take a strong look at it.”
A version of this news article first appeared in the State EdWatch blog.