
Politics K-12®

ESSA. Congress. State chiefs. School spending. Elections. Education Week reporters keep watch on education policy and politics in the nation's capital and in the states.


Ed. Dept.'s ‘Soviet Judges’ Review of Race to Top Scores

By Michele McNeil — April 22, 2010 2 min read

There’s been a lot of talk about how fair the scoring was in the first round of Race to the Top. Did reviewers follow the guidance and always award the correct number of points? Did a few outliers skew the results? Did some states get the luck of the draw and benefit from a bunch of easy graders, while others drew the short end of the stick and got all of the hard graders?

The Education Department, as part of its technical assistance seminar in Minneapolis yesterday for state applicants, said it did its own statistical analysis to examine these issues. Joanne Weiss, the department’s Race to the Top guru, called it the “Soviet judges” review, in a nod to the notorious figure-skating scoring scandals of years past. UPDATE: For a summary of the review, fast forward to slide 15 of the PowerPoint presentation the department gave yesterday.

In any competition that involves judgment calls, there will be easy graders and hard graders. The department checked to see whether the easy graders were all concentrated on particular panels (e.g., the panels that scored winning applications) or vice versa. The answer was no, Weiss said. In fact, she added, Delaware—one of the two winning states—had one of the toughest panels of graders.

The department also checked to make sure the outliers, who gave scores that were wildly different from those of the other judges, weren’t always the same people. In other words, were there one or two renegade graders skewing the results? Weiss said that wasn’t the case either. She said that in most cases, the outliers were different judges who genuinely disagreed on the strength of an application. In addition, she defended the outliers, saying they’re an important part of this subjective process and that disregarding the opinion of any trained peer reviewer would compromise the entire competition.

Unfortunately, we can’t duplicate the analysis the Education Department did. It had the benefit of knowing the identities of the peer reviewers and the ability to match the judge, scores, and comments to particular applications.

Let me register an official Politics K-12 request to the Education Department: Assign each peer reviewer a unique identifying number (think student data systems!) for round two. And put those numbers on the score and comment sheets when they’re released to the public. This would maintain their anonymity, but it would allow the public to do its own data-crunching and track the decisions of individual judges across the competition.
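If reviewer IDs were published as proposed, anyone could replicate the department's two checks from the released score sheets. Here is a minimal sketch of what that data-crunching might look like, with entirely invented judge IDs, state names, and scores (the real per-judge data is not public):

```python
# Hypothetical sketch of the "Soviet judges" checks, using invented data.
# Structure: {application: {reviewer_id: total_score}}.
scores = {
    "State A": {"J1": 450, "J2": 440, "J3": 455, "J4": 448, "J5": 390},
    "State B": {"J1": 420, "J2": 415, "J3": 430, "J4": 425, "J5": 410},
    "State C": {"J2": 460, "J3": 470, "J5": 465, "J6": 400, "J7": 468},
}

def outlier_judges(scores, threshold=30):
    """Check 1: per application, flag judges whose score deviates from
    the panel mean by more than `threshold` points. If the same ID keeps
    showing up, one renegade grader may be skewing results."""
    flagged = {}
    for app, panel in scores.items():
        mean = sum(panel.values()) / len(panel)
        outliers = [j for j, s in panel.items() if abs(s - mean) > threshold]
        if outliers:
            flagged[app] = outliers
    return flagged

def judge_severity(scores):
    """Check 2: average each judge's deviation from the panel mean across
    every application they scored -- a crude easy-vs.-hard-grader index.
    Consistently negative means a hard grader, positive an easy one."""
    deviations = {}
    for app, panel in scores.items():
        mean = sum(panel.values()) / len(panel)
        for j, s in panel.items():
            deviations.setdefault(j, []).append(s - mean)
    return {j: sum(d) / len(d) for j, d in deviations.items()}
```

In this toy data, the per-application outliers turn out to be different judges (J5 on one application, J6 on another), which is the pattern Weiss says the department actually found.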