Scoring Race to the Top: A Look Behind the Curtain
In an article I wrote for The New York Times Magazine about Race to the Top, which is being published this Sunday and has just been posted on the Times website, I only touch briefly on issues related to administering the contest. But readers of Education Week might be interested in more detail about what I discovered in finding, as the article puts it, that “good intentions can’t guarantee perfect execution in a federal bureaucracy.”
When the federal government gives out billions of dollars in grants, it can’t do so based on the gut feeling of some policy wonks, however honest and well-meaning, that this state deserves the money and that one doesn’t. So before he left the government last fall, U.S. Department of Education adviser and Race to the Top architect Jon Schnur recruited Joanne Weiss, who has an impressive résumé in both the nonprofit and business sectors running education-related ventures, to create a rigorous process for giving out the money, relying on vetters carefully screened for conflicts of interest. Like jurors, they were also instructed, Ms. Weiss told me, “not to consider anything outside the actual four corners of what was submitted in the applications.”
A review of the vetters’ score sheets and written comments juxtaposed against the applications they judged suggests that their standards were inconsistent, that some were naive about the difference between promises and the capacity to deliver, and that others fell victim to the propensity of many states to misstate the status of their programs and overstate the buy-in they had from key stakeholders, especially the teachers’ unions.
For example, the vetter who gave Louisiana the low score I cite in the Times article arguably confirmed the fears that state Superintendent Paul G. Pastorek expressed to me of being graded by people with little experience trying to reform American public school systems. He was probably referring to Alan Ruby, who asked Mr. Pastorek the most skeptical questions during the Louisiana team’s presentation in Washington. Mr. Ruby is a senior fellow for international education at the University of Pennsylvania Graduate School of Education. The key items on his résumé prior to his arrival at Penn are his years working at the World Bank and as Australia’s deputy secretary of employment, education, training, and youth affairs. Mr. Ruby acknowledged in an interview that he vetted Louisiana, but he rejected the notion “that my questions were hostile or skeptical,” and declined to comment on whether his was the 349 score. That score was 38 points lower than what California received from one of its vetters, even though California’s widely panned application committed to doing nothing to link teacher pay to performance until 2012, and even then promised to do so for only 10 percent of the state’s teachers, if the union agreed.
“We made the attorney general of each state sign an assurance that everything in the application was accurate,” Ms. Weiss told me when I asked about all the boxes that, the Times article reports, were checked inaccurately to indicate school district and union commitments to implement the plans in the proposals. Actually, the 146 pages of regulations that Ms. Weiss and her team drafted only require that the attorneys general certify that any statements made in the application about existing state law were accurate, a point emphasized by a spokesman for New York State Attorney General Andrew M. Cuomo when I asked him about New York’s inaccurately checked boxes.
Ms. Weiss told me that no one in her office checked the accuracy of the assurances, as conveyed by the checked boxes, before the awards were announced. Instead, she said, the vetters, who were paid $5,000 each to read and score the applications, were supposed to examine everything in the applications, including the appendices that contained the memoranda of understanding (MOUs) that the stakeholders had actually agreed to—which, as I report, often turned out to be conditioned on the unions’ making collective bargaining concessions. Some did: A reviewer who gave New York a relatively low score wrote in his or her comments that “despite checking the box that the applicant’s MOU uses the standard terms and conditions, the state did not in fact use the standard MOU. … These terms do not reflect the strongest level of commitment. ...” But one of the other four New York vetters—who gave the state a surprising 454 out of 500, which was higher than winner Tennessee’s average score—praised the breadth of commitments by local school systems that the state’s application listed.
In fact, even the first-round winner, Delaware, broke the rules, without anyone seeming to notice. Delaware checked all the boxes for its 38 school districts and for union support in each, even though the MOU in its appendix, like New York’s, made its commitments conditional on union collective bargaining agreements’ being negotiated “in good faith.” However, in Delaware, the core of those commitments—such as how teachers will be evaluated—will be defined with the union’s input, but under state regulations, they can ultimately be imposed by the state without union sign-off. The collective bargaining caveat in the MOU, Delaware state teachers’ union President Diane Donohue told me, “has to do with other, smaller aspects of the plan, like extending school days at turnaround schools, which I am sure we will agree on.” Besides, Ms. Donohue went to Washington with Delaware officials to assure the vetters of her union’s commitment to the entire proposal.
On May 5, the department, noting “inconsistencies in some instances between the tables and narratives,” amended its rules to require that any state in which a school district’s commitment to implement any part of the state’s plan is conditional on future collective bargaining agreements should mark a C in the relevant box on the grid, rather than a Y or an N. Secretary of Education Arne Duncan has released a list of the 49 vetters with their biographies, but he has refused to disclose which ones vetted which states or awarded which scores and wrote which comments. (Thus, my having to make suppositions about Mr. Ruby.) He also required them all to sign expansive nondisclosure agreements prohibiting them from talking to the press. Asked how he could explain shielding publicly paid officials from publicly explaining their individual decisions to spend taxpayer money, Mr. Duncan cited the “transparency” of having made all the applications, scores, and comments public and said “it’s a set of folks together making the decisions.” Ms. Weiss interrupted to add that “it’s also to prevent these guys from having all kinds of undue pressure brought to bear on them.”
My Times article also refers to the reformers’ concern about Mr. Duncan’s “language of collaboration”—that he might have inadvertently signaled to the states that getting union buy-in was more important than submitting a strong plan. Mr. Duncan has since taken steps to counter that impression, publicly stating that plans that are watered down to win union support won’t win the Race. But some of that collaboration perspective may have seeped into how the vetters scored the states, at least according to one reviewer who was willing to discuss it. The 500-point score sheet had a discrete place where stakeholder buy-in was to be taken into account for up to 45 points, with most states getting 25 to 45 points, which means that about 20 points were typically in play. But one vetter who was willing to be interviewed about this issue—Michael C. Johanek, who also teaches at the University of Pennsylvania Graduate School of Education—told me he thought “there were plenty of places throughout the application, probably hundreds of points’ worth, where if you believe you can’t do successful reform without teacher enthusiasm you could take that into account.” Mr. Johanek would not reveal which states he vetted, but his perception of the scoring rules (in addition to his equating a union leader’s signature with “teacher enthusiasm”) might be telling.
Although a reporter can’t verify it by asking the vetters how they arrived at their decisions, Mr. Duncan maintained that “the peer reviewers did a phenomenal job.” He shrugged off their seemingly limited backgrounds, saying, “I really value a diversity of opinion; what you don’t want here is group think.”
As has been reported by Education Week, last month the New Teacher Project recommended that monitoring of the reviewers’ work in the upcoming second round should be made “more robust” and that the highest and lowest scores out of the five should be eliminated in the next round, a change that would have put Louisiana in sixth place. (Delaware and Tennessee would still have scored first and second.) Mr. Duncan told me he thought that was a bad idea. “My biggest fear,” he said, “is that you throw out the outliers. You need people who are willing to say the emperor has no clothes, or this is a brilliant idea.”
Vol. 29, Issue 32