An awful lot of ink’s been spilled on RTT in the past week. But I think the abundant commentary is missing an important point: RTT’s diffuse focus--spread across four reform priorities, with multiple criteria within each of those priorities and still more criteria outside them--and how those competing priorities shaped the outcomes.
Press reports have tended to focus on RTT’s support for teacher performance evaluations and charter schools--but those two issues together account for barely 20% of the total available RTT points. Under the RTT application criteria, there were some 40 different ways for states to win--or lose--points. One would never know from media accounts that states could win or lose points for their plans to share data with researchers (6 points), provide professional development and support to teachers and principals (10 points for plans to provide that support and another 10 for plans to continuously improve its effectiveness), and ensure equitable distribution of teachers in high-need subject areas (10 points), as well as for the extent to which their school finance systems provide equitable funding to high-poverty districts and schools (8 points). RTT’s incentives for states to adopt Common Core standards and aligned assessments are widely known, but states could also receive up to 20 points for their plans to support the transition to enhanced standards and assessments.
That complexity and diffuse focus make it difficult to pinpoint exactly how a state won or lost. For example, New York, Maryland, and Hawai’i, whose high RTT rankings surprised many observers, got full or near-full points for “supporting the transition to enhanced standards and assessments,” while reform darlings Louisiana and Colorado both came in near the bottom on this item, each losing 3.8 points. Maryland, Ohio, New York, and Hawai’i also got more of the 20 points available for “providing effective support to teachers and principals” than any state except Georgia, while Louisiana lost 6.2 points on this item. (Had Louisiana gotten full points here and picked up one more point somewhere else, it would have beaten out Ohio for the 10th RTT grant.) Take a look at the many issues in RTT that are not sexy reform hot buttons, and you’ll often find surprise winners Maryland, New York, and Hawai’i near the top on these items, picking up a point or so over reform favorites like Colorado and Louisiana. And those points add up. (To help folks analyze these issues in greater detail, I’ve created a spreadsheet of all the finalist states’ scores on the 60 different RTT areas and sub-areas. Since there are so many items in the spreadsheet, it probably contains a few typos--if you find them, let me know in the comments so I can fix them.)
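For folks who’d rather let the computer do the adding, here’s a minimal sketch (in Python with pandas) of the kind of state-by-state comparison I’m describing. Fair warning: the column names I use below--a “Criterion” column plus one points column per state--are my assumptions about the spreadsheet’s layout, not a description of the actual file, so adjust them to match what you see when you open it.

import pandas as pd

# Load the finalist scores. This sketch assumes one row per RTT sub-criterion,
# a "Criterion" column naming it, and one column of points per state --
# adjust the names to match however the file is actually laid out.
scores = pd.read_excel("RTT Finalists 2.xlsx")

def biggest_gaps(state_a, state_b, top_n=10):
    """Sub-criteria where state_a picked up the most points over state_b."""
    gap = (scores[state_a] - scores[state_b]).rename("point_gap")
    return (pd.concat([scores["Criterion"], gap], axis=1)
              .sort_values("point_gap", ascending=False)
              .head(top_n))

# For example: on which sub-criteria did Maryland out-score Louisiana?
print(biggest_gaps("Maryland", "Louisiana"))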
Many observers believe RTT’s outcomes raise concerns about the reliability of scoring in high-stakes competitive grant programs. I think the steps Andy Rotherham outlines here probably make sense. We should also try to learn from successful competitive grant programs in other areas of government.
But I think an equally important lesson of RTT is the peril of trying to cram too many different reform focuses into a single competitive grant program. It’s easy to see why the administration wanted RTT to look at states’ comprehensive reform strategies and support for education, and why it wanted to include a range of issues that appealed to both the school reform crowd (teacher evaluation, charters) and more traditional education reformers (professional development, equitable distribution of teachers and funding). But, ultimately, that diffuse focus--and the gap between it and top-level administration rhetoric emphasizing specific reforms--played a role in producing outcomes that didn’t match expectations. I would imagine the diffuse focus also contributed to some of the scoring issues that have been raised, since it made scoring a lot more complicated. RTT would have been more effective at rewarding state officials who took on politically risky reforms in key areas if it had focused more narrowly on those areas. I’m also a bit concerned that RTT’s push to do everything at once could strain states’ capacity to truly implement the reforms they promised.
A number of observers have said the RTT outcomes don’t match the administration’s reform priorities. I’m not sure whether that’s true, or whether the outcomes simply reflect the fact that the administration’s priorities are themselves diffuse. If so, that could have important implications going into ESEA reauthorization--and reform-y types could again find themselves looking at outcomes they didn’t expect.
RTT Finalists 2.xlsx