Districts Faced Challenges Implementing Federal Performance-Pay Grants

By Stephen Sawchuk — September 16, 2014 3 min read

Districts participating in a federal program that awards performance-based pay didn’t always adhere to all of its components—and they seem to have struggled to communicate the program’s goals and features to teachers.

On the upside, teachers participating in the program generally said they were happy with the performance measures used to evaluate them, and didn’t feel that collaboration in their schools decreased as a result—one of the oft-cited complaints about such programs.

That’s the takeaway from an early study of the initiative, conducted on behalf of the U.S. Department of Education by the policy analysis firm Mathematica Policy Research and released earlier this week. It covers districts from the 2010 grant round; the data come from the 2011-12 school year.

The Teacher Incentive Fund, begun in 2006, awards grants to states, school districts, and nonprofits to implement performance-based pay programs for teachers and principals. Among other things, districts are to adopt measures of educator effectiveness, establish a pay-for-performance bonus based on those measures, give additional bonuses to teachers who take on additional roles and responsibilities, and provide professional development.

There is a long and somewhat complicated history of this program’s evaluation. In the early years, the Education Department permitted each TIF grantee to choose its own evaluation method, which meant there was no way to compare results. Congress stepped in to fix that by inserting a provision into the 2009 economic-stimulus legislation requiring a future competition to include a randomized experiment.

Here’s how it worked: Some district winners in the 2010 round of grants agreed to let some of their schools be randomly assigned to implement the full TIF program, while the other schools implemented every component except the performance bonuses. Educators in that latter group of “control” schools instead received an automatic 1 percent raise rather than pay based on results.

The new analysis is based on surveys from all 153 TIF districts, and interviews with and surveys of teachers and principals in a subset of 10 districts that participated in the random-assignment evaluation.

Top-line findings include:

  • Although 80 percent of districts met the requirement to use test scores and observations to measure teacher effectiveness, just 46 percent implemented all four required components.
  • On average, districts expected to award a bonus equivalent to about 4 percent of the average U.S. educator’s salary.
  • Even though the program specified that awards should be reserved for educators who were significantly better than average, districts were prepared to give bonuses to more than 90 percent of eligible educators.
  • In the evaluation districts participating in the random-assignment feature, fewer than half of teachers thought they were eligible for the bonus, even though all were. They also perceived the award amounts to be much lower than they actually were (see graphic below).
  • In the evaluation districts, teachers were generally happy with the performance measures, with 65 percent or more approving of them. Still, teachers in schools implementing the bonuses were somewhat less satisfied with the measures; conversely, they were more likely to be satisfied with their opportunities to earn extra pay.

In a way, these results seem to reflect a truism about big, complex programs with lots of moving parts: It can be difficult to implement them all at once and challenging to communicate their goals and features. Like the children’s game of “telephone,” a lot seems to have gotten lost in the messaging.

The report is the opening salvo in the TIF program evaluation. Mathematica plans to release three more reports, which will tackle the big-ticket questions of whether, in the evaluation districts, the bonus pay improved teacher retention and student achievement relative to the control schools.

An earlier study of one TIF site, in Chicago, didn’t find effects for student achievement, though the program did seem to improve retention.

A version of this news article first appeared in the Teacher Beat blog.