The Chicago Teacher Advancement Program increased mentoring and improved teacher-retention rates in some participating schools compared with a similar set of nonparticipating schools, but it didn’t appear to raise student achievement, according to a study released today by Mathematica Policy Research.
It is the final report on Chicago TAP, covering all four years of the program’s implementation in the Windy City.
“Chicago TAP was only partially successful in achieving its goals,” researchers Allison Seifullah and Steven Glazerman conclude. “The program can be credited with improved retention outcomes for some of its schools, but it did not have a noticeable positive impact on student achievement over the four-year rollout in Chicago.”
U.S. Secretary of Education Arne Duncan was the head of Chicago schools when the district applied for the Teacher Incentive Fund grant that financed the program.
Officials at the National Institute for Excellence in Teaching—the nonprofit organization that oversees TAP—said the effort was never implemented as fully in Chicago as it has been in other cities.
“It was a small group of schools, over different area superintendents, so I think it was a fragmented approach from the beginning,” said Gary Stark, the president and CEO of NIET. “I do want to applaud the teachers and the teachers’ union for participating; they absolutely came together to give this thing a chance. The collaboration was really strong, but the implementation wasn’t there, and I don’t think there was a sense it was going to take root or hold.”
TAP is a complex initiative that knits together professional development, advancement roles for “master” and “mentor” teachers, an evaluation system, and performance-based compensation. It’s being used in a number of places across the country and has expanded significantly under the TIF grant program.
Hybrid Study
The Mathematica researchers used a hybrid of a randomized experiment and a quasi-experimental method to analyze the project.
For the randomization, the researchers were able to randomly assign two cohorts of schools, 34 in all, to begin TAP either in 2007 or 2008 for the first cohort and in 2009 or 2010 for the second. The planned delays in implementation allowed researchers to compare schools with one year of TAP under their belts with those with none, and again to compare schools with two years of implementation with those with just one.
For a longer-term look at the data, researchers also compared all the participating TAP schools with a group of some 200 other non-TAP schools of similar size and with similar characteristics, such as achievement, poverty, number of novice teachers, and proportion of special education students.
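To make the hybrid design concrete, here is a minimal sketch of both comparisons on invented data. The school records, the single matching covariate, and the simple differences in means are illustrative assumptions only; Mathematica’s actual models adjust for many more factors than this.

```python
# A minimal sketch of the study's two comparison strategies, on
# invented data (the report's actual models are far richer).
import random
import statistics

random.seed(0)

def school(start):
    # One hypothetical school: TAP start year (None = never in TAP),
    # a matching covariate, and a spring-2008 test-score outcome.
    return {"tap_start": start,
            "poverty": random.uniform(0.5, 0.95),
            "score_2008": random.gauss(50, 10)}

# 34 TAP schools randomized between a 2007 and a 2008 start, plus
# roughly 200 comparison schools that never adopt the program.
starts = [2007] * 17 + [2008] * 17
random.shuffle(starts)
tap_schools = [school(y) for y in starts]
non_tap = [school(None) for _ in range(200)]

# 1) Experimental contrast: by spring 2008, schools randomized to a
#    2007 start have one year of TAP; 2008 starters have none yet.
treated = [s["score_2008"] for s in tap_schools if s["tap_start"] == 2007]
delayed = [s["score_2008"] for s in tap_schools if s["tap_start"] == 2008]
print("experimental estimate:",
      statistics.mean(treated) - statistics.mean(delayed))

# 2) Quasi-experimental contrast: pair each TAP school with the
#    non-TAP school closest on the matching covariate.
gaps = []
for s in tap_schools:
    match = min(non_tap, key=lambda c: abs(c["poverty"] - s["poverty"]))
    gaps.append(s["score_2008"] - match["score_2008"])
print("matched-comparison estimate:", statistics.mean(gaps))
```

Because the data here are pure noise, neither estimate reflects a real program effect; the point is simply that the two strategies answer the same question from different angles, which is why their agreement in the real study mattered.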
The results were as follows:
• Student achievement effects. The randomization analyses found no significant differences in student achievement between TAP schools in their first year of implementation and non-TAP schools, nor did an additional year of TAP implementation seem to affect test scores.
Similarly, the quasi-experimental analysis, which looked at cohorts in all four years of implementation, didn’t find that scores had gone up in the schools with more years of TAP. In addition, the researchers reran the analysis under a variety of alternative specifications to see which effects were “robust,” or showed up under all of them. An increase in science scores appeared in one calculation, but not in the others.
Overall, the sample of TAP schools was small, and over the course of the four years a handful of them closed, reconstituted their staffs, dropped the program, or never implemented it at all. But the researchers said that because the experimental and non-experimental analyses came to the same conclusion about student achievement, the result can be viewed with more confidence.
• Teacher retention. In a bit of good news, the program did seem to have an impact on teacher-retention rates, though the effect seemed most pronounced for the cohort that began implementing the program in 2007. The retention rate over three years for this group was 67 percent, compared with 56 percent for teachers in non-TAP schools. In layman’s terms, that 11-percentage-point gap means teachers were roughly 20 percent more likely, in relative terms, to stay in the TAP schools over that period than in a non-TAP school; a quick check of the arithmetic appears after this list.
• Teacher support. Teachers in the TAP schools did receive more mentoring, meeting with their mentors more frequently, for a total of about three hours of contact compared with roughly an hour and a half for teachers in the control group. Mentor teachers were more likely than non-TAP veterans to provide other teachers with literacy strategies, help in setting instructional goals, assistance in preparing lesson plans, and modeled lessons, among other supports.
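The “about 20 percent” retention figure is a relative comparison, easy to verify from the two rates reported above:

```python
# Checking the retention arithmetic from the figures above: an
# 11-percentage-point gap (67 vs. 56) is roughly a 20 percent
# relative difference in the likelihood of staying.
tap, non_tap = 0.67, 0.56
print(f"{(tap - non_tap) / non_tap:.1%}")  # prints 19.6%
```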
Alterations Made
Chicago officials made some changes to the TAP model before implementing it and were beset by technical problems that affected how it was instituted. For instance, under TAP, part of teachers’ compensation is based on an individual value-added measure, an idea sketched briefly below. But Chicago ran into technical difficulties generating teacher-level estimates and never produced them.
In addition, the bonus payouts to teachers were somewhat lower than initially planned and lower than those used in other TAP programs, averaging $1,000 to $2,500 rather than the planned $2,000 to $4,000. And finally, the district weighted each of the TAP components differently in each year of the program’s operation.
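As for that value-added component: a value-added measure attempts to isolate a teacher’s contribution to students’ test-score growth. The sketch below shows the bare-bones idea on invented data; real models, including the one Chicago attempted, control for many more student characteristics and apply further statistical adjustments.

```python
# A bare-bones illustration of a teacher-level value-added estimate,
# on invented data: predict each student's score from the prior
# year's score, then average each teacher's residuals.
import random
import statistics

random.seed(1)

# 20 teachers with 25 students each; scores depend only on prior
# achievement plus noise (no true teacher effects are built in).
students = [{"teacher": t, "prior": random.gauss(50, 10)}
            for t in range(20) for _ in range(25)]
for s in students:
    s["score"] = 5 + 0.9 * s["prior"] + random.gauss(0, 5)

# Fit score = a + b * prior by ordinary least squares.
xs = [s["prior"] for s in students]
ys = [s["score"] for s in students]
mx, my = statistics.mean(xs), statistics.mean(ys)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

# A teacher's "value-added" is the average gap between students'
# actual scores and the scores the model predicted for them.
residuals = {}
for s in students:
    residuals.setdefault(s["teacher"], []).append(
        s["score"] - (a + b * s["prior"]))
for t in sorted(residuals)[:3]:
    print(f"teacher {t}: value-added = {statistics.mean(residuals[t]):+.2f}")
```

Producing estimates like these at the teacher level, reliably and at district scale, is the step Chicago never completed.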
NIET officials ended up parting ways with the district in the third year of the study, citing these and other alterations.
The Chicago program “is not representative of the national TAP system model that has been implemented in hundreds of schools from South Carolina to Texas and Louisiana,” said Kristan Van Hook, the senior vice president for public policy for NIET. “As a result, Mathematica’s report should not be mistaken as an evaluation in any way of the national TAP system’s proven effectiveness.”
Chicago Public Schools didn’t immediately return a call seeking comment.
Finding Answers
While the results will surely be disappointing for supporters of differentiated pay, they are no picnic for people supportive of induction or professional development, either. Given all the problems with the compensation element, the Chicago project ended up putting much more emphasis on those other pieces.
A handful of other nonexperimental studies have found benefits for TAP schools, but no randomized study has yet found that the program increases student achievement.
The question of TAP effectiveness aside, the study also raises a lot of conceptual questions. When a school intervention program doesn’t have the results desired, who bears the responsibility? Is it a problem with the design of the program, with the fidelity of its implementation, or some other factor?