A study financed by the Eli & Edythe Broad Foundation shows that students taught by Teach For America teachers in Los Angeles outperformed peers who were taught by other teachers—including veterans with many more years of experience.
The study was initially conducted for internal purposes: Having put a sizable amount of financial backing behind TFA, Broad wanted a sense of how its investment was paying off in terms of stronger student learning. But foundation officials said they ultimately decided to make the study public, given the growing national conversation about teacher effectiveness.
The analysis compared California state test scores of students taught by 119 second-year TFA teachers in grades 2-12 with the scores of students taught by 1,190 non-TFA teachers in the same grade levels, subjects, and schools during 2005 and 2006.
The results are interesting for a few reasons. First of all, students of TFA teachers posted test scores that were 3 points higher overall than those of students taught by non-TFA teachers, even teachers who had been in the classroom much longer. The gap was wider still, 4 points, when TFA teachers were compared only with non-TFA teachers who had similar years of teaching experience.
It’s important to note, though, that because students weren’t randomly assigned to TFA or non-TFA teachers, it isn’t possible to conclude that TFA is the reason those teachers were more effective. The data are certainly suggestive, but they aren’t evidence of a causal link.
As with any study, there are a couple of caveats. For instance, the findings combine reading and math, so it’s not entirely clear how to interpret them by subject. Content area is an important distinction because previous studies of TFA have found that the group’s high school teachers were associated particularly strongly with gains in math achievement.
The folks at Broad think this type of analysis is indicative of what will be possible as data systems mature and students can be linked to their teachers. One interesting feature of the study is that the analysts used two different growth methodologies and found that one was much better at explaining variability in test scores. That matters because there isn’t yet a solid consensus on the “best” methodology for gauging teachers’ effects on student achievement.
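For readers who want a concrete picture of what a “growth methodology” is, here’s a minimal sketch in Python, using simulated and entirely hypothetical data, of two common approaches: a simple gain-score comparison and a covariate-adjusted regression that controls for where students started. This is my own illustration of the general technique, not the models the Broad analysts actually used.

```python
import numpy as np

# Hypothetical data: prior-year and current-year scale scores for n students,
# plus an indicator for whether each student's teacher is a TFA corps member.
rng = np.random.default_rng(0)
n = 1000
prior = rng.normal(300, 25, n)            # prior-year test score
tfa = rng.integers(0, 2, n)               # 1 = TFA teacher, 0 = comparison teacher
current = prior + 5 + 3 * tfa + rng.normal(0, 10, n)  # simulated current score

# Methodology 1: simple gain-score comparison.
# The TFA "effect" is the average score gain in TFA classrooms minus
# the average gain in non-TFA classrooms.
gain = current - prior
gain_effect = gain[tfa == 1].mean() - gain[tfa == 0].mean()

# Methodology 2: covariate-adjusted ("value-added" style) regression.
# Regress current scores on prior scores plus the TFA indicator; the TFA
# coefficient is the estimated effect after accounting for where
# students started.
X = np.column_stack([np.ones(n), prior, tfa])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
adjusted_effect = coef[2]

# How much of the variability in current scores the adjusted model explains.
resid = current - X @ coef
r2 = 1 - resid.var() / current.var()

print(f"Gain-score estimate of TFA effect:     {gain_effect:.2f} points")
print(f"Covariate-adjusted estimate:           {adjusted_effect:.2f} points")
print(f"Variance explained by adjusted model:  {r2:.2%}")

# In this toy simulation students are effectively randomly "assigned," so
# the two estimates land close together (near the true 3-point effect built
# into the data). In real data, where assignment isn't random, the two
# methodologies can diverge, and part of the Broad analysis was asking
# which model explained more of the variability in scores.
```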
Second, the paper is an example of the kind of analysis that could be useful to higher education institutions and other programs that prepare teachers as they consider ways of improving their own effectiveness.
TFA has already begun those efforts, as I reported earlier this year.