Gates Analysis Offers Clues to Identification of Teacher Effectiveness
“Value added” gauges based on growth in student test scores and students’ perceptions of their teachers both hold promise as components of a system for identifying and promoting teacher effectiveness, according to preliminary findings from the first year of a major study.
The analysis, released last month by the Bill & Melinda Gates Foundation, shows that teachers’ value-added histories were among the strongest predictors of how they would perform in other classrooms or school years—as were students’ perceptions of their teachers’ ability to maintain order and provide challenging lessons.
The findings are part of the Seattle-based foundation's $45 million Measures of Effective Teaching study, which seeks to identify accurate measures of superior teaching. ("Multi-City Study Eyes Best Gauges of Good Teaching," Sept. 2, 2009.)
While underscoring the preliminary nature of the findings, Gates officials said they were heartened to see that several measures being studied do appear predictive of good teaching.
“I was hugely excited and encouraged” by the findings, said Vicki Phillips, the foundation’s director of education programs. “It has implications for what people can be doing right now. It begins to answer questions teachers have had. And I think it shows that valid teacher feedback doesn’t need to be limited to test scores alone.”
As part of its education philanthropy, the Gates Foundation provides grant support to Editorial Projects in Education, the publisher of Education Week.
The preliminary findings are based on data from five of the six districts participating in the study: New York City; Charlotte-Mecklenburg, N.C.; Hillsborough County, Fla.; Dallas; and Denver.
A team of researchers directed by Thomas J. Kane, the foundation’s deputy director of education research and data, analyzed student scores on state tests given in grades 4-8 in the 2009-10 school year, using value-added modeling.
Such modeling purports to control for a student’s past performance and other factors so that learning gains can be attributed to specific teachers.
The researchers also analyzed student-perception data gathered from 2,519 classrooms, grades 4-8. Students rated teachers on a 1-to-5 scale on such aspects as whether teachers made the point of their lessons clear, were considerate of students, and explained material in several different ways.
The analysts found that, in every grade and subject studied, teachers' value-added histories were strongly predictive of their performance in other classrooms. While such estimates exhibit a degree of volatility from year to year, that volatility "is not so large as to undercut the usefulness of value-added as an indicator of future performance," the study says.
Similarly, the researchers found that student perceptions of a given teacher were generally consistent across his or her classes, and that students gave high ratings to teachers whose classes consistently made learning gains.
In particular, student perceptions of teachers’ ability to manage a classroom and provide challenging academic content were strongly linked to those teachers’ ability to raise scores.
One of the study’s findings appears to complicate the conventional wisdom that teachers can boost scores by “teaching to the test.”
The analysis found that the value-added estimates of teacher effectiveness held up even when students were given supplemental tests with harder tasks less subject to test preparation than those on the state tests, including conceptual questions and open-ended writing tasks.
Meanwhile, student reports of classes spent heavily on test preparation were generally weaker predictors of teachers’ ability to raise scores than other factors, though the study did find a positive relationship between test preparation and teacher value-added estimates.
The value-added findings, in particular, come in the midst of a divisive debate in the K-12 field about whether such methods should count in a teacher’s evaluation.
The Gates Foundation’s findings on student perceptions, in the meantime, raise new questions for states and districts. Spurred by federal grant programs, some states have moved toward including teacher observations and even value-added methods in evaluations. Far fewer, however, are considering student-perception data.
So far, the study also appears to support the notion, advocated by teachers’ unions and others, that evaluations should be based on multiple measures. The analysis concludes that combining both sources of information—value-added and student feedback—yielded a more finely grained estimate of teacher effectiveness than using the student-perception information alone.
One key area not yet studied is the accuracy of teacher-observation ratings on a variety of teaching frameworks. The foundation’s research partners are still collecting and scoring videotaped observations of some 13,000 lessons in 2009-10 as part of that effort.
Other measures under study include teachers’ pedagogical content knowledge and their perceptions of their working conditions.
Gates officials plan to release a second report next spring. It will begin to examine the results of an experiment, already under way, to gauge student performance when students are randomly assigned to teachers identified as being more or less effective. Final results will be released in the winter of 2011-12.
“Things we’ve intuitively known, or thought about, or wished for about teacher effectiveness—there’s now some empirical evidence that they are valid,” Ms. Phillips said.
Vol. 30, Issue 15, Page 11