By guest blogger Liana Heitin
This item originally appeared on Education Week Teacher’s Teaching Now blog.
The search for reliable methods of gauging teacher effectiveness—a dominant education policy issue over the last several years—has centered on classroom observation tools and value-added measures. But another potential indicator has emerged and is starting to pick up momentum: student surveys.
Yesterday, a roomful of teachers, administrators, representatives from education organizations, and policy wonks gathered in Washington to discuss the use of student feedback in improving teacher practice. The Center for American Progress event coincided with the release of a report finding that many U.S. students perceive their schoolwork as too easy. (Education Week reporter Erik Robelen has the details on that report, which drew data from National Assessment of Educational Progress student questionnaires.)
Rob Ramsdell, vice president of Cambridge Education, an education consulting company based in England, kicked off the discussion by talking about Tripod student-perception surveys. Developed by Ronald Ferguson of the Achievement Gap Initiative at Harvard University in partnership with Cambridge, the Tripod surveys have been used in 3,000 classrooms across the U.S. as part of the Bill and Melinda Gates Foundation-funded Measures of Effective Teaching Project. The surveys, explained Ramsdell, are administered to students like a formal assessment would be (i.e., so they're taken seriously) and require about 20 minutes to complete. Teachers are rated on the research-based "7 C's"—care, control (of the classroom), clarify, challenge, captivate, confer, and consolidate. Ramsdell said the reason to use such surveys boils down to a rhetorical question: "Who spends more time observing the dynamics in the classroom than students?"
Tiffany Francis, a Pittsburgh teacher whose school just used the surveys for the first time, said she was initially "very pessimistic" about administering them to her 2nd graders, but that the process was ultimately "enlightening." Upon getting her results, she was pleased to see that 100 percent of her students rated her highly on "care," a point of pride in her teaching. But she received lower scores in the area of "control," and on the statement, "to help us remember, my teacher talks about things we already learned." These responses gave her insight into where she should consider making changes. "I definitely took this as something I'm going to incorporate in my planning," she said.
Schools in Pittsburgh began using the surveys with the support of the Pittsburgh Federation of Teachers. William Hileman, vice president of PFT, said at the event that his union is “getting hammered” from other union affiliates for conceding on the use of value-added measures and for partnering with Gates. “But we’re going to do it because that’s the world we live in right now ... and because we have to get better about instructing children.”
As of now in Pittsburgh, student perceptions are not being included in formal teacher evaluations, which can carry high stakes. Hileman urged the need to implement new evaluation measures slowly, to make sure they are “fair to teachers.” Ramsdell tiptoed around this idea, emphasizing that the project is still in its early stages and that questions, such as who will proctor the surveys if they are tied to evaluations, need to be answered before that can happen. But there was no denying that linking the Tripod results to evaluations is the logical and likely next step for Pittsburgh—and one that a few other places have already taken.
A version of this news article first appeared in the Teacher Beat blog.