The headline says it all: There is little agreement about how to gauge what individual teacher-preparation programs are doing well, or aren't, and about what sort of evidence should back up such determinations.
For instance, one of the main complaints about the National Council on Teacher Quality's rating project had to do with the standards and measures it used. The looming fight over the federal regulations recently proposed by the U.S. Department of Education is also likely to center on this question. Finally, the Council for the Accreditation of Educator Preparation, the national accreditation body for teacher preparation, has explicitly made evidence-based accreditation a hallmark of its new standards and is now hard at work fleshing out what that will entail.
It’s in that vein that I wanted to highlight two recent papers that dip a toe into this very muddy pond.
The first, released by CAEP, was prepared by Teacher Preparation Analytics, a consulting group. It provides an exhaustive rundown of the state of play in 15 states across 12 indicators of preparation-program strength, from how to gauge candidates' academic strength all the way through placement and retention in high-needs subjects. Take a look at the graphic below and you'll see some interesting patterns: States and programs are a lot further along in assessing content knowledge than in ensuring candidates actually come out with concrete skills. (To read the chart in basic terms, the more shading a circle has, the more in-depth a state's measure of that indicator is.)
The other resource comes from the American Psychological Association. Its report is the product of a task force that included both critics and supporters of “value added,” and examines that measure and several others.
The APA highlights each measure's potential and limitations, and it arrives at a perfect-is-the-enemy-of-the-good consensus: "These decisions should be made with the best evidence that can be obtained now, rather than the evidence we might like to have had, or that might be available in the future," the authors write.
In the meantime, if you want to know the measures each state currently uses for program approval, good luck: There is no national database out there that lists them (believe me, I tried to find one when writing this series of stories). You’ll have to rely on legwork and lots of regulatory research. Happy hunting!