About half of the states now review the performance of their teacher-preparation programs annually or biennially, but they use a wide array of measures, some of higher quality than others, a new survey of state policy concludes.
“The best systems remain incomplete, with many measures that are serviceable but not quite adequate,” says the report, issued by the Council of Chief State School Officers.
But it’s still an improvement over the days when programs got little actionable information, said CCSSO Executive Director Chris Minnich in a call with reporters. “We are seeing progress in this area,” he said. “Feedback is always a good thing to programs, even if it’s not the perfect measure or the perfect test.”
The data give some sense of where states stand, even as pending federal rules for teacher preparation would require all states to shift to an annual reporting and accountability system.
Behind the Black Hole
It’s no secret that teacher preparation is a bit of a black hole in terms of transparency. No database tracks what information states collect on their programs (unless you count the wobbly stack of program-approval standards under my desk).
The CCSSO report provides what’s probably the most comprehensive look to date.
Some good news here: Nearly every state re-approves its teaching programs every five to seven years, and 32 states issue reports annually or biennially.
Still, there are gaps: 17 states don’t review their alternative-route teaching programs in the same ways as their university-based providers. States tend to report data out in the aggregate, at the provider level, but there’s less information about individual programs (like special education or elementary education at a particular institution).
And 20 states don’t make provider-level data publicly accessible. (I’ve had to file open-records requests to get data like this in the past, and can attest that this is a big problem.) Sometimes that’s because of privacy concerns with small programs, but CCSSO officials acknowledged it’s an area that needs work.
“We’re going to work with states to make this as transparent as possible; we believe this information is good data for incoming teachers, ... and also for the general public to understand how these programs are doing,” Minnich said.
As for specific criteria, the report’s authors looked at four categories of program measures: how programs select candidates; how they measure candidates’ knowledge and skills; how candidates later perform as classroom teachers; and whether candidates stay in teaching and work in high-needs subjects. In all, 20 performance measures were studied.
Here’s what the group found.
- Delaware had the most measures in place, with 17 of the 20. (Its annual report cards are nevertheless controversial.)
- Ten states reportedly had none of the indicators in place (or their policies were unclear): Alaska, Connecticut, Idaho, Maine, Montana, Nebraska, North Dakota, South Dakota, Utah, and Wyoming.
- Just because states collected data didn’t mean they did much with it: Only six states assign an overall program score, and only five weight their indicators or plan to do so soon.
- Most states measure candidates’ content knowledge via tests; fewer look at candidates’ “teaching promise” or their impact on student achievement.
Unfortunately, the report doesn’t give much of a sense of how states use these data internally.
In a 2014 analysis, Education Week found that most states give teacher-prep programs lots of “second chances,” and rarely yank approval altogether.
For more on teacher-preparation accountability:
- States Slow to Close Faltering Teacher Ed. Programs
- N.Y. Officials Balked at Closing Ed. Schools Despite Problems
- Federal Teacher-Prep Rules Face Heavy Criticism in Public Comment