Can teacher-preparation programs be ranked by effectiveness? Not if the rankings are based on student test scores, according to a new analysis.
Rankings of teacher-prep programs based on the test scores of their graduates’ students are so prone to error that they might as well be assigned at random. That’s what researchers found when they reviewed evaluations of teacher-prep programs in Texas, Florida, Louisiana, Missouri, Washington state, and New York City.
“Where a program falls in a given year’s rankings, and whether it moves up or down from one year to the next, is typically more a matter of luck than of quality,” write the researchers Paul T. von Hippel and Laura Bellows. Von Hippel is an associate professor of public affairs at the University of Texas at Austin, and Bellows is a doctoral student in public policy at Duke University.
According to their analysis, the differences between programs in each of the six places were too small to matter: students with teachers from a good (as opposed to an average) teacher-prep program might see a boost in their test scores of 1 percentile point or less. This was true even for programs in Louisiana and New York City, where earlier reports had claimed considerable differences. It was those reports, von Hippel and Bellows say, that sparked the recent drive to hold teacher-prep programs accountable for the quality of their graduates.
Their analysis did turn up some exceptions: up to two programs per state graduate teachers whose impact on student test scores is significantly better than average. Teach For America and UTeach are examples, though the researchers point out that “their effects are moderate in size and limited to math and science.” Further, those results came not from state report cards but from evaluations focused specifically on the Teach For America and UTeach programs.
You may remember that in October 2016 the U.S. Department of Education released rules requiring states to rate teacher-preparation programs every year using several criteria, such as their graduates’ impact on student test scores. President Donald Trump signed a bill scrapping the rules the following year.
Despite the repeal, many states still have policies that require teacher-prep report cards. As von Hippel and Bellows report, Louisiana has been rating its programs for more than a decade. In 2010, 11 states and the District of Columbia won funding through Race to the Top to develop report cards. And some 21 states and the District of Columbia share data connecting teachers’ student outcomes to their prep programs.
So how should states rate the quality of their teacher-prep programs? The researchers say states should avoid relying on classroom observations by principals or supervisors, since those reports are often biased. Teachers’ own ratings of their prep programs would likely be biased as well, though they might be useful to the programs themselves.
Von Hippel and Bellows instead suggest that states track how successful teacher-prep programs are at keeping their graduates in the profession, especially at high-need schools. This can be done by checking program rosters against employment records.
“If a large percentage of a program’s graduates are not becoming teachers, or not persisting as teachers, that is clearly a concern,” the authors write. “Likewise, if a large percentage of graduates are persisting, especially at high-need schools, that is a sign of success.”
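To make the roster-matching idea concrete, here is a minimal sketch in Python. It assumes two hypothetical CSV files, `program_rosters.csv` and `employment_records.csv`, linked by a shared `teacher_id` column; the file names, column names, and matching logic are illustrative assumptions, not details from the study.

```python
# A minimal sketch of matching program rosters against employment records.
# All file and column names here are hypothetical.
import csv
from collections import defaultdict

def load_ids(path, id_field):
    """Read a CSV file and return the set of IDs in the given column."""
    with open(path, newline="") as f:
        return {row[id_field] for row in csv.DictReader(f)}

def retention_by_program(roster_path, employment_path):
    """Compute the share of each program's graduates found in employment records."""
    employed = load_ids(employment_path, "teacher_id")
    grads = defaultdict(set)
    with open(roster_path, newline="") as f:
        for row in csv.DictReader(f):
            grads[row["program"]].add(row["teacher_id"])
    return {
        program: len(ids & employed) / len(ids)
        for program, ids in grads.items()
        if ids
    }

if __name__ == "__main__":
    rates = retention_by_program("program_rosters.csv", "employment_records.csv")
    for program, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{program}: {rate:.0%} of graduates still teaching")
```

A real analysis would need more care than this sketch suggests, such as restricting the employment records to teaching positions, flagging high-need schools, and following cohorts over several years rather than taking a single snapshot.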
See also:
- New Federal Teacher-Prep Rules Draw Praise and Criticism
- Trump Signs Bill Scrapping Teacher-Prep Rules
- Review of Graduate and Alternative Programs Finds Gaps in Teacher Prep