In the wake of a number of disappointing, no-effects-seen evaluations of professional-development programs, several scholars are proposing that the research community upend its traditional approach to studying in-service teacher training.
“Going program by program and—often at great expense—conducting large-scale evaluations involving multiple measures of teaching and learning has not, to date, resulted in an accretion of credible, usable knowledge within the professional development and practitioner community,” the researchers assert in a new Educational Researcher article.
It’s a pretty tough condemnation of the state of the field, but probably warranted in light of the generally thin empirical research base on PD and the results of several randomized studies of professional-development programs that found minimal to no effects on student achievement. Such findings were especially frustrating because the programs studied involved many hours of training and follow-up, essentially the components that experts agree are the baseline conditions necessary for programs to be effective.
Another challenge, the researchers note: Even if programs are shown to have a particular effect, the best studies don’t provide much insight into what features of the programs made all the difference. (A recent randomized study of Teach For America with positive effects, for instance, could not say whether teacher selection, math content knowledge, or training content was responsible for the boost.)
Finally, high-quality randomized studies take years to implement, whereas much professional development is homegrown and the life cycle of any one program is short, the authors note.
The scholars, Heather Hill of Harvard, Mary Beisiegel of Oregon State University, and Robin Jacob of the University of Michigan, lay out their idea for a new approach to research. Rather than studying one-off programs after the fact, they say the field should do more to conduct rigorous research at the early stages of program development. Central to this approach would be funding smaller-scale, low-cost trials that hold a program’s content constant but vary how that content is delivered across sites. Ultimately, the approaches that seemed promising could be scaled up to large-scale efficacy trials.
Federal Leverage?
One place to begin a push for such a change might well be the U.S. Department of Education, which recently made relatively unnoticed, but important, changes to EDGAR, the department’s administrative regulations. Those changes allow the department to make outcomes-based research part of a grant’s deliverables, and to make evidence of effectiveness a priority in competitive-grant competitions.
The department also recently sought to tie states’ renewal of their No Child Left Behind waivers to the use of evidence-based approaches in spending their Title II teacher-quality aid, which is doled out by formula to each state to support professional development. (It later rescinded that proposal.)
The idea of developing more evidence for particular PD practices seems to have galvanized other organizations. In a November letter, the Knowledge Alliance, an advocacy group on behalf of researchers, urged the Education Department to begin a new initiative to base allocation of PD funding on better research, building on the EDGAR changes.
Knowledge Alliance’s president, Michele McLaughlin, writes in the letter:
For example, the Department could actively encourage the use of evidence-based professional development practices in competitive programs by making this a competitive or absolute priority in more and more program competitions. For the major formula programs like Title I [for disadvantaged students] and Title II, the Department might seek to ensure that the pending ESEA reauthorization permit only evidence-based professional development to be supported, where this is feasible; might reconsider tying the use of evidence-based practices to the granting of 'ESEA Flexibility' waivers as originally proposed; or, at a minimum, might encourage the use of evidence-based practices through its monitoring and technical assistance activities.