(Readers, please note: The December 20 posting generated a great deal of email. A comment worth making is worth posting on the blog. Emails to me that are not prefaced with “not for publication” are subject to posting.)
On December 20 I posted a piece on Edvance’s review of the Texas Early Education Model. The bottom line of that work, which covered only the first two years of a four-year effort, was equivocal:
There was considerable variation both between and within communities with regards to student performance and teacher outcomes. For about half of the communities, students in the treatment groups (with TEEM) improved more than students in the control groups (without TEEM), and for the other half of the communities students in the control groups improved more than the students in the treatment groups on the student outcome measures. TEEM did lead to overall improvement for teachers, although there was considerable variation, with teachers in both control and treatment groups obtaining both positive and negative difference scores on the teacher outcome measure.
Staci Hupp of the Dallas Morning News translated this into “no proof that most children fared better in TEEM than in conventional preschool programs.” How should policymakers and taxpayers read the results? Like Hupp’s headline - “Landmark preschool program isn’t paying off”? And how should we think about school improvement program evaluation?
Many found the evaluation’s findings discouraging. Quite a few edbizbuzz readers despise the program and the provider - and let other readers know.
As someone with experience in the evaluation of education programs on a large scale, I found this part of the Edvance report intriguing:
“For about half of the communities, students in the treatment groups (with TEEM) improved more than students in the control groups (without TEEM), and for the other half of the communities students in the control groups improved more than the students in the treatment groups on the student outcome measures.”
What was different about the two groups of communities? The Edvance evaluation tells us nothing about this. But we know from other research (for example, see here, here and here) that outcomes relate to the quality of implementation, and implementation relates to the quality of teacher and agency support. The same goes for the teacher results - it’s quite unreasonable to expect teachers who do not buy into a program to improve on measures designed by that program. If the communities with superior performance had higher levels of program implementation and support, it would not be accurate to imply that the program wasn’t working. We might, however, infer that the program is only likely to work where it’s wanted, and so the idea that it should become a statewide preschool strategy is flawed.
The advocates of TEEM are probably shooting themselves in the foot by pushing for statewide implementation, because they are almost certainly assuring mediocre results “on average”: if the program lifts performance in communities that want it and depresses performance in communities that don’t, the statewide average washes out to roughly nothing - which is just what the evaluation found. But opponents are equally shortsighted, because it’s quite likely that teachers and district administrators who share a belief in TEEM’s efficacy will use it to the benefit of student performance.
There’s nothing overly complicated about this logic.
If you really believe in a diet program and find it fits your lifestyle, you are more likely to use it, and so lose weight. Maybe there’s a plan out there that would let you lose even more weight, but if you don’t like it you won’t use it. And if you don’t use it, you won’t lose weight.
School improvement is no different. The products and services are not pills; they are programs. If teachers don’t like them, or if administrators won’t provide the support, their benefits are purely theoretical. Providers who want to demonstrate high levels of effectiveness should not be eagerly accepting clients who will merely impose their programs on teaching staffs. District administrators who think they can obtain advertised results by imposing a program on teachers are fools. Teachers who don’t protest the imposition of programs they will not implement faithfully are setting themselves up for failure.
It would be nice if more research focused on this problem, because it lies at the core of program efficacy.