Jason Cascarino is Manager of Program Investments at The Chicago Public Education Fund.
On April 24, 2008, heteroskedasticity became my all-time favorite word. Somehow, I managed to navigate four years of college without ever taking statistics (the Jesuits let me down). So it took a keynote lecture by Jeffrey Wooldridge, a preeminent econometrician from Michigan State University, to expose me to this magnificent term.
The venue was the first National Conference on Value-Added Modeling, something of the latest craze in education research and evaluation, although the method has been used in certain places for some time. My colleague and I from the policy and funding community were decidedly out of place amid the econometricians, psychometricians, research scientists and other over-achieving left-brained folk who made up the balance of the participants. Not the least of the otherworldliness we encountered was the labyrinth of complex mathematical equations peppered with characters from a Greek alphabet I struggled to remember, to say nothing of the alien language of “random and fixed effects,” “data shrinkage,” and, my new all-time favorite, “heteroskedasticity.” It was a bit scary. And that scariness is what I want to touch upon here.
Value-added modeling (VAM) can be quite powerful, and many contend that it holds great promise. No Child Left Behind, with its emphasis on proficiency, has relegated us to an accountability system that tells us rather little about how much progress our students are making. State standardized exams almost universally tell us only whether students are performing at proficiency levels or not, and whether more or fewer of them are doing so this year than last.
Meanwhile, value-added models provide a good deal more nuance and depth on these achievement questions. In the simplest terms I can articulate, they seek to predict each individual child’s yearly learning gains based on his or her own past performance and then determine whether a teacher, or a school as a whole, helped that child meet or exceed the expected gain – that difference is the value added.
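To make that idea concrete, here is a minimal sketch of one simple version of the calculation: a covariate-adjustment model that predicts this year’s score from last year’s and then averages the leftover gains by teacher. The data, column names, and the single prior-score predictor are illustrative assumptions on my part; the models discussed at the conference are far more elaborate, with the random effects, shrinkage, and heteroskedasticity corrections that scared me in the first place.

```python
# A minimal, hypothetical sketch of a simple value-added calculation.
# Real VAM specifications are far richer; this is only the core idea.
import numpy as np
import pandas as pd

# Hypothetical student records: prior-year score, current score, teacher.
df = pd.DataFrame({
    "prior_score":   [210, 225, 198, 240, 215, 202, 233, 219],
    "current_score": [222, 240, 205, 251, 230, 206, 246, 228],
    "teacher":       ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Predict each child's current score from his or her own past performance
# with a simple linear fit.
slope, intercept = np.polyfit(df["prior_score"], df["current_score"], 1)
df["expected"] = intercept + slope * df["prior_score"]

# The residual is how far the child exceeded (or fell short of) the
# expected gain.
df["residual"] = df["current_score"] - df["expected"]

# Averaging residuals by teacher gives a crude value-added estimate.
print(df.groupby("teacher")["residual"].mean())
```

A positive average residual for a teacher would be read, very roughly, as that teacher’s students gaining more than their past performance predicted.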
The main focus of the conference – reported in Education Week on May 7 (“Scrutiny Heightens for ‘Value-Added’ Research Methods”) – was to “get under the hood [of value-added] and have a look around,” as was explained to me by Adam Gamoran, professor of sociology and educational policy studies at the University of Wisconsin-Madison and director of the Wisconsin Center for Education Research, one of the event organizers. That technical focus was important because the technical merits ultimately have policy implications.
What I took away from conference-goers was a general consensus that value-added models have great advantages over typical “status” methods (did students meet or exceed a standard?), but that the math and the methods still have some way to go before more universal application. Most notably, scholars were less comfortable using VAM for teacher pay-for-performance systems because of the difficulty of isolating teacher effects on learning. They were more sanguine about using VAM for school-level accountability.
This is all to bring me back to VAM being a bit scary, less so for me, with my curiosity and education policy focus, and more so for the educators “on the ground,” the teachers and principals who are ultimately responsible for the student learning we are trying to better understand with VAM. Many educators are downright scared of data as it is, in part because they often don’t understand what they are looking at or how to analyze and use it, and in part because they see it used against them rather than to their benefit.
It is this latter point that strikes me as the major deficit of VAM. That is, even if we were confident that the models, after controlling for all other possible effects, could tell us that teacher A is achieving better outcomes for her students than teacher B is, VAM still doesn’t tell us why this is happening or how teacher B can improve. And so VAM, despite its great advantages over the basic meets-or-exceeds metric, will remain just another scary data tool for educators, which makes buy-in, and ultimately universal utility, less likely.
To overcome this, two seemingly opposing things need to happen, and both were implied at the VAM conference. First, the models have to get better, specifically better at isolating teacher effects on learning, so that VAM can be of some use at the classroom level and in identifying and rewarding good teaching. This would seem to suggest the models need to become more complex and sophisticated. Second, VAM must become more accessible, so that on-the-ground educators are less scared of using it to improve their practice. This, by contrast, would suggest the models need to become less complex and sophisticated.
I’m reasonably confident that researchers will figure out new and creative ways to better the models. I’m less sure educators will ever be able to fully get over their fears of data – more and more complex data – particularly in this era of high-stakes accountability. We’d have to get teachers to make heteroskedasticity their favorite word too.