Some readers challenged me last week on an intriguing question: Why did I react so differently to the underwhelming findings on the performance of Milwaukee voucher students and to the Ravenswood City school board’s effort to shutter the Stanford Graduate School of Education’s charter school for mediocre performance? As you may recall, I casually brushed off the Milwaukee voucher results as telling us nothing important but suggested that the performance of Stanford New School raises real questions about the “expertise” of Stanford’s big-name pedagogues.
A few eagle-eyed commenters asked if this isn’t a case of double standards or even blatant hypocrisy (see, for instance, comments by plthomas and Kronosaurus). As plthomas argued, “I think the double-standard you apply, as noted in one of the comments above, is a clear indication that you have an agenda clouding your commentary . . . Forgive choice no matter what the evidence shows . . . the standard is moving always because it is an ideology, not a conclusion drawn from evidence.” First, let’s be clear: of course my commentary is always and absolutely motivated (or “clouded”) by my view of the world (which can be deemed an “agenda”).
But second, this is a terrifically instructive point, and I’m glad it has come up. Short answer: Nope, I don’t think my stance is inconsistent or the least bit hypocritical. I think the notion that I was playing it fast and loose is due to a broader tendency of both advocates and critics of structural reforms like choice and accountability to try to prove that they “work” (or don’t work) in ways that are politically useful but are also ill-advised and ultimately unproductive.
Longer answer: Understood rightly, structural changes, like the Milwaukee voucher program, are about creating opportunities for students to be better served. They are not prescriptions for improving teachers or learning. Choice-based reform, properly understood, proceeds from the assumption that it’s one way to help align incentives and opportunities so that quality schooling and learning will become more likely. It’s not prescriptive about what those schools or that learning should look like (e.g., it doesn’t say exactly what is to be done with autonomy and opportunity) and it is tolerant of any number of instructional models and pedagogical approaches. When increased autonomy and opportunities are used well, it doesn’t prove that school choice “works”--it just shows that it’s possible to organize systems to encourage excellence.
And when autonomy and opportunity are used poorly, or when choice is designed in a fashion that doesn’t foster quality, it doesn’t mean that choice-based reform “doesn’t work.” Rather, it suggests to me that it may well have been poorly configured, ill-designed, ineptly supported, accompanied by unfortunate inattention to the larger ecosystem, or launched in an environment devoid of effective entrepreneurs. And, it’s entirely fair, if the results of structural reforms consistently disappoint across a variety of contexts and designs, to conclude that those measures aren’t worth pursuing. Of course, I’d argue that the takeaway is actually the opposite--that we’ve seen in places like New York City, Houston, and New Orleans that sensibly cultivated choice ecosystems hold great promise.
On the other hand, the case of Stanford New School involves practitioners who believe they have devised pedagogies, instructional approaches, and classroom management techniques that “work.” When given the chance to run a school and employ their favored approaches, it’s fair to ask whether--at least in those controlled circumstances--their methods work as advertised. If they do, it doesn’t mean they’ll work at scale, but at least it’s something. If, however, they are given a laboratory setting where they have enormous expertise at their disposal, get to select their teachers and instructional materials, and determine instructional priorities and classroom management strategies--and the results dramatically disappoint--it raises legitimate questions about whether the strategies are all they’re cracked up to be. This isn’t a question of “going to scale.” It’s not about whether or not it’s reasonable to expect that Stanford’s prescriptions governing instruction, curriculum, teacher training, and the rest can actually work as intended in tens of thousands of schools or thousands of districts across the land. Rather, this is a case where the experts’ preferred practices didn’t even work as intended under carefully controlled circumstances.
When it comes to choice-based reform, I’m simply arguing that opening the system to new kinds of problem-solvers, creating more autonomy and flexibility, and doing so in a quality-conscious manner is likely to facilitate improvement. I’m not promising it will do so and I’m not claiming that evidence proves it will; I’m just arguing this is the way that sensible parents, voters, and policymakers should bet. And, if those broad structural adjustments aren’t delivering, I’m inclined to suspect it’s because we haven’t gotten them right rather than because somehow K-12 schooling alone among human endeavors is the one place where allowing industrial-era bureaucracy to stifle human ingenuity, talent, and creativity yields optimal outcomes.
The difference in the Stanford case is that many of the Stanford faculty involved in Stanford New School do in fact argue they know the “right” way to manage classrooms or organize instruction. If they claim to know what works, and travel the nation prescribing particular practices and models to states, districts, and schools while pocketing hefty fees for their expertise, then their ability to produce impressive outcomes at their proving ground matters a good deal. (And, to be fair to choice skeptics, it’s absolutely true that many choice enthusiasts have tried to claim that this or that study “proved” the efficacy of choice-based reform when it suited them. So, I totally get the temptation to insist on treating choice as one more intervention--I just think it’s a mistake to do so.)
Anyway, that’s how I see it. I’d welcome thoughts and comments, as this is a tricky area and one where I have long thought the lines have been fouled by an inability or unwillingness of advocates and critics alike to make their peace with the innate ambiguity of structural reforms. I am explicitly suggesting that the standards of evidence for structural alterations are necessarily vague and murky, and that research on them is useful mostly for its ability to improve program design and shape public debate rather than for its ability to prove what “works” or what doesn’t. (For more on this, see this 2008 Education Week commentary that I penned with Jeff Henig.)
I’m well aware that my stance can be enormously frustrating to folks on both sides--to structural reformers who believe I’m short-changing empirical evidence on merit pay or school choice and to critics who think I’m apologizing for lackluster results. So, I’m happy to stick with this topic, address feedback, and talk it through more fully, if readers desire.
The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.