We’ve taken a bit of a hiatus from the blog lately because of our upcoming conference, The Urban Education Future? Lessons from New Orleans 10 Years After Hurricane Katrina. The event will be live-streamed, and we invite our readers to participate online. We have a great line-up of speakers--local and national, practitioners and researchers, and supporters and opponents of the reforms. Among the national figures participating in the event are (alphabetically): Rick Hess, Senator Mary Landrieu, Pedro Noguera, Kira Orange-Jones, Randi Weingarten, and John White. But most of the speakers are local--we want to hear about the New Orleans reforms from those who have experienced and participated in them firsthand. You can learn more here.
Organizing the conference, along with our other local work informing the New Orleans debate, has raised a lot of interesting questions about how to have productive school reform debates. When we first started inviting speakers, we explained that we wanted to have reform supporters and opponents on the same panels. Many were uncomfortable with this because New Orleans is such a contentious environment. They thought it would turn into one of those TV talk show free-for-alls.
I can see why they might think that, since much of the communication that comes out of New Orleans is one-sided--for or against. I go to events here hosted by supporters and opponents of the reforms, but I don’t see much overlap in who attends them. And partly for this reason, the conversation is sometimes a bit over the top. It’s easy to get carried away when you are in a room full of people who already agree with you.
Through this conference and our other activities, we are trying to inform the debate and build bridges among those who disagree with one another. Our thinking is that rooting the discussion in evidence and facts will make for a better conversation.
But we have run into challenges here. It has been clear from the response to our first two published reports that some in the local community want us to simply report numbers. But there are two problems with that. First, the types of numbers we produce are often not so simple--we aren’t just describing things, but trying to understand what causes what and why it matters. For our results to have any meaning, we have to interpret them.
More precisely, our goal is to be objective and present the various reasonable interpretations of the results. What’s “reasonable”? Here, we rely on theory and prior evidence to the extent possible. In science, we are never 100% sure of anything. So, we also try to communicate how sure we are about particular conclusions.
Second, interpretation sometimes requires value judgments, and in those cases we try to be clear about what those judgments are. The most common example for us is the degree to which we should be concerned about how well students are doing on standardized tests. This is partly a matter of research (we know that students who do better on these tests also do better on longer-term outcomes that almost everyone cares about), but it is also partly a value judgment about the goals of public education. As Larry Cuban and others have ably written, people define “good schools” in different ways.
The problem is that when we pose various interpretations and discuss value judgments, some people get the impression that we aren’t really just providing evidence anymore. In reality, making the research useful requires us to interpret it. Imagine a doctor sending you your cholesterol results without telling you what is considered a healthy range--not very useful.
To thread this needle, we try to be explicit about what is speculation and where value judgments come into play. For example, in our first report, when we found that low-income families seemed to give less weight to the school grade when choosing schools, we indicated that this could mean low-income families are less interested in academics, or that their incomes and neighborhoods constrain them to choose schools with lower scores, among other possibilities. Combining our findings with prior evidence, we concluded it was probably some combination of the two. We gave what we thought was the most reasonable interpretation, communicated our level of uncertainty, and described our value judgment that a key goal of public education is to equalize opportunity.
Again, some did not like that we tried to interpret the evidence. “Wait, I thought you were focused on hard evidence,” was a response I heard several times. This is no easy problem to solve. We could just say: “The reforms increase student achievement by X%. The End.” This, however, would not be very useful and would simply invite misinterpretation.
Others would prefer that we offer simpler conclusions. By presenting the various interpretations, we are seen as “wishy-washy.” Unfortunately, the scientific world--especially in education research--is often a wishy-washy kind of place. The lessons from evidence are often not straightforward. Of course, policymakers and practitioners ultimately need to make decisions, and they will have to make their own calls about how to interpret the evidence. Our role is to produce the evidence--and the reasonable alternative interpretations--to inform those decisions.
This brings me back to the conference. I am very appreciative of the 105+ speakers who have agreed to participate, and the 280+ people who seem interested in attending, listening, and participating. They cover a wide range of ideologies, backgrounds, and roles. From school principals to nationally renowned scholars, their participation says that they want to have a reasoned, evidence-based conversation (which is good, because we’ll be discussing new evidence from more than 15 studies). The level of interest gives me hope that evidence can matter, and that it can improve public discourse.
We’ll never all agree on education policy, but we should be able to have conversations that allow us to find and act on the common ground that does exist.