
No Foolproof Measures of Success

By Deborah Meier — October 11, 2007

Dear Diane,

Did you read The New York Times Magazine piece called “Do We Really Know What Makes Us Healthy?” by Gary Taubes? Or the follow-up on Oct. 9 in the Times Science Section by John Tierney?

Medicine (and nutrition) has all the odds in its favor compared with education when it comes to being “scientific.” There’s a lot less disagreement about what constitutes good health, for one thing. Politics—in the best and worst senses—is less intimately tied to medicine. It’s easier to have placebos and random samples. And it’s easier to track patients for long enough to assess side effects.

John Tierney takes up the “consensus” claims about low-fat diets and the nutrition end of Taubes’ work. He describes how easy it is for an interesting hypothesis (high fat is bad for the heart), with the right “political” support, to become a consensus—and thus how fads persist in medicine and nutrition. Ditto for schooling, Diane. And—more another time—the “consensus” around the current “science” of reading is a case in point. At least in medicine it doesn’t get worked into federal law!

Author Taubes concludes that “we end up having to fall back on the following guidelines when it comes to scientific research about medicine.” (What follows is my summary.) (1) Look for all other possible explanations for the data. (2) Assume the first reported association is incorrect or meaningless—be skeptical when it first hits the news. (3) If the correlation appears in many studies and populations but is small (in the range of tens of percent), continue to doubt it. (4) If the correlation involves some aspect of human behavior, question its validity. (5) “The best advice is to keep in mind the law of unintended consequences.” There’s too much that either can’t be measured or in which the measurement itself is subjective—even if it can be coded. The principal investigator of the famous large-scale Nurses’ Health Study concluded: “I’m back to the place where I doubt everything.”

I say all this, Diane, because that’s where I end up, too—and I actually did not begin there—about almost all school-related data. Because above and beyond all the reasons Taubes gives above, there is inevitably more bias—meaning individual values—involved in education research, and more political pressure on schools to comply. Teachers and principals, as you noted, don’t have the freedom professors do. In part it’s also because I have never met two kids who responded the same way to anything—even if on a coded response sheet it might look as though they do.

Yet one cannot fall back on nihilism; so one reaches some conclusions—makes one’s best guess (judgment) and leaves the door open for those with other conclusions! It’s still a one-on-one kind of diagnosis. One even encourages, as in medicine, continued research of less likely minority views.

It’s not all that much different than what we have to do in any field which is bigger than our own anecdotal evidence. Like Iraq, or human-induced global warming, etc. We simply do not and cannot wait for certainty. (So, as laymen, we go along with the experts we most trust.)

But it’s also the reason I keep the door open to the idea of being wrong even as I act as vigorously and persuasively as I can on the assumption I’m right! That’s why I like kids to hear knowledgeable and expert adults disagreeing. My friend Brenda tells me they have a rule at home: we never argue about something that can be settled by looking it up (Googling these days). I think that’s a good rule, although that assumes we can know for sure when it’s “that kind” of argument, and besides there are some advantages in kicking a fact around for a while before looking it up.

Democracy rests on disagreements—if there weren’t any we wouldn’t need it. If there were mostly wrong and right answers to life’s dilemmas, we’d just be able to choose our rulers by a standardized test.

But there are no such tests, nor probably any such foolproof graduation or attendance data, or measures of what we would all even agree is “success.” But that doesn’t make trying to figure out how best to assess this or that less useful. We just need a much healthier sense of tentativeness about our assessments—an open mind.

As with the baby-sitter I described in my last letter, this is in part why I’m for as FEW, not as many, “no-no’s” as we can get away with. Let’s not “make a law against it” unless pressed to the wall. But let’s collect all kinds of data and expose all those interested to a good debate about the data, leading in turn to further data, and more debate. We don’t always need to end it with a vote—or a rule.

I recently attended a Mission Hill Board meeting (what an amazing event it is). On the agenda was a discussion of what kind of data we wanted to collect about the kids who graduate from our school. What kind we collect will, after all, affect the kind of answers available to us. A former teacher who is writing his doctoral dissertation (in Wisconsin) has agreed to sort through the last 13 years of data. That could be the assigned service task for every doctoral student!


The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.