Readers who closely follow media coverage of education know how often studies are used as the centerpiece of reportage and commentary. There’s something about research that lends gravitas to whatever is written. But unless the fundamentals of research are understood, it’s easy to be misled into drawing false conclusions. I was reminded of this after reading “Analytical Trend Troubles Scientists” on May 4 and “Taking Ideas On a Test Drive” on May 7, both of which were published in The Wall Street Journal.
Most studies in education are observational studies, in which investigators pore over data previously collected by others, looking for correlations between variables. This approach is far less expensive than other methodologies because it is easier and faster, and with research budgets stretched thin, cost is a major consideration. The trouble is that observational studies are subject to biases that can make the results unreliable, and if the results can't be replicated by others, the conclusions lose credibility.
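To see what an observational analysis boils down to, here is a minimal sketch in Python. The data and variable names (tutoring hours and test scores) are entirely hypothetical; the point is simply that the researcher works with whatever was already recorded and measures an association.

```python
import numpy as np

# Hypothetical, previously collected records: tutoring hours and test scores
# for the same students. No one was randomly assigned to anything.
rng = np.random.default_rng(0)
tutoring_hours = rng.uniform(0, 20, size=200)
test_scores = 60 + 1.5 * tutoring_hours + rng.normal(0, 10, size=200)

# The analysis is essentially a correlation between two observed variables.
r = np.corrcoef(tutoring_hours, test_scores)[0, 1]
print(f"correlation between tutoring hours and scores: {r:.2f}")
```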
The gold standard in education, as in other fields, is the experimental study, often called a randomized controlled trial or randomized field trial. Students are randomly assigned to groups that differ only in the treatment: the group that receives the treatment is the experimental group; the group that does not is the control group. On the basis of the data collected, investigators draw inferences. The trouble is that experimental studies are costly and hard to implement because not all parents or schools are willing to let their children participate.
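A minimal sketch of the logic of random assignment, again with made-up numbers: because the two groups are formed by chance, any systematic difference in outcomes can be attributed to the treatment rather than to who happened to receive it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical roster of 200 students with baseline test scores.
baseline = rng.normal(70, 10, size=200)

# Random assignment: shuffle the roster and split it in half,
# so the two groups differ only by the treatment.
order = rng.permutation(200)
treatment_idx, control_idx = order[:100], order[100:]

# Assume (for illustration only) the intervention adds about 3 points.
outcome = baseline.copy()
outcome[treatment_idx] += 3 + rng.normal(0, 2, size=100)

# The difference in group means estimates the treatment effect.
effect = outcome[treatment_idx].mean() - outcome[control_idx].mean()
print(f"estimated treatment effect: {effect:.1f} points")
```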
Whichever type of study is used, however, I hasten to point out a common error: correlation is not causation. Just because two things are statistically associated (correlated) does not necessarily mean that one caused the other (causation). According to The Wall Street Journal, observational studies can be replicated only 20 percent of the time, compared with 80 percent for experimental studies. An example of the confusion appeared in a new book by Jonah Lehrer titled Imagine: How Creativity Works (Houghton Mifflin, 2012). The author cites a study finding that highly creative employees consulted more colleagues on their projects than less creative employees did. He erroneously concludes that office conversations cause creative production. In fact, productive people with many ideas may simply be more likely to chat than others (“Boggle the Mind,” The New York Times, May 13).
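The Lehrer example can be mimicked in a few lines. In this sketch a hidden trait (call it "idea productivity," a label I am inventing for illustration) drives both how much an employee chats and how creative the output is, so the two measures end up strongly correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(2)

# A hidden trait drives both behaviors (a confounder).
productivity = rng.normal(0, 1, size=500)
conversations = 5 + 2 * productivity + rng.normal(0, 1, size=500)
creativity = 50 + 10 * productivity + rng.normal(0, 5, size=500)

# Conversations and creativity are strongly correlated...
r = np.corrcoef(conversations, creativity)[0, 1]
print(f"correlation: {r:.2f}")
# ...even though neither causes the other; the hidden trait drives both.
```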
Another hurdle is that studies in the social sciences are viewed as less “scientific” than studies in the physical sciences (“Stop bullying the ‘soft’ sciences,” Los Angeles Times, Jul. 12). Consider the clash between John Witte and Paul Peterson over the effect vouchers had in Milwaukee, home of the nation’s oldest school voucher program, which began in 1990 (“Dueling Professors Have Milwaukee Dazed Over School Vouchers,” The Wall Street Journal, Oct. 11, 1996). The dust-up is now considered a classic because two respected professors examined the same data and reached opposite conclusions. The Milwaukee program has since been the subject of at least three annual evaluations. With the exception of Peterson’s, the evaluations have found no significant differences between the performance of students in voucher schools and students in regular public schools. But notice the terms used. Peterson argued that the voucher schools produced results that were “substantively significant,” a more ambiguous term than “statistically significant.” Statistical significance speaks only to whether a difference is likely due to chance; whether a difference is substantively significant, that is, large enough to matter in practice, is a judgment call.
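The distinction is easy to demonstrate with simulated numbers (the scores and sample sizes below are invented, not taken from the Milwaukee evaluations): with enough students, even a trivial difference clears the bar of statistical significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical test scores: voucher students average about 2 points higher
# on a roughly 500-point scale, a difference too small to matter educationally.
voucher = rng.normal(502, 50, size=20000)
public = rng.normal(500, 50, size=20000)

t, p = stats.ttest_ind(voucher, public)
diff = voucher.mean() - public.mean()
print(f"difference: {diff:.2f} points, p-value: {p:.5f}")
# The p-value is tiny, so the gap is "statistically significant"; whether a
# 2-point gap is substantively significant is a separate judgment entirely.
```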
I suppose the issue ultimately comes down to one’s willingness to accept evidence that does not reinforce one’s ideology. I’m not optimistic about this possibility because most people aren’t comfortable in the face of cognitive dissonance.