“We learned from correlational research that students who speak Latin do better in school. So this year we’re teaching everything in Latin.”
The oldest joke in academia goes like this. A professor is shown the results of an impressive experiment. “That may work in practice,” she says, “but how will it work in the laboratory?”
For practitioners trying to make sense of the findings of educational research, this is no laughing matter. They are often left to figure out whether or not there is meaningful evidence supporting a given practice or policy. Yet all too often academics report findings from experiments that are too brief, too small, or too artificial to be a reliable basis for educational decisions.
Looking at the original articles, this problem is easy to see. Would you use or recommend a classroom management approach that has been successfully evaluated in a one-hour experiment? Or one evaluated with only 20 students? Or evaluated in a situation in which teachers in the experimental group had graduate students helping them in class every day?
The problem comes when busy educators or researchers rely on reviews of research. The reviews may make sweeping statements about the effects of various practices based on very brief, small, or artificial experiments, yet a lot of detective work may be necessary to find this out. Years ago, I was re-analyzing a review of research on class size and found one study with a far larger effect than all others. After much sleuthing I found out why: It was a study of tennis instruction, where students in larger tennis groups got a lot less court time.
So what should a reader do? Some reviews, including Social Programs that Work, Blueprints for Violence Prevention, and our own Best Evidence Encyclopedia, take sample size, duration, and artificiality into account. Otherwise, if you want to know for sure, you’ll have to put on your own deerstalker and do your own detective work, verifying that the essential experiments took place in real schools, over real periods of time, under realistic conditions. Evidence-based reform in education won’t really take hold until readers can consistently find reliable, easily interpretable, and unbiased information on practical programs and practices.
In case you missed last week’s first part in the series, check it out here: Bad Science I: Bad Measures
Illustration: Slavin, R.E. (2007). Educational research in the age of accountability. Boston: Allyn & Bacon. Reprinted with permission of the author.
The opinions expressed in Sputnik are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.