Cost and Effect
Do Some Research Experiments Need to Be Done?
To the Editor:
Experiments are powerful analytical tools. They can provide cause-and-effect conclusions more certainly than other approaches. With respect to educational evaluations, however, they are also very costly tools.
Costly tools should be used when there is something that needs to be found out, rather than to add a little bit of confirmation about something already understood.
Your reporting of the second-year results from a true experiment on the Success for All program ("Long-Awaited Study Shows ‘Success for All’ Gains," May 11, 2005) raised for me the question of whether we needed such an experiment to decide if that program produces small, positive effects on beginning reading. There is a lot of existing evaluation data on Success for All, and a reasonable summary is that the approach often produces small, positive effects on beginning language arts, although not always.
While it is logically possible that all of the positive, nonexperimental evaluations of Success for All were somehow biased in the program’s favor, that seems unlikely. It would require an efficiency of corruption beyond even the most resourceful educational entrepreneur. As often as the correlation between Success for All and small, positive effects on language arts has been observed, a causal relationship is a pretty good bet. What third variable could possibly account for the repeated linkages between Success for All and beginning-reading achievement that have been observed? Since I can’t think of one, I have my doubts about whether a costly true experiment just to make certain there is a causal relationship made economic sense.
My doubts were heightened as I reflected on Herbert J. Walberg’s remark in your article discounting the true experimental result because the developers of Success for All were involved in the study. As a deeply experienced experimenter, as well as an experienced reader and reviewer of the experimental literature in education, I recognize that a very high proportion of experiments are done by folks who either have developed or have a great affinity for the intervention they are studying. Following Mr. Walberg’s logic, much of the experimental education literature that exists should be dismissed.
Is the solution to do another very expensive experimental evaluation of Success for All, this time conducted by some dispassionate third party? Believe me, if such a study were done, there would be serious doubts raised about it, too. In fact, I have never read a single experimental study about which some kind of doubt (often serious) could not be raised.
The only way to get to a firm conclusion about causality is for there to be many experiments by many parties in many settings—with each experiment having differing weaknesses in the eyes of various critics. Given that we already know the program’s use is associated with small, positive effects on beginning language arts, I could never justify the resources that would be required to prove conclusively the causal relationship between the program and the effects.
Rather than funding experiments on interventions that have been studied a lot, the federal government should spend the limited research dollars available on the development of new interventions, ones that have the potential, at least, to produce something better than small, positive effects. And if we get such studies, we’re going to have to have more faith than critics like Mr. Walberg, for first experiments often will be done by program developers, and one (or a very few) experiments may be all the experimentation we can afford.
It is time for the education research leadership to help the country develop an understanding that experiments are precious and rare resources that should be employed judiciously. The scientific leadership also needs to make clear to the country that although the results of individual experiments always should be used cautiously, they should be used even after a single experiment. A second experiment may never come—and the perfect experiment definitely will never come.
Vol. 24, Issue 38, Page 33