Earlier this month, a team of researchers at MIT and Harvard released a report contrasting the impact of charter schools, “pilot” schools, and traditional public schools on student achievement. The finding of positive charter school effects on achievement, based on a random-assignment research design, fueled the rhetoric of charter school advocates, some of whom saw the results as a license for unlimited expansion of charter schools.
The researchers themselves were more cautious. They acknowledged that the study was not designed to discern why the effects were found. In fact, if the study had found that students in charter schools had shown less growth in achievement than students in traditional public schools, they wouldn’t have known why either.
Good public policy depends on compelling answers to “why” questions about both the observed effects and non-effects of policies and programs. And these “why” questions pertain both to the inner workings of policies and programs and to the context in which those policies and programs are situated. Borrowing policies that have been found to be effective in one setting and expecting the same results in another setting makes sense only if we know why the policies were effective in that first setting. A research study showing that a policy or program “worked” in a particular setting doesn’t tell us that.
Our wish, then, is that “why?” be asked more loudly, and earlier in the life cycle of a policy or program. Why might achievement be higher in charter schools? Why do children learn more in smaller classes? Why are some teachers more successful in teaching low-achieving students than high-achieving students? Why don’t school expenditures have a stronger association with student outcomes? In skoolboy’s view, the real leverage in education policy comes from good answers to the “why?” questions. To paraphrase Jim March, research that addresses “why?” questions is more useful than research that addresses “what works?” questions because it has so many more applications.
One challenge posed by our wish is that the researchers who are skilled at addressing “what works?” questions are not necessarily the ones who are good at addressing “why?” questions. Even in large federal evaluations, there typically is a division of labor in which the study of implementation and context is segregated from the study of program impacts, with different research organizations or researchers responsible for different parts of the overall enterprise. Asking “why?” more often will require some hard thinking about research training and the infrastructure for education research in the U.S.
The opinions expressed in eduwonkette are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.