If you read media reports about education, a lot of the stories you see make extraordinary claims about remarkable, heart-warming turnarounds in student achievement, which are often debunked some time later. This cycle of enthusiasm-debunking-disappointment gets us nowhere in improving outcomes for kids.

Genuine miracles--dramatic turnarounds in formerly low-achieving schools--are just as likely in education as they are in any other field. That is, not very likely at all. In fact, most miracles in education turn out on inspection to be due to a change in the students served (as when a new charter or magnet school attracts higher performing students) or to changes in demographics (as when school catchment areas are gentrifying). Apparent miracles may also be due to changes in tests (as when an entire state gains in one year after switching to an easier test), or to other redefinitions of outcomes (as when districts lower their standards for high school graduation and graduation rates rise). All too often, "miracles" never happened at all: "turned around" schools go on to deliver poor scores or graduation rates, large one-year gains reverse the following year, or schools improve on one measure while all other indicators remain poor.
When data on individual schools do in fact show dramatic improvement that cannot be explained by demographic or test changes, there remains a question of whether the so-called "miracle" could be replicated anywhere else. Were the gains due to an extraordinary level of funding? An extraordinary principal? Other unusual, never-to-be-repeated conditions?
When false “miracles” are reported and believed, they condition the public and policymakers to expect dramatic outcomes that cannot possibly be produced at scale. They invite debunking, which distracts from the real conversation and undermines faith in the entire reform process.
The antidote to false miracles is good research. Consider studies that follow many schools using a particular program for a year or more and compare them to similar schools continuing business as usual, especially studies in which schools are assigned at random to program or control groups. The outcomes of such studies are far more likely to be believable, and the programs they evaluate stand a good chance of being disseminated successfully to other schools. The gains reported in high-quality studies are usually much more modest than those claimed for high-profile "miracles," but they represent a more realistic picture of what could be achieved more broadly.
Newspapers hate to report on actual research in education because they consider it complicated and boring. Yet government is increasingly pointing to high-quality research, and the public may be receptive to hearing about exciting new developments and appealing examples of schools using proven programs.
If you want miracles, go to Lourdes, but if you want better schools for America’s children, ask for the evidence. There really is solid evidence for a wide variety of proven, replicable programs, but this evidence is routinely ignored while attention is focused on the making and debunking of implausible claims. This focus on “miracles” adds heat but not light to educational debates.
The opinions expressed in Sputnik are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.