The Research Blues
Why does education research have so little impact on policy—and even less on practice?
The printed program for the four-day meeting of the American Educational Research Association in April was the size of a telephone book—369 pages of mostly small print. Of the association's 23,000 members, more than 10,000 were in New Orleans participating in some 1,600 overlapping sessions held in three hotels.
A few months later, more than 100 distinguished researchers, policymakers, and educators met in Chicago to discuss needs and opportunities in education research. The conference was held in recognition of the 30th anniversary of the Spencer Foundation, which awards grants, almost exclusively, for education research.
By the time this column is published, the U.S. Congress will probably have voted on a bill to overhaul the Office of Educational Research and Improvement, which has an annual budget approaching half a billion dollars.
Given the scope of the education research enterprise, one might ask why it has so little impact on policy—and even less on practice. An obvious answer is that much of the research merely satisfies academe's foolish and anachronistic insistence that faculty members "publish or perish."
Perhaps a more serious reason for the limited influence of education research is the lack of an infrastructure linking producers of research to potential users. In medicine, when research leads to new drugs, treatments, or theories, the findings are carefully vetted, then published in highly respected publications, such as the New England Journal of Medicine, which practitioners rely on and trust. Physicians constantly receive brochures from pharmaceutical companies and medical suppliers. And as part of continuing-education requirements, doctors attend conferences where they hear about significant new findings. Medical schools also tend to include such findings in their curricula and clinical teaching.
Virtually none of this structure or process exists with regard to education research. As a consequence, few practitioners ever hear about, let alone use, the significant findings that address virtually every problem facing education.
But the most important reason research has little influence is that traditional public schools are exceedingly hostile to new ideas and constructive change. Neither teachers nor administrators are regular consumers of research, but even if they were, and even if outstanding research journals in education did exist, it wouldn't be enough. To put significant research into practice, you must first change the culture of a traditional school—the curriculum, schedule, structure, union contract, and allocation of resources. Reformers have been trying to do this for a couple of decades, without much success.
The recently passed No Child Left Behind Act raises the stakes. It requires states and districts to use "scientifically based research" as a basis for any education practice supported by federal funds. This may turn out to be a weak standard to rely on and certainly a confusing one for practitioners and policymakers. Most "scientifically based research" findings in education contradict one another. Does money matter? Do vouchers improve student test scores? Do kids learn better in small classes? Should failing students be held back? Yes or no, depending on which study you read.
Because researchers analyze the same problems but often come up with opposing results, politicians and practitioners are able to use findings to buttress their own prejudices or to avoid taking a stand. Perhaps one reason it's so hard to change the status quo is that every proposed improvement is shot down by one study or another.
Or, maybe worse, policymakers and educators may seize upon researchers' work and rush to apply it without careful thought or preparation. Motivated by strong evidence that kids learn better in smaller classes, California enacted legislation forcing a reduction in class size for kindergarten through 3rd grade. In the process, the state compounded an existing teacher shortage, made inequities among schools worse, and aggravated serious financial problems.
Finally, test scores are the standard by which most education research on student learning and school performance is evaluated. Relying so heavily on such a narrow and flawed measure in itself calls many research studies into question.
—Ronald A. Wolk
Vol. 14, Issue 1, Page 4