‘Scientifically Based Practice’
It’s About More Than Improving the Quality of Research
The pressure is on. As a nation, we are asking teachers and administrators to bring all students to high standards of achievement, and we are holding them accountable. By raising the stakes for demonstrating better student outcomes, we have created a desperate need for information on how to achieve these challenging new goals. Everyone seems to agree that it is time for education researchers to deliver the kind of systematic knowledge that policymakers and practitioners need to do the job the nation is asking of them.
Nowhere has faith in the value of research for informing policy and practice been more forcefully expressed than in the nation’s capital. The U.S. Department of Education’s recent strategic plan claims that “we will change education to make it an evidence-based field.” Indeed, “scientifically based practice” has become the constant refrain of the Bush administration.
But the administration is also recommending significant changes in the way education researchers do business. According to the Institute of Education Sciences’ director, Grover J. “Russ” Whitehurst, the focus of research should be on identifying effective teaching practices. Borrowing from the field of medicine, the federal government has also put its faith, and its money, in a particular methodology—randomized field trials. This methodology is considered to be more rigorous than any other used in education research, and it allows causal conclusions that no other method can boast.
Also concerned with the quality and reputation of education research, the National Research Council Committee on Scientific Principles in Education Research offers a somewhat different set of recommendations. The committee suggests that the fit between the method and the questions being asked is more important than the particular method. Its recommendations focus primarily on the culture of education research—the need to foster a greater commitment to objectivity, high standards of scientific inquiry, replication, and the free flow of constructive critique.
Yet a third set of recommendations is well articulated in two documents—one issued by the National Academy of Education in 1999 (Recommendations Regarding Research Priorities: An Advisory Report to the National Education Research Policy and Priorities Board), and another by the National Research Council (Strategic Education Research Partnership, SERP). These reports promote, as the administration does, research that focuses on the problems of practice. Their recommendations differ from the administration’s strategy in several important ways, however. First, they encourage research in what Donald Stokes, in his 1997 book, calls Pasteur’s Quadrant—research on practical problems that develops, at the same time, general principles that can guide future research and practice. The reports suggest particular qualities of research that they claim will be more useful for improving education practice.
They recommend, for example, research that is embedded in practice and that involves collaborations between researchers and practitioners. Unlike the traditional linear model of “research-into-practice,” their view of productive research and development involves moving back and forth between research and practice. Innovations are developed by researchers collaborating with practitioners. They are tried out in classrooms, refined or developed by practitioners in their schools and classrooms, and then systematically studied by researchers. The link between research and practice is assumed to be complex, reciprocal, and dynamic.
Thus we have three well-developed proposals for how educational researchers can get their act together and then deliver. All three have merit, and they are not mutually exclusive, except inasmuch as time, resources, and talent are limited.
The culture of research organizations, especially universities, has not been particularly supportive of collaborative research that focuses on practical issues. But let us suppose, optimistically, that we are able to effect the needed changes in research contexts and make progress on all of the recommendations: We increase the number of randomized field trials that produce evidence for the value of particular instructional approaches; we increase the commitment and culture of rigorous scientific methods among education researchers; and we develop sustained collaborations between researchers and practitioners in which effective teaching strategies are developed, tested, refined, and disseminated.
We are still only halfway to scientifically based practice. There is more to do.
First, research findings must be made more accessible. Most research evidence is published in places and forms that only other researchers visit and can comprehend. The Bush administration’s effort to give policymakers and practitioners easy access to research findings through its What Works Clearinghouse is a laudable beginning.
We also need to create an appetite for research findings. Practitioners’ decisions are based primarily on their own intuitions and experience and occasionally on advice from colleagues, principals, or workshop leaders. The idea of basing decisions on research findings or even data collected at the local level is not part of the culture of teaching. New technology and the push for data-based decisionmaking and evidence-based practice are beginning to change the situation, but basing decisions on research and data is a new concept. Both the desire to consult research and the skills to interpret it will need to be developed within the teaching community.
We might expect the demand for and use of education research to rise if the quality and clarity of findings improve significantly. This occurred to some degree in medicine. But even in medicine, the path from findings to local use is indirect, often slow, and sometimes nonexistent. Education presents more serious obstacles to the implementation of research findings because the implications for practice are rarely straightforward.
We will also need to change the organization of teachers’ work to make it possible for them to learn new, effective practices. Evidence-based teaching involves more than prescribing the right pill. Research findings can never be specific enough to guide all of the myriad decisions that teachers need to make, moment by moment, in their own classrooms with their own students.
As a consequence, teachers need to have a deep understanding of the innovative methods and programs they are asked to implement. This requires far more time out of the classroom than they have available during the workday, and more training and support than most schools are organized to provide. Without these, however, the instruction that is actually implemented may bear little resemblance to the instruction that research demonstrated as effective.
Productive use of research findings at the policy level also requires many judgment calls. A policy found to be effective in one context is not necessarily effective in another, and there are often many details related to the original conditions of the research that need to be attended to when applying findings in new contexts.
Consider the example of class-size reduction in California. A large, random-assignment study in Tennessee demonstrating the benefits of reducing class sizes to about 15 students was used to support a policy of reducing class size to 20 in California. But unlike in Tennessee, where trained teachers were in good supply, in California there was a serious teacher shortage. Because crucial variables related to the context of the study were ignored, the implementation of this very costly policy in California may have done more harm than good, at least for children in the low-income communities that could not compete for the limited supply of trained and experienced teachers.
Another example is a random-assignment study of the High/Scope preschool intervention in Ypsilanti, Mich., cited repeatedly as support for preschool education. True, the study has demonstrated impressive and long-term effects of a preschool experience, but the devil is in the details. Many of the preschool programs that were spawned by this compelling research evidence look nothing like the Ypsilanti program. It is very likely that many of the preschool programs based on this research do not confer anything close to the same advantages seen in the original High/Scope program.
These examples illustrate the complexity of making evidence-based policy decisions. Researchers will need to make sure that they communicate clearly what contextual variables and details of the intervention or program are necessary to achieve positive results. And policymakers will need either training or assistance to make judgments about the implications of research findings for their local context.
It is also important to consider that evidence-based education practices will not be implemented broadly without cooperation from the private sector. In the field of medicine, pharmaceutical companies use a substantial portion of their profits to develop and study more effective strategies to prevent or cure illness. The motive is profit, to be sure, but the rigor of the research is monitored, and an elaborate federal bureaucracy exists to constrain dissemination of products that have not met high standards of evidence for effectiveness and safety.
The situation is quite different in education. Although educational practices are hugely influenced by products developed in the private sector, objective evidence on the effects of these products on student learning is rare. Until recently, there have been no incentives for carefully designed studies because buyers haven’t asked for evidence, and no outside agency has monitored the quality or even the existence of evidence.
There are signs that this situation may change as a consequence of the Bush administration’s policy of limiting funding (for example, in the Reading First initiative) to instructional programs that are research-based. The potential value of such a policy is clearly evident. Companies that produce educational products are beginning to figure out how to do credible research that will demonstrate the positive effects of their products on student learning. But we have a long way to go to develop mechanisms and organizational structures that will ensure critical and fair reviews of the evidence offered.
Finally, when evidence, however rigorous, is pitted against politics, politics always wins. Student retention is a good example of an issue on which evidence is consistently ignored. The lack of evidence for positive effects of retaining children in their current grade when they fail to meet minimum standards appears not to have stemmed the trend of "no social promotion" policies. More rigorous, clearer, and more consistent findings may help, but policymakers will need to be willing to give more weight to research findings than they now do if evidence is to have an impact on practice.
The bottom line is that education researchers, like educational practitioners, are being asked to approach their work differently from how they did in the past. We are being challenged to impose high standards of scientific rigor on ourselves, to focus on problems of practice, and to develop sustained collaborations with practitioners. If the resources needed to do this kind of research become available (they currently are not), we should be able to live up to the challenge.
But until many other institutional changes occur, and the organizational structures to support evidence-based practice are developed, research findings, however clear and useful, will have a feather’s weight on teaching and student learning in the nation’s schools.
We do need to improve the quality and relevance of education research, but that’s not all we need to do.
Vol. 24, Issue 28, Pages 33, 44