The phrase pops up over and over, mantra-like, in the new federal education law: “scientifically based research.”
Those words, or an approximation, appear more than 100 times in the reauthorization of the Elementary and Secondary Education Act, which requires practices based on research for everything from the provision of technical assistance to schools to the selection of anti-drug-abuse programs.
Reflected in that repetition is a desire by Congress and the Bush administration to base school improvement efforts less on intuition and experience and more on research-based evidence. That desire also mirrors other attempts in the field to set standards of quality for education research and to synthesize what is known, or identify successful programs and practices, based on those standards.
“There are a number of groups and individuals who have, for years, been interested in grounding education in a culture of evidence,” said Grover J. “Russ” Whitehurst, the assistant secretary for educational research and improvement in the U.S. Department of Education. “That’s always been their message. But there’s an opening here.”
At the same time, the revised ESEA, also known as the “No Child Left Behind” Act, inspires such questions as: Who decides what counts as “scientifically based research”? Does enough good research exist for schools to use, and in a form that’s usable? Can Uncle Sam enforce a requirement that schools use only research-based programs and practices?
Some scholars also worry that the language is, in part, ideological and aims to promote a particular view of education.
“The reason, I think, for the cry for more research and better research is to delay implementing the research that we have,” said David C. Berliner, a professor of education at Arizona State University in Tempe. He said, for instance, that policymakers have largely ignored well-established research on such topics as the beneficial effects of high-quality preschools and the harmful effects of holding students back a grade.
Frederic A. Mosher, a former program officer with the Carnegie Corporation of New York, who now consults widely on education research, said, “One of the best things I’ve seen about the Bush administration is this focus on trying to get the system to work from evidence, as long as it doesn’t become too Draconian.”
But he added: “The idea that we can shift immediately to only things that are justified by research is superambitious and, probably, a fantasy.”
In the past year, a series of “cascading events” has dramatically ratcheted up the debate over the quality and usefulness of education research, according to Ellen Condliffe Lagemann, the president of the Chicago-based Spencer Foundation, which helps support coverage of research in Education Week.
On the one hand, studies such as “The Report of the National Reading Panel,” by the National Institute of Child Health and Human Development, and “Preventing Reading Difficulties in Young Children,” by the National Research Council, have encouraged a hunger for similar research syntheses on other educational topics.
At the same time, the increasing pressure on schools to show results—or be subject to stiff penalties—has increased the demand for information about programs and practices that work.
Meanwhile, Congress, starting with the 1998 passage of the Reading Excellence Act, has signaled its growing frustration with what some members regard as the poor quality of much education research, and its lack of utility for the field. The 1998 legislation included a definition of what constitutes rigorous scientific methods for conducting education research that many researchers considered too strict.
A similar definition cropped up again last year in an early version of a bill to reauthorize the office of educational research and improvement, the Education Department’s primary research arm. Although the bill never passed—and subsequent drafts contained a broader, more generous definition—it outraged some in the research community that politicians would try to impose standards on them. Legislation to reauthorize the OERI is expected to be introduced as early as next month.
“I believe strongly that in education research, while there are many well-informed and well-intended people, the end product has never been of much excellence,” said Rep. Michael N. Castle, R-Del., who chairs the House subcommittee responsible for reauthorizing the OERI.
In the past few years, a number of groups have come together to address both the pent-up demand for quality filters in education research and the need to package or translate that research for the layman. Groups ranging from the National Research Council to the American Educational Research Association to the international Campbell Collaboration have begun to eye ways to synthesize what is known about educational problems for a wider audience.
On another level, a number of new attempts have been undertaken to identify programs and curricula that are effective, based on solid evidence. The Educational Quality Institute, a nonprofit organization based in Washington, for example, plans to publish a series of “consumer reports” that would review education programs claiming to be research-based against a set of standards.
And the federal Education Department plans to set up a What Works Clearinghouse to evaluate the evidentiary claims for products and programs in such areas as textbooks, computer software, and teacher-development videos.
The goal is to follow the lead of the medical profession, in relying more on well-crafted research to guide practice, said Joseph C. Conaty, the acting deputy assistant secretary for the Education Department’s office of elementary and secondary education. “This is the first, early steps of education moving in this direction,” he said at a meeting this month at the National Research Council.
One of the biggest concerns is who decides what counts as “scientifically based” research, or as research worth paying attention to.
The new ESEA, signed by President Bush this month, includes a definition of “scientifically based research” that some scholars fear tilts too heavily toward experimental designs and away from other scholarship, such as case studies and other qualitative research or basic research that leads to the development of specific programs.
“Why the emphasis on experimental and quasi-experimental research, when there’s so much other good stuff out there, I don’t know,” said Nel Noddings, the president of the National Academy of Education and a professor emeritus of education at Stanford University. “I think that’s a far too narrow focus.”
A report released by the National Research Council last fall, “Scientific Inquiry in Education,” concluded that at its core, such inquiry is the same in all fields, from education to physics. It argued that the best way to advance scientific knowledge is through the “self-regulating norms” of the scientific community, rather than through “mechanistic application” of a particular set of research methods.
“One cannot just demand controlled experiments,” said Robert F. Boruch, a University of Pennsylvania researcher who served on the panel that produced the report. “That’s akin to asking people to levitate.”
The report also concluded that while good research adheres to core scientific principles, methods may vary widely, depending on the research question. Such approaches can range from descriptive studies of the relationship between two variables (such as students’ math achievement and their teachers’ math education), to randomized field trials that try to pinpoint cause and effect.
Like many of her colleagues, Susan Fuhrman, the dean of the graduate school of education at the University of Pennsylvania, argues that it’s “not appropriate” to spell out the definition of education research in legislation. “I think these are issues of scientific debate,” she said.
But Mr. Conaty of the Education Department said that, while experimental designs are clearly preferred in the new law, it does not discount other forms of scholarship. It does mean, however, that research-design features deserve close attention when programs and practices are evaluated.
“I think, in fairness, the definition of ‘scientifically based research’ is a definition that focuses on evaluation research,” said Mr. Whitehurst, the assistant secretary for research, and the former chairman of the psychology department at the State University of New York at Stony Brook, “and the definition is good at that level.”
For his part, Rep. Castle acknowledged that scholars and policymakers would eventually have to agree on what counts as education research, because it’s “an esoteric enough subject that it’s hard to do as a controversial piece of legislation.”
‘Toys, Tools, Tests, and Texts’
Others say they’re concerned that, while the goals for research in the ESEA are admirable, there is not enough rigorous research to be found, in a form that educators and policymakers can use.
“Evidence-based, research-based, is the catchword of the day,” said Arthur W. Gosling, the director of the National Clearinghouse for Comprehensive School Reform, based at George Washington University in Washington. But he noted that models for comprehensive improvement of schools “have a hard time really coming up with research that would meet the test of the pure academician, in terms of what constitutes good research.”
For one thing, rigorous research designs, particularly the kinds of randomized experiments called for in the federal law, are expensive. Researchers often point out that education lacks both the funding and the support structure that the health-care field has.
“There’s no driving force in education analogous to the National Institutes of Health,” said Sue Urahn, the director of education programs for the Philadelphia-based Pew Charitable Trusts, which is exploring ways to strengthen the links between education research and policy.
Ms. Lagemann of the Spencer Foundation suggested that to solve some of the most pressing problems in education will require interdisciplinary teams that go at a problem until they crack it.
“I don’t think it’s academic research in the traditional sense,” she said. “It’s got to be a combination of applied and basic research that’s aimed at solving the problem, not at generating more research.”
Equally important, she said, research findings have to be in a usable form for educators. “We have tended to think that if you do research and get results, that will be useful to practitioners,” Ms. Lagemann said. “There’s an intermediary step. You have to take the results of research and build it into toys, tools, tests, and texts. You have to build it into things that practitioners can use. They can’t use the conclusions of a study.”
To get the kind of research the new education act demands might mean changing the way the United States now practices and pays for education research. According to the National Research Council report released last month, only about 15 percent of the OERI’s budget last year went for actual research. The lion’s share of the research agency’s money went to “service oriented” programs, such as technical assistance to states, districts, and schools to implement “research-based” changes.
“All of us who have responsibility for the research enterprise feel the need to fill in the empty cells because the legislation demands it,” said Mr. Whitehurst of the OERI. “We fully recognize that the base of evidence is uneven, and we’ll be moving just as quickly as we can to fill it in.”
But in the long run, he said, the companies that market educational products also will have to take some responsibility for investing in clinical trials and evaluations. “The time will come when it will become a distinct marketing advantage,” he predicted.
Until now, there’s been little incentive for those marketing educational products to pay for studies of their effectiveness. When Mary Fulton, a policy analyst at the Education Commission of the States in Denver, set out a few years ago to identify reading programs with solid evidence behind them, she soon hit a brick wall.
“If I had set really tough criteria and high standards, there would have been very few,” she said. “If publishers can waltz into a district and sell their program based on the sales pitch and the PR, then why do rigorous evaluations?”
One of the new law’s ironies, social science researchers said last week, is that it contains a provision that could keep some of them from doing their jobs well.
Under the law, districts must create policies for notifying parents when researchers do surveys asking children sensitive questions on personal matters, such as drug use or sexual practices.
The problem, said Patricia C. Kobor, a senior science-policy analyst with the Washington-based American Psychological Association, is that parents typically do not return permission forms, often leaving researchers with inadequately sized or skewed sample populations for their studies.
Whether the federal government can actually enforce the requirement that schools use research-based practices is an open question.
Some predict that the pressure on the Education Department to spend the available federal money will, of necessity, lead to slippage in enforcing the requirement that grant recipients use scientifically based practices to receive federal aid.
“They have to get money out the door,” Kenji Hakuta, a professor of education at Stanford University, said of the federal department.
Ted Sanders, the president of the ECS, said few state departments of education have the expertise to draw up or enforce standards for what constitutes good research evidence. And they are often under intense political pressure to recognize certain programs or products as meeting standards of evidence, whether they do or not.
In addition, while the legislation may increase a demand for research-based evidence from some superintendents, principals, and teachers, many educators at the district level may be too immersed in their day-to-day challenges to, in effect, research the research.
“There aren’t a lot of school people and principals out tapping into a research database,” said Mr. Gosling of the National Clearinghouse for Comprehensive School Reform. “That’s not part of their world.”
But he added: “I think this legislation at least makes it possible for a consumer, a school principal, teachers, or superintendents to say to a model developer, ‘Prove to me that what you’ve got is workable, beyond just the rhetoric that we have wonderful stuff here.’”
“It at least provides a kind of demand that those questions be asked,” Mr. Gosling said, “and that isn’t all bad.”
Coverage of research is underwritten in part by a grant from the Spencer Foundation.
A version of this article appeared in the January 30, 2002 edition of Education Week as Law Mandates Scientific Base For Research