Slick marketing fliers touting Scholastic Inc.’s education products are designed to coax customers into buying. But for the discerning consumer, the New York City-based publisher is armed with something more substantial: dense reports filled with data designed to prove the effectiveness of its offerings.
The company’s education division employs four full-time staff members to oversee its research program. More and more, though, Scholastic pays outside researchers to study the impact of its products on student achievement. It must do so, the company says, to satisfy educators’ appetite for evidence that has the added credibility of an independent analysis and goes beyond an anecdotal advertising pitch.
“At this point in time, most people can sniff out the difference,” said Kristin M. DeVivo, the vice president of research and validation at Scholastic Education. “It is important for us to be very comprehensive and thoughtful about our research strategy and commitment.”
Scholastic is not alone in paying more attention to research. Most big players in the education publishing industry are doing the same amid growing demand for evidence-based practice—a requirement for programs financed through the federal No Child Left Behind Act.
Publishers are trying to meet that demand by designing instructional materials, assessments, and professional-development programs that follow research findings, and by validating their products’ effectiveness in schools and districts. The trend, however, is posing a dilemma for both the industry and the education field, many observers say.
“It’s good that the [educational publishing] industry is behaving more like other industries in investing in research and development,” said Grover J. “Russ” Whitehurst, the director of the Institute of Education Sciences, the U.S. Department of Education’s research arm. “On the negative side, one always has to be concerned about potential conflicts of interest when an entity or individual is collecting data and publishing results with respect to a product in which they have an interest in the success.”
Challenge for Journals
Authors submitting manuscripts to the Reading Research Quarterly are asked to disclose any potential conflicts of interest by completing a questionnaire that asks whether:
• The research reported in the manuscript (or the preparation of the manuscript) was funded.
• They, or a member of their immediate family, have (or are likely to have in the near future) a financial interest in the materials, services, procedures, curricula, tests, and so forth that (a) were the object of study in the manuscript, (b) were used in the conduct of the research reported, or (c) were commented on or alluded to in the manuscript.
• They have an association (or intend in the near future to have an association) with a commercial company, firm, or agency that might benefit from the publication of the manuscript.
• They are aware of other issues related to conflict of interest about which the editors, reviewers, or other readers of the manuscript should be aware.
SOURCE: Reading Research Quarterly
Publishers of education journals are beginning to weigh those issues in deciding whether to print studies commissioned by companies, especially because peer-reviewed articles can be perceived as bearing a seal of approval by experts.
“There needs to be some creative mechanism for companies to encourage research of their products but somehow to distance themselves from it,” said David Reinking, a professor of teacher education at Clemson University in Clemson, S.C. As co-editor of the Reading Research Quarterly, he and Donna Alvermann initiated an extended debate among editorial advisers after the peer-reviewed publication received several manuscripts that raised concerns.
Although the advisers never came to a consensus, they devised new guidelines for the 40-year-old journal to screen submissions for potential bias and conflicts of interest. Now authors must complete a conflict-of-interest checklist that requires them to disclose any funding they received for the project and any benefit they or their family members might get from the findings.
“It’s incumbent on journals to step up their attention to this,” Mr. Reinking said.
Some scholars point to the What Works Clearinghouse, a venture that Mr. Whitehurst oversees at the IES, as a promising model for vetting and disseminating research on education programs. The online clearinghouse reviews studies of programs and then rates the evidence of those interventions’ effectiveness based on the research findings.
The federal clearinghouse is diligent, Mr. Whitehurst said, about identifying any associations between a study’s authors and sponsors—including companies, foundations, and other nonprofit organizations—and then scrutinizing the study design and results for any flaws. Upon submitting research to the clearinghouse, authors must sign statements affirming that they did not withhold any related studies they conducted on the products and identifying any potential conflicts of interest.
Questions of Credibility
Taking money from a publisher, or even a nonprofit curriculum developer, shouldn’t automatically raise doubts, some researchers say.
Instead, judgments should be based on the reputation of the researcher and the quality of the study, said Steven M. Ross, the director of the Center for Research in Educational Policy at the University of Memphis.
In the 1990s, Mr. Ross was hired by the Success for All Foundation in Baltimore to conduct studies on its model of whole-school reform. Although the program has a comparatively extensive research record, that research is sometimes criticized because much of it was either paid for by the foundation or conducted by Robert E. Slavin and his wife, Nancy A. Madden, who developed the program together. But the literature on Success for All has been held up as a model by Mr. Whitehurst and others.
“If it worked, it worked. If it didn’t, Bob didn’t change the results,” said Mr. Ross, who is the editor in chief of the scholarly journal Educational Technology Research & Development. “People like me were hired to do research, and I can honestly say that Bob Slavin never had a hand in the research or the results.”
S. Jay Samuels, who has been a professor of education at the University of Minnesota-Twin Cities for more than four decades, found similar freedom in grants he received from Renaissance Learning Inc., the Wisconsin Rapids, Wis.-based publisher of the popular Accelerated Reader program.
Mr. Samuels, a member of the influential National Reading Panel, wanted to discover more about the effect of independent reading on students’ comprehension. In its 2000 report, the congressionally mandated panel found little research to support independent reading as a way to improve students’ fluency. But Mr. Samuels deemed the subject a “no-brainer,” and sought funding from Renaissance Learning to test that view. The company approved his grant request and then let him work.
“There was an iron firewall set up, and the deal was that they are not going to bother me,” he said.
Mr. Samuels’ findings—that independent reading can improve students’ fluency when they are given books of an appropriate skill level for a proper period of time—tended to support the basis of Renaissance Learning’s key product, Accelerated Reader. But that doesn’t take away from the value of the findings, he said.
“You’re not endorsing a product,” Mr. Samuels said. “It’s not like you said you drove a Mercedes-Benz and it was a lovely experience. No, you say it went from zero to 60 in 8.2 seconds.”
Bad News Buried
If results don’t turn out favorably for a publisher, though, the potential exists that the findings will be buried. Mr. Ross said that has happened with at least one of his studies, which he declined to identify.
Many researchers negotiate for freedom to analyze and draw their own conclusions from the data, he said. But publishers can demand final approval over the release of the findings or decide to withhold them altogether, which Mr. Ross said had happened to him.
“There’s tension when results come out not so positive, and [the funder] might have a lot of input on how they want you to interpret the results, or what they want you to include in the report or leave out,” he said. “I will always hear their case, … then perhaps put in a caveat,” if appropriate, he said, to address their concerns.
That’s not always the best approach, even when results are not stellar, said Ms. DeVivo of Scholastic Education. While studies ideally can show how well a curriculum product works, she noted, they can also reveal the conditions under which it doesn’t work. Some reading programs, for example, may work well with struggling readers, but not with those who are on track toward proficiency.
“It’s very important to us to let administrators know when a program is not having an effect,” Ms. DeVivo said. “What we need to do is understand the conditions under which the program is effective.”
Such research can also help the company improve its programs, or design new ones that meet varying student needs, she said. It can also reveal other areas needing study.
“Everyone always wants a sound bite on how good a program is,” Ms. DeVivo said. “But so often we’re answering a number of complex research questions and determining what further research should reveal. That’s not always the best for marketing.”
Karen R. Harris, a professor of education at Vanderbilt University in Nashville, Tenn., and the editor of the Journal of Educational Psychology, said a single study or even a couple of studies on a particular product cannot establish whether something works or not.
“It’s going to be frustrating to curriculum companies and the education field that a single study, whether it has nebulous, positive, or negative findings, is not going to be able to answer the question of whether it works,” said Ms. Harris, who advises the New York City-based McGraw-Hill Cos. on reading research. For that reason, she added, “you need a program of research.”
Educators, ultimately, must do their own homework when deciding what to buy for their classrooms, said Mr. Whitehurst of the Institute of Education Sciences. When reviewing evidence presented by publishers, such as test scores from a particular school or district, they should look at what outcomes were measured and for results from comparison groups, he advises. They should also ask staff members with knowledge of research to critically evaluate the literature.
“Always, caveat emptor is the appropriate stance whenever someone is trying to sell you something,” Mr. Whitehurst said.
A version of this article appeared in the November 29, 2006, edition of Education Week as “Surge in Company-Sponsored Studies Sparks Concern.”