A long-awaited federal study of reading and math software that was released last week found no significant differences in standardized-test scores between students who used the technology in their classrooms and those who used other methods.
Representatives of the educational software industry immediately took issue with aspects of the $10 million study of 15 commercial software products, arguing that its findings did not mean that classroom technology had no academic payoff.
Still, Phoebe H. Cottingham, the U.S. Department of Education administrator whose office commissioned the study, said in an interview that the report should be used as “one input into people’s decision about how much, and where, to use education technology.”
“Effectiveness of Reading and Mathematics Software Products: Findings From the First Student Cohort” is available from the National Center for Education Evaluation and Regional Assistance, part of the Institute of Education Sciences.
“We took very leading products and put them through a very careful study,” said Ms. Cottingham, the commissioner of the National Center for Education Evaluation and Regional Assistance. “It’s a little disappointing we didn’t find what people were hoping we would find.”
Congress mandated the study as part of the 5-year-old No Child Left Behind Act. It is the most extensive federal study yet on education technology to follow methods the Education Department considers scientifically rigorous. The study followed an experimental design that was drawn up with the help of leading education researchers and vetted by the department’s Institute of Education Sciences. The lead researcher was Mark Dynarski of Mathematica Policy Research Inc., a research organization based in Princeton, N.J. Also taking part was the Menlo Park, Calif.-based SRI International Inc.
Software products were selected in four categories: 1st grade early reading, 4th grade reading comprehension, 6th grade pre-algebra, and 9th grade algebra. While the companies will get results for their own products, the public will see only aggregated findings for the four categories of programs.
Most of the products’ developers or publishers are well known in K-12 education: PLATO Learning Inc., Carnegie Learning Inc., Houghton Mifflin Co., Scholastic Inc., iLearn, LeapFrog Schoolhouse, AutoSkill International Inc., Pearson PLC, and Headsprout Inc.
“Because the study implemented products in real schools and with teachers who had not used the products, the findings provide a sense of product effectiveness under real-world conditions of use,” the report says.
Test Scores Compared
The study compared classes overseen by teachers who used the technology-based products with those of other teachers who used different methods. Those other approaches also included the use of technology in some cases.
Achievement in reading or math was measured by standardized-test scores, with complete data collected for 9,424 students.
In recruiting districts, the study team favored those with low student achievement and large proportions of students in poverty.
The technology-based curriculum packages were chosen from more than 160 products submitted in 2003 by their developers. One criterion for selection was that the products had shown previous evidence of effectiveness.
The overall finding of no net test-score gains from the software is sure to complicate the efforts of advocates of technology in education, who are lobbying the Bush administration and members of Congress to continue providing millions of dollars annually in support for classroom technology.
Fifteen computer-based reading and math products were evaluated in the 2004-05 school year as part of a major federal research project.
GRADE 1 EARLY READING
• Academy of Reading, AutoSkill International Inc.
• Destination Reading, Riverdeep Inc.
• Headsprout, Headsprout Inc.
• The Waterford Early Reading Program, Waterford Institute Inc.
• PLATO Focus, PLATO Learning Inc.
GRADE 4 READING COMPREHENSION
• Academy of Reading, AutoSkill International Inc.
• Read 180, Scholastic Inc.
• KnowledgeBox, Pearson Digital Learning
• LeapTrack, LeapFrog Schoolhouse
GRADE 6 PRE-ALGEBRA
• SmartMath, Computaught Inc.
• PLATO Achieve Now, PLATO Learning Inc.
• Larson Pre-Algebra, Meridian Creative Group
GRADE 9 ALGEBRA
• Cognitive Tutor, Carnegie Learning Inc.
• PLATO Algebra, PLATO Learning Inc.
• Larson Algebra, Meridian Creative Group
SOURCE: Mathematica Policy Research Inc.
Note: Some of the developers and companies have since sold their product lines or been involved in corporate acquisitions.
The study’s second main finding was that reading-test scores for 1st graders were higher when their teachers had fewer students and lower when the teachers had more students. Similarly, the test scores of 4th graders were higher when they spent more time using the reading software and lower when they spent less.
But Mr. Dynarski said those findings were “observed associations” that suggested, but did not prove, that smaller classes and more time spent made the technology more effective.
“That doesn’t mean that if you reduced your class size, you would necessarily experience larger effects,” he said. Moreover, the study found no such associations with the math products.
Experts with some companies whose products were studied said the report does not support the conclusion that educational technology yields no academic benefit in reading and math.
Companies complained about the government’s decision not to disclose individual results for the 15 computerized curriculum packages being studied, given that some products fared better than others in their categories.
“I have serious questions about the report—about the limitations of the design and the conceptualization of the study,” said Kristin DeVivo, the vice president of research and validation at Scholastic, which publishes Read 180, which was among the 4th grade reading products. “We know this is the first study of its kind, and really any kind of findings should be considered tentative until further data and research is reported.”
In addition, the groupings affected the “fidelity” of how teachers used the software, Ms. DeVivo contended, because the study’s classroom observers used a common form to evaluate the software in each group, rather than the software companies’ recommended practices for their particular product.
Time on Task
The Software and Information Industry Association, a Washington-based trade group that represents software and digital-content companies, including many of those participating in the study, also issued a statement. “Proper implementation of education software is essential for success,” it said. “Unfortunately, it appears the study itself may not have adequately accounted for this key factor, leading to results that do not accurately represent the role and impact of technology in education.”
Software-industry representatives and independent researchers also pointed to the limited time that students used the technology in the classrooms studied.
In all of the software groupings, students on average spent only about 10 percent of the time devoted to instruction in a specific subject using the technology.
“I continue to be baffled by why [the Institute of Education Sciences] thinks that interventions that are used for a very minimal amount of instructional time will have a major impact on student learning,” said Margaret A. Honey, the director of the Center for Children and Technology, in New York City. Ms. Honey, a researcher, was part of the advisory panel on the study design.
But Mr. Dynarski said all the teachers in the study received training in how to use the technology and support from the software developers during the school year.
“When we observed the classrooms through the year and interviewed the teachers, we feel pretty confident that 10 percent of use reflects the sound professional judgment of the teacher about how often and for what kinds of instructional modules they wanted to use technology,” he said.
Several professional groups devoted to helping educators use technology said in a joint statement that the study looked only at a small slice of a broad spectrum of educational uses of technology. “This study misestimates the value of information and communication technologies by focusing exclusively on older approaches that do not take advantage of current technologies and leading-edge educational methods,” Christopher J. Dede, an education professor at Harvard University, argued in the statement, which included comments by several officials of the groups.
Said Keith R. Krueger, the chief executive officer of the Washington-based Consortium for School Networking, which represents school technology officials: “[E]ducational software, like textbooks, is only one tool in the learning process. Neither can be a substitute for well-trained teachers, leadership, and parental involvement.”
More Findings to Come
Teachers who volunteered for the trial were randomly assigned either to use the products or not, so the two groups were expected to be equivalent in teaching skill, the report says. All told, 439 teachers in 132 schools took part.
Teachers who used the software products implemented them as part of their reading or math instruction. Teachers in the control group were expected to teach reading or math as they would have normally, possibly using some form of technology.
Each classroom was visited three times during the 2004-05 school year. Teachers were interviewed about implementation issues and filled out questionnaires. Nearly all the teachers said they would use the products again.
Ms. Cottingham said that the report on the study’s second year, with data collected from the same teachers’ classrooms in 2005-06, would provide “much more concrete results by product.”
Mr. Dynarski said: “It will also have a test of teachers who have used the product for one year. The students will be fresh, but the teachers will be experienced. We will be able to test whether a year of experience increases the effectiveness of technology.”
The second year of the study will push its total cost to more than $14.5 million. No timetable for the release of the results has been announced.
A version of this article appeared in the April 11, 2007 edition of Education Week as Major Study On Software Stirs Debate