A major federal study of reading and mathematics software has found no difference in academic achievement between students who used the technology in their classrooms and youngsters who used other methods.
The $10 million study of 15 educational software products is the most extensive federal study yet to follow methods that the U.S. Department of Education considers scientifically rigorous.
The report on the first year of a two-year study is expected to be presented today to Congress, which commissioned the study in the No Child Left Behind Act. The report is slated to be posted on the Education Department’s Web site tomorrow, according to sources.
The lead researcher for the study was Mark Dynarski of Mathematica Policy Research Inc., an independent, for-profit research organization based in Princeton, N.J. Also taking part was the Menlo Park, Calif.-based SRI International Inc.
“Because the study implemented products in real schools and with teachers who had not used the products, the findings provide a sense of product effectiveness under real-world conditions of use,” the report says.
The study followed an experimental design that was drawn up with the help of leading education researchers and vetted by the Education Department’s Institute of Education Sciences.
Software products were selected in four categories: 1st grade early reading, 4th grade reading comprehension, 6th grade pre-algebra, and 9th grade algebra. While the companies involved will receive results for their own products, the public will see only aggregated findings for the four categories of programs.
The products were chosen from more than 160 products submitted in 2003 by their developers. One selection criterion was that a product had to have shown prior evidence of effectiveness.
The developers or publishers of the software are well-known in K-12 education: PLATO Learning Inc., Carnegie Learning Inc., Houghton Mifflin Co., Scholastic Inc., iLearn, LeapFrog SchoolHouse, AutoSkill International Inc., Pearson PLC, and Headsprout Inc.
Test Scores Compared
The study compares classes overseen by teachers who used the technology-based products with those of other teachers who used different methods. Those other approaches also included the use of technology in some cases, though the selection team tried to avoid schools that used technologies similar to the ones being studied.
Student achievement in math or reading was measured by standardized-test scores, with complete data collected for 9,424 students.
The study team recruited the school districts, favoring districts that had low student achievement and large proportions of students in poverty. Researchers sought districts and schools that did not already use products like those being studied.
The school districts identified schools, based in part on their having adequate technology infrastructure and being involved in other initiatives.
The finding of no gains from the software is sure to complicate the efforts of advocates of technology in education, who are lobbying the Bush administration and members of Congress to continue providing millions of dollars annually in support for classroom technology.
The release of the study will take place about a year after its original target of spring 2006. It appears at a time when Education Department officials are speaking more about education technology than they have in recent years.
Several officials, including Education Secretary Margaret Spellings, have commented recently that the public has not seen much of a return on the federal government’s investments of millions of dollars in grants to states and school districts for educational technology.
The findings may be disturbing to the companies that provided their software for the trial. As rumors spread this week about the study’s findings, companies revived old complaints about how the study has been conducted—particularly the government’s decision not to disclose individual performance results for the 15 computerized curriculum packages being studied.
Company officials point out that the aggregated findings also obscure the results for any specific software product, which may have fared better than others in its category.
Random Assignment
Teachers who volunteered for the trial in the selected schools were randomly assigned either to use the products or not. Because of that random assignment, the two groups of teachers were expected to be equivalent in their teaching skills, the report says. All told, 439 teachers in 132 schools took part.
Teachers who used the software products implemented them as part of their reading or math instruction. Teachers in the control group were expected to teach reading or math as they would have normally, possibly using some form of technology.
How well the technology is implemented is a critical question in any evaluation, so the study sent trained observers into classrooms to assess the quality of implementation of the products.
Each classroom was visited three times during the school year, with observers following a common format for their observations. The teachers were also interviewed about implementation issues and filled out questionnaires.
Technical glitches and similar problems cropped up, as they inevitably do with educational technology, but most were minor and were easily corrected or worked around. Nearly all the teachers said they would use the products again.