Analyzing the Tech Effect

Researchers examine whether technology has an impact on student achievement


In a time of educational accountability and revenue shortfalls, the first question on the minds of policymakers seeking to trim already-lean school budgets often is: How does this program improve student achievement?

In many research studies, the link between educational technology use and improved student achievement is uncertain, at best. There are simply too many variables to control. And so, technology programs are often among the first victims of the budget knife. That has surely been the case in many districts and states this year.

But researchers are continuing to look for connections between educational technology and student achievement.

The North Central Regional Educational Laboratory, a nonprofit research organization based in Naperville, Ill., and known as NCREL, recently conducted a meta-analysis combining the results of 20 peer-reviewed studies published from 1997 to 2002. Similarly, the U.S. Department of Education commissioned SRI International’s Center for Technology in Learning, a nonprofit corporation based in Menlo Park, Calif., to conduct a meta-analysis to assess the effectiveness of different types of educational software.


The 20 studies scrutinized by NCREL ranged in size and in types of technology examined. Most were small-scale studies, with sample sizes of fewer than 100 students. The meta-analysis standardized the results of the studies and determined a mean “effect size” of 0.30 for a combined sample of 4,314 students, which suggested that teaching and learning with technology had a small but positive effect on student outcomes when compared with traditional instruction. An effect size is an estimate of where the experimental or treatment group stands in comparison with the control group. An effect size of “0” indicates that there is no difference between the experimental and control groups, whereas a positive effect size indicates, in this case, that the group of students benefiting from instructional technology received higher scores than the control group of students.
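The effect size described above is typically computed as a standardized mean difference: the gap between the treatment and control group averages, divided by their pooled standard deviation (often called Cohen's d). A minimal sketch, using made-up test scores rather than data from any of the studies discussed:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominators), then the pooled standard deviation
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    # Positive result: treatment group scored higher than control
    return (m1 - m2) / pooled_sd

# Hypothetical scores: technology-supported vs. traditional instruction
tech = [78, 82, 75, 88, 80]
trad = [74, 79, 72, 83, 77]
print(round(cohens_d(tech, trad), 2))
```

By convention, an effect size near 0.2 is considered small and one near 0.5 moderate, which is why the 0.30 figure reported by NCREL is read as a small but positive effect.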

Many of the small-scale studies included in the NCREL analysis tried to determine the effect of technology on student achievement in specific subjects or on specific assignments. In one of the studies, Joseph Akpan of Morehead State University in Kentucky and Thomas Andre of Iowa State University examined the use of computer-simulated frog dissection to improve students’ understanding of a frog’s anatomy. They found that students who had performed computer-simulated dissections before actual dissections, as well as students who conducted simulated dissections only, learned significantly more about a frog’s anatomy than did students who had performed only the physical dissections.

Deborah Doty from Northern Kentucky University, Scott Popplewell from Ball State University, and Gregg Byers from Grissom Elementary School, the latter two based in Muncie, Ind., conducted a study that examined the influence of interactive CD-ROM storybooks on reading comprehension. Students who read the conventional print books and students who read the CD-ROM storybooks of Robert Munsch’s Thomas’ Snowsuit did not differ significantly in their ability to retell the story orally. Students who read the CD-ROM storybooks scored higher on comprehension questions, however, than their peers who read the print versions.

The SRI meta-analysis of 31 studies resulted in an effect size of 0.38, suggesting a somewhat stronger association between educational software and student achievement than in the NCREL meta-analysis.

The SRI research project examined studies that focused on student improvement in mathematics and reading. It found that the effect of educational software on math achievement was a little larger than its effect on reading achievement.

But it also noted that elementary and middle school pupils, as well as students with special reading needs, seemed to benefit the most from the use of educational software as part of reading instruction.

Tracking New Advances

Little research has been done on classroom Internet use and its impact on student learning. That is so even though 87 percent of classrooms nationwide had Internet access by 2001, according to the Department of Education. Undoubtedly, experts say, the research needs to catch up with technological advances.

Still, some researchers are attempting to gauge the academic effects of such technological changes.

Austan Goolsbee and Jonathan Guryan from the University of Chicago’s graduate school of business, writing for the National Bureau of Economic Research, analyzed the federal E-rate program’s effect on access to technology and student achievement in every public school in California. Their report maintains that while the federal subsidies districts have been receiving since 1998 have increased access to the Internet in California schools, particularly poor schools, Internet spending had no measurable influence on student performance.

The authors suggest some reasons for that lack of a connection, including the fact that it could be too early to investigate the impact of long-term investments in technology, or that improvements in student achievement could be manifesting themselves in areas that might not be picked up by standardized testing.

On top of the nearly $10 billion in E-rate funding since the program’s inception, states have allocated a significant portion of their own resources to educational technology, and they have started to question returns on their investment. According to Market Data Retrieval, a Shelton, Conn.-based market research firm, schools spent a projected $5.6 billion on technology in 2001-02.

A recent research review by WestEd, a nonprofit regional educational laboratory with headquarters in San Francisco, points out that schools and districts put a lot of money into educational technology before devising practical plans for how to use such technology to full advantage.

The review offers several suggestions for how educators and policymakers could make the most of their technology purchases. The report suggests that the availability of technology alone cannot do much to compel change. Instead, to raise student achievement, its use should be supported by other improvement efforts, such as sufficient technical support, teacher technology training, and long-term planning.

Return on Investment

Some states have attempted to determine the effects of their instructional technology investments.

A West Virginia study, conducted by the Santa Monica, Calif.-based Milken Exchange on Education Technology in 1999, probed the impact of the state’s Basic Skills/Computer Education program. The program was started in the 1990-91 school year and sought to improve the basic skills of West Virginia elementary students through the use of technology.

The program used software that emphasized the state’s basic-skills goals, provided an adequate number of computers so that all pupils had easy and regular access, and offered training for teachers in the use of the computers and software.

Although the program was implemented statewide, individual students received varying amounts of each aspect of the program. After controlling for other variables, the study found that the more of each program component a student experienced, the more his or her score on the basic-skills test increased.

The study also concluded that the West Virginia technology program accounted for as much as 11 percent of the students’ improved basic skills scores in one year. What’s more, students who lacked technology at home, came from poorer families, or tended not to do well in school often posted the greatest learning gains because of their participation in the program, the authors concluded.

The Milken Exchange study went a step further and attempted to assess the cost-effectiveness of the West Virginia program. It did so by looking at the hypothetical cost of reducing class sizes in the state and comparing that step with the actual cost of the technology program. While the estimates were crude, they suggested that the technology program resulted in increased student achievement at a much lower cost, roughly $86 per student per year, compared with a statewide initiative to reduce class sizes that would have cost an estimated $636 per student per year.

Meanwhile, some Missouri students and teachers are part of a technology initiative called eMINTS. The program helps elementary teachers develop a student-centered and inquiry-based approach to teaching through the use of multimedia and computer technology. A study of the program’s second cohort of pupils and teachers showed that 4th graders performed significantly better on the 2002 Missouri Assessment Program tests compared with their peers in the same schools who did not participate in eMINTS.

However, similar results were not found for the 3rd graders.

Illinois also commissioned a study of technology, conducted in 2000, that included a look at technology’s connection to student achievement in that state. The report by Westat—a Rockville, Md.-based research corporation—found that technology use had a small, but significant, impact on student achievement as measured by the Illinois testing program.

The study warned, though, that results should be interpreted with caution and stressed that a school’s socioeconomic status was a much stronger predictor of student performance than was technology use.

Westat’s Illinois study used an additional approach to assess the impact of technology on student achievement. It surveyed principals and teachers on their perceptions of how well different technologies worked.

A little more than half (56 percent) of the teachers surveyed believed that integrating learning technologies into the curriculum had improved achievement of the skills embedded in the Illinois Learning Standards. The teachers also indicated that technology had positively affected classroom practices and student engagement.

The principals’ perceptions of technology use in the classroom closely paralleled those of the teachers. The principals indicated that multimedia and Internet-connected computers had mostly positive effects on their teachers’ classroom practices.

Beyond Test Scores

While it has proved difficult to determine whether the use of technology can really improve student achievement, some proponents of instructional technologies wonder if that is even a question educators and researchers should be asking.

Beyond the obvious benefit of promoting a familiarity with technology that students will need for almost any type of employment, advocates argue that computers can become as much a part of learning as books, pencils, and chalkboards.

And a direct effect on achievement isn’t the only way technology can influence schools.

New research from Pittsburgh’s Carnegie Mellon University examines how a computer-based Cognitive Tutor—a software program developed by Carnegie Learning Inc.—is conducive to the use of different educational strategies. In this case, the researchers show how the program helps students explain problem-solving steps in geometry.

Similarly, the eMINTS study in Missouri looked at the instructional methods of teachers within the program. It found that those teachers who best integrated inquiry-based practices and a student-centered approach in their technology-rich classrooms tended to have students who scored higher on assessments than those teachers who did not integrate such practices.

In addition, data from a number of studies conducted in recent years have been used to determine the effect of technology on school climate and student engagement. For the most part, the use of technology seems to be associated with better attendance, more time on task, and fewer disciplinary referrals.

Yet the link between educational technology use and higher student achievement remains tenuous.

Vol. 22, Issue 35, Pages 50-52

Published in Print: May 8, 2003, as Analyzing the Tech Effect
