Published Online: March 22, 2007
Published in Print: March 29, 2007, as Collecting Evidence

Collecting Evidence

The focus of the research agenda for school technology has shifted from innovation to effectiveness.


If you had to pick a word to characterize research in educational technology in the 1990s, it probably would have been “innovation.” Fueled by public and private dollars, experts were in full-bore research-and-development mode for much of that decade, exploring all kinds of classroom applications for digital technology.

The problem was that researchers paid less attention to documenting how—and, in some cases, whether—their innovations improved learning. And they spent even less time thinking about how to sustain and spread their use.

“Maybe 10 years ago, there was more latitude to say, ‘Let’s try out some crazy technology ideas and see if kids find it useful,’ ” says Chris Quintana, an assistant professor of learning technologies at the University of Michigan, in Ann Arbor. “Then it would be, ‘OK, let’s move on to the next idea.’ ”

Experts now agree that the climate has shifted. Nowhere is that change more apparent than at the federal level, where the pool of available money for innovative applications of educational technology has shrunk, and policymakers are putting pressure on developers to prove their products improve academic achievement.

Yet the past decade has not been entirely without studies suggesting that digital educational technology does produce results in the classroom.

Several recent research reviews and meta-analyses published in the United States and in Britain suggest that, when measured across the board, educational technology yields “small, but significant” gains in learning and in student engagement. The problem is that those modest gains fall short of advocates’ promises.

See Also
See the accompanying story, “Technology Counts, Times 10.”

Looking back, experts say the case for educational technology could have been much stronger by now if researchers had spent more time assessing learning gains and less time innovating.

“Not measuring the gains was an absolute error on our part, and we need to go deeper and deeper with good research,” says Donald G. Knezek, the chief executive officer of the Washington-based International Society for Technology in Education.

For instance, earlier studies showed that students and teachers liked using technology in the classroom, and that students knew more at the end of a particular study than they did before the intervention began, says Cheryl L. Lemke, the CEO of the Metiri Group, a Culver City, Calif.-based research organization.

But the studies failed to document those improvements in scientifically valid ways. They also didn’t probe for links between learning gains and what researchers call “fidelity of treatment”—the extent to which students and teachers used the technology being tested.

Gains Found in Diverse Areas

Plus, says Glen Bull, a professor of instructional technology at the University of Virginia, in Charlottesville, researchers are just now understanding how much greater the payoffs can be when digital-learning programs combine specific academic content with lessons from cognitive science and developmental psychology on how children learn in those subjects. In teacher education, it is called “technological pedagogical content knowledge.”

“When you’re working with technology, it cannot just be dropped in school,” Bull says. “That’s the thing that’s really emerging.”

Beyond the concern about missed opportunities, some scholars are skeptical that educational technology itself can improve student achievement. Larry Cuban, a professor emeritus of education at Stanford University, suggests that studies can’t isolate any gains that might be due to technology from those that might be due to teachers’ methods, classroom climate, or class size.

He does note, though, that some studies are getting more sophisticated about taking into account those factors by having the same teacher give lessons with and without the technology intervention under study. Still, Cuban says, “over the past 10 years, I don’t think technology has produced any gains except those that experimentalists are looking for.”

Other scholars report qualified evidence, however, that computer technology can bolster achievement.

In distance learning, for instance, the Metiri Group determined in a November 2006 report that students’ performance in virtual classrooms was as good as or better than their performance in face-to-face classrooms. The achievement gains were stronger in Web-based programs than in video-based ones, as well as in programs that included an e-mail component, according to that research review, which was paid for by Cisco Systems Inc., an Internet-network provider based in San Jose, Calif.

Similarly, a 2003 meta-analysis by researchers at Boston College found that students using word processors wrote more and produced better-quality work than did students in comparison groups. The caveat, though, was that for some of the younger technology-savvy students, writing quality suffered when they were asked to write on paper-and-pencil assessments.

“Earlier studies had not found any positive effects for writing with word processors,” says Michael K. Russell, a co-author of the analysis and the director of the Technology and Assessment Study Collaborative at Boston College. “We wanted to see if findings had changed.”

Several “intelligent” tutoring programs, likewise, have accumulated solid research track records, according to the Metiri report and various experts. A prime example is Cognitive Tutor Algebra, which was developed by researchers at Carnegie Mellon University in Pittsburgh.

A randomized trial of the software conducted over the 2000-01 school year in the Moore, Okla., school district showed that middle school students using the program outperformed their peers in other classrooms on standardized end-of-course tests.

Similarly, studies of individual computer-based programs that allow students to simulate frog dissection or provide a molecular-level view of thermodynamics in action—such as those developed by researchers working in the Web-based Inquiry Science Environment initiative at the University of California, Berkeley—have shown that such approaches can be more effective than conventional instruction at generating deeper understanding.

The Metiri Group’s review found few rigorous studies, though, looking at the efficacy of interactive computer whiteboards; personal digital assistants and other kinds of handheld computers; or quick-response devices, such as electronic “clickers” that give teachers an instant read on whether a class is “getting” a lesson.

Reality Checks Needed

The focus on rigorous assessments of achievement gains grew in part out of two federal laws adopted earlier this decade—the No Child Left Behind Act and the Education Sciences Reform Act—that required educators to rely on education programs and practices that have been proved effective through “scientifically based” studies.

But researchers say private foundations and federal agencies beyond the U.S. Department of Education have picked up on that trend, too, in a search for tried-and-true strategies that educators can quickly put to practical use.

Like most swings of the pendulum, though, this one has drawbacks, according to observers. One fear is that researchers adhering to the new model could miss out on opportunities to document other benefits that have been linked to digital learning. Those include improvements in writing quality and communication, heightened student engagement, deeper understanding of some abstract concepts, changes in teaching practices, and the opportunity to give students new windows opening onto previously unseen worlds.

“A lot of what we’ve been saying is we’re not using the right metrics. We’re not measuring the full impact of learning,” says Lemke of the Metiri Group.

Researchers also need to be able to study how the programs they develop in the hothouse settings of university laboratories work in different, often inhospitable, classroom environments, says Christopher J. Dede, a professor of learning technologies at Harvard University. That kind of “scale up” research, he says, could give educators a more realistic idea of how programs could work in their own classrooms, and perhaps point to new hybrid models of the same programs tailored to specific, more difficult settings.

“It’s probably not hard for something that’s reasonably well designed to find some site where it works,” Dede says. “It’s like asking, ‘Is chicken Kiev a good thing to eat at a Russian restaurant?’ It’s a fantastic thing to eat, but if you get it in a diner, it’s often not good.”

Another concern among researchers is that the focus on improving test scores could altogether crowd out the kind of inventive research-and-development work that characterized so much research in educational technology 10 years ago.

That’s a problem now, they say, because students’ use of technology outside school is already outstripping their use of it in classrooms. Yet it is becoming harder to find funding to design educational programs to capitalize on those new uses—a digital-learning network, perhaps, that can engage students as powerfully as the online YouTube video-sharing site, or social-networking Web sites such as MySpace.

As for video games, which are particularly expensive to build, the Federation of American Scientists, a prominent Washington-based group, issued a report last year calling on the departments of Education and Labor, along with the National Science Foundation, to pay for the development of “serious” games.

“I think there’s more enthusiasm around gaming for learning than almost any topic I’ve ever seen,” says Roy D. Pea, an education professor at Stanford University. He adds, nevertheless: “This is a very big hunch. Lots of research questions need to be addressed.”

Digital-Literacy Skills a Concern

Part of the problem is that experts don’t know exactly what students are doing with technology, either inside or outside school, and how it affects their thinking. The last large-scale survey of school-based educational technology practices occurred in 1998, several experts say, and less is known about how students use digital technology at home.

To fill that void, the Chicago-based John D. and Catherine T. MacArthur Foundation in September 2006 announced plans for a five-year, $50 million digital-learning initiative to research technology’s effect on students, use social networking and other online tools to help students learn, design and develop online games, and create media-literacy curricula for a digital age.

“Also, the use of Google opened up a raft of questions around what learning students need to have in order to be productive researchers,” says Pea, referring to the highly popular online search engine.

Rather than just learn how to use technology, students in today’s Web-dominated environment need to learn how to prioritize and manage a dizzying array of information coming at them through Web sites and e-mails, how to think critically about what they find, and how to use multiple media to communicate well, among other skills. Educators, scholars, and policymakers have yet to agree on what those new skills should be, much less on how best to teach them.

“We still have a lot to learn about supporting a whole range of digital-literacy skills,” says Margaret A. Honey, a vice president of the Education Development Center Inc., a Newton, Mass.-based research group, and a co-director of its Center for Children and Technology, in New York City. And, she says, new research in that area could provide a lasting payoff.

“Technologies are always changing,” she says, “but skills of discernment don’t change.”

Vol. 26, Issue 30, Pages 30, 32-33
