“Rigorous” and “scientific” are words not often associated with studies of educational technology. “Poorly designed” and “advocacy driven” are often more accurate descriptions, a recent guide by the U.S. Department of Education noted.
But the federal government is attempting to change that perception with a set of research grants it has awarded, as well as the guide, which is intended to help states conduct better research about education issues. (“Ed. Dept. Issues Practical Guide to Research-Based Practice,” this issue.)
“What we hope here is to produce a series of instruments that other projects throughout the country can use,” said John P. Bailey, the educational technology director for the department.
The $15 million in grants, funding included under the No Child Left Behind Act, will support “rigorous, scientific evaluations of how technology impacts student achievement in elementary and secondary education,” according to the department’s announcement of the grant winners.
Still, researchers not involved in the upcoming studies are already questioning just how much rigor and science there will be in the evaluations.
Programs Under Review
“There are serious limitations, and some of them have to do with politics, and some have to do with ethics and problems of methodology,” said Ronald E. Anderson, a professor of sociology at the University of Minnesota-Twin Cities who has done extensive research on educational technology.
The grant winners are eight states and their research partners, which include universities, research firms, and school districts. The three-year studies, chosen from applications submitted last spring and awarded in November, received financing in amounts ranging from $1.3 million to $1.9 million. The states are Arkansas, Iowa, Maine, North Carolina, Pennsylvania, Texas, West Virginia (which won separate grants for two projects), and Wisconsin.
Each of the state grantees is required to plan and conduct an evaluation of how an education program uses technology to raise student achievement in one or more core academic subjects; to test and document the methods, practices, and instruments used to assess the impact of the technology on student achievement; and to share that information with other states, according to the department.
But some researchers say those are daunting goals, given the relatively modest size of the grants.
“It’s too bad they’re not tackling this on a bigger scale, using more funds and not [given] just to the state,” Mr. Anderson said.
The studies appear to be a mixed bag. Some feature randomized selection of subjects for the technology intervention and for the control group, which does not undergo the intervention—an approach federal officials call the “gold standard.” But other studies will not meet that standard.
“Only two out of the nine studies are randomized at the individual-student level,” based on the summaries released by the Education Department, Mr. Anderson said. “About five are doing it at the classroom level or at the quasi-experimental level. And two or three aren’t doing any randomization at all.”
Some of the studies are likely to be beefed up, however, as a result of meetings last month between federal officials and the grant-winning researchers and state officials.
For instance, after a meeting at the federal department, Arkansas plans to add random assignment to its study of a program that involves students in project-based learning, said James Boardman, the assistant director for information technology for the Arkansas Department of Education.
Started in 1996, Arkansas’ Environmental and Spatial Technology Initiative aims to promote students’ problem-solving abilities and thinking skills. The study, which received a $1.8 million grant, will include at least 10 schools that have been using the program and 10 schools that will have implemented it in a subsequent year, according to Mr. Boardman.
Meanwhile, Iowa aims to use its $1.9 million grant to evaluate and improve its teacher technology training in math and reading, develop data linking teacher professional development to student achievement, and share information on best practices.
Working with school districts, researchers from the Psychology in Education Research Laboratory at Iowa State University’s school of education and from regional education agencies will track 6th through 8th grade reading and math instruction statewide and conduct random classroom trials in selected districts. They will devise and evaluate a research model that identifies the best practices in teacher training that uses online discussion groups and videoconferencing.
The goal is to link those practices with measurable improvements in student achievement.
“In the past, we’ve done a great deal of teacher training,” said John O’Connell, the instructional technology consultant in the Iowa education department. “But due to lack of support and monitoring of these teachers, that [training] hasn’t been effectively implemented in the classroom.”
Other states are using the grants to evaluate different approaches.
Texas, for example, received $1.9 million to evaluate the effectiveness of a state-sponsored middle school laptop-computer program that will combine all the elements that experts say are necessary for the effective use of laptops.
That “immersive” approach is in contrast to the more typical, incremental approach schools use when starting laptop programs, said Anita Givens, the senior director of the educational technology division of the Texas Education Agency.
She insisted that her state’s evaluation of the program would be scientifically rigorous. Texas schools will be invited to apply to join the three-year pilot study, set to begin next fall, and will be selected in a stratified random sample that reflects the state’s demographics. Each middle school that applies will have to agree to be randomly selected as an “immersed school” or a “control school.” In the immersed schools, all teachers and students will receive laptops.
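The selection process Ms. Givens describes can be illustrated in code. The sketch below is a minimal, hypothetical rendering of stratified random assignment — the school names and demographic strata are invented for illustration and do not come from the Texas Education Agency:

```python
import random

# Hypothetical applicant pool: each school is tagged with a demographic
# stratum so the sample can mirror the state's mix of school types.
applicants = [
    {"school": "MS-01", "stratum": "urban-large"},
    {"school": "MS-02", "stratum": "urban-large"},
    {"school": "MS-03", "stratum": "rural-small"},
    {"school": "MS-04", "stratum": "rural-small"},
    {"school": "MS-05", "stratum": "suburban-mid"},
    {"school": "MS-06", "stratum": "suburban-mid"},
]

def stratified_assign(schools, seed=0):
    """Within each stratum, randomly split schools into an
    'immersed' group (laptops for all) and a 'control' group."""
    rng = random.Random(seed)
    strata = {}
    for s in schools:
        strata.setdefault(s["stratum"], []).append(s["school"])
    assignment = {}
    for names in strata.values():
        rng.shuffle(names)           # random order within the stratum
        half = len(names) // 2
        for name in names[:half]:
            assignment[name] = "immersed"
        for name in names[half:]:
            assignment[name] = "control"
    return assignment

groups = stratified_assign(applicants)
```

Because the split happens within each stratum, the immersed and control groups end up with the same demographic profile, which is what lets researchers attribute outcome differences to the intervention rather than to the kinds of schools that happened to receive laptops.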
Students will be tested using the statewide assessment program, as well as through online math diagnostic assessments, she said.
Measurement is one of the chief challenges of the studies, researchers say.
“The problem is these approaches don’t have the right outcome measures to evaluate whether those approaches are successful,” argued Henry Jay Becker, an education researcher at the University of California, Irvine. “They tend to go with whatever tests are available.”
Students’ improvement on standardized tests is often used, for example, to judge the success of a program or technology. But many experts believe standardized tests miss what they say are many of the benefits of technology, such as enhancement of thinking skills.
One new way of measuring student outcomes, Mr. Becker said, would be for students to do project work with some controls. Projects done in different classrooms with different treatments would conform to a certain set of rules and be scored by experts who had received the same training. The result would be project-based work that had a reasonably high reliability for evaluation, making it usable for research.
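One standard way to check the reliability Mr. Becker describes — though he does not name a specific statistic — is an inter-rater agreement measure such as Cohen’s kappa, which corrects raw agreement between two trained scorers for the agreement expected by chance. A minimal sketch, with hypothetical rubric scores:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters' scores,
    corrected for the agreement expected by chance alone."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of projects given the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's overall score distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 1-4 rubric scores for ten student projects,
# assigned independently by two raters with the same training.
a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
b = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]
kappa = cohens_kappa(a, b)  # roughly 0.72 for these invented scores
```

A kappa well above zero indicates the raters agree far more than chance would predict — the kind of evidence that would be needed before project-based scores could serve as a research outcome measure.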
But Mr. Becker said the federal government has so far been uninterested in studying such outcome measures.
Mr. Bailey, however, said the projects are being encouraged to look at other benefits of educational technology beyond student achievement, such as improving student attendance.
Other researchers point to a different problem: schools are complex institutions, with many variables.
“[Texas] will not be able to say: Were the laptops the critical factor or was it the teachers’ background?” said Larry Cuban, an education historian and professor at Stanford University. “The research tool can’t tease apart the many factors.”
Even so, Mr. Becker said he approved of attempts to measure an entire program, including the outcomes, even if it was a complex or expensive “Cadillac” initiative. Then, if a positive result is found, he said, further research could determine whether the benefits could be derived from a cheaper, “Chevy” version of the idea.
Coverage of research is underwritten in part by a grant from the Spencer Foundation.