For K-12 leaders, asking whether education technology works isn’t a good enough question. It’s important to also drill down into details about what kind of technology you’re considering, for what purpose, with which students, and under what circumstances.
In recent years, researchers have started building an evidence base robust enough to consider such angles. Now, a center at the Massachusetts Institute of Technology has taken a stab at summarizing what the field has learned.
“We recognized that the hype and investment around ed tech has so far outpaced the rigorous research in terms of what’s actually effective,” said Vincent Quan, a senior policy manager at J-PAL North America, explaining why such a summary was needed now.
J-PAL North America’s new publication looks at evidence from 126 rigorous experimental studies, most of which involved randomized controlled trials. Here, Education Week highlights some of the key takeaways for K-12 administrators and policymakers.
For Quan, what makes the new report significant is its focus on only the most rigorous studies, all of which were conducted by independent third parties.
Such a big-picture review is “extremely helpful in allowing us to step back and take stock,” said Joseph South, the chief learning officer for the International Society for Technology in Education, or ISTE. “It shows that we are making inroads, but have a long ways to go.”
Still, the lessons to be drawn from the J-PAL North America review remain subject to debate.
Stanford professor and long-time ed-tech skeptic Larry Cuban, for example, said the conclusions support what he’s been saying for three decades.
“The results for students [are] paltry given the huge investment in dollars and time,” Cuban said. “For policymakers, practitioners, and parents, the takeaway from this report on technology studies is simple: Sharpen focus on the expertise and skills of teachers, not machines.”
1. Expanding access isn’t enough (and may even be harmful)
The J-PAL North America team put its top-level finding here pretty plainly:
“Initiatives that expand access to computers and internet alone generally do not improve kindergarten to 12th grade students’ grades and test scores” and “may have adverse impacts on academic achievement,” they wrote in the new report.
But there is some nuance that K-12 leaders should consider, Quan said in an interview.
For one, distributing hardware to students does seem to lead to improved proficiency using computers, which may have long-term benefits (in terms of workforce preparation, for example) that don’t show up on standardized tests.
Widespread availability of digital devices is also necessary for providing access to some of the software programs and other technologies that research says are effective.
Furthermore, there’s not necessarily a clear consensus in the field about what the research says about 1-to-1 digital device initiatives in particular. A previous review by researchers at Michigan State University found significant positive benefits, for example. And new research, including a generally positive study in Mooresville, N.C., is starting to look at the impact of expanded technology access beyond the one or two years considered in most of the currently available studies. “There are a lot of open questions on long-term impact,” Quan acknowledged.
2. Beware of online-only courses
Take the human component out of classroom instruction at your own peril, the J-PAL North America researchers warned.
“We found that relative to courses with some degree of face-to-face teaching, students taking online-only courses may experience negative learning outcomes,” the report says.
That only partially jibes with other research about online learning.
One review, for example, found that the evidence generally suggests that students enrolled in supplemental classes at virtual schools perform the same as or better than counterparts who take the same classes in brick-and-mortar settings. Many of those studies may not have been rigorous enough to be included in the J-PAL North America analysis.
But research on full-time online charter schools and online credit-recovery programs has generally been much more negative.
It all leaves K-12 leaders to confront a thorny question: If your district can’t offer a face-to-face class, should it still offer an online-only version?
Doing so does seem to expand access, Quan said. He noted one study in which a school that couldn’t offer face-to-face 8th grade algebra offered the course online instead, leading to more students not only taking algebra, but going on to take other advanced math courses.
“When there are unlimited resources and there can be a teacher in every single class, experimental evidence seems to suggest in-person is better,” he said. “But we also understand there are many constraints.”
3. Adaptive math software holds ‘enormous’ promise
The J-PAL North America researchers reviewed 30 rigorous third-party studies of “computer-assisted learning” programs that target instruction to each student’s particular skill level.
Twenty of the 30 reported statistically significant positive effects. Of those, 15 were focused on math.
Again, though, the results should be read with a careful eye.
Not all the studied software programs worked equally well, for example. Some had eye-popping positive results. Among them: SimCalc, an interactive math simulation for 7th and 8th graders, and a program called Cognitive Tutor, for helping students learn foundational algebra content.
But other programs, including Cognitive Tutor-Geometry, the popular ST Math program, and another highly touted approach to middle school math, showed small or negative results.
In addition, the evidence for adaptive reading programs was far more limited, according to the report.
Quan and ISTE’s South highlighted some key points for K-12 leaders to remember when making decisions in such a messy, incomplete landscape.
The field still knows very little about what specific mechanisms in a software program—the adaptive algorithms? the data provided to teachers? the computer-generated feedback for students?—make it effective, Quan said.
And just because a software program worked in one context doesn’t mean it will work in others, South cautioned.
“No ed-tech solution stands alone,” he said. “It really depends on the specific environment.”
4. Consider ‘nudges’
Text and other technology-based messages to students and parents can be an affordable way to make a meaningful difference, the J-PAL North America team concluded.
In the early years, that could take the form of text messages to parents about reading with their young children.
By middle school, it might mean providing parents with information about their children’s grades and attendance.
And by high school, it could be automated reminders and personalized support to help students complete tasks related to the college-application and financial-aid process.
Rigorous studies have yielded positive evidence on all three approaches—and even more encouraging results in higher ed, the J-PAL North America team said.
“Technology-based nudges that encourage specific, one-time actions … can have meaningful, if modest, impacts on a variety of education-related outcomes, often at low costs,” their report says.
5. But be careful about ‘social psychology’ nudges
There’s a growing push from researchers and ed-tech companies alike to explore prompts that seek to alter a student’s attitude or feelings—by encouraging a “growth mindset” in response to failure or struggle, for example.
This raises some problems, though.
One is that these prompts don’t seem to be reliably effective, the J-PAL North America researchers determined.
Large-scale studies have consistently found that “technology-enabled social-psychology nudges do not improve academic outcomes on average,” the report concluded. There’s some reason to hope that students who start out further behind on subject matter may benefit more from such messages, but the research to date is far from certain.
In addition, such social-psychological nudges have raised big questions about privacy and consent. Last year, for example, Education Week reported on an experiment in which Pearson tested growth-mindset messages in a college-level computer science software program—without first alerting participating students or institutions.
The experiment yielded generally disappointing results, while also sparking significant controversy about the ethics of Pearson’s approach.
A version of this article appeared in the March 13, 2019 edition of Education Week as Tech Research: 5 Key Lessons for Educators