Here’s an article, written by my colleague Debra Viadero, about whether reading and math software programs lead to learning gains. The study found few differences between the control groups, which did not use the software, and the groups that did, but critics say the experimental research methods used in the study were flawed.
It does seem to be one of those studies that anyone can look at and see what they want.
“If you already have the hardware in the classroom and you want one of these products, this would not dissuade you,” said Mark Dynarski, the lead researcher on the project for Mathematica Policy Research Inc., the Princeton, N.J.-based company that conducted the study. “If you’re quite skeptical of the software and very budget-pinched, I think you would feel this is evidence in favor of your position,” he added. “And if you’re really right in the middle, I think it comes down to how much you want to move test scores, because you’re really not going to see that happen with these products.”
This is the second year of the study, which also stirred up controversy in its first year for similar reasons.
For me, this points to a couple of issues. The first is that simply adding technology to a classroom will not necessarily make a difference in what kids learn or how fast they learn it. As I hear over and over from people in all areas of ed-tech: it’s not the technology, but what you do with the technology that counts.
The second point this brings to mind is how difficult it is to be a school administrator—trying to navigate research like this, attempting to figure out what’s right for students, and then weighing those factors against the resources you have available (not just financial resources, but also the level of technical training teachers have had and their comfort with technology, among other things).
Read the full version of the study here.
A version of this news article first appeared in the Digital Education blog.