Both practitioners and researchers hold a set of beliefs and theories about what qualities help teachers effectively shape student-centered, inquiry-based, technology-rich classrooms. Those theories drive our research designs, our professional development programs, and our thinking about how to support teachers.
What if our theories are wrong?
What I want to do with this post is share some of my preliminary thinking about a research study I have in progress. I’m inspired to do this because I’ve just submitted a few abstracts to the American Educational Research Association conference, and I thought it would be a good time to think out loud about some of my ongoing research, which raises questions around our established thinking about how to help teachers use technology in rich and powerful ways.
Big Caveats
This post is a thought piece. I’m not giving you any data, any findings, or any working papers, and I’m certainly not sharing something that’s published. You can’t use this as evidence in making arguments or actual decisions. But I’m going to share a bit about some tentative results, and then think through the conclusions. If my findings hold up to my scrutiny, and the scrutiny of my peers, then you’ll have a head start thinking through the conclusions.
Research Background
So I ran a pretty cool research project a couple of years back, where PBworks sent a survey solicitation to everyone who created an education wiki. The survey asked teachers a bunch of questions about their attitudes and practices, and then asked for a link to the wiki they had just made and permission to view it. We then waited six months and evaluated their wikis using the Wiki Quality Instrument, a tool designed to measure the degree to which wikis provide opportunities for students to develop skills such as expert thinking, complex communication, and new media literacy. We had about 250 wikis to look through. Although our sample is one of convenience (despite complicated efforts to randomly sample, which basically failed), the distribution of Wiki Quality Scores in this group is pretty similar to the distribution of scores found in other random samples. For the purposes of this thought experiment, assume our methods are robust. (My job in the next few months is to beat up the data, try to prove myself wrong, and convince myself that they really are robust.)
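To make that distribution check concrete, here is a minimal sketch of the kind of comparison I have in mind, on invented data: it asks whether scores from a convenience sample look like scores from a random sample. The variable names and the choice of a two-sample Kolmogorov-Smirnov test are my illustration, not the study’s actual procedure.

```python
# Hypothetical sketch: do Wiki Quality Scores from a convenience sample look
# like those from a random sample? Data and variable names are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
convenience_scores = rng.integers(0, 25, size=250)    # stand-in for our ~250 wikis
random_sample_scores = rng.integers(0, 25, size=300)  # stand-in for a random sample

# A two-sample Kolmogorov-Smirnov test asks whether the two sets of scores
# plausibly come from the same underlying distribution.
statistic, p_value = stats.ks_2samp(convenience_scores, random_sample_scores)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
```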
As far as I know, it’s the largest study ever to survey teachers and objectively evaluate their technology use. There are lots of big surveys (where we basically ask teachers to self-report their use of technology) and lots of small studies objectively evaluating tech use (where it’s hard to be sure your sample represents the population), but I think we’re the first/biggest study to look at lots of teachers and look objectively at their work.
We asked teachers to describe themselves a few ways in the survey: demographic stuff like age, access to technology, confidence with technology, and their adherence to constructivist beliefs: Are they willing to take pedagogical risks? Do they empower students to explore independently? How much do standards and testing affect their instruction? Do they give students choices? Do their students publish and present their work? Do they value 21st century skills over content skills? We didn’t ask outright “Are you a constructivist?” because most teachers wouldn’t know what that means and would give us unreliable answers.** Instead, we tried to ask questions that sampled beliefs aligned with constructivism.
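For readers who want to picture how items like these get turned into a single measure, here is a minimal, hypothetical sketch of scoring a constructivist-beliefs scale from Likert-style responses. The item names, the reverse-coding, and the simple averaging are my assumptions for illustration, not the actual instrument.

```python
# Hypothetical sketch: combining Likert-style survey items into one
# "constructivist beliefs" score. Item names and responses are invented.
import pandas as pd

responses = pd.DataFrame({
    "pedagogical_risk":    [4, 5, 2, 3],  # 1 = strongly disagree ... 5 = strongly agree
    "student_choice":      [5, 4, 2, 3],
    "independent_inquiry": [4, 5, 1, 2],
    "test_driven":         [2, 1, 5, 4],  # worded against the construct
})

# Reverse-code the item worded against the construct, then average the items
# into a single scale score per respondent.
responses["test_driven"] = 6 - responses["test_driven"]
constructivism_score = responses.mean(axis=1)
print(constructivism_score)
```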
Hypotheses
I thought going in that we’d have two surprising results and one expected result.
My hunch was that we’d find no relationship between technology confidence and wiki quality, and no relationship between teacher age and wiki quality. I think most educators assume that young teachers are better with technology than older teachers and that technology confidence translates into technology success. My observation is that older teachers with pedagogical experience are often much better at using technology in classroom settings than young teachers, and that many teachers who are not particularly confident with technology do just fine because they are really confident at creating good learning environments.
But I thought for sure that adherence to constructivist beliefs would align with wiki quality. I suspected that I’d be able to say “see, we’ve believed this forever, and now we’re pretty sure.”
Tentative Results
Right now, contrary to the literature and common beliefs, it’s looking like almost nothing that we surveyed predicts wiki quality. Virtually every predictor in our survey has no relationship to wiki quality, and the few that do show only very modest relationships. When we put every predictor into our model, everything we measured together explains only a tiny portion of the variation in wiki quality.
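To illustrate the shape of that kind of analysis (not the study itself), here is a minimal sketch of a throw-everything-in regression on fabricated data. Because the outcome here is random by construction, the R-squared comes out near zero, which is the pattern the tentative results describe. The predictor names are placeholders.

```python
# Hypothetical sketch of an "every predictor in the model" regression.
# All data are fabricated; the outcome is unrelated to the predictors,
# so the fitted model should explain almost none of the variance.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 250
predictors = pd.DataFrame({
    "age":                  rng.integers(22, 65, n),
    "tech_confidence":      rng.integers(1, 6, n),
    "constructivism_score": rng.normal(3, 1, n),
    "tech_access":          rng.integers(1, 6, n),
})
wiki_quality = rng.normal(10, 3, n)  # stand-in for Wiki Quality Scores

model = sm.OLS(wiki_quality, sm.add_constant(predictors)).fit()
print(model.summary())                       # coefficients and p-values
print("R-squared:", round(model.rsquared, 3))
```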
These findings are very disappointing for a number of reasons. First, null findings are always hard to publish. People want to hear that you found something, not that you didn’t find something you expected to. Moreover, lots of scholars (like the type who review journal articles) have a great deal invested in the idea that constructivist values and technology confidence can increase teachers’ effectiveness with technology. Nobody wants to hear, “Hey, maybe those things we’ve spent decades studying aren’t that important out in the wilds of regular classrooms.” (Though pretty much all of my major research findings to date have been depressing for ed tech advocates: wikis are rarely used collaboratively; free technology exacerbates inequality; etc.)
The Problems of Publishing Null Findings
When I submit the article for review, it will probably be attacked pretty hard methodologically, since the findings are not what people hope for, so I’ll have to spend some months dotting i’s and crossing t’s and making sure I didn’t screw anything up (which I very well may have). And at the end of it, the study will still be hard to publish because it doesn’t provide any new directions for us to follow as educators. It raises questions about our operating theories and beliefs without really providing new directions or new insights. That may be useful, but even I find it unsatisfying. (Frankly, I’m having a hard time staying motivated to push forward on this, because it will probably be a lot of work to land a mid-tier publication that no one wants to read.)
Significance
Let’s assume for a minute that I didn’t screw up the study and that these results hold up (and remember, they haven’t yet; no using this as evidence). How would we need to change our worldview?
What if all the graduate classes on constructivism, the keynote speakers, the workshops, or the great readings on social constructivism don’t actually—on average, in the population—help teachers use technology better? (If they affected you personally, that’s great. Data is not the plural of anecdote.) What if boosting technology confidence through trainings and online courses doesn’t actually lead to better outcomes in real classroom settings?
If our operating theories are true, then my study should have clearly shown that constructivism and technology confidence predict wiki quality. Either my study was conducted incorrectly, or our theories are incorrect.
What if all the really important levers of effective technology use are ones we haven’t studied much yet, like school culture, or having a common instructional language, or teachers’ disposition towards playfulness, or something else? If that were true, then we’d need to spend much more time exploring new trees than barking up the old ones. My study doesn’t say where the new trees are, but it does raise some serious questions about the old ones (if it’s true; remember, this is still a thought experiment).
A Final Anecdote from my Past as a Search and Rescue Leader
When I was younger, I worked for the Blue Ridge Mountain Rescue Group as a SAR Incident Commander. We’d get hundreds of volunteers at searches, and our group had hundreds of members. Most people, for their whole careers, never got on a team that actually found a lost person. We reminded people constantly that this was completely fine, that every team that went out in the field and came back empty-handed was critical to our success. When you are looking for something hard to find, figuring out where it isn’t is as important as figuring out where it is.
Alright, fire away. Ask me about all the things I haven’t thought of. Help make it better.
For regular updates, follow me on Twitter at @bjfr and for my papers, presentations and so forth, visit EdTechResearcher.
** One of the places where people will probably critique our methods is our approach to surveying dimensions of constructivism. We didn’t use a lot of existing surveys because we didn’t think they were designed according to contemporary best practices in survey methodology. When we made our own, we tried to do some smart sampling of the construct, but reasonable people can disagree with our sampling. Still, even with all the things we did sample, our R-squared values appear to be very low.