Sam Wineburg is a co-founder of the Digital Inquiry Group (formerly the Stanford History Education Group), a national leader in providing free social studies curricula—including materials like Civic Online Reasoning, Reading Like a Historian, and Beyond the Bubble. The Margaret Jacks Professor of Education and professor of history, emeritus, at Stanford University, Wineburg has long focused on challenges involving civic education, curriculum, and technology. His most recent book is Verified: How to Think Straight, Get Duped Less, and Make Wise Decisions about What to Believe Online. Given the interest in misinformation, how we teach students to navigate social media, and the challenges of civic literacy, I thought it worth reaching out to Wineburg to get his take. Here’s what he had to say.
—Rick
Rick: You’ve become a leading authority on digital literacy and misinformation. Can you talk a bit about how you got into these issues?
Sam: Fortuitously. Back in 2015, I got an email from a program officer at Chicago’s McCormick Foundation. This person had seen our innovative history assessments, in which students analyze primary sources from the collection of the Library of Congress. This person wanted to know if we could create an instrument that directly measured students’ ability to assess online sources. We accepted the challenge. The next year, Trump was elected, and “fake news” became part of the public discourse. During this time, the conventional wisdom preached by people like Marc Prensky and others was that adults were the digital knuckleheads but that young people—also known as “digital natives”—had game. But we weren’t so sure, so we set out to measure students’ abilities to sift fact from fiction, in many cases by having them analyze actual material from the web. After combing through nearly 8,000 responses from students in middle school through college, we found them to be just as confused as the rest of us. A Wall Street Journal reporter featured our study, which led to appearances on NPR, BBC, ABC, and countless other outlets. From that point on, there was no turning back.
Rick: Can you tell me more about that study? When you say you found the students were “just as confused as the rest of us,” what did you see?
Sam: One of the findings that the Wall Street Journal highlighted was that 82 percent of middle school students couldn’t tell the difference between an ad and a news story. What the Journal didn’t say was that in a study conducted by Edelman-Berland, a global communications firm, 59 percent of adults couldn’t tell the difference, either. Findings like these made us realize that we were all in the same boat—and that boat was rapidly taking on water.
Rick: Is there an appetite for schools taking this on?
Sam: There’s increased attention at the legislative level to issues of information literacy. States like Illinois, California, and New Jersey have passed curriculum mandates, and there’s legislative action in something like 15 other states. What’s heartening is that this concern spans the red state/blue state divide. Teaching students to be wise consumers of digital information can’t be a partisan issue. Without the ability to tell the difference between information backed by solid evidence and sham, democracy doesn’t stand a chance.
Rick: I love the goal. But, as you know, we live in a time of sometimes intense disagreement about what’s fact and what’s “misinformation.” I mean, we’ve seen credible authorities vehemently denounce some statements as falsehoods, on topics like the origins of COVID or Hunter Biden’s laptop—only to later learn the statements were actually true. How do you navigate those tensions?
Sam: Listen, there are topics where authorities rushed to pronounce judgment—case in point, the COVID lab-leak hypothesis. To broach the idea in 2020 branded you a racist; today, the origin of the virus is an open question. But to generalize from this instance—to go from “authorities sometimes err” to “you can’t trust them at all”—leads to a crippling nihilism. Let’s stick with medical issues for a second: The rage on TikTok is a procedure called “mewing,” the idea that by doing repetitive jaw exercises, you can change your jawline and achieve a sleeker profile. There are hundreds of videos with millions of views attesting to the procedure, including endorsements from supermodels. But if you know how to separate signal from noise on the internet, you quickly learn that there are no medical studies attesting to the efficacy of the procedure and that the dentist who promoted it had his dental license stripped. You won’t die from mewing, but there’s a lot of scary medical advice floating around that can lead to serious illness or even death. When in doubt, it’s wise to go with authorities like the Mayo Clinic over sketchier places such as the [fictional] Dave and Tom’s Homeopathic Supplements.
Rick: How has the emergence of AI affected your work?
Sam: AI magnifies the challenge. We have a wondrous tool that’s been programmed to offer persuasive responses—accurate or not. In too many cases, the responses of large language models—LLMs—are the linguistic equivalent of a green smoothie: a phrase from a Facebook post combined with text drawn from a RAND report, abutting content from Wikipedia, and a sprinkling of text from The Onion. In fact, the now-famous “Elmer’s glue keeps cheese on pizza” LLM response originally came from a satirical Reddit post. AI weakens the most important bond we need to consider when evaluating information: the nexus between claim and evidence. In the words of cognitive scientist Gary Marcus, generative AI is “frequently wrong, but never in doubt.” Rather than rendering traditional search skills obsolete, AI has made the ability to verify information even more imperative. Letting kids loose on AI without establishing that they have search skills in place is like framing a house without first pouring a foundation.
Rick: Your book Verified, published last year, is a resource for helping to sort fact from fiction on the internet. What are a few key takeaways?
Sam: We think of our book as the driver’s manual for the internet that none of us ever received. It helps readers determine what’s true and what’s not. In the days of print, newspapers gave us tactile clues to decipher information: news on the front page, editorial content on the interior, advertisements set off in boxes, etc. The internet erases these clues. When a post appears in our feed, do we really know what it is? Imagine, for example, when searching for nutrition information, we land on the site of the “International Life Sciences Institute.” At first glance, this looks like a credible scientific organization. That sense increases as we spend more time on the site, examining the group’s refereed publications and perusing the impressive bios of its scientific advisers. Only when we leave the site and read laterally—i.e., using the internet to check the internet, as we explain in Verified—do we learn that the group receives the bulk of its funding from the food, chemical, and agribusiness industries. This is how public policy is transacted on the internet. Front groups, lobbyists, and partisan organizations portray themselves as “nonpartisan” or “grassroots” or “citizen-led.” In many cases, these sites are the handiwork of public relations firms that specialize in creating digital masquerades. With a few right moves, however, you can often detect these ruses in as little as 30 seconds, which we show how to do in Verified.