Opinion Blog


Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


Students Are ‘Digital Natives,’ But Here’s Where They Struggle

A leading researcher argues ‘digital literacy’ can help
By Rick Hess — September 24, 2024 6 min read

Sam Wineburg is a co-founder of the Digital Inquiry Group (formerly the Stanford History Education Group), a national leader in the provision of free social studies curricula—including materials like Civic Online Reasoning, Reading Like a Historian, and Beyond the Bubble. The Margaret Jacks Professor of Education and professor of history, emeritus, at Stanford University, Wineburg has long focused on challenges involving civic education, curriculum, and technology. His most recent book is Verified: How to Think Straight, Get Duped Less, and Make Wise Decisions about What to Believe Online. Given the interest in misinformation, how we teach students to navigate social media, and the challenges of civic literacy, I thought it worth reaching out to Wineburg to get his take. Here’s what he had to say.
—Rick

Rick: You’ve become a leading authority on digital literacy and misinformation. Can you talk a bit about how you got into these issues?

Sam: Fortuitously. Back in 2015, I got an email from a program officer at Chicago’s McCormick Foundation. This person had seen our innovative history assessments, in which students analyze primary sources from the collection of the Library of Congress. This person wanted to know if we could create an instrument that directly measured students’ ability to assess online sources. We accepted the challenge. The next year, Trump was elected, and “fake news” became part of the public discourse. During this time, the conventional wisdom preached by people like Marc Prensky and others was that adults were the digital knuckleheads but that young people—also known as “digital natives”—had game. But we weren’t so sure, so we set out to measure students’ abilities to sift fact from fiction, in many cases by having them analyze actual material from the web. After combing through nearly 8,000 responses from students in middle school through college, we found them to be just as confused as the rest of us. A Wall Street Journal reporter featured our study, which led to appearances on NPR, BBC, ABC, and countless other outlets. From that point on, there was no turning back.

Rick: Can you tell me more about that study? When you say you found the students were “just as confused as the rest of us,” what did you see?

Sam: One of the findings that the Wall Street Journal highlighted was that 82 percent of middle school students couldn’t tell the difference between an ad and a news story. What the Journal didn’t say was that in a study conducted by Edelman-Berland, a global communications firm, 59 percent of adults couldn’t tell the difference, either. Findings like these made us realize that we were all in the same boat—and that boat was rapidly taking on water.

Rick: Is there an appetite for schools taking this on?

Sam: There’s increased attention at the legislative level to issues of information literacy. States like Illinois, California, and New Jersey have passed curriculum mandates, and there’s legislative action in something like 15 other states. What’s heartening is that this concern spans the red state/blue state divide. Teaching students to be wise consumers of digital information can’t be a partisan issue. Without the ability to tell the difference between information backed by solid evidence and sham, democracy doesn’t stand a chance.

Rick: I love the goal. But, as you know, we live in a time of sometimes intense disagreement about what’s fact and what’s “misinformation.” I mean, we’ve seen credible authorities vehemently denounce some statements as falsehoods, on topics like the origins of COVID or Hunter Biden’s laptop—only to later learn the statements were actually true. How do you navigate those tensions?

Sam: Listen, there are topics where authorities rushed to pronounce judgment—case in point, the COVID lab-leak hypothesis. To broach the idea in 2020 branded you a racist; today, the origin of the virus is an open question. But to generalize from this instance—to go from “authorities sometimes err” to “you can’t trust them at all”—leads to a crippling nihilism. Let’s stick with medical issues for a second: The rage on TikTok is a procedure called “mewing,” the idea that by doing repetitive jaw exercises, you can change your jawline and achieve a sleeker profile. There are hundreds of videos with millions of views attesting to the procedure, including endorsements from supermodels. But if you know how to separate signal from noise on the internet, you quickly learn that there are no medical studies that attest to the efficacy of the procedure and that the dentist who promoted it had his dental license stripped. You won’t die from mewing, but there’s a lot of scary medical advice floating around that can lead to serious illness or even death. When in doubt, it’s wise to go with authorities like the Mayo Clinic over sketchier places such as the [fictional] Dave and Tom’s Homeopathic Supplements.

Rick: How has the emergence of AI affected your work?

Sam: AI magnifies the challenge. We have a wondrous tool that’s been programmed to offer persuasive responses—accurate or not. In too many cases, the responses of large language models—LLMs—are the linguistic equivalents of a green smoothie—a phrase from a Facebook post combined with text drawn from a RAND report, abutting content from Wikipedia, and a sprinkling of text from The Onion. In fact, the now-famous “Elmer’s glue keeps cheese on pizza” LLM response originally came from a satirical Reddit post. AI weakens the most important bond we need to consider when evaluating information: the nexus between claim and evidence. In the words of cognitive scientist Gary Marcus, generative AI is “frequently wrong, but never in doubt.” Rather than rendering traditional search skills obsolete, AI has made the ability to verify information even more imperative. Letting kids loose on AI without establishing that they have search skills in place is like framing a house without first pouring a foundation.

Rick: Your book Verified, published last year, is a resource for helping to sort fact from fiction on the internet. What are a few key takeaways?

Sam: We think of our book as the driver’s manual for the internet that none of us ever received. It helps readers determine what’s true and what’s not. In the days of print, newspapers gave us tactile clues to decipher information: news on the front page, editorial content on the interior, advertisements set off in boxes, etc. The internet erases these clues. When a post appears in our feed, do we really know what it is? Imagine, for example, when searching for nutrition information, we land on the site of the “International Life Sciences Institute.” At first glance, this looks like a credible scientific organization. That sense increases as we spend more time on the site, examining the group’s refereed publications and perusing the impressive bios of its scientific advisers. Only when we leave the site and read laterally—i.e., using the internet to check the internet, as we explain in Verified—do we learn that the group receives the bulk of its funding from the food, chemical, and agribusiness industries. This is how public policy is transacted on the internet. Front groups, lobbyists, and partisan organizations portray themselves as “nonpartisan” or “grassroots” or “citizen-led.” In many cases, these sites are the handiwork of public relations firms that specialize in creating digital masquerades. With a few right moves, however, you can often detect these ruses in as little as 30 seconds, which we show how to do in Verified.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.

