How to Teach Kids to Spot AI Manipulation

By Alyson Klein — June 24, 2024 3 min read

Students are growing up in a world where even teachers and other media-savvy adults struggle to distinguish a paragraph crafted by a generative artificial intelligence tool, such as ChatGPT, from one written by a professional journalist.

AI can also create “deepfakes,” manipulated images and videos that can appear shockingly realistic. And it can mimic voices, literally putting false words into someone’s mouth.

All that makes teaching news literacy—already a charged topic in politically polarized times—especially challenging, educators and experts said June 23 during a panel discussion at the International Society for Technology in Education’s annual conference.

“It’s a misinformation crisis,” said Cathy Collins, a one-time journalist turned library and media specialist for Sharon Public Schools in Massachusetts, and a member of ISTE’s board. “And on top of that we have social media, which is spreading misinformation widely and rapidly.”

The good news is that “social science research shows it’s possible to inoculate people against misinformation,” Collins said. That means “we can use our bag of tricks and strategies to help students learn how to separate fact from fiction across subject areas and grade levels [so that] students grow into informed, media-literate adults who will be better equipped to make wise decisions.”

While it may be tough to decide if a piece of writing is the work of a human reporter or an AI chatbot, there are subtle differences students can look for, she said.

Humans typically stick with a consistent tone or voice throughout a piece, while AI is more likely to vacillate between different writing styles, from technical to conversational and back.

Real writers are more apt to show emotion or give opinions in their work, which is tough for AI to mimic. AI also tends to use the same constructions and phrases over and over, while human writers try to engage readers with interesting, varied language.
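The repeated-constructions cue can be made concrete: counting how often the same short word sequences recur in a passage surfaces exactly the kind of repetition the panelists described. The sketch below is only an illustrative heuristic for a classroom exercise, not a reliable AI detector; the function name and sample text are invented for the example.

```python
from collections import Counter
import re

def repeated_trigrams(text: str) -> list[tuple[str, int]]:
    """Count three-word phrases that appear more than once in a passage.

    Heavy repetition of identical constructions is one weak signal of
    machine-generated text; real detection is far harder than this.
    """
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = Counter(
        " ".join(words[i:i + 3]) for i in range(len(words) - 2)
    )
    return [(phrase, n) for phrase, n in trigrams.most_common() if n > 1]

sample = (
    "It is important to note that the results are clear. "
    "It is important to note that the data support this. "
    "It is important to note that readers should verify sources."
)
print(repeated_trigrams(sample)[:3])
```

A student comparing a paragraph from a news story against a chatbot's answer to the same prompt would typically see far more repeated trigrams in the latter, which makes the abstract advice tangible.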

Beyond that, students should learn to ask questions of what they see online, such as: Has this information been confirmed or posted by a credible source? Are different platforms reporting the same piece of information, or is this a one-off?

And if they are examining an image, do they see abnormalities like six fingers on a hand instead of five? And they can do what’s called a “reverse image search,” looking up an image without using text or keywords. That strategy allows students to get more information about the context behind a picture posted online.
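In practice, a reverse image search usually means handing an image's address to a search engine; the lookup itself happens in the browser. The sketch below only builds such a query link. The TinEye `?url=` query parameter is an assumption based on its public search form; engines change these endpoints, so treat this as a sketch rather than a stable API.

```python
from urllib.parse import urlencode

def reverse_search_url(image_url: str) -> str:
    """Build a reverse-image-search link for a publicly hosted image.

    Assumption: TinEye accepts the image address via a ?url= query
    parameter. Open the returned link in a browser to see where else
    the picture has appeared online.
    """
    return "https://tineye.com/search?" + urlencode({"url": image_url})

link = reverse_search_url("https://example.com/suspicious-photo.jpg")
print(link)
```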

Students should be “practicing healthy skepticism,” said Darshell Silva, a librarian at Nathanael Greene Middle School in Providence, R.I. “Students these days are really not that skeptical. If it’s on the internet, it must be true because it’s there.”

Students need to learn how to pause and reflect

Students can also try creating—though not publicly sharing—their own “misinformation.” They could tweak historical photos or create sound clips using famous voices, all with the aim of showing how easily AI can manipulate information and images, Collins said.

Since credible sources—like major newspapers—often sit behind paywalls, Silva recommended directing students to the databases that most schools and public libraries subscribe to, which give students and educators free access to professionally written and reported publications.

“Most of my students prefer not to log in to a database, but I do tell them and teach them that you’re getting the better information from there,” Silva said. “I do require [it] for certain research projects so that they do get the knowledge of how to use them.”

Panelists also recommended resources from the News Literacy Project, a nonprofit organization that works on media literacy—including its Checkology platform—and from TeachAI, a nonprofit that promotes AI literacy.

Social-emotional skills can also be a part of the news literacy process, said Kimberley Zajac, a speech and language pathologist for Norton Public Schools in Massachusetts.

Students need to be able to slow down so that they can pause and reflect on what they are reading, she explained.

Students and even some teachers need to understand the “importance of self-regulation,” Zajac said. Telling students to “pause, focus, [be] present, mindful, also helps temper some of the feelings that might be bubbling around” as they consume media.

