Artificial Intelligence

More Teens Than You Think Have Been ‘Deepfake’ Targets

By Olina Banerji — March 03, 2025 4 min read

A growing number of teenagers know someone who has been the target of “deepfake” pornographic images or videos generated by artificial intelligence, a new survey shows.

One in 8 young people aged 13 to 20—and 1 in 10 teenagers aged 13 to 17—said they “personally know someone” who has been the target of deepfake nude imagery, and 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.

These statistics come from a survey of 1,200 young people, conducted Sept. 7 to Oct. 7 and released by Thorn, a nonprofit group that advocates for child safety online. The report highlights the relative ease with which young people can create deepfakes: 71 percent of respondents who created deepfake imagery of others said they found the technology to do so on social media; 53 percent reported finding tools through an online search engine.


Schools nationwide have battled the rising challenge of deepfake nudes over the last few years. Boys as young as 14 have used artificial intelligence to create fake, yet lifelike, pornographic images of their female classmates and shared them on social media sites like Snapchat.

These cases have spawned new questions for schools about how to discipline students who create these types of images and prompted them to review policies on the proper use of technology and sexual misconduct. The concern over online safety has also sparked legislative action by a bipartisan group of lawmakers. To date, 136 bills to address nonconsensual intimate deepfakes have been introduced in 39 states, according to Public Citizen, a nonprofit consumer advocacy organization.

The number of young people who are personally familiar with deepfakes is “really shocking,” said Melissa Stroebel, the head of research at Thorn and a co-author of the study.

The number of young people—1 in 17—who have been targets of deepfakes represents "a small percentage, but when we put that in context, that's [at least] one in every classroom," Stroebel said, adding: "That's a startling rate of exposure to this particular harm at this point."

More than 80 percent of the young people surveyed said they recognize that deepfake nude imagery “causes harm” to the person depicted. The top reasons they identified as causing harm were the “emotional and psychological impact” of the image and “reputational damage.”

This finding, Stroebel said, indicates that even if the adults are still debating the “reality” of these synthetic images and the harm caused by them, most young people feel strongly that creating or viewing this kind of imagery is abusive.

“That’s a good sign,” she said. “When young people recognize this type of imagery as harmful and abusive, they may be more likely to report it, provided [that] awareness also reinforces the fact that this threat is serious, rather than just a normal part of being online.”

Teens recognize the harm. But to what extent?

The report highlights a disconnect between the common knowledge of deepfakes among teenagers—1 in 3 teens and 1 in 2 young adults have heard of the term “deepfakes”—and the perception of harm caused by these images.

Too many young people don’t automatically consider deepfake images to be harmful, Stroebel said.

Teenage boys and young men are more likely than their female counterparts to think there's no harm caused by deepfakes, or that the harm is "context dependent." For instance, 7 percent of boys aged 13 and 14 thought the harm depended on the context, compared with 2 percent of girls in the same age group. Among boys between ages 15 and 17, 10 percent thought the harm was context dependent, while 7 percent of their female peers thought so.


Overall, the 9 percent of young people who didn’t think deepfakes cause any harm thought so mainly because these images aren’t real and don’t cause physical harm.

It’s crucial for educators and other adults to teach young people the harms of deepfakes because that can affect how teens navigate the risks from deepfakes they’re increasingly encountering online, Stroebel said. It can also affect how often teens use AI tools—easily available online—to create and share deepfake images of others.

The Thorn report also captured responses from a small subset—2 percent—of young people who have created deepfake images, with a large majority of the creators—74 percent—targeting women. Over 30 percent of the creators indicated they had made nude imagery that depicted minors.

More than half of this group of creators reported that they shared these images with their friends or people at their school. Notably, 27 percent of the creators said the images they made were not shared and meant only for personal consumption. This could mean that people victimized by a deepfake don’t know they’re depicted and won’t have any recourse.

Schools and adults need to talk about risks with young people

To mitigate the risks, schools can start by clearly identifying deepfake nude imagery as a form of abuse and including it in their policies against bullying and harassment.

While most young people understand that deepfake nudes are a form of abuse, the survey found that 16 percent of respondents targeted by a deepfake don’t seek support to deal with the abuse because they fear being shamed, carry a sense of personal blame, or have concerns about not being believed.

Of those who did seek support, 60 percent said they either reported the image online or blocked the person who created it. More than half also sought guidance from a parent, teacher, or adult in their community. Most respondents who acted took both online and offline actions to deal with the abuse, the report noted.

Parents, guardians, or adults in the community around young people should be prepared to have “necessary conversations around relationship awareness, consent, and sexual education,” Stroebel said. “The digital world is just another place where that development is happening at this point.”
