More Teens Than You Think Have Been ‘Deepfake’ Targets

By Olina Banerji — March 03, 2025 4 min read
A growing number of teenagers know someone who has been the target of “deepfake” pornographic images or videos generated by artificial intelligence, a new survey shows.

One in 8 young people aged 13 to 20—and 1 in 10 teenagers aged 13 to 17—said they “personally know someone” who has been the target of deepfake nude imagery, and 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.

These statistics come from a survey of 1,200 young people, conducted Sept. 7 to Oct. 7 and released by Thorn, a nonprofit group that advocates for child safety online. The report highlights the relative ease with which young people can create deepfakes: 71 percent of respondents who created deepfake imagery of others said they found the technology to do so on social media; 53 percent report they found tools through an online search engine.

Schools nationwide have battled the rising challenge of deepfake nudes over the last few years. Boys as young as 14 have used artificial intelligence to create fake, yet lifelike, pornographic images of their female classmates and shared them on social media sites like Snapchat.

These cases have raised new questions for schools about how to discipline students who create these types of images and prompted them to review policies on the proper use of technology and on sexual misconduct. The concern over online safety has also sparked legislative action by a bipartisan group of lawmakers. To date, 136 bills addressing nonconsensual intimate deepfakes have been introduced in 39 states, according to Public Citizen, a nonprofit consumer advocacy organization.

The number of young people who are personally familiar with deepfakes is “really shocking,” said Melissa Stroebel, the head of research at Thorn and a co-author of the study.

The number of young people—1 in 17—who have been targets of deepfakes represents “a small percentage, but when we put that in context, that’s [at least] one in every classroom,” Stroebel said, adding: “That’s a startling rate of exposure to this particular harm at this point.”

More than 80 percent of the young people surveyed said they recognize that deepfake nude imagery “causes harm” to the person depicted. The top reasons they identified as causing harm were the “emotional and psychological impact” of the image and “reputational damage.”

This finding, Stroebel said, indicates that even as adults continue to debate the “reality” of these synthetic images and the harm they cause, most young people feel strongly that creating or viewing this kind of imagery is abusive.

“That’s a good sign,” she said. “When young people recognize this type of imagery as harmful and abusive, they may be more likely to report it, provided [that] awareness also reinforces the fact that this threat is serious, rather than just a normal part of being online.”

Teens recognize the harm. But to what extent?

The report highlights a disconnect between how widely teenagers know about deepfakes—1 in 3 teens and 1 in 2 young adults have heard the term “deepfake”—and how they perceive the harm these images cause.

Too many young people don’t automatically consider deepfake images to be harmful, Stroebel said.

Teenage boys and young men are more likely than their female counterparts to think that deepfakes cause no harm, or that the harm is “context dependent.” For instance, 7 percent of boys aged 13 and 14 thought the harm depended on context, compared with 2 percent of girls in the same age group. Among 15- to 17-year-olds, 10 percent of boys thought the harm was context dependent, while 7 percent of their female peers thought so.

Overall, the 9 percent of young people who didn’t think deepfakes cause any harm thought so mainly because these images aren’t real and don’t cause physical harm.

It’s crucial for educators and other adults to teach young people the harms of deepfakes because that can affect how teens navigate the risks from deepfakes they’re increasingly encountering online, Stroebel said. It can also affect how often teens use AI tools—easily available online—to create and share deepfake images of others.

The Thorn report also captured responses from a small subset—2 percent—of young people who have created deepfake images, with a large majority of the creators—74 percent—targeting women. Over 30 percent of the creators indicated they had made nude imagery that depicted minors.

More than half of this group of creators reported that they shared these images with their friends or people at their school. Notably, 27 percent of the creators said the images they made were not shared and were meant only for personal consumption. This could mean that people victimized by a deepfake never learn they have been depicted and thus have no recourse.

Schools and adults need to talk about risks with young people

To mitigate the risks, schools can start by clearly identifying deepfake nude imagery as a form of abuse and including it in their policies against bullying and harassment.

While most young people understand that deepfake nudes are a form of abuse, the survey found that 16 percent of respondents targeted by a deepfake don’t seek support to deal with the abuse because they fear being shamed, carry a sense of personal blame, or have concerns about not being believed.

Of those who did seek support, 60 percent said they either reported the image online or blocked the person who created it. More than half also sought guidance from a parent, teacher, or adult in their community. Most respondents who acted took both online and offline actions to deal with the abuse, the report noted.

Parents, guardians, or adults in the community around young people should be prepared to have “necessary conversations around relationship awareness, consent, and sexual education,” Stroebel said. “The digital world is just another place where that development is happening at this point.”
