Artificial Intelligence

What a Proposed Ban on AI-Assisted ‘Deep Fakes’ Would Mean for Cyberbullying

By Alyson Klein — January 12, 2024

Last fall, students in New Jersey and Washington state used artificial intelligence tools to create fake, pornographic images of their female classmates.

If bipartisan legislation recently introduced in Congress is enacted, these kinds of activities would be against federal law and students who undertake them could be on the hook for thousands of dollars in damages.

The legislation—nicknamed the “No AI Fraud Act”—gives “all Americans the tools to protect their digital personas,” said Rep. Madeleine Dean, D-Pa., who introduced the bill with Rep. María Elvira Salazar, R-Fla. “By shielding individuals’ images and voices from manipulation, the [bill] prevents artificial intelligence from being used for harassment, bullying, or abuse.”

The bill text specifically references the New Jersey incident, citing it as a reason the legislation is needed. “From October 16 to 20, 2023, AI technology was used to create false, non-consensual intimate images of high school girls in Westfield, N.J.,” it says. It also highlights other incidents in which AI-created images of celebrities and others were used without permission, such as an ad that used the actor Tom Hanks’ face to advertise a dental plan.

Specifically, the bill would make it clear that every individual’s likeness and identity is protected, and everyone has the right to control the use of their own image and voice.

It would allow people to sue for thousands of dollars in damages if they have been negatively impacted—including emotionally—when others create or spread AI frauds using their identifying characteristics, without their permission.

At least five states—Indiana, New Hampshire, New Jersey, Utah, and Washington—have introduced bills for upcoming legislative sessions to deal with deepfakes, said Amelia Vance, the president of the Public Interest Privacy Center, a nonprofit that works on child and student data privacy issues.

Educators have been warily eyeing the events in New Jersey and Washington, she said.

“It’s obviously causing massive concern across the country. You have a lot of districts who are saying they don’t know what to do about it,” Vance said. “Despite open First Amendment questions, it seems like there is solid legal ground for legislators and others to pass laws restricting these fabricated, intimate or sexually explicit images and depictions.”

But Vance isn’t sure the proposed laws—federal or state—are necessary when it’s “kids generating these images of other kids,” she said.

“Kids are like, ‘Oh, I wonder if I could do this!’” she said, equating it to when past generations might take a yearbook photo of their teacher’s face and place it on the body of, say, a dragon or monster and pass it around class.

Vance emphasized that districts need clear policies so students understand that distributing AI-generated images of classmates is inappropriate, against the rules, and carries consequences.

When those policies are violated, schools can already discipline students in age-appropriate ways, Vance said. State cyberbullying laws can already be used to bring in local law enforcement if schools need to stop the distribution of AI-created intimate images at school, she added.

“It is great these bills are being put forward, it will clarify the landscape more generally, which will hopefully keep more 12-year-olds from experimenting,” Vance said. “But it isn’t absolutely necessary to address the problem in schools specifically.”
