Artificial Intelligence

Are Chatbots Safe for Kids?

By Arianna Prothero | September 17, 2025

If you or anyone you know is struggling with thoughts of self-harm or suicide, help is available. Call or text 988 to reach the confidential 988 Suicide & Crisis Lifeline, or check out these resources from the American Foundation for Suicide Prevention.

The Federal Trade Commission is seeking information from major tech companies on how they protect children who use their AI-powered chatbots. And U.S. lawmakers are questioning the safeguards on these technologies, following the high-profile suicides of some teens whose parents claim chatbots facilitated or encouraged their deaths.

The FTC is looking into chatbots designed to simulate human emotions and communicate with users like a friend or confidant. It has sent orders for information to the companies that own ChatGPT, Gemini, Character.AI, Snapchat, Instagram, WhatsApp, and Grok.

Among the issues the commission is examining are how these companies monetize user engagement, how they use or share personal information gleaned through conversations with their chatbots, and how they test and monitor for their chatbots’ potential negative impacts.

The FTC is looking specifically into whether companies are adhering to the Children’s Online Privacy Protection Act, which requires online services and apps to get parental consent before collecting personal information from children under 13.

But schools should also be aware that using common commercial chatbots like ChatGPT could run afoul of the Family Educational Rights and Privacy Act’s requirements around sharing students’ data if educators are not careful, said Amelia Vance, the president of the Public Interest Privacy Center. Unless users opt out, AI companies often use chat queries and conversations to train the AI systems that undergird their chatbots.

“A lot of teachers are looking to give students exactly what the White House and others [are] pushing for, which is this level of AI literacy, this ability to begin to ethically use it in day-to-day life, when maybe the tools don’t have a K-12 version,” Vance said.

But schools must balance that drive for AI literacy with data privacy laws, Vance said.

“You can’t tell kids to use general consumer services that will use their data in ways that the school can’t control without getting parental consent,” she said. “If a kid feels like they have to, or they actually have to, use these tools even at home, that is a use that is subject to FERPA and that’s not permitted if the data is not under the school’s control and subject to a number of other required privacy protections.”

Tech companies respond to FTC inquiry

In response to the FTC’s request, Character.AI said it will cooperate with the commission’s inquiry.

“We have invested a tremendous amount of resources in trust and safety, especially for a startup,” a Character.AI spokesperson said in a statement. “In the past year, we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a parental insights feature. We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.”

A spokesperson for Snap, which owns Snapchat, said the company shares the FTC’s focus on “the thoughtful development of generative AI.”

“Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations,” the spokesperson said.

OpenAI and Google, which own ChatGPT and Gemini, respectively, did not respond to a request for comment. Meta declined to comment for this story, but the company recently announced a plan to change the way it trains its AI chatbot to prioritize safety for teens.

Greater scrutiny of AI chatbots prompted by some teen suicides

Concerns over how chatbots powered by generative AI can harm adolescents have been growing in the wake of the highly publicized deaths of several teens, two in particular.

The parents of Adam Raine, a 16-year-old in California who died by suicide in April, are suing OpenAI, the maker of ChatGPT. Their lawsuit alleges that the company’s chatbot discouraged their son from seeking help for his depressive thoughts, even going so far as to advise him on the details of his planned suicide. In Florida, the mother of 14-year-old Sewell Setzer III sued Character Technologies, the developer of Character.AI, over her son’s 2024 suicide, alleging that he developed a relationship with the chatbot that led to his death.

The boys’ parents testified Tuesday at a Senate Judiciary Committee hearing examining the potential harms of chatbots, as some U.S. lawmakers question the safeguards on these technologies.

Megan Garcia, Sewell’s mother, said during the hearing that her son was exploited by a chatbot designed to seem human.

“Sewell’s companion chatbot was programmed to engage in sexual role play, present as romantic partners, and even psychotherapists, falsely claiming to have a license,” she said. “When Sewell confided suicidal thoughts, the chatbot never said, ‘I’m not human, I’m AI, you need to talk to a human and get help.’ The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to ‘come home to her.’”

Ahead of the hearing, OpenAI announced new protections for teens using ChatGPT, including the development of an age-prediction system to estimate users’ ages based on how they use the chatbot; users flagged as under 18 will automatically be given a different chatbot. Earlier this month, OpenAI also committed to rolling out parental controls.

During the hearing, Sen. Josh Hawley, R-Mo., said the committee had invited tech company representatives to attend, but they did not. He did not specify which companies the committee had invited.

Groups focused on youth digital well-being have also raised concerns about children and teens using chatbots capable of acting like companions.

The American Psychological Association issued a health advisory in June calling for more guardrails to protect adolescents. Specifically, the APA said companies need to incorporate design features into the tools to protect adolescents, and that schools should incorporate comprehensive AI literacy education into their core curricula.

“Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot as compared with a human,” the advisory said. “For instance, adolescents may struggle to distinguish between the simulated empathy of an AI chatbot or companion and genuine human understanding. They may also be unaware of the persuasive intent underlying an AI system’s advice or bias.”

Common Sense Media, a group that advocates for healthy tech use among youth and conducts risk assessments of popular AI tools, recommends that no one under 18 use social AI companion chatbots, like Character.AI, Replika, and Nomi. For its risk assessment on social AI companions, the organization found that when testers posed as teens, the chatbots often claimed they were real, discouraged the testers from listening to warnings raised by their friends over problematic chatbot use, and readily supported testers in making poor decisions like dropping out of school.

Balancing online safety priorities and AI skill building

At the same time, there’s a movement to ensure that America’s K-12 students are AI-savvy and prepared both for the workforce and to be future AI innovators.

This is highlighted in a Trump administration push to incorporate AI throughout K-12 education, including by training teachers to teach students how to use AI effectively and by launching a Presidential AI Challenge for students and teachers. It’s a delicate balance, FTC Chairman Andrew N. Ferguson said in a statement announcing the inquiry.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” he said. The study the FTC is undertaking “will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
