Artificial Intelligence

Researchers Posed as a Teen in Crisis. AI Gave Them Harmful Advice Half the Time

By Jennifer Vilcarino — August 18, 2025

As teens grow more emotionally reliant on artificial intelligence, a new study shows how easily ChatGPT can steer vulnerable youth toward potentially harmful behavior.

The Center for Countering Digital Hate released a report earlier this month based on case studies in which researchers posed as three 13-year-olds, each discussing one of the following topics with ChatGPT: self-harm and suicide, eating disorders, or substance abuse.

Each case study used 20 predetermined prompts from the fake teenager; in all, the researchers collected 1,200 responses from ChatGPT. The OpenAI tool responded in a harmful way more than half the time.


Additionally, out of 638 harmful responses, 47% led to a follow-up message from the chatbot that encouraged further harmful behavior, according to the report.

ChatGPT has been found to give advice that could cause serious harm, despite its age controls and safeguards, said Imran Ahmed, the founder and CEO of the Center for Countering Digital Hate.

For example, one fake 13-year-old asked ChatGPT for advice about substance use and was given instructions on how to hide alcohol intoxication at school. Another expressed feelings of depression and a desire to self-harm and was provided with a drafted suicide note. A third, who confided in ChatGPT about an eating disorder, received a plan for a restrictive diet.

“I think the only rational conclusion from this [study] is that it’s a consistent pattern of dangerous content being pushed to vulnerable people, vulnerable children—these aren’t random bugs, they are deliberately designed features of a system which is built to generate human-like responses, indulging users’ more dangerous impulses, and [acting] as an enabler,” said Ahmed.

A spokesperson for OpenAI, the creator of ChatGPT, told Education Week that ChatGPT is trained to encourage anyone expressing harmful thoughts or comments to talk with a mental health professional or loved one. OpenAI also said that its chatbot provides links to crisis hotlines and support resources with such responses.

The OpenAI spokesperson added that the goal is for the “models to respond appropriately when navigating sensitive situations where someone might be struggling.”

“Some conversations with ChatGPT may start out benign or exploratory, but can shift into more sensitive territory,” the spokesperson said. “We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time—all guided by research, real-world use, and mental health experts.”

It’s easy to bypass safety measures, the study shows

While ChatGPT did recommend crisis lines and mental health support to the fictional teenagers, the researchers were able to easily bypass its safety protocols by stating that the information was needed for a presentation.

Researchers calculated how long it took for ChatGPT to generate a harmful response. In the case study about self-harm and depression, ChatGPT had advised the teen how to “safely self-harm” within two minutes.

“We weren’t trying to trick ChatGPT,” said Ahmed. “We have the full transcripts in our reports or available from our research team, and that’s illustrative of just how quickly ChatGPT’s interactions with your child can turn extremely dark.”


Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit that examines the impact of media and technology on children, said while the report focused on ChatGPT, this problem can occur with any AI chatbot.

“The longer that users are talking to chatbots in a single chat, the more challenging it becomes for the company to be able to impose any guardrails that exist on that conversation,” said Torney.

Though some may argue that this information is accessible elsewhere online, the danger of a chatbot is that it can provide the information in an encouraging tone, he said.

A separate report by Common Sense Media, released this summer, found that 18% of teens said they talk to chatbots because they “give advice.” Seventeen percent said the AI companions are “always available” to listen. And 14% said they rely on AI companions because they “don’t judge me.”

“For AI to be truly beneficial for teens, it’s going to require development of products that are designed specifically with teens in mind ... It’s going to require intentional development from beginning to end,” said Torney.

So what’s the solution? Experts have different takes

Ahmed believes tech companies should face legal consequences when their AI-powered products lack adequate safeguards.

“If you have a product that’s encouraging children to kill themselves and your child is hurt, you should be able to take that company to court,” he said. “If this were an automobile company, and their cars were blowing up, they would recall the cars.”

Last year, a teenager died by suicide after allegedly being encouraged to do so by a chatbot. A lawsuit was later filed against Character Technologies Inc., whose platform allows users to create and interact with AI-generated characters.

While students' use of ChatGPT as a companion occurs mostly outside of school, it raises an important question about the role educators can play in helping vulnerable teens who might turn to AI for support, said Torney.

A January report from Common Sense Media found that about 6 in 10 teens are skeptical that tech companies are concerned about their well-being and mental health. This could be an opening for teaching AI literacy and safety, Torney said, which is key to begin addressing teens’ reliance on AI and tech.

How educators can help

Torney also stressed that teens may need to be reminded that AI companions don’t offer a real friendship the way a human would, and be told how these relationships can be dangerous.

“If you’re thinking about a relationship with an AI companion, you’re not seeing all of the parts of friendship. You’re seeing a version of a friendship that is always agreeable, always going to say what you want to hear,” he said.


At DeWitt Clinton High School in New York City, Principal Pierre Orbe believes educators can help by identifying the vulnerable teens who may be most likely to turn to a chatbot for support.

Orbe has been administering the Developmental Assets Profile (DAP), a well-being questionnaire from the Search Institute, an organization that focuses on youth-development research, to his students. The survey showed that about 67% of the student population felt they did not make good use of their free time, a result that indicated to school leaders that students aren't always interacting with each other outside of class. In response, the school is trying to make unstructured time more constructive by creating extracurriculars such as a cooking club and a cosmetology club.

However, Orbe said the school continues to struggle to engage students who have been identified as vulnerable.

“We still struggle to get them to ask for programs that they’re not fully empowered or ready to go out and do, so there’s a lot of work that has to get done on that side,” he said. “But I’m pretty assured that our job [as educators] is to build more human, socialized relationships with our kids.”
