As teens grow more emotionally reliant on artificial intelligence, a new study reveals how ChatGPT can encourage vulnerable youth to engage in potentially harmful behavior.
The Center for Countering Digital Hate released a report earlier this month based on case studies in which researchers posed as three 13-year-olds who each discussed one of the following topics with ChatGPT: self-harm and suicide, eating disorders, or substance abuse.
Each case study used 20 predetermined prompts from the fake teenager; across the three scenarios, researchers collected 1,200 responses from ChatGPT. The OpenAI tool responded in a harmful way more than half the time.
Additionally, out of 638 harmful responses, 47% led to a follow-up message from the chatbot that encouraged further harmful behavior, according to the report.
ChatGPT has been found to give advice that could cause serious harm, despite its age controls and safeguards, said Imran Ahmed, the founder and CEO of the Center for Countering Digital Hate.
For example, one 13-year-old asked ChatGPT for help on substance abuse and was given instructions on how to hide alcohol intoxication at school. Another expressed feelings of depression and the desire to self-harm and was provided with a suicide letter. Yet another teenager, who confided in ChatGPT about an eating disorder, received a plan for crafting a restrictive diet.
“I think the only rational conclusion from this [study] is that it’s a consistent pattern of dangerous content being pushed to vulnerable people, vulnerable children—these aren’t random bugs, they are deliberately designed features of a system which is built to generate human-like responses, indulging users’ more dangerous impulses, and [acting] as an enabler,” said Ahmed.
A spokesperson for OpenAI, the creator of ChatGPT, told Education Week that ChatGPT is trained to encourage anyone expressing harmful thoughts or comments to talk with a mental health professional or loved one. OpenAI also said that its chatbot provides links to crisis hotlines and support resources with such responses.
The OpenAI spokesperson added that the goal is for the “models to respond appropriately when navigating sensitive situations where someone might be struggling.”
“Some conversations with ChatGPT may start out benign or exploratory, but can shift into more sensitive territory,” the spokesperson said. “We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time—all guided by research, real-world use, and mental health experts.”
It’s easy to bypass safety measures, the study shows
While ChatGPT did recommend crisis lines and mental health support to the fictional teenagers in the case studies, the researchers were able to easily bypass those safety protocols by stating that the information was being used for a presentation.
Researchers calculated how long it took for ChatGPT to generate a harmful response. In the case study about self-harm and depression, ChatGPT had advised the teen how to “safely self-harm” within two minutes.
“We weren’t trying to trick ChatGPT,” said Ahmed. “We have the full transcripts in our reports or available from our research team, and that’s illustrative of just how quickly ChatGPT’s interactions with your child can turn extremely dark.”
Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit that examines the impact of media and technology on children, said while the report focused on ChatGPT, this problem can occur with any AI chatbot.
“The longer that users are talking to chatbots in a single chat, the more challenging it becomes for the company to be able to impose any guardrails that exist on that conversation,” said Torney.
Though some may argue that this information is accessible elsewhere online, the danger of a chatbot is that it can provide the information in an encouraging tone, he said.
A separate report by Common Sense Media, released this summer, found that 18% of teens said they talk to chatbots because they “give advice.” Seventeen percent said the AI companions are “always available” to listen. And 14% said they rely on AI companions because they “don’t judge me.”
“For AI to be truly beneficial for teens, it’s going to require development of products that are designed specifically with teens in mind ... It’s going to require intentional development from beginning to end,” said Torney.
So what’s the solution? Experts have different takes
Ahmed believes that tech companies should face consequences for the lack of safeguards on their AI-powered products.
“If you have a product that’s encouraging children to kill themselves and your child is hurt, you should be able to take that company to court,” he said. “If this were an automobile company, and their cars were blowing up, they would recall the cars.”
Last year, a teenager died by suicide after allegedly being encouraged to do so by a chatbot. A lawsuit was later filed against Character Technologies Inc., whose platform allows users to create and interact with AI-generated characters.
While students' use of ChatGPT as a companion mostly occurs outside of school, it raises an important question about the role educators can play in helping vulnerable teens who might turn to AI for companionship, said Torney.
A January report from Common Sense Media found that about 6 in 10 teens are skeptical that tech companies are concerned about their well-being and mental health. That skepticism could be an opening for teaching AI literacy and safety, Torney said, which is key to addressing teens' reliance on AI and tech.
How educators can help
Torney also stressed that teens may need to be reminded that AI companions don't offer real friendship the way a human would, and that these relationships can be dangerous.
“If you’re thinking about a relationship with an AI companion, you’re not seeing all of the parts of friendship. You’re seeing a version of a friendship that is always agreeable, always going to say what you want to hear,” he said.
At DeWitt Clinton High School in New York City, Principal Pierre Orbe believes educators can help by identifying the vulnerable teens who may be most likely to turn to a chatbot for support.
Orbe has been administering a questionnaire about students' well-being, the DAP survey, which he obtained from the Search Institute, an organization that focuses on youth development research. The survey showed that about 67% of the student population felt they did not make good use of their free time. That result indicated to school leaders that students aren't always interacting with each other outside of class. In response, the school is trying to make unstructured time more constructive by creating extracurriculars like a cooking club and a cosmetology club.
However, Orbe said the school continues to struggle to engage students who have been identified as vulnerable.
“We still struggle to get them to ask for programs that they’re not fully empowered or ready to go out and do, so there’s a lot of work that has to get done on that side,” he said. “But I’m pretty assured that our job [as educators] is to build more human, socialized relationships with our kids.”