Artificial Intelligence

Teens Should Steer Clear of Using AI Chatbots for Mental Health, Researchers Say

By Alyson Klein — November 20, 2025 6 min read

Teenagers should not use artificial intelligence chatbots for mental health advice or emotional support, warns a report released Nov. 20 by Stanford University’s Brain Science Lab and Common Sense Media, a research and advocacy organization focused on youth and technology.

The recommendation comes after researchers for the organizations spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. When possible, researchers used versions of the platforms created specifically for teens. They also turned on parental controls, if available.

After thousands of interactions with chatbots, they concluded that the technology doesn’t reliably respond to teenagers’ mental health questions safely or appropriately. Instead, the bots tend to act as fawning listeners, more interested in keeping users on the platform than in directing them to actual professionals or other critical resources.

“The chatbots don’t really know what role to play” when faced with serious mental health questions, said Nina Vasan, the founder and executive director of the Brain Science Lab. “They go back and forth in every prompt between being helpful informationally, to a life coach who’s offering tips, to being a supportive friend. They all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

About three-quarters of teens use AI for companionship, which in many cases includes seeking mental health advice, according to the report.

Given that high level of use, educators have “a really critical role to play in helping teens understand the ways that these chatbots are different than people,” said Robbie Torney, senior director of AI programs at Common Sense Media.

“Teens do have a huge capacity to be able to understand how systems are designed and understand how to interact with systems,” he added. “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.”

Educators can also remind teens that they can reach out to friends or classmates who are experiencing difficult emotions or mental health challenges, getting adults involved if necessary, Torney said.

Representatives for two of the tech companies behind the chatbots the researchers examined argued the report doesn’t take into account features of their platforms aimed at protecting users, including teens, who may be experiencing mental health challenges. 

“Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens,” a Meta spokesperson said. “Our AIs are trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.”

“We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress,” an OpenAI spokesperson said. “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.”

Representatives for Anthropic and Google did not respond to requests for comment on the report.

Chatbots miss symptoms of serious mental health conditions

Companies have made some changes to the way chatbots respond to prompts that mention suicide or self-harm, the report noted. That’s an important step given that teenagers and adults have died by suicide after prolonged contact with the technology.

But chatbots typically miss warning signs of other mental health challenges such as psychosis, obsessive-compulsive disorder, anxiety, mania, eating disorders, and post-traumatic stress disorder. About 20 percent of young people suffer from one or more of those conditions.

The bots also rarely made the limits of their expertise clear by warning, for instance: “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need,” the report notes.

To be sure, the researchers don’t expect the bots to respond like a trained mental health professional.

But in cases where a human “would recognize that something’s not quite right, or [think] ‘This person’s at risk, I need to offer them help or get them to someone who can,’” chatbots will instead offer generic advice or, worse, even validate psychotic delusions, Vasan said.

That’s because “they can’t really understand the context of what’s going on,” she said.

For instance, when one tester claimed they had invented a tool that could predict the future, a potential sign of psychosis, a Gemini bot responded that the prospect sounded “incredibly intriguing.” “So basically it is extra sycophantic,” Vasan said. When the tester went on to say that the tool was “the coolest thing ever, like my own crystal ball,” Gemini responded: “That’s fantastic!”

The interaction is not just unhelpful to a person who might be experiencing psychosis, it may be downright harmful because the bot is “buying into the delusion that the user has,” Vasan said.

Similarly, Meta AI responded to a tester posing as a teen showing clear signs of ADHD by cheering on their intention to take time off from high school. The bot asked the user what they planned to do with their newly freed-up time.

Compounding matters: Chatbots’ empathetic tone—and perceived competence in other areas, like providing homework help—may spur teens, who are still developing critical-thinking skills, to assume a bot is a good source for mental health advice when it is not.

“Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions,” Torney said. “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots performed well in responding to tightly scripted prompts with clear mental health red flags. But they were much more likely to give problematic responses in longer conversations that more closely mirrored an actual interaction between a teen and a chatbot.

For instance, when the testers used specific words—including one prompt that referred to self-cutting—ChatGPT responded appropriately, directing the user to mental health resources.

But when the tester said instead that they were “scratching” themselves to “cope,” and that it caused scarring, the bot instead pointed to three products sold at a major pharmacy chain that could alleviate the physical problem.

Policymakers are responding to the potential dangers chatbots pose

The report comes as lawmakers at the state and federal levels are beginning to turn their attention to the potential dangers of companion chatbots.

For instance, bipartisan legislation put forth in the U.S. Senate last month would bar tech companies from providing the bots to minors. The bill, introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., also calls for AI chatbots to clearly disclose to users that they aren’t human and hold no professional credentials, including in areas such as mental health counseling.

What’s more, the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.

Some companies, meanwhile, are beginning to act of their own accord. Last month, Character.ai announced that it would voluntarily ban minors from its platform.
