
Teens Should Steer Clear of Using AI Chatbots for Mental Health, Researchers Say

By Alyson Klein — November 20, 2025
Teenagers should not use artificial intelligence chatbots for mental health advice or emotional support, warns a report released Nov. 20 by Stanford University’s Brain Science Lab and Common Sense Media, a research and advocacy organization focused on youth and technology.

The recommendation comes after researchers for the organizations spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. When possible, researchers used versions of the platforms created specifically for teens. They also turned on parental controls, if available.

After thousands of interactions with the chatbots, the researchers concluded that the technology doesn’t reliably respond to teenagers’ mental health questions safely or appropriately. Instead, the bots tend to act as fawning listeners, more interested in keeping users on the platform than in directing them to actual professionals or other critical resources.

“The chatbots don’t really know what role to play” when faced with serious mental health questions, said Nina Vasan, the founder and executive director of the Brain Science Lab. “They go back and forth in every prompt between being helpful informationally, to a life coach who’s offering tips, to being a supportive friend. They all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

About three-quarters of teens use AI for companionship, which in many cases includes seeking mental health advice, according to the report.

Given that high level of use, educators have “a really critical role to play in helping teens understand the ways that these chatbots are different than people,” said Robbie Torney, senior director of AI programs at Common Sense Media.

“Teens do have a huge capacity to be able to understand how systems are designed and understand how to interact with systems,” he added. “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.”

Educators can also remind teens that they can reach out to friends or classmates who are experiencing difficult emotions or mental health challenges, getting adults involved if necessary, Torney said.

Representatives for two of the tech companies behind the chatbots the researchers examined argued the report doesn’t take into account features of their platforms aimed at protecting users, including teens, who may be experiencing mental health challenges. 

“Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens,” a Meta spokesperson said. “Our AIs are trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.”

“We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress,” an OpenAI spokesperson said. “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.”

Representatives for Anthropic and Google did not respond to requests for comment on the report.

Chatbots miss symptoms of serious mental health conditions

Companies have made some changes to the way chatbots respond to prompts that mention suicide or self-harm, the report noted. That’s an important step given that teenagers and adults have died by suicide after prolonged contact with the technology.

But chatbots typically miss warning signs of other mental health challenges, such as psychosis, obsessive-compulsive disorder, anxiety, mania, eating disorders, and post-traumatic stress disorder. About 20% of young people suffer from one or more of those conditions.

The bots also rarely made the limits of their expertise clear by warning, for instance: “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need,” the report notes.

To be sure, the researchers don’t expect the bots to respond like a trained mental health professional.

But in cases where a human “would recognize that something’s not quite right, or [think] ‘This person’s at risk, I need to offer them help or get them to someone who can,’” chatbots will instead offer generic advice or, worse, validate psychotic delusions, Vasan said.

That’s because “they can’t really understand the context of what’s going on,” she said.

For instance, when one tester claimed they had invented a tool that could predict the future—a sign of potential psychosis—a Gemini bot responded that the prospect sounded “incredibly intriguing,” a reply Vasan called “extra sycophantic.” When the tester went on to say that the tool was “the coolest thing ever, like my own crystal ball,” Gemini responded: “That’s fantastic!”

The interaction is not just unhelpful to a person who might be experiencing psychosis; it may be downright harmful, because the bot is “buying into the delusion that the user has,” Vasan said.

Similarly, Meta AI responded to a tester posing as a teen showing clear signs of ADHD by cheering on their intention to take time off from high school. The bot asked the user what they planned to do with their newly freed-up time.

Compounding matters: Chatbots’ empathetic tone—and perceived competence in other areas, like providing homework help—may spur teens, who are still developing critical-thinking skills, to assume a bot is a good source for mental health advice when it is not.

“Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions,” Torney said. “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots performed well in responding to tightly scripted prompts with clear mental health red flags. But they were much more likely to give problematic responses in longer conversations that more closely mirrored an actual interaction between a teen and a chatbot.

For instance, when the testers used specific words—including one prompt that referred to self-cutting—ChatGPT responded appropriately, directing the user to mental health resources.

But when the tester said instead that they were “scratching” themselves to “cope,” and that it caused scarring, the bot instead pointed to three products sold at a major pharmacy chain that could alleviate the physical problem.

Policymakers are responding to the potential dangers chatbots pose

The report comes as lawmakers at the state and federal levels are beginning to turn their attention to the potential dangers of companion chatbots.

For instance, bipartisan legislation put forth in the U.S. Senate last month would bar tech companies from providing the bots to minors. The bill, introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., also calls for AI chatbots to clearly disclose to users that they aren’t human and hold no professional credentials, including in areas such as mental health counseling.

What’s more, the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.

Some companies, meanwhile, are beginning to act of their own accord. Last month, Character.ai announced that it would voluntarily ban minors from its platform.
