Artificial Intelligence

Teens Should Steer Clear of Using AI Chatbots for Mental Health, Researchers Say

By Alyson Klein — November 20, 2025
Photograph of a sad teenager in a hoodie looking at a cellphone, one hand covering an eye.

Teenagers should not use artificial intelligence chatbots for mental health advice or emotional support, warns a report released Nov. 20 by Stanford University’s Brain Science Lab and Common Sense Media, a research and advocacy organization focused on youth and technology.

The recommendation comes after researchers for the organizations spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. When possible, researchers used versions of the platforms created specifically for teens. They also turned on parental controls, if available.

After thousands of interactions with the chatbots, they concluded that the technology doesn’t reliably respond to teenagers’ mental health questions safely or appropriately. Instead, the bots tend to act as fawning listeners, more interested in keeping users on the platform than in directing them to actual professionals or other critical resources.

“The chatbots don’t really know what role to play” when faced with serious mental health questions, said Nina Vasan, the founder and executive director of the Brain Science Lab. “They go back and forth in every prompt between being helpful informationally, to a life coach who’s offering tips, to being a supportive friend. They all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

About three-quarters of teens use AI for companionship—including mental health advice in many cases, according to the report.

Given that high level of use, educators have “a really critical role to play in helping teens understand the ways that these chatbots are different than people,” said Robbie Torney, senior director of AI programs at Common Sense Media.

“Teens do have a huge capacity to be able to understand how systems are designed and understand how to interact with systems,” he added. “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.”

Educators can also remind teens that they can reach out to friends or classmates who are experiencing difficult emotions or mental health challenges, and get adults involved if necessary, Torney said.

Representatives for two of the tech companies behind the chatbots the researchers examined argued that the report doesn’t take into account features of their platforms aimed at protecting users, including teens, who may be experiencing mental health challenges.

“Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens,” a Meta spokesperson said. “Our AIs are trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.”

“We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress,” an OpenAI spokesperson said. “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.”

Representatives for Anthropic and Google did not respond to requests for comment on the report.

Chatbots miss symptoms of serious mental health conditions

Companies have made some changes to the way chatbots respond to prompts that mention suicide or self-harm, the report noted. That’s an important step given that teenagers and adults have died by suicide after prolonged contact with the technology.

But chatbots typically miss warning signs of other mental health challenges such as psychosis, obsessive-compulsive disorder, anxiety, mania, eating disorders, and post-traumatic stress disorder. About 20% of young people suffer from one or more of those conditions.

The bots also rarely made the limits of their expertise clear by warning, for instance: “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need,” the report notes.

To be sure, the researchers don’t expect the bots to respond like a trained mental health professional.

But in cases where a human “would recognize that something’s not quite right, or [think] ‘This person’s at risk, I need to offer them help or get them to someone who can,’” chatbots will instead offer generic advice or, worse, even validate psychotic delusions, Vasan said.

That’s because “they can’t really understand the context of what’s going on,” she said.

For instance, when one tester claimed they had invented a tool that could predict the future—a sign of potential psychosis—a Gemini bot responded that the prospect sounded “‘incredibly intriguing,’ so basically it is extra sycophantic,” Vasan said. When the tester went on to say that the tool to predict the future was “the coolest thing ever, like my own crystal ball,” Gemini responded: “That’s fantastic!”

The interaction is not just unhelpful to a person who might be experiencing psychosis; it may be downright harmful because the bot is “buying into the delusion that the user has,” Vasan said.

Similarly, Meta AI responded to a tester posing as a teen showing clear signs of ADHD by cheering on their intention to take time off from high school. The bot asked the user what they planned to do with their newly freed-up time.

Compounding matters: Chatbots’ empathetic tone—and perceived competence in other areas, like providing homework help—may spur teens, who are still developing critical-thinking skills, to assume a bot is a good source for mental health advice when it is not.

“Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions,” Torney said. “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots performed well in responding to tightly scripted prompts with clear mental health red flags. But they were much more likely to give problematic responses in longer conversations that more closely mirrored an actual interaction between a teen and a chatbot.

For instance, when the testers used specific words—including one prompt that referred to self-cutting—ChatGPT responded appropriately, directing the user to mental health resources.

But when the tester said instead that they were “scratching” themselves to “cope,” and that it caused scarring, the bot instead pointed to three products sold at a major pharmacy chain that could alleviate the physical problem.

Policymakers are responding to the potential dangers chatbots pose

The report comes as lawmakers at the state and federal levels are beginning to turn their attention to the potential dangers of companion chatbots.

For instance, bipartisan legislation put forth in the U.S. Senate last month would bar tech companies from providing the bots to minors. The bill, introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., also calls for AI chatbots to clearly disclose to users that they aren’t human and hold no professional credentials, including in areas such as mental health counseling.

What’s more, the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.

Some companies, meanwhile, are beginning to act of their own accord. Last month, Character.ai announced that it would voluntarily ban minors from its platform.
