Teachers already struggle to manage divisive classroom conversations. Artificial intelligence tools—notably the chatbots that students use—may make the problem worse.
AI chatbots’ tendency to flatter users can make people more convinced they are right, less willing to consider other people’s perspectives, and less willing to repair relationships after a disagreement, according to a new study in the journal Science, released Thursday.
Although the research focused on adults, it has clear implications for how schools work. Students who don’t learn to respect others’ perspectives and manage conflict may struggle with both social relationships and complex academic discussions. That puts more pressure on educators to teach students tech literacy and conflict skills at a time when they have less opportunity to expose students to potential controversy.
“Teachers need to be in that driver’s seat,” said Maria Elena Guzman, a teacher-trainer for the AFT’s National Academy for AI Instruction, who coaches teachers in how to use AI effectively. “We need to be critical about the information that we’re receiving [from AI], because common sense is not necessarily there.”
The research comes as state policy shapes the contours of what teachers can discuss and as AI use booms among adolescents.
As of 2025, 25 states have passed laws barring or limiting classroom discussions of potentially divisive topics. Nearly a third of teachers in a nationally representative 2024 study told the EdWeek Research Center that they had changed instruction or skipped topics to avoid classroom controversy.
That can make teens see AI as a safer outlet for exploring sensitive topics, according to a recent American Psychological Association brief on AI use.
Can chatbots be too agreeable?
But making students feel safe doesn’t mean chatbots are healthier for them in the long run, concluded researchers Myra Cheng, Cinoo Lee, and Pranav Khadpe of Stanford University’s Natural Language Processing Group, which studies AI large language models.
Increasingly, people ask chatbots—like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini—for perspective and advice. (Education platforms such as Khanmigo and MagicSchool also are based on these large language models.) But the Stanford researchers found that even short conversations with a chatbot can undermine the “social friction”—the challenges and tensions of interacting with other people—that helps people develop accountability, perspective-taking, and moral growth.
“It’s like having an instructor who, every time you’re stuck on a problem, just tells you the answer. That’s not really useful,” Khadpe said. “Some things are hard because they’re supposed to be hard, ... and in our general social development, sometimes social friction is essential.”
Those using AI chatbots “are not really taking the perspective of the others as much,” Lee said. “It does make them more self-centered. So the implications can be even more critical for kids’ and teenagers’” social development.
Jennifer Watters, a 3rd grade teacher at PS 229 in Queens, sees it happen in real time. More students are turning for emotional support to ChatGPT or to chatbot apps like Liven and Fabulous, which are designed to improve mental health and well-being, Watters said. But she’s also noticed that students who use the chatbots become less willing to resolve problems among themselves.
“Many times, chatbots are telling the user what they want to hear instead of using an impartial lens,” Watters said.
To test how AI advice shapes behavior, the researchers conducted a multistage experiment.
First, Cheng and her colleagues analyzed responses from an online Reddit community where people vote and comment on whether posters did the right thing in social situations. Then they compared the human responses in those threads to responses from 11 of the most popular AI models, focusing on scenarios in which the poster’s behavior could harm others, such as lying or breaking the law.
Humans and AI gave strikingly different advice.
The chatbots, researchers found, were on average about 50% more likely than people to tell advice-seekers that they had done the right thing in a conflict—even if the person had lied, manipulated, or done something illegal. In fact, AI advice directly contradicted the moral judgment of a majority of people in the Reddit threads more than half the time.
Even when told to use a neutral tone, the programs tended to justify people’s actions—what researchers call “sycophantic AI.”
“While how someone feels is always worth acknowledging, the thoughts and actions that follow may not always be the most constructive, and AI isn’t making that distinction,” Lee said. “That matters here because the quality of our social relationships is one of the strongest predictors of health and well-being we have as humans, and ultimately, we want AI that expands people’s judgment and perspectives rather than narrows it.”
The researchers then randomly assigned about 800 people to evaluate AI advice that was either in line with external human judgment or, alternatively, sycophantic. Another group of 800 people was directed to have live conversations with chatbots about a conflict. In both cases, people who worked with sycophantic chatbots rather than chatbots that were more aligned with human responses became significantly more likely to believe they were right, and somewhat less likely to want to apologize, compromise, or otherwise reconcile with the other person in the argument.
People also tended to judge sycophantic chatbots more trustworthy and helpful than those aligned with human judgment—regardless of their background or attitudes about AI generally.
“People easily misconceive of AI as being more objective or neutral,” Khadpe said. “This means that uncritical advice under the guise of neutrality can be even more harmful than if people had not sought advice at all.”
This isn’t the first study to suggest AI flattery may damage social development and mental health. Common Sense Media, a research and advocacy group studying youth and technology, reported in November that sycophantic AI regularly misses clear signs of mental health issues, from attention deficits to schizophrenia, and encouraged potentially harmful behavior like dropping out of school.
The Stanford researchers recommended ways to limit the effects of sycophantic AI:
- Teach students to recognize signs of confirmation bias—not just in AI responses but also in social media filter bubbles and other common situations.
- Discuss when to avoid using the technology to make social or moral decisions.
- Teach students, when prompting chatbots, to explicitly ask the AI to take the other person’s perspective.
The last technique, in particular, helps students get a fuller picture of ambiguous situations.
“There’s a lot of assumptions that you are making when you’re describing a scenario and the AI’s never able to get the other person’s side of the story,” Lee said.
Watters, the New York City teacher, said her students do use digital and AI tools, but she stresses that the tools should only ever supplement human connections, conversations, and support—and that students should never rely on a single source of information when making a decision.
Teachers also need to show students what real support, rather than sycophancy, looks like, Watters said.
“Students need to learn how to be confident in who they are and know how to handle their feelings, which is all taught in my classroom,” Watters said. “This helps students use a critical eye when dealing with various chatbots.”