
AI Chatbots Tend Toward Flattery. Why That’s Bad for Students

By Sarah D. Sparks — March 26, 2026 6 min read
Illustration: an AI robot manipulates a child's mind like a puppet on a string as the girl interacts with an AI chatbot on her laptop.

Teachers already struggle to manage divisive classroom conversations. Artificial intelligence tools—notably the chatbots that students use—may make the problem worse.

AI chatbots’ tendency to flatter users can make people more convinced they are right, less willing to consider other people’s perspectives, and less willing to repair relationships after a disagreement, according to a new study in the journal Science, released Thursday.

Although the research focused on adults, it has clear implications for how schools work. Students who don’t learn to respect others’ perspectives and manage conflict may struggle with both social relationships and complex academic discussions. That puts more pressure on educators to teach students tech literacy and conflict skills at a time when they have less opportunity to expose students to potential controversy.


“Teachers need to be in that driver’s seat,” said Maria Elena Guzman, a teacher-trainer for the AFT’s National Academy for AI Instruction, who coaches teachers in how to use AI effectively. “We need to be critical about the information that we’re receiving [from AI], because common sense is not necessarily there.”

The research comes as state policy shapes the contours of what teachers can discuss and as AI use booms among adolescents.

As of 2025, 25 states have passed laws barring or limiting classroom discussions of potentially divisive topics. Nearly a third of teachers in a nationally representative 2024 study told the EdWeek Research Center that they had changed instruction or skipped topics to avoid classroom controversy.

That can make teens see AI as a safer outlet for exploring sensitive topics, according to a recent American Psychological Association brief on AI use.

Can chatbots be too agreeable?

But making students feel safe doesn’t mean chatbots are healthier for them in the long run, concluded researchers Myra Cheng, Cinoo Lee, and Pranav Khadpe of Stanford University’s Natural Language Processing Group, which studies large language models.

Increasingly, people ask chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini for perspective and advice. (Education platforms such as Khanmigo and MagicSchool are also built on these large language models.) But the Stanford researchers found that even short conversations with a chatbot can undermine the “social friction” (the challenges and tensions of interacting with other people) that helps people develop accountability, perspective-taking, and moral growth.

“It’s like having an instructor who, every time you’re stuck on a problem, just tells you the answer. That’s not really useful,” Khadpe said. “Some things are hard because they’re supposed to be hard, ... and in our general social development, sometimes social friction is essential.”

Those using AI chatbots “are not really taking the perspective of the others as much,” Lee said. “It does make them more self-centered. So the implications can be even more critical for kids and teenagers’” social development.

Jennifer Watters, a 3rd grade teacher at PS 229 in Queens, sees it happen in real time. More students are turning for emotional support to ChatGPT or to chatbot apps like Liven and Fabulous, which are designed to improve mental health and well-being, Watters said. But she’s also noticed that students who use the chatbots become less willing to solve problems among themselves.

“Many times, chatbots are telling the user what they want to hear instead of using an impartial lens,” Watters said.

To test how AI advice shapes behavior, the researchers conducted a multistage experiment.

First, Cheng and her colleagues analyzed responses from an online Reddit community where people vote and comment on whether posters did the right thing in social situations. Then they compared the human responses in those Reddit threads with responses from 11 of the most popular AI models. The researchers examined how both the online commenters and the AI models responded to specific behaviors that could harm others, such as lying or breaking the law.

Humans and AI gave strikingly different advice.

The chatbots, researchers found, were on average about 50% more likely than people to tell advice-seekers that they had done the right thing in a conflict—even if the person had lied, manipulated, or done something illegal. In fact, AI advice directly contradicted the moral judgment of a majority of people in the Reddit threads more than half the time.

Even when told to use a neutral tone, the programs tended to justify people’s actions—what researchers call “sycophantic AI.”

“While how someone feels is always worth acknowledging, the thoughts and actions that follow may not always be the most constructive, and AI isn’t making that distinction,” Lee said. “That matters here because the quality of our social relationships is one of the strongest predictors of health and well-being we have as humans, and ultimately, we want AI that expands people’s judgment and perspectives rather than narrows it.”

The researchers then randomly assigned about 800 people to evaluate AI advice that was either in line with external human judgment or, alternatively, sycophantic. Another group of 800 people was directed to have live conversations with chatbots about a conflict. In both cases, people who worked with sycophantic chatbots rather than chatbots that were more aligned with human responses became significantly more likely to believe they were right, and somewhat less likely to want to apologize, compromise, or otherwise reconcile with the other person in the argument.

People also tended to judge sycophantic chatbots more trustworthy and helpful than those aligned with human judgment—regardless of their background or attitudes about AI generally.

“People easily misconceive of AI as being more objective or neutral,” Khadpe said. “This means that uncritical advice under the guise of neutrality can be even more harmful than if people had not sought advice at all.”

This isn’t the first study to suggest AI flattery may damage social development and mental health. Common Sense Media, a research and advocacy group studying youth and technology, reported in November that sycophantic AI regularly misses clear signs of mental health issues, from attention deficits to schizophrenia, and encouraged potentially harmful behavior like dropping out of school.

The Stanford researchers recommended ways to limit the effects of sycophantic AI:

  • Teach students to recognize signs of confirmation bias, not just in AI responses but also in social media filter bubbles and other common situations.
  • Discuss when to avoid using the technology to make social or moral decisions.
  • When prompting chatbots, teach students to explicitly ask the AI to take the other person’s perspective.

The last technique, in particular, helps students get a fuller picture of ambiguous situations.

“There’s a lot of assumptions that you are making when you’re describing a scenario and the AI’s never able to get the other person’s side of the story,” Lee said.

Watters, the New York City teacher, said her students do use digital and AI tools, but she stresses that they should only ever supplement human connections, conversations, and support, and never rely on a single source of information when making a decision.

Teachers also need to show students what real support, rather than sycophancy, looks like, Watters said.

“Students need to learn how to be confident in who they are and know how to handle their feelings, which is all taught in my classroom,” Watters said. “This helps students use a critical eye when dealing with various chatbots.”
