Chad Sussex, an assistant principal at Winterset High School in Iowa, felt “blown away” when he read a news story—passed along by a teacher at his school—about a teenager who died by suicide after communicating with an artificial intelligence chatbot.
Sussex, who leads the AI task force for the Winterset Community Schools, realized immediately that his district should craft policy guidance and conduct educational outreach to “get ahead” of the grim potential downsides of AI companions: digital friends or characters designed to give personal, meaningful responses to life’s important questions.
Other districts should consider following his lead, according to a report released July 16 by Common Sense Media, a research and advocacy organization focused on youth and technology. The report found that nearly three quarters of teens have engaged with an AI companion at some point, with more than half saying they are regular users of the technology.
That finding is “eye-popping” on sheer scale alone, said Michael Robb, the head of research at Common Sense Media. In fact, the percentage was so high that Robb wonders whether some teens conflated AI companions with the large language models that power them, such as ChatGPT, even though Common Sense’s survey question clearly defined AI companions.
Even if the usage numbers are a bit inflated by that potential confusion, “it’s likely still a lot of kids using” the companions, Robb said in an emailed response to questions from Education Week.
The survey was based on a representative sample of 1,060 teens, ages 13 to 17. It was conducted in April and May of this year.
AI companions aren’t designed to interact like real humans
Teenagers turn to AI companions—which can be accessed through platforms such as CHAI, Character.AI, Nomi, and Replika—for social connection and to talk through problems that they wouldn’t bring to someone close to them, Common Sense Media found.
About a third of students who have used AI companions say they’ve done so for social interaction. Roughly one in five say they’ve consulted one for social or conversation practice. And more than one in 10 have turned to the technology for mental health advice or emotional support.
Many teens who use AI companions—about a third—do so because it’s entertaining, the survey found. And a similar percentage said that the tech’s responses had sometimes made them feel uncomfortable.
But other teens said the technology offers qualities they feel are lacking in their peers or the adults around them, according to the survey.
For instance, 18% of teens surveyed said they talk to the bots because they “give advice.” Seventeen percent said the AI companions are “always available” to listen. And 14% said they rely on AI companions because they “don’t judge me.” Another 12% said they feel comfortable telling the bots things they wouldn’t say to a friend or family member.
What’s more, about 1 in 3 teens who use AI companions said they find their time with the technology to be more satisfying than time with real-life friends. About a third have chosen to talk to AI about something important or serious, rather than turning to a real person.
That last finding stood out to Robb.
“I’m not someone ringing the doomsday bell yet for AIs replacing human interaction, but I think that’s a worrying number, and not something I’d want to see grow over time,” Robb said.
AI companions aren’t designed to mimic real human interaction, which naturally includes disagreement and friction, Robb added. Instead, they are developed to “be agreeable and validating,” he said. Educators should be aware that these AI tools were “not designed with children in mind.”
Sussex believes teenagers are especially “vulnerable” to a tech tool that doesn’t challenge their thinking.
“They’re impressionable,” he said. “Let’s say you start up a conversation with a particular AI [tool], and it starts saying things to you that you wish you were hearing from a friend, from a parent, from someone else that’s close to you. You start trusting it [and thinking] ‘I can say anything at all, because [the bot is] not going to go in and share with someone else in my class.’”
About a quarter of teens who’ve used the companions say they’ve shared personal information, including their real name, location, or secrets, with an AI companion. That’s problematic because nearly everything entered into generative AI is used to help the technology generate better responses, according to AI experts.
Educators should look out for students who seem attached to AI companions
Common Sense Media recommends educators help students understand that AI tools are designed to “create an emotional attachment” with users and explain how that kind of interaction is different from how real humans communicate.
The organization also suggests educators be trained to look out for students who talk about AI companions as if they were real friends or feel social distress when the chatbots are unavailable. Educators should also help students understand the downsides of giving private information to AI tools.
The Winterset district, for one, is already preparing presentations on AI companions, which will be shared with teachers, parents, and students this fall, Sussex said.
“We need to educate them on what this technology is and the possible things that it can do if you take it too far, if you take it to heart too much,” he said.