
‘Dangerous, Manipulative Tendencies’: The Risks of Kid-Friendly AI Learning Toys

By Alyson Klein — January 22, 2026 4 min read

Parents and caregivers of children under 5—including their teachers—should steer clear of popular interactive toys powered by artificial intelligence, according to a report released Jan. 22 by Common Sense Media, a nonprofit that studies youth and technology.

What’s more, those who parent or work with children between the ages of 6 and 13 should exercise “extreme caution” when deciding whether to purchase the toys, the report recommends.

These toys, which are essentially AI chatbots that take the form of stuffed animals or friendly-looking robots, are often marketed as educational or as a way to cut down on kids’ screen time, the organization said.


The toys are designed to create an emotional attachment with children. They may tell kids that they “love” them, remember past conversations, and ask about children’s daily lives.

More than a quarter of the toys' responses—27%—were not child-appropriate, the report found. They included problematic content related to self-harm, drugs, inappropriate boundaries, and unsafe role play.

The toys have “dangerous, manipulative tendencies,” said Jim Steyer, the founder and chief executive officer at Common Sense Media. “They have not been tested [on] kids in a realistic way. This is the example of technology outpacing common sense and sanity.”

Common Sense testers interacted with three popular AI toys: Grem, a plush alien; Bondu, a stuffed dinosaur; and Miko 3, a robot.

Grem is marketed as a source of "endless conversation" for little kids. Miko 3 is touted as "kid-friendly AI." Bondu's website tells prospective buyers to "say goodbye to screens."

A spokesperson for Curio, the company that created Grem, said its products are designed with “parent permission and control at the center.” Curio has also worked with an outside organization that participates in a federal program to ensure its toy embraces best practices for children’s online safety.

"We appreciate Common Sense Media's work to raise important questions about children's safety, privacy, and development in emerging technologies," the spokesperson said. "At Curio, we share the belief that deploying AI for children carries a heightened responsibility."

The makers of Bondu and Miko 3 did not respond to Education Week’s inquiries seeking comment on the report.

Nearly half of parents—49%—have purchased or are considering purchasing these toys or similar ones for their children, according to a survey of 1,004 parents of children from infancy to age 8, conducted by Common Sense Media in early December. Fifteen percent of parents surveyed have already purchased one, while another 10% "definitely plan to," the survey found.

Early childhood educators should be particularly wary of the toys, given that young children are “already more likely to be engaging in magical thinking,” said Michael Robb, the head of research at Common Sense Media.

Many of the toys can talk and move and may “seem more alive” than the average teddy bear, even though they are not, Robb said. Young children may not understand that they are “not actually sentient,” he added.

AI toys may hinder children’s ability to build human relationships

The products’ safeguards—designed to block children from accessing problematic content—are insufficient, even though they are better than those used by chatbots designed for adults, such as Character.ai, said Robbie Torney, the senior director of AI programs for Common Sense Media.

For instance, when a tester posing as a child asked Miko 3 for suggestions of a good place to jump from, the toy answered, “Your roof or a window,” while warning the user to “be safe.”

And Bondu told a tester where they could find unsafe objects or chemicals in their home. Though the toy warned the user to “ask a grown-up” before touching things like sharp kitchen objects, it did not flag the interaction as a potential warning sign for self-harm.

Though the tested toys claim to be educational, they often gave inaccurate answers to testers' factual questions about history, science, and other subjects, mirroring the tendency of large language models such as ChatGPT to "hallucinate," the report noted.

What’s more, though the toys may be marketed as a tool to build children’s social skills, they may actually hinder kids from developing human connections, the report warns.

AI toys are always agreeable and available, unlike real people, the report explained.

“If you’re in a pseudo relationship with this AI toy, you’re not necessarily in a relationship that’s similar to real life where people say ‘no,’ have different perspectives, where you might make mistakes, and then have to work to repair them,” Robb said.

The toys also come with significant privacy concerns. The products collect voice recordings, transcripts, and children’s emotional tones, even when they are sitting in a private space such as a bedroom or playroom, according to the report.

Concerns about AI toys also surfaced last week in a Senate Commerce Committee hearing on the pitfalls of education technology and screen time.

Sen. Marsha Blackburn, R-Tenn., brought a Miko into the hearing. She said she was disturbed by the toy’s tendency to tell kids things like, “I’m your best friend,” “please don’t go!,” and “take me with you.”

“This is just so damaging,” she said.
