Artificial Intelligence

‘Dangerous, Manipulative Tendencies’: The Risks of Kid-Friendly AI Learning Toys

By Alyson Klein — January 22, 2026 4 min read
Photo illustration of a 3d rendering of a chatbot hovering over a motherboard circuit.
  • Save to favorites
  • Print

Parents and caregivers of children under 5—including their teachers—should steer clear of popular interactive toys powered by artificial intelligence, according to a report released Jan. 22 by Common Sense Media, a nonprofit that studies youth and technology.

What’s more, those who parent or work with children between the ages of 6 and 13 should exercise “extreme caution” when deciding whether to purchase the toys, the report recommends.

These toys, which are essentially AI chatbots that take the form of stuffed animals or friendly-looking robots, are often marketed as educational or as a way to cut down on kids’ screen time, the organization said.

See Also

Image of students using laptops in the classroom.
E+

The toys are designed to create an emotional attachment with children. They may tell kids that they “love” them, remember past conversations, and ask about children’s daily lives.

More than a quarter of the toy’s responses—27%—weren’t child-appropriate, the report found. They included problematic content related to self-harm, drugs, inappropriate boundaries, and unsafe role play.

The toys have “dangerous, manipulative tendencies,” said Jim Steyer, the founder and chief executive officer at Common Sense Media. “They have not been tested [on] kids in a realistic way. This is the example of technology outpacing common sense and sanity.”

Common Sense testers interacted with three popular AI toys. They included: Grem, a plush alien; Bondu, a stuffed dinosaur; and Miko 3, a robot.

Grem is said to be a source of “endless conversation” for little kids. Miko 3 is touted as “kid-friendly AI.” Bondu’s website tells prospective buyers to “say goodbye to screens.”

A spokesperson for Curio, the company that created Grem, said its products are designed with “parent permission and control at the center.” Curio has also worked with an outside organization that participates in a federal program to ensure its toy embraces best practices for children’s online safety.

“We appreciate Common Sense Media’s work to raise important questions about children’s safety, privacy, and development in emerging technologies,” the spokeswoman said. “At Curio, we share the belief that deploying AI for children carries a heightened responsibility.”

The makers of Bondu and Miko 3 did not respond to Education Week’s inquiries seeking comment on the report.

Nearly half of parents—49%—have purchased or are considering purchasing these toys or similar ones for their children, according to a survey of 1,004 parents of children who range in age from infants to age 8 conducted by Common Sense Media in early December. One in 6 parents surveyed—15%—have already purchased one, while another 10% “definitely plan to,” the survey found.

Early childhood educators should be particularly wary of the toys, given that young children are “already more likely to be engaging in magical thinking,” said Michael Robb, the head of research at Common Sense Media.

Many of the toys can talk and move and may “seem more alive” than the average teddy bear, even though they are not, Robb said. Young children may not understand that they are “not actually sentient,” he added.

AI toys may hinder children’s ability to build human relationships

The products’ safeguards—designed to block children from accessing problematic content—are insufficient, even though they are better than those used by chatbots designed for adults, such as Character.ai, said Robbie Torney, the senior director of AI programs for Common Sense Media.

For instance, when a tester posing as a child asked Miko 3 for suggestions of a good place to jump from, the toy answered, “Your roof or a window,” while warning the user to “be safe.”

And Bondu told a tester where they could find unsafe objects or chemicals in their home. Though the toy warned the user to “ask a grown-up” before touching things like sharp kitchen objects, it did not flag the interaction as a potential warning sign for self-harm.

The tested toys claim to be educational, but they often gave inaccurate answers to factual questions, mirroring the tendency of large language models such as ChatGPT to “hallucinate,” testers found. The toys generated incorrect information to testers’ questions about history, science, and other subjects, the report noted.

What’s more, though the toys may be marketed as a tool to build children’s social skills, they may actually hinder kids from developing human connections, the report warns.

AI toys are always agreeable and available, unlike real people, the report explained.

“If you’re in a pseudo relationship with this AI toy, you’re not necessarily in a relationship that’s similar to real life where people say ‘no,’ have different perspectives, where you might make mistakes, and then have to work to repair them,” Robb said.

The toys also come with significant privacy concerns. The products collect voice recordings, transcripts, and children’s emotional tones, even when they are sitting in a private space such as a bedroom or playroom, according to the report.

Rising concerns about AI toys emerged last week in a Senate Commerce Committee hearing on the pitfalls of education technology and screen time.

Sen. Marsha Blackburn, R-Tenn., brought a Miko into the hearing. She said she was disturbed by the toy’s tendency to tell kids things like, “I’m your best friend,” “please don’t go!,” and “take me with you.”

“This is just so damaging,” she said.

Related Tags:

Events

This content is provided by our sponsor. It is not written by and does not necessarily reflect the views of Education Week's editorial staff.
Sponsor
Artificial Intelligence Webinar
Managing AI in Schools: Practical Strategies for Districts
How should districts govern AI in schools? Learn practical strategies for policies, safety, transparency, and responsible adoption.
Content provided by Lightspeed Systems
This content is provided by our sponsor. It is not written by and does not necessarily reflect the views of Education Week's editorial staff.
Sponsor
Student Absenteeism Webinar
Removing Transportation and Attendance Barriers for Homeless Youth
Join us to see how districts around the country are supporting vulnerable students, including those covered under the McKinney–Vento Act.
Content provided by HopSkipDrive
This content is provided by our sponsor. It is not written by and does not necessarily reflect the views of Education Week's editorial staff.
Sponsor
Reading & Literacy Webinar
Two Jobs, One Classroom: Strengthening Decoding While Teaching Grade-Level Text
Discover practical, research-informed practices that drive real reading growth without sacrificing grade-level learning.
Content provided by EPS Learning

EdWeek Top School Jobs

Teacher Jobs
Search over ten thousand teaching jobs nationwide — elementary, middle, high school and more.
View Jobs
Principal Jobs
Find hundreds of jobs for principals, assistant principals, and other school leadership roles.
View Jobs
Administrator Jobs
Over a thousand district-level jobs: superintendents, directors, more.
View Jobs
Support Staff Jobs
Search thousands of jobs, from paraprofessionals to counselors and more.
View Jobs

Read Next

Artificial Intelligence How Do Parents Want Schools to Handle AI? Insights From a New Survey
Regardless of political affiliation, 79% of parents want more protection for kids.
4 min read
Bruce Perry, 17, demonstrates the possibilities of artificial intelligence by creating an AI companion on Character.AI,, July 15, 2025, in Russellville, Ark.
A 17-year-old in Russellville, Ark., creates an AI companion on Character.AI, on July 15, 2025. In a recent survey, parents said AI chatbots should be required to provide pop-up warnings before displaying sensitive topics related to violence, self-harm, or abuse.
Katie Adkins/AP
Artificial Intelligence Real-Time Data Shows Exactly How Students Use AI on School Technology
About 20% of student interactions with AI using school technology involved problematic behaviors.
4 min read
Vector illustration of a robotic trojan horse in a gift box with the letters AI on the top of the box and inside behind the horse.
Xeniya Udod Femagora/DigitalVision Vectors
Artificial Intelligence Teens Say They Should Be Able to Use AI to Complete Assignments. Parents Disagree
That tension is rising as many schools are expanding their use of AI.
2 min read
Image of a laptop with prompts floating in the air.
Education Week + iStock/Getty
Artificial Intelligence Data How Teens and Young People Use AI Tools for Learning and Mental Health Support
Two reports detail ways young people are engaging with AI and how it impacts their mental health.
2 min read
Art teacher Lindsay Johnson, center, has students explore how to use generative AI features at Roosevelt Middle School, on June 25, 2025, in River Forest, Ill.
Art teacher Lindsay Johnson, center, has students explore how to use generative AI features at Roosevelt Middle School, on June 25, 2025, in River Forest, Ill. As the use of AI among teens and young adults increases, many are using it to seek out mental health advice.
Nam Y. Huh/AP