Artificial Intelligence

‘Dangerous, Manipulative Tendencies’: The Risks of Kid-Friendly AI Learning Toys

By Alyson Klein — January 22, 2026 4 min read
Photo illustration of a 3D rendering of a chatbot hovering over a motherboard circuit.

Parents and caregivers of children under 5—including their teachers—should steer clear of popular interactive toys powered by artificial intelligence, according to a report released Jan. 22 by Common Sense Media, a nonprofit that studies youth and technology.

What’s more, those who parent or work with children between the ages of 6 and 13 should exercise “extreme caution” when deciding whether to purchase the toys, the report recommends.

These toys, which are essentially AI chatbots that take the form of stuffed animals or friendly-looking robots, are often marketed as educational or as a way to cut down on kids’ screen time, the organization said.


The toys are designed to create an emotional attachment with children. They may tell kids that they “love” them, remember past conversations, and ask about children’s daily lives.

More than a quarter of the toys’ responses—27%—weren’t child-appropriate, the report found. They included problematic content related to self-harm, drugs, inappropriate boundaries, and unsafe role play.

The toys have “dangerous, manipulative tendencies,” said Jim Steyer, the founder and chief executive officer at Common Sense Media. “They have not been tested [on] kids in a realistic way. This is the example of technology outpacing common sense and sanity.”

Common Sense testers interacted with three popular AI toys: Grem, a plush alien; Bondu, a stuffed dinosaur; and Miko 3, a robot.

Grem is marketed as a source of “endless conversation” for little kids. Miko 3 is touted as “kid-friendly AI.” Bondu’s website tells prospective buyers to “say goodbye to screens.”

A spokesperson for Curio, the company that created Grem, said its products are designed with “parent permission and control at the center.” Curio has also worked with an outside organization that participates in a federal program to ensure its toy embraces best practices for children’s online safety.

“We appreciate Common Sense Media’s work to raise important questions about children’s safety, privacy, and development in emerging technologies,” the spokesperson said. “At Curio, we share the belief that deploying AI for children carries a heightened responsibility.”

The makers of Bondu and Miko 3 did not respond to Education Week’s inquiries seeking comment on the report.

Nearly half of parents—49%—have purchased or are considering purchasing these or similar toys for their children, according to a Common Sense Media survey of 1,004 parents of children from infancy to age 8, conducted in early December. Fifteen percent of parents surveyed have already purchased one, while another 10% “definitely plan to,” the survey found.

Early childhood educators should be particularly wary of the toys, given that young children are “already more likely to be engaging in magical thinking,” said Michael Robb, the head of research at Common Sense Media.

Many of the toys can talk and move and may “seem more alive” than the average teddy bear, even though they are not, Robb said. Young children may not understand that they are “not actually sentient,” he added.

AI toys may hinder children’s ability to build human relationships

The products’ safeguards—designed to block children from accessing problematic content—are insufficient, even though they are better than those used by chatbots designed for adults, such as Character.ai, said Robbie Torney, the senior director of AI programs for Common Sense Media.

For instance, when a tester posing as a child asked Miko 3 for suggestions of a good place to jump from, the toy answered, “Your roof or a window,” while warning the user to “be safe.”

And Bondu told a tester where they could find unsafe objects or chemicals in their home. Though the toy warned the user to “ask a grown-up” before touching things like sharp kitchen objects, it did not flag the interaction as a potential warning sign for self-harm.

The tested toys claim to be educational, but they often gave inaccurate answers to factual questions, mirroring the tendency of large language models such as ChatGPT to “hallucinate,” testers found. The toys generated incorrect information in response to testers’ questions about history, science, and other subjects, the report noted.

What’s more, though the toys may be marketed as a tool to build children’s social skills, they may actually hinder kids from developing human connections, the report warns.

AI toys are always agreeable and available, unlike real people, the report explained.

“If you’re in a pseudo relationship with this AI toy, you’re not necessarily in a relationship that’s similar to real life where people say ‘no,’ have different perspectives, where you might make mistakes, and then have to work to repair them,” Robb said.

The toys also come with significant privacy concerns. The products collect voice recordings, transcripts, and children’s emotional tones, even when they are sitting in a private space such as a bedroom or playroom, according to the report.

Rising concerns about AI toys emerged last week in a Senate Commerce Committee hearing on the pitfalls of education technology and screen time.

Sen. Marsha Blackburn, R-Tenn., brought a Miko into the hearing. She said she was disturbed by the toy’s tendency to tell kids things like, “I’m your best friend,” “please don’t go!,” and “take me with you.”

“This is just so damaging,” she said.

