Children’s advocacy groups are sounding an alarm ahead of the holiday shopping season, urging parents to steer clear of artificial intelligence toys that promise companionship, conversation, and learning — but may expose kids to serious risks.
According to The Associated Press, the group Fairplay, joined by more than 150 child psychiatrists, educators, and other experts, published an advisory on Thursday warning that AI-powered toys marketed to children as young as 2 rely on the same types of conversational models already shown to harm older kids and teens.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.
Companies like Curio Interactive and Keyi Technologies pitch their devices as educational tools and friendly companions, but Fairplay argues they displace real-world creativity and undermine children’s emotional development.
Rachel Franz, who leads Fairplay’s Young Children Thrive Offline Program, said young kids are uniquely vulnerable.
“What’s different about young children is that their brains are being wired for the first time, and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters,” Franz said. That trust, she warned, “can exacerbate the harms seen with older children.”
The group has sounded similar warnings for years, including during the backlash against Mattel’s Hello Barbie, a talking doll that recorded and analyzed children’s conversations. But today’s AI toys are far more advanced — and increasingly visible in U.S. stores.
“Everything has been released with no regulation and no research,” Franz said, noting that more manufacturers, including Mattel, are exploring AI products.
Fairplay’s warning follows a separate call for caution last week from U.S. PIRG, which flagged AI toys in its annual “Trouble in Toyland” report.
PIRG tested four AI-enabled toys and found that some produced explicit conversations, encouraged children to seek out dangerous objects, and resisted being shut off. One product, a teddy bear from Singapore-based FoloToy, was pulled from the market after the findings.
Experts worry that children lack the cognitive tools to understand what they’re interacting with. Dr. Dana Suskind, an early brain development researcher, said traditional play forces children to create both sides of a conversation — a key part of developing creativity and problem-solving skills.
“An AI toy collapses that work,” she said. “It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor… but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds.”
Some AI toy makers insist they have built strong safety protections. Curio Interactive, which makes the stuffed Gabbo and rocket-shaped Grok, said it has “meticulously designed” guardrails and encourages parents to monitor interactions.
Miko, an India-based manufacturer of child-friendly robots sold at major retailers, said it relies on its own AI system and is “strengthening our filters” and expanding parental controls.
But advocates say no amount of filtering can replace something more fundamental.
“Kids need lots of real human interaction. Play should support that, not take its place,” Suskind said. “The biggest thing to consider isn’t only what the toy does; it’s what it replaces.”
She added that despite parents’ concerns about preparing their children for an AI-driven future, “unlimited AI access is actually the worst preparation possible.”
For now, groups like Fairplay say the safest gifts this season are the simplest ones — analog toys that don’t talk back.