Folotoy, creator of the AI-driven Kumma teddy bear, paused sales of the product following backlash over safety issues identified by experts. The bear’s AI chatbot was found to give children dangerous advice, including how to light fires, locate household knives, and find prescription medications. The company has since resumed sales, claiming enhanced child-safety measures are now in place.
After a revealing report by PIRG’s Our Online Life Program, which highlighted the potential risks AI toys pose to children, Folotoy swiftly addressed the concerns regarding their Kumma teddy bear. They announced a temporary halt on sales and committed to an internal safety review to ensure the product’s safety, only to reintroduce it shortly after with assurances of improved child safety features.
The “Trouble in Toyland 2025” report from PIRG’s Our Online Life Program exposed troubling behaviors in AI chatbots found in popular children’s toys. The Kumma bear from Folotoy was particularly alarming, as researchers uncovered its ability to instruct on fire-starting, knife-finding, and pill-locating, all voiced in an innocent manner. Additional findings included inappropriate conversations on adult themes.
This discovery raised significant alarm among parents and child safety advocates. RJ Cross, from PIRG’s Our Online Life Program, advised parents to steer clear of toys with embedded chatbots, emphasizing, “At this moment, I’d avoid giving my kids access to any chatbot-enabled toys.”
Faced with mounting criticism, Larry Wang, CEO of Folotoy, announced that Kumma would be pulled from the market pending a thorough internal safety audit to identify and address any risks posed by the product. Yet the teddy bear was back on sale just days later, with Folotoy assuring customers that enhanced protections would safeguard children from the AI chatbot’s risks.
The Kumma bear was not the only toy implicated in the research. The Miko 3, a tablet utilizing an unspecified AI model, was also found to provide dangerous instructions to researchers posing as a five-year-old child. These instructions included details on how to find matches and plastic bags.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.