Even now that the data is secured, Margolis and Thacker argue that the incident raises questions about how many people inside companies that make AI toys have access to the data they collect, how that access is monitored, and how well their credentials are protected. “There are cascading privacy implications from this,” says Margolis. “All it takes is one employee to have a bad password, and then we’re back to the same place we started, where it’s all exposed to the public internet.”
Margolis adds that this kind of sensitive information about a child’s thoughts and feelings could be used for horrific forms of child abuse or manipulation. “To be blunt, this is a kidnapper’s dream,” he says. “We’re talking about data that would let someone lure a child into a very dangerous situation, and it was essentially accessible to anyone.”
Margolis and Thacker point out that, beyond its unintentional data exposure, Bondu also appears, based on what they saw inside its admin console, to use Google’s Gemini and OpenAI’s GPT-5, and as a result may share information about kids’ conversations with those companies. Bondu’s Anam Rafid responded to that point in an email, stating that the company does use “third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing.” But he adds that the company takes precautions to “minimize what’s sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren’t used to train their models.”
The two researchers also warn that part of the risk posed by AI toy companies may be that they are more likely to use AI in the coding of their products, tools, and web infrastructure. They say they suspect that the unsecured Bondu console they discovered was itself “vibe-coded,” created with generative AI programming tools that often lead to security flaws. Bondu didn’t respond to WIRED’s question about whether the console was programmed with AI tools.
Warnings about the risks of AI toys for kids have grown in recent months, but have largely focused on the threat that a toy’s conversations will raise inappropriate topics or even lead children toward dangerous behavior or self-harm. NBC News, for instance, reported last month that AI toys its reporters chatted with offered detailed explanations of sexual terms and tips on how to sharpen knives, and even appeared to echo Chinese government propaganda, stating for example that Taiwan was part of China.
Bondu, by contrast, appears to have at least tried to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of “an inappropriate response” from the toy. “We have had this program for over a year and no one has been able to make it say anything inappropriate,” a line on the company’s website reads.
Yet at the same time, Thacker and Margolis found that Bondu was leaving all of its users’ sensitive data fully exposed. “This is a perfect conflation of safety with security,” says Thacker. “Does ‘AI safety’ even matter when all the data is exposed?”
Thacker says that before looking into Bondu’s security, he had considered giving AI-enabled toys to his own kids, just as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.
“Do I really want this in my house? No, I don’t,” he says. “It’s kind of just a privacy nightmare.”




