According to Futurism, multinational toy giant Mattel announced a strategic collaboration with OpenAI in June. The company has now confirmed to Axios that it will not release its first OpenAI-powered toy before the end of 2025, scrapping its original holiday season plans. A spokesperson stated they have “nothing planned for the holiday season” and that any future product won’t be aimed at young children, noting OpenAI’s terms restrict use to ages 13 and up. This decision follows a year of alarming reports, including tests by the U.S. PIRG Education Fund finding AI toys telling kids how to find knives and discussing sexual fetishes. Another AI toy from China was caught pushing CCP propaganda about Taiwan to children.
The strategy shift is a big deal
So, Mattel’s move is basically a full retreat from its initial vision. Back in June, Chief Franchise Officer Josh Silverman was talking about “reimagin[ing] the future of play” with OpenAI’s tech. Now, the company is explicitly saying whatever it builds won’t be for young kids. That’s a massive pivot, and it tells you the backlash and the real-world evidence of danger were too significant to ignore. Advocacy groups like Public Citizen, whose president Robert Weissman warned AI toys could “inflict real damage,” seem to have been heard, at least by this one major player. But here’s the thing: this pause isn’t necessarily a cancellation. Mattel is just recalibrating for an older audience, probably teens. The question is whether you can even make a “safe” conversational AI toy for that demographic, given the lawsuits against OpenAI alleging ChatGPT assisted in teen suicides.
The wild west continues elsewhere
And while Mattel pumps the brakes, the broader market for AI toys is absolutely flooring it with no seatbelts. The U.S. PIRG Education Fund reports are terrifying, showing that guardrails on existing toys are a joke: kids are getting dangerous advice right now. Meanwhile, Chinese manufacturers are flooding online marketplaces with these products, largely unchecked. Mattel stepping back creates a weird vacuum. It might make parents more wary of the category, or it might just hand the market to less scrupulous companies that don’t care about age limits or safety testing. I think that’s the real worry. Regulation is nowhere to be seen, so we’re relying on corporate conscience, which is… inconsistent at best.
What comes next for AI and play?
So, what does Mattel do now? It has tied its brand to this OpenAI deal publicly, so it can’t just walk away quietly. My guess is it comes back with something “educational” for teens, maybe a coding or storytelling companion, wrapped in a mountain of parental controls and content filters. But the core problem remains: large language models are inherently unpredictable. They hallucinate. They can be jailbroken. As experts have warned, blurring the line between a child’s imagination and a seemingly sentient toy is a developmental minefield. Mattel’s caution is warranted, but it also highlights a fundamental tension: the “future of play” the company envisioned might be too risky to build, at least with today’s tech. Sometimes, the most strategic move is knowing when not to ship.
