Mattel and OpenAI's AI Toys Spark Debate

2025-06-15 · Eric Hal Schwartz · 5-minute read
AI Toys
Child Safety
Tech Innovation

Small Soldiers (Image credit: Getty Images)

Mattel is teaming up with OpenAI to create AI-powered toys. While this could lead to some incredibly fun experiences, it also brings to mind countless scenarios where things could go awry.

To be clear, I'm not predicting an AI apocalypse. I've used ChatGPT in many ways, even as a parenting aid for brainstorming bedtime stories and designing coloring books. But that's me using the technology, not setting it up for direct interaction with children.

The official announcement is, naturally, very positive. Mattel states it's infusing the "magic of AI" into playtime, promising experiences that are age-appropriate, safe, and foster creativity. OpenAI expresses excitement about powering these toys with ChatGPT. Both companies are framing this as an advancement for playtime and childhood development.

However, I can't shake the thought of how ChatGPT conversations can sometimes veer into strange conspiracy theories, and then imagine that coming from a Barbie doll talking to an eight-year-old. Or a G.I. Joe switching from positive messages like "knowing is half the battle" to promoting cryptocurrency mining because a six-year-old heard "blockchain" and thought it sounded like a cool toy weapon.

Echoes of Small Soldiers in Concerns About AI Toys

As the image above might suggest, my first thought was the movie Small Soldiers. In that 1998 cult classic, a satire in which a toy company executive saves money by putting military-grade AI chips into action figures, the toys end up waging guerrilla warfare on a suburban neighborhood. That outcome is exaggerated, of course, but it's hard not to see a spark of the same chaotic potential in installing generative AI into toys children might spend significant time with.

I understand the appeal of AI in a toy. Barbie could become more than a dress-up doll; she could be a curious, intelligent conversationalist explaining space missions or engaging in pretend play in various roles. Imagine a Hot Wheels car commenting on the track you've built. I can even see AI in Uno, with a deckpad teaching younger kids strategy and good sportsmanship.

But I believe generative AI models like ChatGPT are not suitable for direct use by children. Restrict them enough to be safe, and at some point they stop being AI at all and become a sturdy set of pre-programmed responses, losing the flexibility that makes AI appealing in the first place. Yet that level of restriction is exactly what it takes to avoid the weirdness, hallucinations, and unintended inappropriate moments that adults can dismiss but children might internalize.

Toying with AI Safety Versus True Intelligence

Mattel has decades of experience and generally knows its business when it comes to products; it gains nothing from toys that malfunction even slightly. The company has stated it will embed safety and privacy into every AI interaction, promising a focus on appropriate experiences. Yet "appropriate" is a very ambiguous term in AI, especially for language models trained on the vastness of the internet.

ChatGPT isn't a closed-loop system designed specifically for toys or young children. Even with guidelines, filters, and special voice modules, it's still built on a model that learns and imitates. This also raises a more profound question: what kind of relationship do we want children to form with these toys?

There's a significant difference between playing with a doll and imagining conversations, versus forming a bond with a toy that responds independently. I don't expect a doll to turn into Chucky or M3gan, but when the line between playmate and program blurs, outcomes can become difficult to foresee.

The Challenge of Unsupervised AI Play

I use ChatGPT with my son much the way I use scissors or glue: as a tool I control for his entertainment. I am the gatekeeper. AI built directly into a toy is hard to monitor in the same way. The doll talks. The car replies. The toy engages, and a child, without the experience to judge, may not recognize when something is amiss.

If Barbie's AI glitches, if G.I. Joe suddenly uses dark military metaphors, or if a Hot Wheels car randomly says something bizarre, a parent might not even be aware until the comment has been made and absorbed. If we aren't comfortable letting these toys operate unsupervised, then they simply aren't ready for children.

This isn't about excluding AI from childhood entirely. It's about understanding the difference between what is helpful and what is too risky. I want AI in the toy world to be narrowly constrained, much as a TV show for toddlers is meticulously designed to be age-appropriate. Such shows rarely deviate from the script, but generative AI's defining power is its ability to write its own script.

I might seem overly critical, and there have certainly been tech toy scares before. Furbies were unsettling. Talking Elmo had glitches. Talking Barbies once uttered sexist lines about math being difficult. These were all issues that could be, and mostly were, resolved (except perhaps for the Furbies). I do believe AI in toys holds potential, but I'll remain skeptical until I see how well Mattel and OpenAI walk the fine line between not genuinely using AI at all and giving it so much freedom that it becomes a harmful virtual influence on a child.
