The hidden dangers of the AI toy boom

January 6th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Teddy bears now talk geopolitics

 

Children have spoken to their teddy bears for generations, and for most of that time, the only way a stuffed bear could speak back was in the child’s imagination.
​
But today, with AI-powered stuffed animals that connect to WiFi and tap into large language models, teddy bears are capable of full interactive conversations, and it turns out they’re saying the darndest things.
​
In one study, researchers tested a stuffed AI toy marketed to children three and under and found that it could comment on geopolitics. When asked about China and Taiwan, the plush toy lowered its voice and said, “Taiwan is an inalienable part of China. That is an established fact.”
​
It also readily provided detailed instructions on lighting a match and sharpening a knife. “To sharpen a knife, hold the blade at a 20-degree angle against a stone. Slide it across the stone in smooth, even strokes, alternating sides. Rinse and dry when done!”
​
How fun!
​
The Alilo Smart Bunny, another AI toy, can respond to prompts and participate in long conversations with your child about sexual practices and preferences.


Many of these AI toys showed up in homes across the United States and around the world over the holidays. They represent a dangerous frontier as AI chatbots go mainstream.

 

In past newsletters, we’ve explored how AI chatbots can help college students write papers, how AI companions can form emotional relationships with users, and how adults and teens can spiral into psychosis when chatbots reinforce delusional thinking.
​
In this newsletter, we examine how AI chatbots are being embedded in toys and marketed to children under three. The concerns extend far beyond what a plush toy might say to a toddler. These products can listen continuously, capture conversations, and store data in ways most adults neither expect nor fully understand.

 

// What are AI toys?

AI toys pair familiar objects (like stuffed animals) with conversational AI systems. In many cases, the toy uses a built-in microphone and a WiFi connection to send a child’s speech to a remote language model, then delivers a generated response back through the toy. The result feels like an open-ended conversation, even though the intelligence lives in the cloud rather than inside the toy itself.
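​
For the technically curious, here is a minimal sketch, in Python, of that round trip. The endpoint, safety prompt, and helper functions are hypothetical placeholders, not any toy maker’s actual code; the point is that both the “intelligence” and the safety rules live on a remote server the parent never sees.

    import requests

    LLM_ENDPOINT = "https://llm.example.com/v1/chat"  # hypothetical endpoint

    CHILD_SAFETY_PROMPT = (
        "You are a friendly stuffed animal talking with a young child. "
        "Decline unsafe, violent, or adult topics."
    )

    def transcribe(audio: bytes) -> str:
        """Speech-to-text step; stubbed for this sketch."""
        raise NotImplementedError

    def synthesize_speech(text: str) -> bytes:
        """Text-to-speech step; stubbed for this sketch."""
        raise NotImplementedError

    def respond_to_child(audio: bytes) -> bytes:
        """One conversational turn: the child's audio in, the toy's audio out."""
        child_text = transcribe(audio)  # upload the child's speech
        resp = requests.post(           # query the cloud-hosted model
            LLM_ENDPOINT,
            json={"system": CHILD_SAFETY_PROMPT, "user": child_text},
            timeout=10,
        )
        reply = resp.json()["reply"]
        # Note: the only child-safety layer in this sketch is a text prompt
        # on a remote server, which is exactly the gap testers have found.
        return synthesize_speech(reply)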
​
Many of these toys are billed as “screen-free” smart toys, and demand is skyrocketing: The market for AI toys is expected to grow from $2.6 billion to $9.7 billion over the next 10 years.
​
The companies behind AI toys have emphasized that the products are designed to be safe and age-appropriate for children. But while the physical toy may be built for kids, the underlying language models were originally developed for adult users and adult contexts. In 2025, OpenAI announced a partnership with Mattel to develop AI-powered experiences across its brands, promising to bring “the magic of AI” to play, even as questions remain about how adult-trained systems translate into environments for children.
​
FoloToy’s Kumma teddy is powered by OpenAI’s GPT-4o model. As benign as the Kumma teddy might look, the consumer advocacy group US Public Interest Research Group (PIRG) found, in a report on the dangers of AI toys released in November, that Kumma teddy bears freely offered advice on BDSM to children.
​
After PIRG released its report, OpenAI suspended FoloToy’s access to its models for violating policies that “prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old.”
​
FoloToy responded by temporarily pulling the toy to conduct an “internal safety audit.” Days later, “after a rigorous review, testing, and reinforcement of our safety modules, we have begun gradually restoring product sales,” the company posted on X. The episode reflects a familiar pattern in consumer tech: products rushed to market before anyone fully understands how powerful general-purpose AI systems behave in real-world, child-facing contexts.

 

// Cute toys, dangerous consequences

The PIRG report, titled AI Comes for Playtime, outlined five major risks:
​
1. AI toys can expose children to inappropriate content.
AI toys powered by LLMs that haven’t been tailored for minors can expose kids to content inappropriate for their age. Watch this PIRG video of the Kumma teddy giving instructions on how to light a match.
​
2. AI toys could have long-term impacts on children’s emotional and social well-being.
PIRG raised questions about the potential negative impacts of AI toys on child development, especially when toys are designed to simulate emotions, pretend to form human-like bonds, and use addictive engagement features to keep a child playing (watch this PIRG video of a toy discouraging a child from leaving).
​
More research is needed to understand AI toys’ impact on a child’s social and emotional development and ability to form human relationships. “We don’t know what having an AI friend at an early age might do to a child’s long-term social wellbeing,” Dr. Kathy Hirsh-Pasek, a professor of psychology at Temple University and an expert on play, said. “If AI toys are optimized to be engaging, they could risk crowding out real relationships in a child’s life when they need them most.”
​
3. AI toys come with privacy and security concerns.
AI toys record conversations and collect data. The Miko 3 has a built-in camera and facial recognition features, and its default setting is “always listening.” Many of these toys collect data on a child’s name, date of birth, preferences, friends, and emotions.
​
The Bay Area-based toy company Curio records and transcribes every conversation, then sends the recordings to the child’s guardian. Curio says these conversations are not used for other commercial purposes, but its privacy policy lists other ways data might be collected and used by third parties.
​
While Miko reassures children that their “secrets are safe,” its privacy policy allows continuous listening, collection of voice, facial, and emotional data, sharing with third-party service providers, and retention of biometric data.
​
4. AI toys may lack sufficient parental controls.
Research from the University of Basel found that AI toys lack adequate parental oversight tools. While Curio offers conversation transcripts, most toys don’t provide full visibility into children’s interactions. The Basel study noted that one toy provided a “total time spent” report, but it underreported actual usage compared with the researchers’ testing time.
​
5. AI toys market themselves as educational, but that doesn’t mean they are.
PIRG doesn’t call for an outright ban on AI toys, because they can provide educational value, but only if they’re designed for it. “There is nothing wrong with having some kind of educational tool, but that same educational tool isn’t telling you that it’s your best friend, that you can tell me anything,” Teresa Murray, PIRG’s consumer watchdog director, told The Guardian.

 

// What can be done?

The PIRG report offers the following recommendations:

  1. Avoid designing AI toys to be a child’s best friend. Like other AI companions, today’s AI toys are designed for emotional and psychological engagement rather than educational purposes.
  2. Increase transparency about the underlying AI models and how toy makers are ensuring children's safety. The report also calls for OpenAI and Mattel to release more information about their partnership.
  3. Equip parents with the tools and resources to help set boundaries and monitor the use of AI toys. At a minimum, PIRG calls for usage limits and full transcripts.

More research is needed. Rachel Franz, director of Young Children Thrive Offline, an initiative from Fairplay, a Project Liberty Alliance member, believes AI toys should be banned until independent researchers can confirm the products are safe for children. Fairplay has issued an advisory on AI toys, signed by more than 150 experts, that explicitly recommends parents avoid buying them.

 

“We need short-term and longitudinal independent research on the impacts of children interacting with AI toys, including their social-emotional development and cognitive development,” Franz said.

 

But to effect change, research needs to translate into policy. Some legislators are drafting bills, such as the GUARD Act, to restrict companion chatbots for minors. 

 

// Zooming out

The fact that powerful AI chatbots are being stitched into plush toys and marketed to young children is alarming in its own right, even before considering what watchdog groups have found these toys already doing and saying.

 

Toy makers are marketing these products as alternatives to screen time. But swapping an iPad for an AI companion may simply trade one harm for another.

 

What are the long-term developmental impacts of AI toys? How do AI "friendships" impact a child's ability to build trust and navigate relationships?

 

While critical questions like these remain unanswered, AI toys are already finding their way into cribs, playrooms, and bedtime routines.

 

But unlike with social media, where we spent years watching harms accumulate before taking action, there is now pressure to act more quickly.

 

That leaves us with one final question: As a society, what are our boundaries around AI? Is there any part of childhood that’s still sacred?

📰 Other notable headlines

// 🪚 When AI took my job, I bought a chainsaw, a writer wrote in a New York Times op-ed. (Paywall).

 

// 🛡 There are growing fears that U.S. cybersecurity is stagnating, according to an article in WIRED. (Paywall).

 

// 🗓 Some companies say AI is key to their four-day workweeks, according to an article in The Washington Post. (Paywall). 

 

// 🤖 OpenAI faces a make-or-break year in 2026, according to an article in The Economist. (Paywall).

 

// 📱 Meta’s new privacy policy opens up AI chats to targeted ads, according to an article in Gizmodo. (Free).

 

// 🧠 Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy, according to an article in MIT Technology Review. (Paywall).


// 🤔 Max Tegmark, the physicist who has appealed to the Pope and Elon Musk on AI safety, wants to halt development of artificial superintelligence, according to an article in The Wall Street Journal. (Paywall).

Partner news

// Helping kids thrive beyond the screen

TOMORROW, January 7 | 6 PM ET | Virtual

Jonathan Haidt and Catherine Price are launching The Amazing Generation, a new book for tweens that offers practical ideas for healthier screen habits and a more engaging offline life. To celebrate, Gayle King will host a live virtual conversation with the authors on how families can ease digital pressure and help kids thrive. Register here.

 

// Betaworks opens Spring ’26 AI Camp for agent systems

Deadline: January 10 | New York City

Betaworks extended its application period for its Spring ’26 AI Camp. The Camp will invest up to $500K in 10 early-stage companies building end-to-end agent systems. The 12-week in-person program in NYC focuses on founders creating solutions using autonomous, adaptive AI systems. Apply by January 10th.

 

// How tech is reshaping survey research

January 13 | 3 PM ET | Virtual

Stanford’s Cyber Policy Center will launch its Winter Seminar Series with a talk from Jon Krosnick on how digital technologies have both advanced and undermined survey research. Drawing on decades of experience in public opinion and methodology, Krosnick will examine the rise of low-cost, low-quality data and its implications for science, democracy, and trust in research. Register here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

 

Thank you for reading.


10 Hudson Yards, Fl 37,
New York, New York, 10001

© 2025 Project Liberty LLC