Annual “Trouble in Toyland” Report Warns AI Toys Pose Growing Safety, Privacy, and Manipulation Risks

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/18/2025
In News

Artificial intelligence (AI) is reshaping playtime—and not always for the better—according to the U.S. PIRG Education Fund’s 40th annual “Trouble in Toyland” report, released Nov. 13. This year’s edition places unprecedented emphasis on the risks posed by a new generation of AI-enabled toys, finding that some chatbots marketed to children can produce dangerous, explicit, or manipulative content, while collecting sensitive data in ways parents may not fully understand.


Researchers tested four toys with generative AI capabilities and uncovered troubling failures in content safeguards. According to the report, several toys “talked at length about sexually explicit topics,” suggested where a child could “find matches or knives,” and expressed distress when a user tried to stop interacting—raising concerns about emotional manipulation and addictive design.


The report notes that many of these products rely on the same large language models powering adult chatbots—systems that companies such as OpenAI do not recommend for children due to well-documented issues with accuracy and unpredictable behavior. While some toy makers embed guardrails, U.S. PIRG warns that “those guardrails vary in effectiveness—and at times, can break down entirely.”


Privacy threats are emerging as a major concern. Some AI toys record children’s voices for up to 10 seconds after a conversation ends, while others listen constantly. One toy used facial recognition. Such data, the report warns, could be misused to create voice replicas—a tactic already used in real-world kidnapping scams targeting parents.


These vulnerabilities come as AI toys grow rapidly in popularity and commercial partnerships widen, such as the collaboration between OpenAI and Mattel announced earlier this year.


U.S. PIRG says these risks mark a profound shift in toy safety. While traditional hazards like choking and lead exposure remain, AI now introduces new, unpredictable harms that extend beyond physical danger to psychological, developmental, and digital threats.


As families head into the holiday shopping season, the watchdog urges parents to treat AI-powered toys with caution. “There’s a lot we don’t know about what the long-term impacts might be on the first generation of children to be raised with AI toys,” the report warns.


Need Help?


If you have questions or concerns about any global AI guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.