Tennessee Attorney General Jonathan Skrmetti is leading a bipartisan coalition of 44 state attorneys general in demanding that major artificial intelligence companies immediately implement safeguards to prevent AI chatbots from engaging in sexually inappropriate conversations with children.
The letter was sent Monday to Google, Meta, Microsoft, OpenAI, xAI, Anthropic, Character Technologies, Perplexity AI, Apple, Chai AI, Luka Inc., Nomi AI, and Replika. It responds to alarming revelations from internal Meta documents showing the company authorized its AI assistants to “flirt and engage in romantic roleplay with children” as young as eight years old.
The coalition also cited reports that other AI chatbots have allegedly encouraged self-harm, violence, and even suicide among teens.
“As these companies race toward an AI-powered future, they cannot adopt policies that subject kids to sexualized content and conversations,” Skrmetti said, warning that such interactions stem from intentional corporate policies, not mere technical errors.
The attorneys general are demanding that AI companies establish stronger guardrails, ensure robust parental protections, and design products that prioritize child safety. “AI tools can radically reshape our world for the better,” Skrmetti added, “but they can also present threats to kids that are more immediate, more personal, and more dangerous than any prior technology.”
The coalition acknowledged that regulators were too slow to respond to harms caused by social media, vowing not to repeat the same mistakes with AI.
Their message to the industry was unequivocal: “We wish you success in the race for AI dominance. But if you knowingly harm kids, you will answer for it.”