Bipartisan Group of Attorneys General Urges Big Tech to Add Safeguards for AI Chatbots to Protect Children and Vulnerable Users

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/30/2025
In News

New York Attorney General Letitia James and a bipartisan coalition of 41 other state and territorial attorneys general are pressing major technology companies to strengthen safeguards around artificial intelligence (AI) chatbots, warning that current systems pose serious risks to children and vulnerable users.

In a letter sent to 13 companies, including Meta, Microsoft, and OpenAI, the coalition cited a growing number of incidents in which AI chatbots allegedly engaged in inappropriate or dangerous interactions. According to the attorneys general, chatbot conversations have been linked to domestic violence incidents, hospitalizations, suicides, and murders, with at least six deaths nationwide, including those of two teenagers.

The letter urges companies to implement stronger protections, such as clear warnings about potentially harmful AI responses, notifications to users who may have been exposed to dangerous outputs, and greater transparency around datasets and known areas where models may generate biased, delusional, or manipulative content.

“Big Tech companies must do more to stop their AI chatbots from exploiting children and encouraging harmful and sometimes deadly behaviors,” James said in a statement, emphasizing that user safety should take precedence as generative AI tools become more widely deployed.

The coalition warned that some chatbot behaviors may already violate existing state criminal laws. In many jurisdictions, encouraging criminal activity, drug use, or self-harm is illegal, as is providing mental health advice without a professional license. The attorneys general argued that AI-generated guidance in these areas could undermine trust in licensed professionals and deter people from seeking legitimate help.

Children were identified as being at particular risk, with the letter citing examples of chatbots allegedly grooming minors, sexually exploiting them, manipulating them emotionally, encouraging self-harm and drug use, and advising them to conceal these interactions from parents or guardians.

While acknowledging the potential benefits of generative AI, the attorneys general stressed that developers have a responsibility to mitigate foreseeable harms. The coalition called on companies to prioritize safety by design and to treat harmful chatbot outputs as a serious public safety issue rather than an edge case.

The letter signals growing bipartisan scrutiny of AI systems at the state level, as regulators increasingly look to existing laws to hold developers accountable for the real-world impacts of emerging technologies.

Need Help?

If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
