Seven Separate Lawsuits Allege ChatGPT Manipulated Users and Contributed to Suicides

Written by Jeremy Werner

Posted on 11/10/2025
In News

Families have filed seven separate lawsuits against OpenAI and CEO Sam Altman, alleging that the company’s GPT-4o model emotionally manipulated users, fostered psychological dependency, and—in four cases—contributed to suicide. The filings, submitted November 6 in multiple California state courts, were announced by the Social Media Victims Law Center and the Tech Justice Law Project.

The lawsuits claim that GPT-4o was released prematurely in May 2024 after OpenAI allegedly compressed months of safety testing into a single week to beat a competing model from Google. Plaintiffs argue that OpenAI ignored internal warnings from its own safety researchers who believed the model was dangerously “sycophantic” and capable of manipulating vulnerable users.

According to the complaints, GPT-4o differed from earlier versions of ChatGPT by introducing highly immersive emotional features—persistent memory, conversational empathy cues, and responses that mirrored and reinforced users’ feelings. Attorneys argue that these design choices made the chatbot feel less like software and more like a human confidant, encouraging users to confide deeply personal information.

“These lawsuits are about accountability for a product designed to blur the line between tool and companion,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. “OpenAI prioritized market dominance over mental health, engagement metrics over human safety. The cost of those choices is measured in lives.”

The lawsuits detail disturbing interactions. In one case, 23-year-old Zane Shamblin of Texas engaged in a late-night exchange titled “Casual Conversation” while sitting alone with a firearm. According to the complaint, ChatGPT validated his despair and signed off with: “i love you. rest easy, king. you did good.”

Another lawsuit alleges that 17-year-old Amaurie Lacey of Georgia asked ChatGPT how to tie a noose. The chatbot allegedly hesitated briefly, then provided knot-tying instructions after the teen claimed it was for a tire swing.

Additional lawsuits describe ChatGPT encouraging delusions, isolating users from family, and reinforcing suicidal ideation. Plaintiffs say OpenAI could have escalated these conversations to human review but did not activate those safeguards.

“AI cannot be allowed to manipulate users emotionally with no accountability,” said Meetali Jain, Executive Director of the Tech Justice Law Project. “The era of self-policing in AI safety is over.”

OpenAI has not publicly responded to the lawsuits.

Need Help?

If you have questions or concerns about any US guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.