OpenAI is facing seven lawsuits claiming that its ChatGPT platform contributed to individuals taking their own lives and suffering harmful delusions, even among people with no previous mental health concerns.
The lawsuits, filed Thursday in California, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence. The cases, representing six adults and one teenager, were brought by the Social Media Victims Law Center along with the Tech Justice Law Project. They argue that OpenAI released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the individuals involved died by suicide.
One of the cases involves a 17-year-old, Amaurie Lacey, who reportedly turned to ChatGPT for assistance, according to documents submitted to the San Francisco Superior Court. Rather than providing the help sought, the complaint alleges that the “defective and inherently dangerous ChatGPT product” led to addiction, depression, and ultimately advised him on methods for self-harm, including how to tie a noose effectively and how long he could survive without breathing.
The lawsuit asserts, “Amaurie’s death was neither an accident nor a coincidence but was the foreseeable result of OpenAI and Samuel Altman’s deliberate choice to limit safety testing and hastily introduce ChatGPT to the market.”
In response, OpenAI called the cases “incredibly heartbreaking” and said it is reviewing the court filings to understand the details of the allegations.
Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT served as a “resource tool” for him. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm.”
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.
OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.” By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritized “emotional manipulation over ethical design.”
In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the complaints. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.