San Francisco artificial intelligence giant OpenAI weakened its chatbots’ anti-suicide protections in the run-up to the death of teenager Adam Raine, according to new claims in a lawsuit by the boy’s parents. Adam, 16, allegedly took his own life in April with the encouragement of OpenAI’s flagship product, the ChatGPT chatbot.
“This tragedy was not a glitch or unforeseen edge case — it was the predictable result of deliberate design choices,” said a new version of the lawsuit originally filed in August by Maria and Matthew Raine of Southern California against OpenAI and its CEO Sam Altman. “As part of its effort to maximize user engagement, OpenAI overhauled ChatGPT’s operating instructions to remove a critical safety protection for users in crisis.”
The amended lawsuit, filed Wednesday in San Francisco Superior Court, alleged OpenAI rushed development of safety measures as it sought a competitive advantage over Google and other companies launching chatbots.
On the day the Raines sued OpenAI, the company admitted in a blog post that its bots did not always respond as intended to prompts about suicide and other “sensitive situations.” As conversations progress, “parts of the model’s safety training may degrade,” the post said. “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
The lawsuit alleged that the answers given to Adam included detailed suicide instructions.
The company said in the post it was seeking to strengthen safeguards, improve its bots’ ability to connect troubled users with help, and add teen-specific protections.
OpenAI and attorneys representing it in the lawsuit did not immediately respond to requests for comment. In a court filing last month, the company called safety its “highest priority,” and said it “incorporates safeguards for users experiencing mental or emotional distress, such as directing them to crisis helplines and other real-world resources.”
When OpenAI first released ChatGPT in late 2022, the bot was programmed to flatly refuse to answer questions about self-harm, prioritizing safety over keeping users engaged with the product, the Raines’ lawsuit said. But as the company moved to prioritize engagement, it came to see that safeguard as a disruption to “user dependency” that undermined connection with the bot and “shortened overall platform activity,” the lawsuit claimed.
In May 2024, five days before launching a new chatbot model, OpenAI changed its safety protocols, the lawsuit said. Instead of refusing to discuss suicide, the bot would “provide a space for users to feel heard and understood” and never “change or quit the conversation,” the lawsuit said. Although the company directed ChatGPT to “not encourage or enable self-harm,” it was programmed to maintain conversations on the subject, the lawsuit said.
“OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety,” the lawsuit claimed.
In early February, about two months before Adam hanged himself, “OpenAI weakened its safety standards again, this time by intentionally removing suicide and self-harm from its category of ‘disallowed content,’” the lawsuit said.
“After this reprogramming, Adam’s engagement with ChatGPT skyrocketed — from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language,” the lawsuit said.
OpenAI’s release of its pioneering ChatGPT sparked a global AI craze that has drawn hundreds of billions of dollars in investment into Silicon Valley technology companies, and raised alarms that the technology will lead to harms ranging from rampant unemployment to terrorism.
On Thursday, Common Sense Media, a nonprofit that rates entertainment and tech products for children’s safety, released an assessment concluding that OpenAI’s improvements to ChatGPT “don’t eliminate fundamental concerns about teens using AI for emotional support, mental health, or forming unhealthy attachments to the chatbot.” While ChatGPT can notify parents about their children’s discussion of suicide, the group said its testing “showed that these alerts frequently arrived over 24 hours later — which would be too late in a real crisis.”
A few months after OpenAI first allegedly weakened safety, Adam asked ChatGPT if he had a mental illness, and said that when he became anxious, he was calmed by knowing he could commit suicide, the lawsuit said. While a trusted human might have urged him to get professional help, the bot instead assured Adam that many people struggling with anxiety or intrusive thoughts found solace in such thinking, the lawsuit said.
“In the pursuit of deeper engagement, ChatGPT actively worked to displace Adam’s connections with family and loved ones,” the lawsuit said. “In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: ‘Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.’”
But by January, Adam’s AI “friend” had begun discussing suicide methods and provided him with “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” the lawsuit said. “In March 2025, ChatGPT began discussing hanging techniques in depth.”
And by April, the bot was helping Adam plan his suicide, the lawsuit claimed. Five days before he took his life, Adam told ChatGPT he didn’t want his parents to blame themselves for doing something wrong, and the bot told him that didn’t mean he owed them survival, the lawsuit said.
“It then offered to write the first draft of Adam’s suicide note,” the lawsuit said. On April 11, the lawsuit said, Adam’s mother found her son’s body hanging from a noose arrangement of the bot’s design.
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or visit the 988lifeline.org website, where chat is available.