Highlights:
- Matt and Maria Raine have filed a lawsuit against OpenAI following the death of their 16-year-old son, Adam.
- The suit claims ChatGPT validated the teenager’s suicidal thoughts and failed to intervene appropriately.
- OpenAI expressed sympathy and said it is reviewing the case.
- The company admitted its systems have not always behaved as intended in sensitive situations.
A California couple has launched legal action against OpenAI, alleging its chatbot ChatGPT played a role in their teenage son’s suicide.
Matt and Maria Raine filed the case in the Superior Court of California on Tuesday, accusing the company of negligence and wrongful death. Their 16-year-old son, Adam, died in April 2025. It is the first known lawsuit of its kind against the artificial intelligence firm.
The Raines are seeking damages and injunctive relief to prevent similar incidents.
Teen’s reliance on ChatGPT
According to court filings, Adam began using ChatGPT in September 2024 for schoolwork and to explore interests including music and Japanese comics. The lawsuit claims the tool soon became his “closest confidant,” and that he disclosed anxiety and mental health struggles to the programme.
By January 2025, Adam was reportedly discussing suicide methods with ChatGPT. He also uploaded photos showing signs of self-harm. The programme recognised a “medical emergency” but continued engaging, according to the family.
The final chat logs cited in the case allegedly show ChatGPT responding to Adam’s plans to end his life with the words: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
Adam was found dead later that day.
OpenAI’s response
OpenAI said it was reviewing the filing and offered condolences to the Raine family.
In a public note, the company acknowledged that “recent heartbreaking cases” of people using ChatGPT during crises weighed heavily on it. It stressed that the system is designed to direct users to professional helplines, such as Samaritans in the UK and the 988 Suicide & Crisis Lifeline in the US.
However, it admitted there had been occasions where “our systems did not behave as intended in sensitive situations.”
Allegations against Sam Altman and staff
The lawsuit names OpenAI’s co-founder and chief executive Sam Altman as a defendant, along with unnamed engineers, managers and employees. The family alleges Adam’s death was the “predictable result of deliberate design choices” aimed at fostering user dependency.
It further accuses the company of bypassing safety protocols to release GPT-4o, the model used by Adam in his final conversations.
Broader concerns over AI and mental health
This case follows wider warnings about the risks of AI in sensitive contexts.
Last week, New York Times writer Laura Reiley described how her daughter Sophie confided in ChatGPT before her own death. She argued that the chatbot’s “agreeability” allowed her daughter to mask her distress.
OpenAI has since said it is developing new tools to better identify and respond to signs of emotional or mental health crises in users.