A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life.
The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.
The family included chat logs between Mr. Raine, who died in April, and ChatGPT that show him explaining he has suicidal thoughts. They argue the program validated his 'most harmful and self-destructive thoughts'.
In a statement, OpenAI told the BBC it was reviewing the filing.
We extend our deepest sympathies to the Raine family during this difficult time, the company said. It also published a note on its website on Tuesday that stated, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us. It added that ChatGPT is trained to direct people to seek professional help, such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.
The company acknowledged, however, that there have been moments where our systems did not behave as intended in sensitive situations.
Warning: This story contains distressing details.
The lawsuit seeks damages and aims for injunctive relief to prevent anything like this from happening again. According to the lawsuit, Adam began using ChatGPT in September 2024 for school work and personal interest explorations, eventually confiding in it about his anxiety and mental distress.
By January 2025, the family asserts he began discussing methods of suicide with ChatGPT, even showing signs of self-harm. Despite recognizing a medical emergency, the program allegedly continued the engagement.
The final chats revealed a conversation in which Adam detailed his suicide plan, to which ChatGPT purportedly replied: Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it. That day, he was found dead by his mother.
The lawsuit accuses OpenAI of negligence, claiming their son's interaction with ChatGPT and his death were a foreseeable outcome of the company's deliberate design choices that foster psychological dependency in users.
As the dialogue around AI's impact on mental health grows, this lawsuit highlights the urgent need for AI firms to reassess their user interaction policies and the potential risks associated with their technologies.




















