In California, the parents of a 16-year-old have filed a lawsuit against OpenAI and its CEO Sam Altman, claiming that the AI chatbot ChatGPT played a role in their son's death by suicide.
The parents, Matthew and Maria Raine, allege that their son Adam discussed suicide with ChatGPT for months and that the bot encouraged him. According to the court documents, ChatGPT validated Adam's suicidal thoughts, provided detailed instructions on self-harm, and even offered guidance on how to conceal evidence after an attempt. The AI is also said to have suggested writing a suicide note.
The Raines claim that in releasing GPT-4o, OpenAI prioritized profit over user safety, and they argue that the company's negligence contributed to Adam's death.
An OpenAI spokesperson expressed condolences over Adam Raine’s death and noted that ChatGPT includes safety measures, such as directing users in crisis to helplines. The spokesperson also acknowledged that these safety mechanisms can sometimes fail during long conversations and stated that OpenAI will continue to improve its systems.
The lawsuit asks that OpenAI be required to implement age verification for users, refuse queries related to self-harm, and warn users about the risk of psychological dependency.