According to OpenAI, approximately 0.15% of active users engage in explicit conversations about suicide each week. With ChatGPT’s weekly user base exceeding 800 million, that share translates to more than one million people every seven days. The company also reports that hundreds of thousands of users display strong emotional attachment to the AI or show signs of psychosis during conversations; although it describes such chats as “extremely rare,” at ChatGPT’s scale they still affect hundreds of thousands of people.
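As a rough sanity check, the back-of-envelope arithmetic behind that headline figure is straightforward, assuming the two numbers OpenAI itself disclosed (a 0.15% weekly share and roughly 800 million weekly users); the snippet below is purely illustrative:

```python
# Illustrative back-of-envelope check of OpenAI's reported figures.
weekly_active_users = 800_000_000    # ChatGPT weekly user base cited by OpenAI
suicide_conversation_share = 0.0015  # 0.15% of active users per week

affected_per_week = weekly_active_users * suicide_conversation_share
print(f"{affected_per_week:,.0f} users per week")  # -> 1,200,000
```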
Alongside these figures, OpenAI highlighted its extensive efforts to improve how the chatbot responds to users’ mental health concerns. The company consulted more than 170 mental health professionals, whose feedback indicates that the latest version of ChatGPT responds more appropriately and consistently to sensitive topics than previous versions.
Recent studies have shown that some AI chatbots can reinforce dangerous beliefs, potentially pushing users toward delusional thinking. This has become a serious concern for OpenAI, especially after the parents of a 16-year-old who died by suicide filed a lawsuit against the company, alleging that ChatGPT played a role in their son’s death.
OpenAI’s latest model, GPT-5, shows improved performance on mental health-related queries, delivering approximately 65% more desirable responses than earlier versions. In evaluations of suicide-related conversations, GPT-5 adhered to the company’s safety guidelines 91% of the time, compared to 77% for the previous model, and its safeguards also hold up better over long conversations.
In addition, OpenAI has enhanced parental controls, including an age-detection system intended to automatically identify underage users and apply stricter safety rules. Yet while GPT-5 is safer than previous models, some of its responses are still rated “undesirable”, and older models such as GPT-4o remain available, showing that the problem has not been fully resolved.
