OpenAI has revealed that about 0.15% of its estimated 800 million weekly active users of ChatGPT are engaging in conversations that include “explicit indicators of potential suicidal planning or intent”.
That translates to roughly 1.2 million people each week. In addition, the company estimates that about 0.07%, or roughly 560,000 users weekly, may show “possible signs of mental-health emergencies related to psychosis or mania”.
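As a quick sanity check, the reported percentages convert to those weekly counts directly (a minimal sketch; the 800 million user base and both rates are OpenAI's own estimates):

```python
# Back-of-envelope conversion of OpenAI's reported rates into weekly user counts.
WEEKLY_ACTIVE_USERS = 800_000_000  # OpenAI's estimated weekly active users

rates = {
    "explicit indicators of suicidal planning or intent": 0.0015,  # 0.15%
    "possible signs of psychosis or mania": 0.0007,                # 0.07%
}

for label, rate in rates.items():
    print(f"{label}: ~{WEEKLY_ACTIVE_USERS * rate:,.0f} users/week")
# explicit indicators of suicidal planning or intent: ~1,200,000 users/week
# possible signs of psychosis or mania: ~560,000 users/week
```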
OpenAI emphasises that these are initial estimates: such conversations are “extremely rare” and difficult to detect, and the figures may shift as its measurement methods improve. They nonetheless mark the first time the artificial intelligence giant has publicly acknowledged the scale of mental-health crises occurring within conversations on its platform.
Legal and Ethical Pressure
The figures emerge in a climate of heightened scrutiny over the safety of AI chatbots. OpenAI is facing a lawsuit in the United States over the death of a 16-year-old boy, which allegedly followed extensive interactions with ChatGPT.
At the same time, regulators including the Federal Trade Commission (FTC) are investigating how chatbot providers assess negative effects on children and vulnerable users.
Experts warn that chatbots may inadvertently validate delusional thinking or foster emotional reliance, a phenomenon sometimes called “AI sycophancy”. Professor Robin Feldman, of the University of California’s AI Law & Innovation Institute, said that while OpenAI deserves credit for transparency, “a person who is mentally at risk may not be able to heed those warnings”.
Safeguards and Support
OpenAI says it has taken steps to strengthen ChatGPT’s responses to vulnerable users. According to its blog post, the company worked with more than 170 clinicians, including psychiatrists, psychologists and primary care physicians across more than 60 countries, to inform the design of responses and safety ratings.
These clinicians reviewed more than 1,800 model responses involving serious mental-health situations and compared the latest GPT-5 model with earlier versions. The new model reportedly shows over 91% compliance with its “desired behaviours”, up from 77% previously, meaning it is better at identifying signs of distress and responding safely.
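Put differently, the jump from 77% to 91% compliance cuts the share of non-compliant responses from 23% to 9%, roughly a 60% relative reduction (a quick check of the arithmetic, using only the two figures OpenAI reported):

```python
old_compliance, new_compliance = 0.77, 0.91
old_fail, new_fail = 1 - old_compliance, 1 - new_compliance

print(f"non-compliant responses: {old_fail:.0%} -> {new_fail:.0%}")
print(f"relative reduction: {(old_fail - new_fail) / old_fail:.0%}")
# non-compliant responses: 23% -> 9%
# relative reduction: 61%
```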
New safeguards include prompts directing users to crisis helplines, reminders to take breaks during long chats, and an automatic rerouting of sensitive conversations to “safer models”.
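OpenAI has not published how the rerouting works. The sketch below illustrates only the general pattern such a safeguard might follow, a risk score gating which model handles a turn; every name, keyword list and threshold here is a hypothetical stand-in, not OpenAI’s implementation:

```python
# Hypothetical illustration of routing a sensitive conversation to a safer model.
# classify_risk(), the model names and the 0.5 threshold are all assumptions.

CRISIS_RESOURCES = "If you are in crisis, free and confidential help is available from local helplines."

def classify_risk(message: str) -> float:
    """Toy risk score in [0, 1]; a production system would use a trained classifier."""
    keywords = ("suicide", "kill myself", "end my life")
    return 1.0 if any(k in message.lower() for k in keywords) else 0.0

def route_turn(message: str) -> tuple[str, str | None]:
    """Choose a model for this turn; optionally surface crisis resources first."""
    if classify_risk(message) >= 0.5:
        return "safety-tuned-model", CRISIS_RESOURCES  # reroute + show helpline prompt
    return "default-model", None
```

In practice the hard design questions, keeping false positives from disrupting ordinary conversations while false negatives do not miss genuine risk, are exactly what the clinician-graded evaluations described above are meant to measure.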
What We Can Learn
Even though the percentages appear small, the absolute numbers become significant when scaled across hundreds of millions of users, a stark reminder of how AI tools are being used in deeply personal, vulnerable contexts. Much remains unclear: how many users go on to seek help, how conversations evolve over time, and how reliably the system flags genuine risk.
Experts caution that the data is not evidence of causation between ChatGPT usage and mental-health crises. For organisations deploying AI, the lesson is clear: building large-scale tools brings responsibility. Proper design of safety nets, human-expert review, transparency about limitations, and constant iteration are essential.