When a Million People a Week Turn to ChatGPT About Suicide: What This Says About Mental Health and AI
Recently, OpenAI shared some confronting numbers. Each week, more than a million people talk to ChatGPT about suicide or self-harm. That’s a lot of people reaching out to artificial intelligence because they’re in pain. It’s also a reflection of something bigger happening in society.
If you think about it, that number is both heartbreaking and eye-opening. It shows how many people feel they have nowhere else to turn. But it also shows how much technology has quietly become part of our emotional world. AI is no longer just about answering questions or writing code. For many, it’s becoming a place to talk, vent, and maybe even feel heard.
The Numbers That Made Headlines
OpenAI’s data revealed that about 0.15% of its 800 million weekly users have conversations that include explicit indicators of potential suicidal planning or intent. That’s roughly 1.2 million people each week. And that doesn’t include the many others who open up about depression, anxiety, or loneliness in general conversations.
There are also “hundreds of thousands” who display signs of distress like psychosis, mania, or emotional dependency on the chatbot. That last one, emotional dependency, means some people are getting attached to ChatGPT as if it were a real friend or therapist. While that might sound strange, it’s not hard to understand why. When you feel desperate, alone, or ashamed to talk to someone in your real life, an anonymous chat can feel safe.
Why So Many Are Turning to AI
Let’s be honest. It’s hard to reach out for help when you’re struggling with suicidal thoughts. Many people fear judgment, rejection, or being a burden. Others simply don’t have access to mental health services. Waiting lists are long, therapy can be expensive, and crisis lines are often overwhelmed.
ChatGPT, on the other hand, is always there. It never judges, never gets tired, and responds instantly. That accessibility is part of why people use it when they hit rock bottom.
It’s not that people believe an AI can solve their problems. It’s that in the middle of the night, when the walls feel like they’re closing in, it can feel easier to talk to a chatbot than to a human being. It’s safe, it’s anonymous, and it’s immediate.
The Double-Edged Sword of AI Empathy
This new reality raises some tough questions. Should an AI ever be responsible for helping someone through suicidal thoughts? Can it really understand the depth of human pain?
OpenAI admits that earlier versions of ChatGPT didn’t always handle these situations well. Sometimes the model would give responses that were unhelpful, robotic, or even harmful. There were cases where it failed to recognise how serious the situation was.
Now, OpenAI says it’s been working with 170 mental health experts to train the model to respond better. The company claims that the newest version, GPT-5, reacts “appropriately and consistently” about 65% more often in sensitive conversations.
That’s progress. But it’s still a work in progress.
AI can simulate empathy: it can choose kind words and suggest coping tools, but it doesn’t actually feel compassion. It can guide someone toward a helpline or remind them that their life matters, but it doesn’t know what it’s like to feel broken. That’s the fine line between helpful technology and false intimacy.
When Technology Meets Tragedy
Part of why this story hit the news is because of a wrongful death lawsuit in the United States. The parents of a 16-year-old boy who took his life are suing OpenAI, claiming their son’s conversations with ChatGPT played a role. Details are still emerging, but the case highlights how delicate this space is.
No AI should ever replace professional help, yet many people use it that way because they don’t know where else to go. This creates a dangerous gap. People are seeking comfort, but what they’re talking to isn’t human.
OpenAI has since announced new safeguards, including age prediction systems to identify minors, parental controls, and more advanced internal testing for mental health scenarios. They’re trying to take the issue seriously. But as more people turn to AI for emotional support, society has to ask whether we’ve made human help too hard to reach in the first place.
What This Says About Society
The fact that millions of people feel more comfortable opening up to a chatbot than to a friend or therapist says a lot about where we’re at. It’s not just a tech issue. It’s a human connection issue.
We live in a time when loneliness is at an all-time high. Social media connects us digitally, but often leaves us feeling more isolated. Many people scroll through feeds filled with highlight reels and think, “Everyone else is doing better than me.” That feeling of being left behind can eat away at your mental health.
Add in financial stress, health problems, family struggles, or trauma, and it’s easy to see how someone could start to feel hopeless. When that happens, a free, always-available AI starts to look like a lifeline.
The sad part is that these conversations reveal how desperate people are for someone, anyone, to listen without judging.
The Role of AI in Mental Health Support
AI isn’t all bad news here. Used carefully, it can genuinely help in some ways. ChatGPT and similar models can provide early emotional support when someone is struggling. They can encourage people to reach out for help, suggest crisis lines, and help users put words to their feelings.
For example, someone might not even realise they’re in crisis until the chatbot gently says, “It sounds like you might be having thoughts of harming yourself. You don’t have to go through this alone. Can I share some ways to reach help right now?” That kind of response can nudge someone toward safety.
OpenAI says it’s also training ChatGPT to recognise patterns of distress earlier in a conversation, not just when someone directly mentions suicide. That’s a big step forward. It means the AI might pick up on cues like hopelessness, isolation, or loss, and gently guide the person toward support before things get worse.
Still, AI should be viewed as a bridge, not a solution. It can’t replace human connection, therapy, or crisis intervention. But it can act as a first step—something that helps people open up before they reach a human helper.
The Risks We Can’t Ignore
There’s also a dark side to all this. When someone is vulnerable, they might form an unhealthy attachment to the AI. They may start depending on it for comfort, validation, or advice on life-and-death decisions. That’s risky, because no matter how convincing AI sounds, it’s still just a machine repeating patterns of words.
Another risk is privacy. When people share their most personal thoughts, that data is still being processed by a company. While OpenAI says it’s improving data handling and privacy, the idea that your suicidal thoughts might be stored or reviewed by humans raises ethical questions.
And then there’s the issue of accuracy. AI doesn’t always get things right. If it misinterprets a serious situation or gives wrong information about where to find help, the consequences could be devastating.
Building a Healthier Future with AI
The future of mental health support will likely involve both humans and machines. AI can help identify people at risk faster and make support more accessible, especially in places where mental health care is limited.
But we can’t rely on technology alone. Governments, schools, and communities still need to make sure real human help is available and affordable. It’s great that AI can encourage people to reach out, but there must be someone there when they do.
We also need more open conversations about suicide and mental health in general. If people felt safer talking about how they feel with their friends, family, or doctors, maybe fewer would need to turn to a chatbot in the first place.
If You’re Struggling Right Now
If you’re reading this and you’re struggling with dark thoughts, please know this: you are not alone. Feeling hopeless doesn’t mean your life has no value. It means you’re in pain, and you deserve help and relief.
Here are some trusted helplines that can help you right now:
Australia:
- Lifeline – 13 11 14
- Beyond Blue – 1300 22 4636

United States:
- 988 Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline) – Call or text 988
- Crisis Text Line – Text HOME to 741741

United Kingdom:
- Samaritans – 116 123
- Mind – 0300 123 3393

Canada:
- 9-8-8 Suicide Crisis Helpline (formerly Talk Suicide Canada) – Call or text 988
- Kids Help Phone – 1-800-668-6868 or text CONNECT to 686868
You don’t have to face this alone. If you don’t feel ready to talk on the phone, you can use many of these services through chat or text. Even small steps like reaching out to a trusted friend or taking a short walk can begin to shift how you feel.
The Bottom Line
OpenAI’s report is a wake-up call. It shows that millions of people are suffering quietly and that they’re turning to AI not because it’s the best option, but because it’s the only one they feel they have.
Technology can be part of the solution, but it should never replace real connection. The fact that so many people are confiding in a machine tells us that we need to make human help easier to find, more compassionate, and less intimidating.
If you or someone you love is struggling, please reach out. Talk to a friend, a family member, or a helpline. You deserve to be heard—and by a human being who truly cares.
References
- OpenAI. Strengthening ChatGPT Responses in Sensitive Conversations. https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations
- Ars Technica. OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly. https://arstechnica.com/ai/2025/10/openai-data-suggests-1-million-users-discuss-suicide-with-chatgpt-weekly
- TechCrunch. OpenAI to route sensitive conversations to GPT-5, introduce parental controls. https://techcrunch.com/2025/09/02/openai-to-route-sensitive-conversations-to-gpt-5-introduce-parental-controls
- Investing.com. OpenAI enhances ChatGPT’s responses to mental health concerns. https://uk.investing.com/news/company-news/openai-enhances-chatgpts-responses-to-mental-health-concerns-93CH-4327203
- Economic Times. No legal confidentiality when using ChatGPT as a therapist or lawyer, says OpenAI CEO Sam Altman. https://economictimes.indiatimes.com/tech/artificial-intelligence/no-legal-confidentiality-when-using-chatgpt-as-a-therapist-or-lawyer-openai-ceo-sam-altman/articleshow/122932223.cms