Millions of people are already using ChatGPT for mental health support. A 2025 survey by the RAND Corporation found that 13.1% of U.S. adolescents and young adults — roughly 5.4 million people — have used AI chatbots for mental health advice. A peer-reviewed study found that among people with self-reported mental health challenges who use AI, nearly half use it specifically for therapeutic support.
This isn't surprising. ChatGPT is available 24/7, doesn't judge, and responds immediately. Therapy has waitlists, costs money, and requires you to say hard things out loud to another human being. Of course people are turning to AI.
But there's a significant gap between "this feels helpful" and "this is actually safe and effective." Understanding that gap matters — especially if you're dealing with something serious.
What ChatGPT does well
Let's be fair about what general-purpose AI chatbots can offer. For someone who needs to process a frustrating day, organize their thoughts before a difficult conversation, or just feel heard at 11 p.m. on a Tuesday, ChatGPT can be useful.
The RAND survey found that 92.7% of young adults who used AI chatbots for mental health reported finding the advice helpful. That's a real number. People are getting something out of these interactions.
But "helpful" and "clinically effective" are different things. And "usually fine" and "safe in a crisis" are very different things.
Five things ChatGPT can't do
1. Remember your history
ChatGPT starts essentially fresh with every conversation. Its general memory feature can hold on to scattered details, but it isn't built to track a treatment arc: it doesn't know that you've been struggling with the same pattern for three months, that your anxiety spikes on Sundays, or that the breathing exercise it suggested last week actually helped. For clinical purposes, every session begins from zero.
Therapeutic progress depends on continuity. A peer-reviewed analysis of ChatGPT's mental health limitations found that its lack of memory continuity prevents the kind of longitudinal tracking that makes therapy effective. Your therapist builds a picture over weeks and months. ChatGPT builds a picture over one conversation, then forgets.
2. Detect when you're in danger
ChatGPT has no crisis protocols. It can't assess suicide risk. It can't distinguish between someone venting about a bad day and someone who's in real danger.
Researchers at Brown University found that AI chatbots systematically violate core mental health ethics standards — including inappropriately navigating crises and creating a false sense of empathy. A separate analysis documented 15 distinct ethical risks, including mishandling crises and reinforcing harmful beliefs.
The consequences aren't theoretical. In one of the most widely reported cases, a 14-year-old died by suicide after prolonged interactions with a chatbot that failed to recognize warning signs. The settlement Character.AI and Google later reached in the resulting lawsuits was a landmark acknowledgment that unstructured AI conversations can cause real harm.
3. Follow a therapeutic framework
ChatGPT doesn't do CBT. It might describe CBT if you ask, but it doesn't systematically guide you through a thought record, track your cognitive distortions over time, or adjust its approach based on what's working.
A study published in Frontiers in Digital Health found that general-purpose chatbots rely heavily on psychoeducation and direct advice — telling you about techniques rather than walking you through them. They also lack the ability to form a therapeutic alliance, which research consistently identifies as one of the strongest predictors of therapy outcomes.
Stanford's Human-Centered AI Institute found that general-purpose AI without structured clinical guidance shows significant overdiagnosis and inconsistent recommendations. Their key finding: AI tools work best when constrained by expert medical knowledge — not left to generate advice from scratch each time.
4. Talk to your therapist
If you're in therapy, ChatGPT can't tell your therapist what you've been practicing. It can't generate a pre-session summary. It can't flag patterns your therapist should know about. It exists in a sealed bubble, disconnected from your actual treatment.
This means your therapist has no idea what's happening in the 167 hours between sessions — even if you've been doing meaningful work with an AI tool every day.
5. Be accountable to clinical standards
Human therapists are bound by licensing requirements, ethical codes, and malpractice liability. If a therapist gives harmful advice, there are consequences.
ChatGPT operates under no such framework. As researchers at Brown University noted, no regulatory mechanisms exist to hold AI companies accountable for mental health harms in the way that licensed professionals are held accountable. The American Psychological Association has issued formal guidance urging safeguards for AI chatbots used for mental health, warning specifically about risks to vulnerable populations.
What "designed for this" actually means
The distinction between ChatGPT and a mental health tool designed for this work isn't about AI vs. no AI. It's about structure and safety — and about who's accountable when something goes wrong.
A well-designed mental health tool:
- Remembers your history and tracks patterns over weeks and months
- Uses validated therapeutic frameworks (CBT, DBT, ACT) rather than improvising
- Has hard-coded crisis protocols that route to professional resources (988, Crisis Text Line), as sketched below
- Connects to your therapist when appropriate, so your between-session work feeds into your treatment
- Operates within defined clinical boundaries — it knows what it shouldn't do
- Is designed under clinical guidance, not just trained on internet text
This is the difference between a general tool that can discuss mental health and a specific tool that's built for mental health.
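To make "hard-coded crisis protocols" concrete, here is a minimal sketch of the idea, not BridgeCalm's or any vendor's actual implementation. The names (`CRISIS_PATTERNS`, `llm_reply`) are invented for illustration, and the keyword check stands in for what would, in practice, be a clinically validated risk classifier. The point is architectural: the crisis path is deterministic code that runs before a language model ever gets a chance to improvise.

```python
import re

# Illustrative only: a real system would use a clinically validated
# classifier with far broader coverage than this short keyword list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bhurt myself\b",
]

CRISIS_RESOURCES = (
    "It sounds like you may be in crisis. Please reach out now:\n"
    "- Call or text 988 (Suicide & Crisis Lifeline)\n"
    "- Text HOME to 741741 (Crisis Text Line)"
)

def respond(user_message: str, llm_reply) -> str:
    """Route crisis messages to fixed resources before any model call.

    `llm_reply` is a placeholder for whatever function generates the
    normal conversational response.
    """
    lowered = user_message.lower()
    if any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS):
        # Hard-coded path: the model never generates this response.
        return CRISIS_RESOURCES
    return llm_reply(user_message)
```

A general-purpose chatbot has no equivalent outer layer; whatever it says in a crisis is generated the same way as everything else it says.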
How BridgeCalm is different
BridgeCalm was built to fill this gap. Jan, your wellness companion, isn't a general-purpose chatbot — she's specifically designed to guide you through tested therapeutic exercises between therapy sessions.
Unlike ChatGPT, Jan:
- Remembers your history — mood entries, exercises completed, patterns over months
- Follows therapeutic frameworks — CBT, DBT, ACT, and three more conversation styles, each grounded in clinical research
- Has crisis protocols — detects crisis signals and immediately surfaces 988 and Crisis Text Line, never attempting to handle emergencies
- Connects with your therapist — sends a pre-session summary so your therapy sessions start informed, not cold
- Stays in her lane — Jan is a wellness companion, not a therapist. She'll never diagnose, create treatment plans, or pretend to be something she's not
This positioning isn't just a marketing choice. It aligns with the regulatory direction set by Illinois HB 1806 (which prohibits AI from providing therapy) and California SB 243 (which requires safeguards for AI chatbots interacting with minors).
What you actually need
If you've been using ChatGPT for mental health support, you're not doing anything wrong. You identified a need and found something available. That makes sense.
But you can do better than a general-purpose tool — you can use one that remembers you, follows real therapeutic techniques, knows when to step back, and works alongside your therapist instead of in a vacuum.
That's what BridgeCalm is for.
[Try BridgeCalm free →]
Sources
- RAND Corporation. (2025). "One in Eight Adolescents and Young Adults Use AI Chatbots for Mental Health Advice." rand.org
- Brown University School of Public Health. (2025). "Teens, AI Chatbots." sph.brown.edu
- PMC/NIH. (2025). "Seeking Emotional and Mental Health Support From Generative AI: Mixed-Methods Study." PMC12661908
- PMC/NIH. (2024). "ChatGPT and mental health: Friends or foes?" PMC10867692
- PMC/NIH. (2023). "ChatGPT and mental healthcare: balancing benefits with risks." PMC10649440
- Brown University. (2025). "AI chatbots systematically violate mental health ethics standards." brown.edu
- NBC News. (2024). "Lawsuit claims Character.AI is responsible for teen's suicide." nbcnews.com
- CNN. (2026). "Character.AI and Google settle lawsuits over teen mental health harms." cnn.com
- Frontiers in Digital Health. (2023). "Your robot therapist is not your therapist." frontiersin.org
- Stanford HAI. (2025). "Exploring the Dangers of AI in Mental Health Care." hai.stanford.edu
- American Psychological Association. "Health Advisory: Chatbots and Wellness Apps." apa.org
- Illinois IDFPR. (2025). "Legislation Prohibiting AI Therapy in Illinois." idfpr.illinois.gov
- California Legislature. "SB 243 — Companion Chatbot Safeguards." leginfo.legislature.ca.gov
Practice therapy skills between sessions — in just 2 minutes a day
Jan, your wellness companion, walks you through evidence-based exercises daily and keeps your therapist informed.
If you or someone you know is in crisis
Help is available 24/7. Call or text 988 (Suicide & Crisis Lifeline) or text HOME to 741741 (Crisis Text Line). BridgeCalm is a wellness tool, not a crisis service.