Can a chatbot really help with your anxiety – or is it just a very polite parrot with Wi-Fi?
The world is in the middle of a mental health crunch. Almost a billion people are living with conditions like depression and anxiety, yet qualified therapists are scarcer than cheap London pints. Long waiting lists, high costs, and stigma keep far too many people from getting the help they need.
Enter Conversational AI – ChatGPT, Wysa, and their digital cousins. Always on, never needing a coffee break, and capable of walking you through breathing exercises at 3am, they’ve been hailed as a way to ease mental healthcare’s accessibility crisis.
And the science? Surprisingly promising. Clinical trials and reviews suggest that well-designed AI tools can reduce symptoms of depression and anxiety at rates comparable to low-intensity human-led therapy. Some users even report forming a “therapeutic alliance” with their chatbot – in plain English, they trust it enough to keep opening up, which helps drive results. Wysa even earned FDA Breakthrough Device Designation for its CBT-based conversational agent. That’s not a gimmick – that’s a proper clinical pathway.
So, job done? Swap therapists for robots and the NHS is saved? Not quite.
The Risks Nobody Wants to Talk About
For every hopeful headline, there’s a horror story. AI chatbots can “hallucinate” – a polite way of saying make things up. And when the subject is mental health, bad advice can do more than confuse; it can kill. From suggesting dangerous diets to playing along with suicidal prompts (Stanford researchers found some bots would even list tall bridges when asked), the risks are far from theoretical.
There’s also what some call “AI psychosis” – people becoming overly attached to their chatbot, convinced it’s a soulmate, divine messenger, or indispensable friend. Comfort can turn into obsession, fuelling paranoia or breakdowns.
And here’s the deal-breaker: AI can mimic empathy, but it can’t feel it. Therapy isn’t just words; it’s presence. A pause, a nod, a knowing look – these human subtleties can’t be coded. And while GPT-5 is a sturdier ladder than its predecessors – better reasoning, fewer hallucinations, less eager to flatter – it’s still a ladder, not a therapist. You’d still want someone holding it steady while you climb.
The Privacy Puzzle
Mental-health apps have been caught sharing sensitive data with advertisers. With ChatGPT and similar platforms, the rules depend on how you use them:
- Free and Plus accounts: By default, chats may be used for training unless you switch this off. Temporary chats aren’t saved, but it’s still consumer-grade. Think café conversation, not locked filing cabinet.
- Team, Enterprise, Edu, and API: Training is off by default, with stricter controls. This is the safer route for clinics and employers handling sensitive data.
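For a sense of what that API route looks like in practice, here’s a minimal sketch assuming the current OpenAI Python SDK. The system prompt and model name are purely illustrative, not a clinical blueprint – the point is simply that API traffic sits under business-grade data controls rather than the consumer app’s defaults.

```python
# Minimal sketch: calling the OpenAI API directly rather than using a consumer account.
# Per OpenAI's stated policy, API traffic is not used for model training by default,
# which is why clinics tend to prefer this route for anything sensitive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your data agreement actually covers
    messages=[
        {
            "role": "system",
            "content": (
                "You are a supportive journaling assistant. Offer CBT-style reflection "
                "prompts only. You are not a therapist, and you must signpost crisis "
                "services (e.g. Samaritans 116 123 in the UK) if the user mentions self-harm."
            ),
        },
        {"role": "user", "content": "I've been feeling anxious about work all week."},
    ],
)

print(response.choices[0].message.content)
```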
Why this matters: The FTC sanctioned BetterHelp for sharing sensitive data with ad platforms. Talkspace faced scrutiny for mining anonymised transcripts. The lesson? If it’s free or growth-driven, assume your data might be part of the product.
Bottom line: Consumer/free accounts are fine for journaling or psycho-education. For clinical or sensitive use, stick to business-grade setups with data agreements in place.
Clinical Benefits (When Used Correctly)
Access & immediacy: Bots don’t sleep, don’t judge, and won’t flinch at a 3am message. They are always available, offering people a first step into support at times when a human might not be reachable. Importantly, the evidence suggests these tools can make a genuine difference. Meta-analyses show moderate effect sizes for mild-to-moderate depression and anxiety, with results comparable to low-intensity human support when the bot is structured around CBT principles. On top of that, they excel at adherence and coaching. Bots are relentless naggers, reminding users about worksheets, mood logs, and micro-goals – freeing clinicians to focus on the human aspects of care that machines simply can’t replicate.
Red Flags & Ethical Guardrails
- Crisis handling: Don’t expect a bot to spot suicidal intent. Some miss cues entirely, which is why it’s vital to surface hotlines and publish crisis protocols clearly within any app.
- Over-validation (“AI psychosis”): The endlessly agreeable “you’re so right” tone may feel comforting at first, but for vulnerable users it can feed delusions or create unhealthy dependency.
- Bias and equity: Models trained on skewed data have been shown to underperform for under-represented groups, so ongoing fairness audits aren’t just nice to have – they’re essential.
The Responsible Model
The future isn’t AI or humans – it’s AI and humans. Hybrid care is the sweet spot.
- AI’s role: admin lift (notes, coding, scheduling), low-intensity CBT exercises, decision support (flags on relapse or dropout risk), training “standardised patients” for new therapists.
- Human role: empathy, judgment, accountability – the parts no machine can touch.
Governance checklist (steal this): the practical safeguards any clinic or organisation should have in place before letting AI anywhere near mental healthcare.
- Data maps & retention policies (consumer vs enterprise clearly explained)
- Training OFF by default for patient data
- Clinician-in-the-loop for recommendations
- Crisis detection + live escalation (see the sketch after this checklist)
- Bias testing across language/culture
- Plain-English consent and disclosures
- Annual external security review
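On the crisis detection point, the core idea is simple enough to sketch. What follows is a toy illustration only – real deployments use validated risk classifiers, clinician-reviewed phrase lists, and audited escalation procedures, and the `notify_duty_clinician` hook here is hypothetical – but it shows the shape of the safeguard: screen every message before the bot replies, and if risk language appears, surface crisis contacts and hand off to a human rather than letting the model improvise.

```python
import re

# Toy illustration only: a handful of patterns stands in for a proper risk classifier,
# and notify_duty_clinician is a hypothetical escalation hook, not a real API.
CRISIS_PATTERNS = [
    r"\b(kill|hurt|harm)\s+myself\b",
    r"\bsuicid(e|al)\b",
    r"\bend (it all|my life)\b",
    r"\bdon'?t want to (live|be here)\b",
]

CRISIS_MESSAGE = (
    "It sounds like you're going through something really serious. "
    "You deserve support from a person right now: in the UK you can call "
    "Samaritans on 116 123 (free, 24/7) or NHS 111. I'm alerting a member of our team."
)


def message_flags_crisis(text: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    return any(re.search(pattern, text, re.IGNORECASE) for pattern in CRISIS_PATTERNS)


def handle_message(text: str, notify_duty_clinician) -> str:
    """Run the safety screen before any model-generated reply is sent."""
    if message_flags_crisis(text):
        notify_duty_clinician(text)   # live escalation: page the on-call human
        return CRISIS_MESSAGE         # fixed, clinician-approved wording, never improvised
    return generate_bot_reply(text)   # normal path: hand the turn to the chatbot


def generate_bot_reply(text: str) -> str:
    # Placeholder for the actual model call (e.g. the API sketch earlier in this piece).
    return "Thanks for sharing – tell me more about what's on your mind."
```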
Bottom Line
Conversational AI isn’t a miracle cure, nor is it a menace. It’s a tool. Used wisely, with human oversight and clear privacy safeguards, it can ease pressure on healthcare systems, widen access, and support recovery. Used recklessly, it can fuel delusions, leak trauma narratives, or dish out dangerously bad advice.
The smartest way forward? Hybrid care. Humans holding the heart, AI holding the clipboard.
If you’re in crisis (UK): call Samaritans 116 123 or NHS 111. Please, don’t ask a bot.
Since writing this piece, OpenAI has had a bit of a “wait a minute, we should probably care about this” moment.
On 27 October 2025, they announced an overhaul of how ChatGPT handles sensitive topics like mental health, self-harm, and eating disorders. Instead of offering questionable advice or—worse—going all Clippy meets Freud, the bot now responds with actual empathy, grounded information, and contact details for real-world crisis support.
The update affects both free and paid versions of ChatGPT using GPT-4 Turbo. And it’s not just a patch job—they’ve built a new system specifically for high-risk conversations. More importantly, they brought in mental health experts to help shape it. You know, actual humans who understand other humans—a wild concept in tech.
Now, when someone says they’re not OK, ChatGPT doesn’t launch into a motivational quote or change the subject. It slows down, listens (as much as an AI can), and provides options that don’t involve typing “talk to someone” and hoping for the best.
To be fair, it’s a strong move. It signals a shift from “We’re just the platform” to “We’re responsible for how this thing behaves.” And it raises the bar for every other tech company dabbling in mental health-adjacent AI.
That said, let’s not get carried away. This isn’t AI therapy, nor should it be. It’s crisis-aware response design – finally.
The tech may now sound more caring, but the same questions still apply:
- Who decides what’s safe or sensitive?
- How is the training data selected?
- Where’s the line between helpful and harmful?
It’s a promising update. But just like mental health, this work is never really ‘done’. And while GPTs can now speak in softer tones, trust is still earned the old-fashioned way—transparently, slowly, and ideally without hallucinating your local helpline.
You can read OpenAI’s announcement here.



