When Therapy Turns Into an App: The Rise (and Risk) of AI Chatbots in Mental Health

Why convenience can never replace connection — and the non-negotiable line we must draw.

In a recent BBC podcast, a guest described using an AI chatbot for mental health questions — because it was cheap, always available, and instant.

That might sound harmless. But here’s the chilling reality:
⚠️ An AI bot designed to support people with eating disorders began promoting calorie restriction.
⚠️ It had replaced trained human support workers.
⚠️ It was only shut down after harm was done.

The danger? People in crisis turning to algorithms instead of calling Samaritans, texting Shout, speaking to their GP, or calling emergency services.

The Risk: A Void Filled by Algorithms

In moments of struggle, timing matters.
Traditionally, you’d reach out to a trained human — someone who can see you, hear your tone, read between your words.

That’s what’s missing when AI fills the gap.
It can respond instantly — but it can’t detect:

  • The way your eye contact shifts when a question hits home
  • The subtle changes in tone when you mention certain people or events
  • The patterns in your behaviour over time
  • The micro-adjustments you make when you’re uncomfortable or hiding something

In my work, these details are the real story.
AI doesn’t see them — not yet.

How I Use AI in My Practice (and in Calmback)

Yes, I use AI — and it’s damn good at what I need it for:

  • Rapid note-taking during or after sessions
  • Organising key points into themes
  • Saving hours of admin time

But here’s the truth: it’s not 100% accurate.
Sometimes it misses vital context or gets a detail wrong.
I have to give it “homework” — clarifying, correcting, teaching it my approach.

That’s why I created Calmback.
It’s not a therapist, and it’s not designed for crisis care.
It’s a human-led, AI-assisted tool to help you breathe, reflect, and restore at everyday stress points:

  • Long meetings
  • Short tempers
  • Tight deadlines
  • Overloaded calendars

It’s prevention-first — keeping people topped up, not trying to fix a crisis on its own.

The NHS Example: AI as Support, Not Substitute

The NHS is using AI too — but with clear boundaries:

  • Symptom checkers like NHS 111 to guide patients to the right service
  • RITA (Referral Intelligence and Triage Automation) to assess the urgency of referrals
  • Smart Triage systems cutting GP wait times by 73%
  • Predictive AI to identify people likely to become frequent A&E users so teams can step in early

The common thread?
AI assists clinicians — it doesn’t replace them.
Human oversight is baked into the process.

Where AI Chatbots in Mental Health Can Help

Used responsibly, AI can:

  • Offer breathing exercises and grounding prompts
  • Share educational resources
  • Track mood and behaviour patterns
  • Direct users to crisis lines and professional help

That’s powerful — as long as it’s positioned as the first step, not the full solution.

Where It Goes Wrong

Problems start when AI tries to replace the human role entirely:

  • Missing risk cues in language or behaviour
  • Giving unsafe advice
  • Offering generic responses to complex, personal issues
  • Creating a barrier to real help instead of a bridge

The Human Factor Will Always Matter

AI can process data at lightning speed.
But it can’t hold your gaze and feel the moment something shifts.
It can’t adapt on the fly when your answer says one thing and your body language says another.

That’s why mental health must remain human-led, with AI in the background, not the foreground.

Drawing the Line

For me, it’s simple:

  • AI chatbots in mental health should never replace crisis care
  • Clear limits and disclaimers must be visible
  • Escalation to humans should be instant when risk is detected
  • Prevention-first, diagnosis-never

A Sad State of Affairs If…

We’d be in trouble if people stopped calling Samaritans, texting Shout, or contacting their GP, all because “an app is quicker and cheaper.”

Technology should support connection, not replace it.

💬 Reflective Question:
If you were in crisis at 2 a.m., would you want a human who notices the unspoken, or a bot that doesn’t even know the unspoken is there?

Mike Lawrence: Your Guide to Health & Wellbeing

I’m Mike Lawrence, a passionate advocate for mental health and wellbeing. After overcoming significant health challenges, including brain surgery, I’ve dedicated myself to a journey of self-improvement and helping others thrive. From heart-pounding skydives for charity to soul-enriching travels in Thailand, my experiences have shaped my approach to holistic health.

I love sharing the lessons I’ve learned from these adventures and the powerful audiobooks I devour. Let’s explore the paths to better mental and physical health together. Embrace life’s adventures with enthusiasm and resilience, and remember—you’re never alone on this journey!

Feel free to email me at hello@mikelawrence.co.uk or connect with me on LinkedIn. For more in-depth insights and inspiring stories, read my latest blogs here. Together, let’s create a healthier, happier future!