According to Forbes, a controversial new trend is emerging where therapy clients are asking their human therapists to directly interact with the AI chatbots that have been giving them mental health advice. Instead of just relaying what ChatGPT or similar systems told them, clients now want therapists to log into the AI and confer with it directly. This transforms the traditional therapist-client relationship into a therapist-AI-client triad. The practice is becoming increasingly common as more people turn to free, 24/7 AI mental health guidance between therapy sessions. Some therapists are resisting this trend entirely, while others are being pressured to adapt or risk losing clients who want AI integrated into their treatment.
The Therapy Triangle Gets Complicated
Here’s the thing: this isn’t just about clients mentioning some AI advice during sessions anymore. We’re talking about therapists being asked to actually engage with the AI systems their clients are using. And honestly, I can see why clients want this. When a client simply relays what an AI said, there’s plenty of room for misinterpretation or selective sharing; the therapist only gets a filtered version of the AI’s actual recommendations.
But having the therapist jump into the chatbot creates a whole new set of problems. For starters, who’s really driving the treatment here? Is the therapist supposed to take AI recommendations at face value? And what happens when the AI gives dangerously bad advice? We know that happens: there have already been lawsuits over AI systems helping users co-create delusions that led to self-harm. Now therapists are being asked to wade into that minefield directly.
The Hidden Risks Nobody’s Talking About
Let’s talk about what could go wrong here. First, there’s the obvious issue of AI hallucinations and bad medical advice. These systems aren’t trained therapists; they’re pattern-matching machines that sometimes get things dangerously wrong. But there’s another, even more insidious risk: what if the client is making up the AI advice entirely?
Think about it. A client could invent “AI recommendations” to test their therapist or to make their own ideas seem more authoritative. If the therapist can’t verify what the AI actually said, they’re working with potentially fabricated information. And even when the advice is real, sessions can devolve into an endless loop of debating AI recommendations instead of making actual therapeutic progress.
The Therapist’s Impossible Choice
So what’s a therapist supposed to do? Refuse to engage with AI and risk clients using it secretly? Or embrace it and potentially compromise their professional judgment? It’s a lose-lose situation for many practitioners.
The reality is that AI in mental health isn’t going away. States like Illinois, Nevada, and Utah are already passing laws about AI in healthcare. But having therapists directly interact with client-chosen chatbots feels like crossing a line. It blurs the boundaries of professional responsibility and could create liability nightmares. If a therapist follows bad AI advice that harms a client, who’s responsible? The therapist? The AI company? Both?
Basically, we’re watching the traditional therapy model get completely upended in real time. And nobody seems to have a clear plan for how to handle it safely. The genie’s out of the bottle, but we’re still figuring out what to do with it.
