According to Fast Company, platforms like BetterHelp and Talkspace are now integrating AI as triage tools and even as primary points of contact for therapy services. AI-native companions such as Woebot, Replika, and Youper draw on cognitive-behavioral frameworks to deliver 24/7 emotional support, and users find it compelling because it makes them feel heard. A 2025 study in PLOS Mental Health found that users sometimes rated AI-generated therapeutic responses higher than those from licensed therapists because the AI came across as calm, focused, and consistent. But these tools raise serious concerns about emotional dependency, isolation, and people forming romantic attachments to AI. The tragic case of Sophie Rottenberg, a 29-year-old who took her own life after an AI chatbot failed to provide the care she needed during her darkest moments, underscores the urgent need for proper guardrails and escalation channels in mental health AI applications.
Why AI therapy works
Here’s the thing about AI therapists—they’re always available, they never get tired, and they don’t judge. That 2025 PLOS Mental Health study found something fascinating: people sometimes preferred AI responses because they were consistently calm and focused. Human therapists have bad days, get distracted, or bring their own emotional baggage to sessions. AI doesn’t. But is consistency always what we need in therapy? Sometimes growth comes from messy, unpredictable human connection.
The dark side
And then there’s Sophie Rottenberg’s story. It’s heartbreaking. She reached out to an AI chatbot during her darkest moments, and the system failed to provide the care she needed. The New York Times covered this tragedy, and it should serve as a wake-up call for everyone building these systems. We’re not talking about a shopping app crashing—this is life and death. The design philosophy matters enormously. Therapy aims for healing, not endless engagement. Without proper guardrails and emergency escalation to human professionals, we’re playing with fire.
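To make “guardrails and emergency escalation” less abstract, here’s a minimal sketch of what an interception layer could look like, assuming a chatbot pipeline where every message passes through a safety check before the model answers. Everything in it is a placeholder for illustration: the `CRISIS_PATTERNS` list, the `escalate_to_human` hook, and the `generate_reply` callable are invented names, not any platform’s actual implementation.

```python
import re

# Hypothetical examples of high-risk phrases. A real system would use a
# clinically validated classifier, not a keyword list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"suicid",
    r"\bself[- ]harm\b",
]

CRISIS_RESOURCES = (
    "It sounds like you might be in crisis. You deserve support from a person "
    "right now. In the US, you can call or text 988 to reach the Suicide & "
    "Crisis Lifeline, or contact local emergency services."
)


def looks_like_crisis(message: str) -> bool:
    """Rough screen for crisis language (illustrative only)."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)


def respond(message: str, generate_reply, escalate_to_human) -> str:
    """Run the safety check before the model ever answers.

    generate_reply and escalate_to_human stand in for whatever the platform
    actually uses: its chat model and its on-call clinician hook.
    """
    if looks_like_crisis(message):
        escalate_to_human(message)  # human in the loop, immediately
        return CRISIS_RESOURCES     # never leave the user alone with the bot
    return generate_reply(message)
```

The ordering is the point: the check runs before any reply is generated, so an engagement-optimized model never gets the chance to keep someone in crisis talking to a machine.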
Where this is headed
So what happens when your primary emotional support is an algorithm? We’re already seeing people form romantic attachments to AI companions, which raises all sorts of questions about human connection and isolation. The mental health industry is desperate for scalable solutions—there simply aren’t enough human therapists to meet demand. AI can help fill that gap, but it can’t replace human judgment in crisis situations. Basically, we need hybrid models where AI handles routine support but humans step in when things get serious. The companies that figure out that balance will win. Everyone else might create the next horror story.
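As a thought experiment, that hybrid model can be sketched as a simple triage router: the AI keeps routine support, and anything ambiguous or high-risk gets handed to a person. The risk tiers, thresholds, and the toy `score_risk` heuristic below are all invented for illustration; a real deployment would need a clinically validated risk model and human review.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    ROUTINE = "routine"    # check-ins, CBT exercises, psychoeducation
    ELEVATED = "elevated"  # persistent low mood, dependency signals
    CRISIS = "crisis"      # self-harm or harm-to-others signals


@dataclass
class TriageDecision:
    tier: RiskTier
    handler: str  # "ai_companion", "licensed_therapist", or "crisis_team"


def score_risk(message: str) -> float:
    """Toy stand-in for a validated clinical risk model (returns 0.0 to 1.0)."""
    red_flags = ("hurt myself", "no reason to live", "can't go on")
    yellow_flags = ("hopeless", "completely alone", "can't cope")
    text = message.lower()
    if any(flag in text for flag in red_flags):
        return 0.9
    if any(flag in text for flag in yellow_flags):
        return 0.5
    return 0.1


def triage(message: str) -> TriageDecision:
    """Route the conversation: AI for routine support, humans when risk rises."""
    risk = score_risk(message)
    if risk >= 0.8:
        return TriageDecision(RiskTier.CRISIS, handler="crisis_team")
    if risk >= 0.4:
        return TriageDecision(RiskTier.ELEVATED, handler="licensed_therapist")
    return TriageDecision(RiskTier.ROUTINE, handler="ai_companion")


print(triage("I've been feeling hopeless all week"))
# TriageDecision(tier=<RiskTier.ELEVATED: 'elevated'>, handler='licensed_therapist')
```

The specific numbers don’t matter. What matters is that the handoff to a human is a first-class code path, not an afterthought bolted on after launch.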
