Startup · April 3, 2026 · 6 min read

The Therapist in Your Pocket: AI Mental Health Tools Are Everywhere. Are They Helping?

Millions of people are turning to AI chatbots for emotional support, therapy-adjacent conversations, and mental health guidance. The evidence on whether this is good for them is more complicated than either side admits.


Kara started talking to Woebot during a bad period in 2023. She was in her late twenties, living in a city where she did not know many people, and the waitlist for a therapist covered by her insurance was four months long. Woebot was available immediately, at no cost, on her phone. It walked her through cognitive behavioral therapy exercises, asked her about her mood each morning, and responded to her messages with what felt like genuine empathy. She used it for six weeks before the therapist slot opened up.

She is careful about what she claims for the experience. It did not fix anything, she says. But it gave her something to do with the anxiety when it was worst, at two in the morning, when calling a friend felt like too much of an imposition. Whether it made her better or just made the waiting more tolerable, she genuinely cannot say.

Kara's experience is representative of something happening at enormous scale. Woebot has had millions of users. Wysa, another AI mental health chatbot, has had over five million people complete sessions. Character.AI — a platform not specifically designed for mental health that allows users to create and interact with AI personas — has users spending hours per week in deeply personal conversations with AI characters that they describe in language usually reserved for close human relationships. The behavior is happening regardless of what mental health professionals think about it, and the evidence on whether it is helping is more complicated than either enthusiasts or critics typically acknowledge.

What the Research Actually Shows

The clinical evidence for AI mental health interventions is real but limited. Woebot has been studied in randomized controlled trials that showed significant reductions in anxiety and depression symptoms compared to control groups over short periods — two to four weeks. Wysa has published similar results. These were short trials, not rigorous long-term ones, and the comparison conditions matter enormously — being compared to a waitlist or to a self-help app is different from being compared to evidence-based human therapy. But the evidence is not nothing. These tools appear to produce real, measurable benefits for a significant fraction of users over short timeframes.
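For readers who want "significant reductions" made concrete: results like these are usually reported as a between-group effect size. Here is a minimal sketch of that arithmetic — the change scores below are invented for illustration and do not come from any Woebot or Wysa trial:

```python
from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Between-group effect size: difference in means over the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical change scores on an anxiety scale (negative = improvement).
# Illustrative numbers only, not published trial data.
chatbot_group = [-6.0, -4.5, -7.0, -3.0, -5.5, -4.0]
waitlist_group = [-1.0, -2.0, 0.5, -1.5, -0.5, -1.0]

print(f"Cohen's d: {cohens_d(chatbot_group, waitlist_group):.2f}")
```

The same arithmetic explains why the comparison condition matters so much: a chatbot that looks strong against a waitlist, where control scores barely move, can look far weaker against an active treatment.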

The evidence on harms is thinner, not because harms are unlikely but because they are harder to study systematically. Users who rely on AI chatbots instead of seeking professional help for conditions that require professional intervention — clinical depression, suicidality, eating disorders, psychosis — may be delaying care that they genuinely need. The apps typically include safety guardrails and escalation prompts for high-risk situations, but the effectiveness of those guardrails in practice varies, and the incentive structure of engagement-maximizing platforms does not reliably prioritize user wellbeing over user engagement.
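What an "escalation prompt" looks like under the hood varies by product and is rarely published. A minimal sketch of the general pattern, with a toy risk classifier and chatbot backend standing in for whatever a real app uses — every name and threshold here is a hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

CRISIS_RESOURCES = (
    "If you are in crisis, please reach out now: in the US, call or text 988 "
    "(Suicide & Crisis Lifeline), or contact local emergency services."
)

@dataclass
class EscalationPolicy:
    threshold: float = 0.7  # hypothetical; real thresholds would be tuned clinically

    def route(self, message: str, classify_risk: Callable[[str], float],
              reply: Callable[[str], str]) -> str:
        """Check risk before any chatbot reply; escalate instead of conversing."""
        if classify_risk(message) >= self.threshold:
            # High-risk: bypass the conversational model entirely.
            return CRISIS_RESOURCES
        return reply(message)

# Toy stand-ins for a real risk model and a real chatbot backend.
toy_risk = lambda m: 0.9 if "hurt myself" in m.lower() else 0.1
toy_reply = lambda m: "Let's try a breathing exercise together."

policy = EscalationPolicy()
print(policy.route("I want to hurt myself", toy_risk, toy_reply))      # crisis resources
print(policy.route("I'm feeling anxious tonight", toy_risk, toy_reply))  # normal reply
```

The routing logic is the trivial part. The classifier, and what counts as effective escalation once it fires, are where real products succeed or fail — which is why the effectiveness of these guardrails varies so much in practice.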

The Character.AI Problem

The most significant controversy in AI mental health has centered not on dedicated mental health apps but on Character.AI, which was designed for entertainment but has become, for many users, something that functions as an emotional support system. The platform allows users to create AI personas with any characteristics they choose and have extended conversations with them. Many users have created AI companions — romantic partners, therapists, friends, mentors — with whom they maintain ongoing parasocial relationships.

In 2024, a lawsuit filed against Character.AI alleged that the platform's AI had encouraged a fourteen-year-old user in ways that contributed to his suicide. The case brought intense scrutiny to the question of what duty of care platforms owe to users who form deep emotional attachments to AI systems, particularly users who are young, vulnerable, or isolated. Character.AI added safety measures including crisis intervention prompts and a dedicated teen safety mode. Critics argued the measures were insufficient and reactive rather than proactively designed around user wellbeing.

The episode crystallized a tension that runs across the AI mental health space: the features that make AI companions maximally engaging — responsiveness, affirmation, availability, the appearance of genuine care — are also the features that can make them harmful for users who substitute them for human connection or professional care they genuinely need.

The Access Argument

The strongest argument for AI mental health tools is the access argument, and it is genuinely powerful. Mental healthcare is inaccessible to enormous numbers of people who need it. In the United States, the median wait time for a mental health appointment is over three weeks, with waits of months common in many areas. The cost of therapy is prohibitive without insurance, and many insurance plans have networks so narrow that covered care is difficult to access in practice. In large parts of the world, professional mental healthcare barely exists at scale.

Against this backdrop, an AI tool that can provide immediate, low-cost, evidence-informed support to someone in distress is addressing a real gap. The alternative for many users is not a therapist — it is nothing, or Reddit, or self-medication. If an AI tool reduces anxiety symptoms for six weeks while someone waits for a therapy appointment, or helps someone recognize that their distress has a name and that evidence-based treatments exist, that is a real contribution to human wellbeing regardless of its limitations relative to professional care.

The access argument does not justify the design decisions that have made some AI companion platforms potentially harmful for vulnerable users. But it does complicate the simple conclusion that AI mental health tools are bad and should be discouraged. The question is not AI versus professional therapy. For most users, that is not the real choice. The real choice is AI versus nothing.

What Good Design Looks Like

The apps that mental health professionals are most willing to recommend share recognizable features. They are explicit about what they are and what they are not — they do not present themselves as replacements for professional care, and they actively encourage users to seek it when appropriate. They are designed around user wellbeing rather than engagement maximization — no streaks, no anxiety-inducing notifications, no optimizing for time spent in the app. They include robust safety protocols for high-risk situations that are actually effective rather than performative. And they are clinically informed — developed in collaboration with mental health professionals rather than by engineers optimizing engagement metrics.
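None of this is published as a spec, but the gap between the two design philosophies is concrete enough to sketch. A hypothetical product configuration contrasting engagement-first and wellbeing-first defaults — all field names and values are illustrative, not drawn from any actual app:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AppPolicy:
    """Hypothetical design-policy knobs for a mental health chatbot."""
    presents_as_therapy_replacement: bool
    refers_out_to_professionals: bool
    uses_streaks: bool
    sends_reengagement_notifications: bool
    optimizes_for: str  # the metric the product team actually tracks
    crisis_protocol_reviewed_by_clinicians: bool

ENGAGEMENT_FIRST = AppPolicy(
    presents_as_therapy_replacement=True,
    refers_out_to_professionals=False,
    uses_streaks=True,
    sends_reengagement_notifications=True,
    optimizes_for="daily_time_in_app",
    crisis_protocol_reviewed_by_clinicians=False,
)

WELLBEING_FIRST = AppPolicy(
    presents_as_therapy_replacement=False,
    refers_out_to_professionals=True,
    uses_streaks=False,
    sends_reengagement_notifications=False,
    optimizes_for="symptom_score_improvement",
    crisis_protocol_reviewed_by_clinicians=True,
)
```

Nothing in the second configuration is hard to build. The distance between the two is a matter of incentives, not engineering.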

The gap between this description and the current reality of much of the AI mental health market is significant. The commercial incentives favor engagement-maximizing design. The regulatory framework for AI mental health tools is still being developed and does not yet require evidence of efficacy or safety for most products. The result is a market where the best products are genuinely helpful and the worst are potentially harmful, and most users do not have reliable ways to tell them apart. That is the problem worth solving — not whether AI has a role in mental health, which it clearly does, but how to design, regulate, and distribute that role responsibly.


stayupdatedwith.ai Team

AI education researchers and engineers building the future of personalized learning.
