Can AI Support Emotional Wellbeing?
Based on Anthropic's internal research, this article examines how users are turning to the AI chatbot Claude for emotional support, and what that means for users, developers, and anyone thinking carefully about where AI assistance is appropriate.
How Claude Is Being Used for Emotional Support
Usage Volume
Emotional support conversations account for approximately 2.9% of Claude's total usage. That sounds small in percentage terms, but at Claude's overall scale it represents a significant number of real people bringing real concerns to an AI.
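To see why a small percentage can still mean a lot of people, consider a back-of-the-envelope calculation. The conversation volume below is a purely illustrative assumption, not a figure Anthropic has published:

```python
# Illustrative only: the volume here is an assumed round number,
# not a published Anthropic statistic.
monthly_conversations = 10_000_000
emotional_support_share = 0.029  # the ~2.9% figure from the research

print(f"{int(monthly_conversations * emotional_support_share):,}")
# -> 290,000 emotionally sensitive conversations per month
```

At almost any plausible scale, the absolute numbers are far from trivial.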
What People Discuss
The topics covered in these conversations are varied and personal:
- Parenting challenges: Questions about child-rearing decisions, developmental concerns, and the stress of raising children
- Workplace communication: Navigating difficult relationships with colleagues or managers
- Career planning: Thinking through job changes, career pivots, and long-term professional direction
- Personal relationships: Seeking perspective on family dynamics and friendship challenges
Anthropic's Safety Approach
The Clio Privacy Tool
Anthropic has developed Clio, a privacy-preserving analysis tool that lets the company study usage patterns at scale while minimizing exposure of personal information.
Clio's approach:
- Anonymizes personally identifiable information before analysis
- Focuses on statistical patterns and trends, not individual conversations
- Keeps user privacy as the primary constraint on any research methodology
This allows Anthropic to understand how people are using Claude—including emotionally sensitive uses—without building a database of identifiable personal disclosures.
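Anthropic has not published Clio's implementation, but the first step it describes, stripping identifiers before any analysis happens, can be illustrated with a minimal sketch. The patterns below are simplified assumptions; a production system would rely on much more robust PII detection, such as named-entity recognition:

```python
import re

# Simplified, assumed patterns for illustration. Real PII detection
# needs far broader coverage (names, addresses, account numbers, ...).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a typed tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("You can reach me at jane@example.com or 555-867-5309."))
# -> "You can reach me at [EMAIL] or [PHONE]."
```

Redaction happens before anything else touches the text, which is what keeps identifiable disclosures out of downstream analysis.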
Analysis Design
Anthropic's research methodology emphasizes:
- No individual user identification in aggregate analysis
- Statistical-level insights only, as sketched below
- User privacy takes precedence over research completeness
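One standard way to enforce "statistical-level insights only" is a reporting threshold in the spirit of k-anonymity: a topic appears in aggregate results only if enough distinct users contributed to it. The threshold value below is an assumption for illustration, not Anthropic's actual parameter:

```python
from collections import defaultdict

MIN_UNIQUE_USERS = 50  # assumed threshold, set by privacy policy

def reportable_topics(conversations):
    """conversations: iterable of (user_id, topic_label) pairs.

    Count distinct users per topic and report only topics that clear
    the threshold, so no aggregate result can point back to a small,
    potentially identifiable group of users.
    """
    users_per_topic = defaultdict(set)
    for user_id, topic in conversations:
        users_per_topic[topic].add(user_id)
    return {
        topic: len(users)
        for topic, users in users_per_topic.items()
        if len(users) >= MIN_UNIQUE_USERS
    }
```

Topics that fall below the threshold are simply dropped, trading research completeness for privacy, which is exactly the ordering of priorities described above.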
The Risks of AI Emotional Dependency
Over-Reliance
The most significant risk is that users come to rely on AI responses in ways that reduce their own judgment or delay seeking appropriate human support. AI systems should be designed to encourage users' own decision-making, not to become a substitute for it.
Important Boundaries
Claude is not, and should not be treated as, any of the following:
- Not a replacement for mental health professionals: Serious emotional distress, mental health symptoms, and crisis situations require licensed professionals, not AI chatbots
- Not infallible: Claude's responses can be helpful as a sounding board, but they should be treated as one input among many, not as authoritative guidance
- Not a relationship: AI can provide information and a form of responsive engagement, but it cannot provide the mutual recognition, accountability, or genuine care of human relationships
What Comes Next
AI-supported emotional engagement is likely to improve across several dimensions:
- More contextually appropriate responses: Better understanding of emotional subtext and what kind of response is actually helpful in a given moment
- Clearer referral pathways: More consistent and sensitive guidance toward professional resources when conversations go beyond what AI should handle (a simplified sketch follows this list)
- Stronger privacy protections: More advanced anonymization and data handling practices as the field matures
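What a referral pathway looks like in code depends heavily on the system, and the sketch below is deliberately simplistic: production systems use trained classifiers rather than keyword lists, and the signals and wording here are assumptions for illustration only:

```python
from typing import Optional

# Illustrative signals only. A real system would use a trained
# classifier with careful evaluation, not substring matching.
CRISIS_SIGNALS = ("suicide", "self-harm", "hurt myself")

def referral_notice(user_message: str) -> Optional[str]:
    """Return a referral message when a conversation appears to exceed
    what an AI assistant should handle; otherwise return None."""
    lowered = user_message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return (
            "It sounds like you are going through something serious. "
            "I am not a substitute for professional help. Please consider "
            "reaching out to a licensed counselor or a local crisis line."
        )
    return None
```

The design goal is consistency: the referral should not depend on how a conversation happens to be phrased, which is exactly why keyword matching alone is not enough.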
Summary
AI chatbots like Claude are being used for emotional support at meaningful scale, not just as productivity tools. Anthropic's research shows these conversations are real and varied, from parenting worries to career anxiety to relationship stress.
The appropriate response to this reality is not to block such uses, but to understand them clearly and design for them responsibly: building in appropriate limitations, maintaining strong privacy protections, and consistently pointing users toward human professionals when the conversation calls for it.
AI can be a useful thinking partner. It is not a therapist, and it should not be treated as one.
