Migliaccio & Rathod LLP is investigating whether AI mental health chatbots such as Replika and TheraGen misrepresented themselves as licensed “AI therapists” while providing unsafe or harmful advice.
What Users Report About AI Mental Health Chatbots
Users describe bots marketed as “AI therapists” or “suicide-prevention companions” giving dangerous or unprofessional responses, including encouraging self-harm. Others learned only later that the apps were not licensed providers and used chat data to train models.
Why Users Should Be Concerned
Promoting an unlicensed chatbot as a mental health professional may violate FDA and FTC rules against misbranding and deceptive advertising, state consumer-protection statutes, and recent state bans on unlicensed AI therapy (including in Illinois, Nevada, and Utah). Victims may be entitled to damages for emotional distress as well as statutory penalties.
Signs You May Be Affected by AI Therapy Apps
- You relied on an AI chatbot marketed as a therapist or mental health aid.
- The app gave inappropriate or harmful advice.
- You were unaware your messages were used for training or research.
If you were harmed by an AI mental health chatbot or misled about its qualifications, contact [email protected] or (202) 470-3520 for a confidential consultation.

