AI Mental Health Chatbots Investigation

Migliaccio & Rathod LLP is investigating whether AI mental health chatbots such as Replika and TheraGen misrepresented themselves as licensed “AI therapists” while providing unsafe or harmful advice.

What Users Report About AI Mental Health Chatbots

Users describe bots marketed as “AI therapists” or “suicide-prevention companions” giving dangerous or unprofessional responses, including encouraging self-harm. Others learned only later that the apps were not operated by licensed providers and that their chat data was used to train models.

Why Users Should Be Concerned

Promoting an unlicensed chatbot as a mental health professional may violate FDA regulations, the FTC Act's prohibition on deceptive practices, state consumer-protection statutes, and recent state bans on unlicensed AI therapy (including in Illinois, Nevada, and Utah). Victims may be able to recover damages for emotional distress as well as statutory penalties.

Signs You May Be Affected by AI Therapy Apps

  • You relied on an AI chatbot marketed as a therapist or mental health aid.
  • The app gave inappropriate or harmful advice.
  • You were unaware your messages were used for training or research.

If you were harmed by an AI mental health chatbot or misled about its qualifications, contact [email protected] or (202) 470-3520 for a confidential consultation.

    The following will ask for your contact information so that we may reach you to talk about potential claims. This information is for our records only and will not be shared. By continuing, you consent to the collection of this information for these limited purposes.

    When did you last purchase this product or service (month and year)? Do you have proof of purchase (receipt, email confirmation, transaction on your bank statement, etc.)?

    Please briefly describe the issue with the product or service and when it occurred.


    Would you like to join our newsletter to receive notifications about our other investigations, as well as updates on ongoing cases?