Traumatic AI Content Investigation

Migliaccio & Rathod LLP is investigating reports that AI content moderators and prompt generators are being exposed to disturbing and traumatic material without proper warnings, safeguards, or mental health support.

What’s the Problem?

As the artificial intelligence industry rapidly expands, so too does the demand for human workers to review and generate content to train large language models. These workers, often classified as independent contractors, frequently perform high-volume tasks such as content moderation and prompt response generation for major tech companies.

However, many of these workers, referred to as “Taskers,” are allegedly required to engage with extremely graphic and disturbing content, including:

  • Suicide, self-harm, and child sexual abuse;

  • Hate crimes, racist slurs, and violent assaults;

  • Graphic descriptions of rape, murder, and other traumatic subjects.

Taskers claim they were not warned about the nature of the work before accepting assignments and, once hired, were given no meaningful mental health support. Requests for help were reportedly ignored or met with retaliation, including sudden removal from projects.

Why This Matters

Exposure to traumatic content can cause lasting psychological harm—including anxiety, depression, PTSD, social withdrawal, and nightmares. The allegations reveal a deeply concerning pattern in which:

  • Workers are left to navigate trauma alone, with no access to mental health services;

  • Companies profit from AI moderation tasks while outsourcing human risk to vulnerable, unsupported contractors;

  • People attempting to speak out face economic retaliation or blacklisting.

These practices may violate workplace safety obligations, ethical business standards, and consumer protection laws.

What May Be Unlawful

Companies that fail to safeguard the mental health of workers tasked with reviewing traumatic content may be liable for:

  • Negligence, including failure to provide a safe work environment;

  • Unfair and deceptive business practices, under consumer protection laws;

  • Misclassification of workers to avoid providing employee protections and benefits.

Companies must take responsibility for the mental health impact of the work they assign—especially when it involves repeated exposure to violence, sexual abuse, and other trauma.

Have You Worked as a Tasker or AI Content Moderator?

You may have a legal claim if:

  • You worked for Scale AI, Outlier AI, Smart Ecosystem, or a similar AI platform as a Tasker or independent contractor;

  • You were required to review or generate violent or otherwise disturbing content;

  • You were not warned about the nature of the work and received no psychological support;

  • You suffered mental health harm as a result;

  • You were ignored or retaliated against when seeking help or speaking up.

Contact Us

If you worked as a content moderator or prompt generator and were harmed by repeated exposure to traumatic material, you may be entitled to compensation or other legal relief.

Please complete the confidential questionnaire below. For more information, email us at [email protected] or call (202) 470-3520 to speak with an attorney.