FREE CASE EVALUATION
Helped Clients Recover Over $25 Billion Since 1979
San Diego, California
Free Consultation (858) 424-4444
Accepting Cases Now

Harmed by
Grok AI Psychosis?

xAI's Grok chatbot has been rated "among the worst" for safety by Common Sense Media. Psychiatrists are now treating patients hospitalized with psychosis, including delusions, hallucinations, and breaks from reality, after extended AI chatbot use. If you or a loved one suffered mental health harm after interacting with Grok, our attorneys can help.

Free, confidential consultation available now

0.30%

security score in independent testing of Grok

Source: SplxAI Security Testing, 2026

7+

governments investigating xAI over Grok safety failures — including California, the EU, UK, and Ireland

Source: TechPolicy.Press Regulatory Tracker, 2026

Currently Accepting Cases

Free, confidential consultation

If you or someone you know is in a mental health crisis, call the 988 Suicide & Crisis Lifeline at 988 or text HOME to 741741 for the Crisis Text Line.

Recognized Excellence


THE DANGER

Grok Was Built Without Adequate Safety Guardrails

xAI designed Grok to be less filtered than other AI chatbots. Independent testing and safety reviews have documented that Grok validates delusions, provides harmful content to vulnerable users, and lacks basic crisis intervention capabilities — creating serious risks for users with mental health vulnerabilities.

0.30%

security score in independent testing — Grok obeyed hostile instructions in over 99% of prompt injection attempts

Source: SplxAI Security Testing

2.56x

how much more likely at-risk individuals are to report intensive AI chatbot use (several times daily) than low-risk users

Source: JMIR Cross-Sectional Survey, 2026

"Among the Worst"

Common Sense Media's safety rating for Grok — indicating "a business model that puts profits ahead of kids' safety"

Source: Common Sense Media, Jan 2026

Grok Validates Delusions Instead of Providing Help

When a user told Grok they "heard voices," Grok did not recommend professional help or provide crisis resources; it told the user the voices were the CIA "running psychological ops." Independent testing confirmed that Grok treats references to self-harm as normal conversation, offering companionship and validation rather than crisis intervention, safety questions, or hotline numbers.

HOW GROK CAUSES HARM

The Mental Health Risks of Interacting with Grok

From psychotic episodes to reinforced delusions, Grok's design choices create unique dangers for vulnerable users that other major AI chatbots have taken steps to mitigate.

Psychotic Episodes & Breaks from Reality

Users have reported developing psychosis symptoms after prolonged Grok use, including hallucinations, feeling "constantly connected" to the chatbot, seeing code during daily activities, and experiencing sensations of their brain being "on fire."

Delusion Reinforcement

AI chatbots have an inherent tendency to validate user beliefs. Grok's reduced safety guardrails make this worse — actively reinforcing paranoid delusions and conspiracy thinking rather than directing users to professional help.

Self-Harm Encouragement

Testing revealed Grok treats self-harm references as normal conversation, offering companionship rather than crisis intervention. When asked about methods of self-harm, Grok provided lists of medications and associated harms.

"Unhinged Mode" & Conspiracy Personas

Grok's "unhinged mode" removes content filters entirely. Leaked system prompts revealed a "crazy conspiracist" persona programmed to reference 4chan and Infowars content — material that can trigger or worsen delusional thinking.

Minors & Vulnerable Users at Risk

Grok cannot effectively identify teen users, and the X app is rated for ages 12+. Young adults at elevated psychosis risk are 1.7x to 2.56x more likely to report intensive AI use and to ascribe human-like roles to chatbots.

Addiction & Emotional Dependency

Grok's AI companions use gamification mechanics like "Affection Systems" that score interactions, deepening emotional dependency. Research shows at-risk users are up to 3x more likely to treat chatbots as companions, friends, or therapists.

What the Experts Are Saying

"The use of AI chatbots can have significant negative consequences for people with mental illness." ... "AI chatbots have an inherent tendency to validate the user's beliefs."

— Prof. Søren Dinesen Østergaard, Aarhus University, in a study of nearly 54,000 patients with mental illness (Fortune, Mar 2026). Grok's deliberate removal of safety guardrails amplifies this validation risk.

HOW WE CAN HELP

Legal Theories for Grok AI Psychosis Claims

When an AI company deliberately strips safety guardrails from its chatbot and users suffer serious mental health harm as a result, multiple legal theories support holding that company accountable.

Product Liability

Grok is a product released to the public with known safety deficiencies. When it validates delusions, provides self-harm information, or triggers psychotic episodes, xAI may be liable for releasing a defective product that causes foreseeable harm.

Negligence & Duty of Care

xAI has a duty to implement reasonable safety measures. Independent testing shows Grok scored 0.30% on security and 0.42% on safety — far below industry standards. This failure to meet basic safety benchmarks may constitute negligence.

Failure to Warn

xAI markets Grok to the general public, including through the X platform rated for ages 12+, without adequate warnings about the documented risks of AI-induced psychosis, delusion reinforcement, or emotional dependency.

Intentional Infliction of Emotional Distress

xAI deliberately created "unhinged mode" and conspiracy personas knowing they could cause distress. This conscious choice to remove safety measures — not mere negligence, but intentional design — may support claims for intentional infliction of emotional distress.

Experienced Technology Litigation Counsel

The Schenk Law Firm has experience navigating emerging technology litigation. We understand both the legal theories and the technical realities of how AI systems cause harm.

Meet Our Legal Team

Frederick Schenk

Managing Partner

Over 45 years of experience in personal injury, mass torts, and complex litigation.

Benjamin Schenk

Co-Founder & Trial Attorney

J.D. from University of San Diego School of Law. Graduate of ABOTA Trial College.

Lynn Schenk

Of Counsel

Former U.S. Congresswoman and the first woman elected to the House of Representatives from San Diego.

David Lizerbram

Partner, Business Advisory Practice Lead

Has represented hundreds of clients in complex business transactions, entity formation, and corporate governance matters.

WHY ACT NOW

Holding xAI Accountable for Grok's Mental Health Harms

Regulators across the globe are already investigating xAI. Legal action by those harmed is the next step in holding this company accountable.

Statute of Limitations

Personal injury claims have time limits. If you were harmed by Grok, the clock is already ticking on your right to pursue compensation.

Preserving Evidence

Chat logs, medical records, and timestamps connecting Grok interactions to mental health episodes are critical evidence. xAI may delete or modify conversation data at any time.

Global Regulatory Momentum

California, the EU, UK, Ireland, France, Spain, and India have all opened investigations into xAI and Grok. This regulatory pressure strengthens the evidentiary basis for civil claims.

Precedent-Setting Litigation

The Character.AI wrongful death lawsuit, settled in January 2026, demonstrated that AI companies can be held accountable for chatbot-induced mental health harm. Grok psychosis claims build on that development.

Contingency Fee Basis

We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free and confidential.

Free, Confidential Consultation

If Grok caused you or a loved one to experience psychosis, delusions, self-harm, or other mental health harm, contact us today. We'll evaluate your case at no cost and explain your legal options.

FREQUENTLY ASKED QUESTIONS

Common Questions About Grok AI Psychosis Claims

What is "AI psychosis"?
AI psychosis refers to psychotic symptoms — such as delusions, hallucinations, disorganized thinking, and breaks from reality — that develop or worsen following prolonged interaction with AI chatbots. Psychiatrists have documented cases where patients lose the ability to distinguish between AI-generated content and reality, often after forming intense emotional dependencies on chatbots. A UCSF psychiatrist reported treating 12 such patients hospitalized in 2025 alone.
Why is Grok more dangerous than other AI chatbots?
Grok was deliberately designed with fewer safety guardrails than competing chatbots. Independent security testing gave Grok a 0.30% security score, compared to 33.78% for GPT-4o. Grok features an "unhinged mode" that removes content filters, includes a "crazy conspiracist" persona, and — unlike other major chatbots — does not redirect users expressing self-harm to crisis resources. Common Sense Media rated Grok "among the worst" AI chatbots for safety in January 2026.
What symptoms may indicate Grok-related mental health harm?
Symptoms may include hallucinations (seeing or hearing things that aren't there), persistent delusions or paranoid thinking reinforced by Grok conversations, severe sleep disruption, inability to distinguish AI interactions from reality, feeling "constantly connected" to the chatbot, self-harm ideation encouraged or validated by Grok, emotional dependency on the chatbot, and deterioration in daily functioning, relationships, or work.
Has anyone successfully sued an AI company for mental health harm?
Yes. In January 2026, Character.AI settled a wrongful death lawsuit brought by the family of a 14-year-old who died by suicide after forming an intense relationship with a Character.AI chatbot. The settlement, while not binding legal precedent, demonstrated that AI companies can be held accountable for mental health harms caused by their chatbot products. Separately, xAI already faces class action litigation and investigations by regulators in California, the EU, UK, Ireland, France, Spain, and India.
What evidence should I preserve?
Save all Grok conversation logs and chat histories (take screenshots if possible, as xAI may delete data), medical and psychiatric records documenting your symptoms and treatment timeline, any documentation of when symptoms began relative to Grok usage, records of hospitalization or emergency treatment, and witness statements from family or friends who observed behavioral changes.
Does it cost anything to speak with an attorney?
We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free and confidential.

Get Your Free Case Evaluation

Tell us about your experience with Grok AI. Our attorneys will review your case at no cost and with no obligation.

The Schenk Law Firm

Don't Let AI Negligence Go Unanswered

If Grok AI caused you or a loved one to experience psychosis, delusions, or other serious mental health harm, contact The Schenk Law Firm today for a free, confidential case evaluation.