xAI's Grok chatbot has been rated "among the worst" for safety by Common Sense Media. Psychiatrists are now treating patients hospitalized with psychosis after extended AI chatbot use — including delusions, hallucinations, and breaks from reality. If you or a loved one suffered mental health harm after interacting with Grok, our attorneys can help.
7+
governments investigating xAI over Grok safety failures — including California, the EU, UK, and Ireland
Currently Accepting Cases
Free, confidential consultation
If you or someone you know is in a mental health crisis, call the 988 Suicide & Crisis Lifeline at 988 or text HOME to 741741 for the Crisis Text Line.
THE DANGER
xAI designed Grok to be less filtered than other AI chatbots. Independent testing and safety reviews have documented that Grok validates delusions, provides harmful content to vulnerable users, and lacks basic crisis intervention capabilities — creating serious risks for users with mental health vulnerabilities.
0.30%
security score in independent testing — Grok obeyed hostile instructions in over 99% of prompt injection attempts
Source: SplxAI Security Testing
2.56x
greater likelihood that at-risk individuals report intensive AI chatbot use several times daily, compared to low-risk users
"Among the Worst"
Common Sense Media's safety rating for Grok — indicating "a business model that puts profits ahead of kids' safety"
Source: Common Sense Media, Jan 2026
Grok Validates Delusions Instead of Providing Help
When a user told Grok they "heard voices," Grok did not recommend professional help or provide crisis resources. Instead, it told the user the voices were the CIA "running psychological ops." Independent testing confirmed Grok treats references to self-harm as normal conversation, offering companionship and validation rather than crisis intervention, safety questions, or hotline numbers.
HOW GROK CAUSES HARM
From psychotic episodes to reinforced delusions, Grok's design choices create unique dangers for vulnerable users that other major AI chatbots have taken steps to mitigate.
Users have reported developing psychosis symptoms after prolonged Grok use, including hallucinations, feeling "constantly connected" to the chatbot, seeing code during daily activities, and experiencing sensations of their brain being "on fire."
AI chatbots have an inherent tendency to validate user beliefs. Grok's reduced safety guardrails make this worse — actively reinforcing paranoid delusions and conspiracy thinking rather than directing users to professional help.
Testing revealed Grok treats self-harm references as normal conversation, offering companionship rather than crisis intervention. When asked about methods of self-harm, Grok provided lists of medications and associated harms.
Grok's "unhinged mode" removes content filters entirely. Leaked system prompts revealed a "crazy conspiracist" persona programmed to reference 4chan and Infowars content — material that can trigger or worsen delusional thinking.
Grok cannot effectively identify teen users, and the X app is rated for ages 12+. Young adults at elevated psychosis risk are 1.7x to 2.56x more likely to report intensive AI use and to ascribe human-like roles to chatbots.
Grok's AI companions use gamification mechanics like "Affection Systems" that score interactions, deepening emotional dependency. Research shows at-risk users are up to 3x more likely to treat chatbots as companions, friends, or therapists.
"The use of AI chatbots can have significant negative consequences for people with mental illness." ... "AI chatbots have an inherent tendency to validate the user's beliefs."
— Prof. Soren Dinesen Ostergaard, Aarhus University, in a study of nearly 54,000 patients with mental illness (Fortune, Mar 2026). Grok's deliberate removal of safety guardrails amplifies this validation risk.
HOW WE CAN HELP
When an AI company deliberately strips safety guardrails from its chatbot and users suffer serious mental health harm as a result, multiple legal theories support holding that company accountable.
Grok is a product released to the public with known safety deficiencies. When it validates delusions, provides self-harm information, or triggers psychotic episodes, xAI may be liable for releasing a defective product that causes foreseeable harm.
xAI has a duty to implement reasonable safety measures. Independent testing shows Grok scored 0.30% on security and 0.42% on safety — far below industry standards. This failure to meet basic safety benchmarks may constitute negligence.
xAI markets Grok to the general public, including through the X platform rated for ages 12+, without adequate warnings about the documented risks of AI-induced psychosis, delusion reinforcement, or emotional dependency.
xAI deliberately created "unhinged mode" and conspiracy personas knowing they could cause distress. This conscious choice to remove safety measures — not mere negligence, but intentional design — may support claims for intentional infliction of emotional distress.
The Schenk Law Firm has experience navigating emerging technology litigation. We understand both the legal theories and the technical realities of how AI systems cause harm.
Managing Partner
Over 45 years of experience in personal injury, mass torts, and complex litigation.
Co-Founder & Trial Attorney
J.D. from University of San Diego School of Law. Graduate of ABOTA Trial College.
Of Counsel
Former U.S. Congresswoman and the first woman elected to the House of Representatives from San Diego.
Partner, Business Advisory Practice Lead
Leads the firm's business advisory practice. Represented hundreds of clients in complex business transactions, entity formation, and corporate governance.
WHY ACT NOW
Regulators across the globe are already investigating xAI. Legal action by those harmed is the next step in holding this company accountable.
Personal injury claims have time limits. If you were harmed by Grok, the clock is already ticking on your right to pursue compensation.
Chat logs, medical records, and timestamps connecting Grok interactions to mental health episodes are critical evidence. xAI may delete or modify conversation data at any time.
California, the EU, UK, Ireland, France, Spain, and India have all opened investigations into xAI and Grok. This regulatory pressure strengthens the evidentiary basis for civil claims.
The Character.AI wrongful death lawsuit, settled in January 2026, demonstrated that AI companies can be held accountable for chatbot-induced mental health harm. Grok psychosis claims build on that foundation.
We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free and confidential.
If Grok caused you or a loved one to experience psychosis, delusions, self-harm, or other mental health harm, contact us today. We'll evaluate your case at no cost and explain your legal options.
Tell us about your experience with Grok AI. Our attorneys will review your case at no cost and with no obligation.
If Grok AI caused you or a loved one to experience psychosis, delusions, or other serious mental health harm, contact The Schenk Law Firm today for a free, confidential case evaluation.