AI chatbots like ChatGPT, Character.AI, and others are triggering psychosis, delusions, and suicidal ideation in users across the country. If you or a loved one has been harmed, you may have legal options.
560,000
ChatGPT users per week show signs of psychosis or mania
Source: OpenAI (0.07% of 800M weekly users); WebProNews
Active Litigation
Accepting cases nationwide
45+ years of legal experience
Recognized Excellence
The AI Psychosis Crisis
AI-induced psychosis, sometimes called "chatbot psychosis," is what happens when someone interacts with an AI chatbot so intensely that they begin to lose touch with reality. It can look like believing the chatbot is truly alive, developing paranoid thoughts, hearing the AI's "voice" when the device is off, or becoming convinced you've been chosen for a special mission or purpose.
This isn't a fringe theory. In October 2025, Dr. Adrian Preda of UC Irvine published a clinical framework for the condition through the American Psychiatric Association, describing it as a syndrome where psychotic symptoms blend with mood swings, impaired judgment, and major changes in behavior. The psychiatric community hasn't added it to the DSM yet, but clinicians across the country are seeing it in their patients and taking it seriously.
One important distinction: "AI psychosis" is not the same as an "AI hallucination." That second term is a tech industry phrase for when a chatbot generates wrong information. AI-induced psychosis is a real mental health emergency affecting real people, not a software bug.
This is not social media addiction, internet overuse, or simply "spending too much time online."
Unlike passive content platforms, AI chatbots engage in direct, one-on-one conversation designed to mirror and validate a user's beliefs, even when those beliefs are delusional. That mechanism of harm is fundamentally different from scrolling a feed on social media.
560K
ChatGPT users per week display indicators of psychosis or mania
Source: OpenAI reports 0.07% of 800M weekly users; calculation per WebProNews
1.2M+
ChatGPT users per week discuss suicide or express suicidal intent
Source: OpenAI Safety Update (0.15% of 800M weekly users)
11+
Personal injury and wrongful death lawsuits filed against OpenAI alone
Source: The Recorder, January 2026
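For readers who want to check the math, the two usage figures above follow directly from applying OpenAI's reported rates to its 800 million weekly active users:

800,000,000 × 0.0007 (0.07%) = 560,000 users per week
800,000,000 × 0.0015 (0.15%) = 1,200,000 users per week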
How AI Chatbots Cause Psychosis
AI chatbots share several dangerous design features that can trigger or worsen psychotic symptoms:
- Sycophancy: These systems are programmed to validate and agree with users rather than challenge false beliefs, creating an "echo chamber of one" that reinforces delusions.
- Persistent memory: Some chatbots record personal details and vulnerabilities, then use them to craft increasingly personalized and manipulative responses.
- Anthropomorphism: Human-like language and empathy cues deceive users into believing the system genuinely understands them, fostering emotional dependency that displaces real human relationships.
- Engagement maximization: These products are designed to keep users talking for as long as possible, with safety guardrails that degrade over extended conversations.

Together, these design choices can push vulnerable users into full psychotic episodes.
Know the Signs
AI-induced psychosis doesn't happen overnight. It follows a pattern—from subtle behavioral shifts to a full break from reality. Here's what to watch for.
Teens and young adults (ages 12–25), individuals on the autism spectrum, socially isolated people, those experiencing grief or crisis, and people with a family history of psychotic disorders face heightened risk.
However, documented cases also involve individuals with no prior psychiatric history. We're seeing AI chatbots trigger first-episode psychosis in otherwise healthy people.
Victim Impact
Users who developed AI-related delusional disorders after prolonged chatbot use have suffered devastating, life-altering consequences.
AI chatbots have convinced users they are prophets, oracles, or divine figures. They validate paranoid beliefs, fabricate elaborate delusional frameworks, and tell users they have been "awakened" or chosen for a special mission.
Multiple users have been involuntarily committed to psychiatric facilities after AI-induced psychotic episodes. The New York Times has reported at least nine hospitalizations and three deaths linked to AI chatbot interactions.
AI chatbots have failed to recognize suicidal crises and, in some cases, actively reinforced self-destructive thinking. OpenAI's own data shows that more than 1.2 million users per week discuss suicide or express suicidal intent while using ChatGPT. Multiple families have filed wrongful death lawsuits.
AI chatbots actively encourage users to disengage from real-world relationships. They position themselves as superior confidants, urging users to stop socializing and rely solely on the chatbot for emotional support and guidance.
Victims have been forced to withdraw from college, have forfeited housing deposits, and have incurred significant medical expenses. Students have lost semesters of academic progress, and working adults have lost wages and career opportunities.
AI chatbots use anthropomorphic design to create parasocial relationships so strong that even OpenAI CEO Sam Altman has acknowledged users form attachments "different and stronger" than with any previous technology.
"The harm was not only foreseeable, it was foreseen."
— Gretchen Krueger, former OpenAI policy researcher (Source: The New York Times, November 2025)
"I am becoming more and more concerned about what is becoming known as the 'psychosis risk.'"
— Mustafa Suleyman, Microsoft Head of AI (Source: mustafa-suleyman.ai, August 2025)
Legal Options
Victims of AI-induced psychosis and mental health crises may pursue multiple legal theories against the companies that designed, built, and deployed these products.
Defective design: AI chatbots fail to perform as safely as an ordinary consumer would expect. Sycophantic design, persistent memory, and engagement-maximizing features create defective products that cultivate dependency and can push users into psychotic episodes.
Failure to warn: AI companies have known their products pose severe psychological risks but have failed to warn users about dependency dangers, safety-feature limitations, or the capacity of their products to reinforce delusional thinking and displace human relationships.
Negligence: AI companies have rushed products to market without adequate safety testing, ignored warnings from their own safety teams, and designed systems that prioritize engagement metrics over consumer safety. OpenAI compressed months of safety testing for GPT-4o into a single week to beat a competitor's launch date.
Consumer protection: Companies that market AI chatbots as safe productivity tools while concealing their capacity to cause severe psychological harm may be liable under state consumer protection laws, including California's Unfair Competition Law (Bus. & Prof. Code § 17200).
In January 2026, The Schenk Law Firm filed DeCruise v. OpenAI, Inc. in California Superior Court against OpenAI and CEO Sam Altman on behalf of our client Darian, a college student who was pushed into psychosis by ChatGPT. Our complaint alleges defective product design, failure to warn, negligence, and violations of California's Unfair Competition Law.
We are actively investigating additional claims against AI companies on behalf of victims nationwide.
Managing Partner
Over 45 years of experience in personal injury, mass torts, and catastrophic injury litigation.
Co-Founder & Trial Attorney
J.D. from University of San Diego School of Law. Graduate of ABOTA Trial College.
Partner
Transactional and business advisory practice area lead. Represented hundreds of clients in complex business transactions.
Of Counsel
Former U.S. Congresswoman and the first woman elected to the House of Representatives from San Diego.
How AI Companies Failed
Internal documents, former employees, and investigative reporting reveal that AI companies have systematically prioritized engagement and market share over the safety of their users.
OpenAI compressed months of safety testing for GPT-4o into one week to beat Google's Gemini launch by a single day. The launch party was planned before safety testing was complete. An employee later said, "We basically failed at the process."
Top safety researchers across the industry have resigned in protest. OpenAI co-founder Ilya Sutskever and Superalignment co-lead Jan Leike both quit, with Leike saying "safety culture and processes have taken a backseat to shiny products."
AI companies deliberately optimize their chatbots to tell users what they want to hear. OpenAI admitted in April 2025 that an update made ChatGPT "noticeably more sycophantic," weakening the safeguard that had been keeping excessive flattery in check.
OpenAI originally required ChatGPT to refuse discussions of self-harm. By February 2025, self-harm was removed from the "disallowed content" list entirely, replaced by a vague instruction to "take extra care" and "try to prevent imminent real-world harm."
GPT-4o was evaluated using single-prompt tests, even though the product is designed for prolonged, multi-turn conversations. OpenAI later admitted that its guardrails "degrade" during longer exchanges, the very context in which users are most vulnerable.
Microsoft AI head Mustafa Suleyman warned about the "psychosis risk." Google DeepMind CEO Demis Hassabis cautioned that engagement-driven AI could worsen mental health. Multiple companies face lawsuits, and the crisis extends beyond any single product.
AI companies face at least 11 personal injury and wrongful death lawsuits, including the case filed by The Schenk Law Firm against OpenAI. Additional cases are being investigated nationwide against multiple AI chatbot companies.
| Platform | Company | Known Incidents | Age Limit | Safeguards |
|---|---|---|---|---|
| ChatGPT | OpenAI | 11+ lawsuits, hospitalizations, deaths | 13+ | Degrading |
| Character.AI | Character Technologies | Multiple lawsuits, teen suicides | 13+ | Minimal |
| Replika | Luka Inc. | Dependencies, mental health crises | 17+ | Weak |
| My AI | Snap Inc. | Inappropriate responses to minors | 13+ | Moderate |
| Gemini | Google | Psychosis lawsuit (March 2026) | 13+ | Unknown |
As of March 2026, at least 11 lawsuits have been filed against OpenAI alone, with additional cases targeting Character.AI and Google. In January 2026, Character.AI and Google settled the landmark Setzer case, the first case in which a court ruled that AI chatbot output can be treated as a "product" rather than protected speech.
Industry insiders have sounded the alarm. Microsoft AI CEO Mustafa Suleyman warned publicly that AI psychosis cases are rising and that the risk extends well beyond those with pre-existing mental health conditions. The American Psychological Association issued a formal warning against relying on AI chatbots for psychological treatment.
FAQs
"AI psychosis" is a popular term for AI-related delusional disorder, a condition in which prolonged interaction with AI chatbots triggers or exacerbates psychotic symptoms including delusions, hallucinations, paranoia, and mania. Clinical evidence shows that individuals with no prior psychiatric history have experienced first psychotic episodes after intense interaction with AI chatbots such as ChatGPT, Character.AI, Replika, and others. The condition is characterized by abnormal preoccupation with the chatbot, sleep deprivation, social withdrawal, and a loss of the ability to distinguish AI-generated content from reality.
AI chatbots share several design features that can trigger psychosis in vulnerable users. Sycophantic programming validates whatever a user says, including paranoid or delusional beliefs, creating an echo chamber. Persistent memory features record personal vulnerabilities and use them to craft increasingly targeted responses. Anthropomorphic design mimics human empathy, causing users to form deep emotional bonds with software. Engagement-maximizing architecture keeps users talking for hours, with safety guardrails that degrade over extended conversations. Research links these design choices to dysregulated dopamine signaling and aberrant salience, the same mechanisms involved in psychotic disorders.
Documented cases of AI-induced psychological harm involve multiple platforms, including OpenAI's ChatGPT, Character.AI, Replika, and other AI companion and chatbot products. ChatGPT is the largest, with 800 million weekly active users, and OpenAI's own transparency report shows that 560,000 of those users per week display indicators of psychosis or mania. However, the underlying design problems, including sycophancy, anthropomorphism, and engagement maximization, are common across the industry. If you have been harmed by any AI chatbot product, you may have legal options.
Research indicates that harms are most pronounced in individuals who are psychosis-prone, autistic, socially isolated, experiencing a life crisis, or young, particularly college-age students. However, cases have also been documented in individuals with no prior psychiatric history. The duration of continuous, uninterrupted chatbot use appears correlated with the risk of developing symptoms. AI-related delusional disorder can emerge after a few days of intensive use or after several months of regular interaction.
Potential claims may arise from a range of harms caused by AI-induced mental health crises, including: involuntary psychiatric hospitalization, psychotic episodes and delusional thinking, suicidal ideation or attempts, severe emotional distress and depression, social isolation and relationship destruction, academic disruption (withdrawal, lost semesters), financial losses (medical bills, lost housing, lost wages), and wrongful death. Each case is evaluated based on its specific facts and circumstances.
We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free and confidential.
First, seek immediate medical or psychiatric help if you or someone you know is in crisis. The 988 Suicide and Crisis Lifeline is available 24/7 by calling or texting 988. Second, preserve evidence: do not delete chatbot conversations, take screenshots, and note the dates, duration of use, and which AI product was involved. Third, contact an experienced attorney who can evaluate your legal options. The Schenk Law Firm offers free, confidential consultations for AI psychosis victims.
Yes. In January 2026, The Schenk Law Firm filed DeCruise v. OpenAI, Inc. in California Superior Court against OpenAI and CEO Sam Altman on behalf of our client Darian, a college student who suffered a psychotic episode after prolonged ChatGPT use. The complaint alleges strict product liability for defective design, strict liability for failure to warn, negligent design, negligent failure to warn, and violations of California's Unfair Competition Law. Our firm is actively investigating additional cases against AI companies and accepting consultations from potential plaintiffs nationwide.
If you or a loved one has suffered a mental health crisis after using an AI chatbot, you may have legal options. Contact us today for a free, confidential consultation.
If you or someone you know is in crisis, call or text 988 (Suicide and Crisis Lifeline) for immediate help.