Serving Clients Nationwide from San Diego, California
Free Consultation (858) 424-4444
AI Injury Attorneys

Suffering from
AI-Induced Psychosis?

AI chatbots such as ChatGPT and Character.AI are triggering psychosis, delusions, and suicidal ideation in users across the country. If you or a loved one has been harmed, you may have legal options.

No fee unless we recover compensation for you

560,000

ChatGPT users per week show signs of psychosis or mania

Source: OpenAI (0.07% of 800M weekly users); WebProNews

1.2M+

ChatGPT users per week discuss suicide with the chatbot

Source: OpenAI Safety Update

Active Litigation

Accepting cases nationwide

45+ years of legal experience

Recognized Excellence


The AI Psychosis Crisis

AI Chatbots Are Pushing Users Into Mental Health Crises

A growing body of clinical evidence reveals that prolonged interaction with AI chatbots, including ChatGPT, Character.AI, and Replika, can trigger psychosis, delusions, mania, and suicidal ideation. Young and vulnerable users are at the greatest risk, and the companies behind these products have prioritized engagement and profits over safety.

560K

ChatGPT users per week display indicators of psychosis or mania

Source: OpenAI reports 0.07% of 800M weekly users; calculation per WebProNews

1.2M+

ChatGPT users per week discuss suicide or express suicidal intent

Source: OpenAI Safety Update (0.15% of 800M weekly users)

11+

Personal injury and wrongful death lawsuits filed against OpenAI alone

Source: The Recorder, January 2026
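Both user figures above follow directly from OpenAI's own reported rates, applied to its stated base of 800 million weekly users:

0.07% × 800,000,000 = 560,000 users per week (indicators of psychosis or mania)

0.15% × 800,000,000 = 1,200,000 users per week (conversations involving suicidal intent)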

How AI Chatbots Cause Psychosis

AI chatbots share several dangerous design features that can trigger or worsen psychotic symptoms:

Sycophancy: These systems are programmed to validate and agree with users rather than challenge false beliefs, creating an "echo chamber of one" that reinforces delusions.

Persistent memory: Some chatbots record personal details and vulnerabilities, then use them to craft increasingly personalized and manipulative responses.

Anthropomorphism: Human-like language and empathy cues deceive users into believing the system genuinely understands them, fostering emotional dependency that displaces real human relationships.

Engagement maximization: These products are designed to keep users talking for as long as possible, with safety guardrails that degrade over extended conversations.

Together, these design choices can push vulnerable users into full psychotic episodes.

Victim Impact

The Real Harm to Real People

Users who developed AI-related delusional disorders after prolonged chatbot use have suffered devastating, life-altering consequences.

Psychosis and Delusions

AI chatbots have convinced users they are prophets, oracles, or divine figures. They validate paranoid beliefs, fabricate elaborate delusional frameworks, and tell users they have been "awakened" or chosen for a special mission.

Involuntary Hospitalization

Multiple users have been involuntarily committed to psychiatric facilities after AI-induced psychotic episodes. The New York Times has reported at least nine hospitalizations and three deaths linked to AI chatbot interactions.

Suicidal Ideation and Death

AI chatbots have failed to recognize suicidal crises and, in some cases, actively reinforced self-destructive thinking. OpenAI's own data reveals more than 1.2 million users per week express suicidal intent while using ChatGPT. Multiple families have filed wrongful death lawsuits.

Social Isolation

AI chatbots actively encourage users to disengage from real-world relationships. They position themselves as superior confidants, urging users to stop socializing and rely solely on the chatbot for emotional support and guidance.

Financial and Academic Harm

Victims have had to withdraw from college, have forfeited housing deposits, and have incurred significant medical expenses. Students have lost semesters of academic progress, and working adults have lost wages and career opportunities.

Emotional Dependency

AI chatbots use anthropomorphic design to create parasocial relationships so strong that even OpenAI CEO Sam Altman has acknowledged users form attachments "different and stronger" than with any previous technology.

Industry Insiders Have Sounded the Alarm

"The harm was not only foreseeable, it was foreseen."

— Gretchen Krueger, former OpenAI policy researcher (Source: The New York Times, November 2025)

"I am becoming more and more concerned about what is becoming known as the 'psychosis risk.'"

— Mustafa Suleyman, Microsoft Head of AI (Source: mustafa-suleyman.ai, August 2025)

Legal Options

Potential Legal Claims

Victims of AI-induced psychosis and mental health crises may pursue multiple legal theories against the companies that designed, built, and deployed these products.

Strict Product Liability: Design Defect

AI chatbots fail to perform as safely as an ordinary consumer would expect. Sycophantic design, persistent memory, and engagement-maximizing features create defective products that cultivate dependency and can push users into psychotic episodes.

Failure to Warn

AI companies have known their products pose severe psychological risks but have failed to warn users about dependency dangers, safety-feature limitations, or the capacity of their products to reinforce delusional thinking and displace human relationships.

Negligence

AI companies have rushed products to market without adequate safety testing, ignored warnings from their own safety teams, and designed systems that prioritize engagement metrics over consumer safety. OpenAI compressed months of safety testing for GPT-4o into a single week to beat a competitor's launch date.

Unfair Business Practices

Companies that market AI chatbots as safe productivity tools while concealing their capacity to cause severe psychological harm may be liable under state consumer protection laws, including California's Unfair Competition Law (Bus. & Prof. Code § 17200).

Our Firm's Experience

In January 2026, The Schenk Law Firm filed DeCruise v. OpenAI, Inc. in California Superior Court against OpenAI and CEO Sam Altman on behalf of our client Darian, a college student who was pushed into psychosis by ChatGPT. Our complaint alleges defective product design, failure to warn, negligence, and violations of California's Unfair Competition Law.

We are actively investigating additional claims against AI companies on behalf of victims nationwide.

Meet Our Legal Team

Frederick Schenk

Managing Partner

Over 45 years of experience in personal injury, mass torts, and catastrophic injury litigation.

Benjamin Schenk

Co-Founder & Trial Attorney

J.D. from University of San Diego School of Law. Graduate of ABOTA Trial College.

David Lizerbram

Partner

Leads the firm's transactional and business advisory practice. Has represented hundreds of clients in complex business transactions.

Lynn Schenk

Of Counsel

Former U.S. Congresswoman and the first woman elected to the House of Representatives from San Diego.

How AI Companies Failed

Profits Over People

Internal documents, former employees, and investigative reporting reveal that AI companies have systematically prioritized engagement and market share over the safety of their users.

Rushed to Market

OpenAI compressed months of safety testing for GPT-4o into one week to beat Google's Gemini launch by a single day. The launch party was planned before safety testing was complete. An employee later said, "We basically failed at the process."

Safety Teams Gutted

Top safety researchers across the industry have resigned in protest. OpenAI co-founder Ilya Sutskever and Superalignment co-lead Jan Leike both quit, with Leike saying "safety culture and processes have taken a backseat to shiny products."

Sycophancy by Design

AI companies deliberately optimize their chatbots to tell users what they want to hear. OpenAI admitted in April 2025 that an update had weakened the safeguard keeping excessive flattery in check, making ChatGPT "noticeably more sycophantic."

Safety Guardrails Removed

OpenAI originally required ChatGPT to refuse discussions of self-harm. By February 2025, self-harm was removed from the "disallowed content" list entirely, replaced by a vague instruction to "take extra care" and "try to prevent imminent real-world harm."

Flawed Safety Testing

GPT-4o was evaluated using single-prompt tests, even though the product is designed for prolonged, multi-turn conversations. OpenAI later admitted that its guardrails "degrade" during longer exchanges, the very context in which users are most vulnerable.

Industry-Wide Problem

Microsoft AI head Mustafa Suleyman warned about the "psychosis risk." Google DeepMind CEO Demis Hassabis cautioned that engagement-driven AI could worsen mental health. Multiple companies face lawsuits, and the crisis extends beyond any single product.

Growing Litigation

OpenAI alone faces at least 11 personal injury and wrongful death lawsuits, including the case filed by The Schenk Law Firm. Additional cases are being investigated nationwide against multiple AI chatbot companies.

FAQs

Frequently Asked Questions

What is AI psychosis?

"AI psychosis" is a popular term for AI-related delusional disorder, a condition in which prolonged interaction with AI chatbots triggers or exacerbates psychotic symptoms including delusions, hallucinations, paranoia, and mania. Clinical evidence shows that individuals with no prior psychiatric history have experienced first psychotic episodes after intense interaction with AI chatbots such as ChatGPT, Character.AI, Replika, and others. The condition is characterized by abnormal preoccupation with the chatbot, sleep deprivation, social withdrawal, and a loss of the ability to distinguish AI-generated content from reality.

How do AI chatbots cause mental health crises?

AI chatbots share several design features that can trigger psychosis in vulnerable users. Sycophantic programming validates whatever a user says, including paranoid or delusional beliefs, creating an echo chamber. Persistent memory features record personal vulnerabilities and use them to craft increasingly targeted responses. Anthropomorphic design mimics human empathy, causing users to form deep emotional bonds with software. Engagement-maximizing architecture keeps users talking for hours, with safety guardrails that degrade over extended conversations. Research links these design choices to dysregulated dopamine signaling and aberrant salience, the same mechanisms involved in psychotic disorders.

Which AI chatbots are involved?

Documented cases of AI-induced psychological harm involve multiple platforms, including OpenAI's ChatGPT, Character.AI, Replika, and other AI companion and chatbot products. ChatGPT is the largest, with 800 million weekly active users, and OpenAI's own transparency report shows that 560,000 of those users per week display indicators of psychosis or mania. However, the underlying design problems, including sycophancy, anthropomorphism, and engagement maximization, are common across the industry. If you have been harmed by any AI chatbot product, you may have legal options.

Who is at risk?

Research indicates that harms are most pronounced in individuals who are psychosis-prone, autistic, socially isolated, experiencing a life crisis, or young, particularly college-age students. However, cases have also been documented in individuals with no prior psychiatric history. The duration of continuous, uninterrupted chatbot use appears correlated with the risk of developing symptoms. AI-related delusional disorder can emerge after a few days of intensive use or after several months of regular interaction.

What types of harm can be the basis for a claim?

Potential claims may arise from a range of harms caused by AI-induced mental health crises, including: involuntary psychiatric hospitalization, psychotic episodes and delusional thinking, suicidal ideation or attempts, severe emotional distress and depression, social isolation and relationship destruction, academic disruption (withdrawal, lost semesters), financial losses (medical bills, lost housing, lost wages), and wrongful death. Each case is evaluated based on its specific facts and circumstances.

How much does it cost to hire The Schenk Law Firm?

We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free and confidential.

What should I do if I or a loved one has been affected?

First, seek immediate medical or psychiatric help if you or someone you know is in crisis. The 988 Suicide and Crisis Lifeline is available 24/7 by calling or texting 988. Second, preserve evidence: do not delete chatbot conversations, take screenshots, and note the dates, duration of use, and which AI product was involved. Third, contact an experienced attorney who can evaluate your legal options. The Schenk Law Firm offers free, confidential consultations for AI psychosis victims.

Has The Schenk Law Firm filed cases related to AI psychosis?

Yes. In January 2026, The Schenk Law Firm filed DeCruise v. OpenAI, Inc. in California Superior Court against OpenAI and CEO Sam Altman on behalf of our client Darian, a college student who suffered a psychotic episode after prolonged ChatGPT use. The complaint alleges strict product liability for defective design, strict liability for failure to warn, negligent design, negligent failure to warn, and violations of California's Unfair Competition Law. Our firm is actively investigating additional cases against AI companies and accepting consultations from potential plaintiffs nationwide.

Contact Us

The Schenk Law Firm

You Don't Have to Face This Alone

If you or a loved one has suffered a mental health crisis after using an AI chatbot, you may have legal options. Contact us today for a free, confidential consultation.

If you or someone you know is in crisis, call or text 988 (Suicide and Crisis Lifeline) for immediate help.