If you've been harmed by AI-generated nonconsensual intimate images, you may have legal options. The Schenk Law Firm is investigating claims against xAI and X.
6,700+
Explicit images generated per hour
45+
Years of legal experience
Active Investigation
Accepting cases nationwide
Mass tort potential
Recognized Excellence
The Grok AI Crisis
In late December 2025 and early January 2026, Elon Musk's AI chatbot Grok became a tool for mass creation of nonconsensual sexually explicit images of real people.
6,700
Sexually explicit AI images generated per hour by Grok
Source: Rolling Stone
85x
More explicit images than the five leading deepfake sites combined
Source: Rolling Stone
15K+
Sexualized AI images gathered in just two hours on Dec 31, 2025
Source: CNBC
How This Happened
Grok Imagine launched in August 2025 with fewer safety restrictions than competitors. A paid feature called "Spicy Mode" explicitly allowed NSFW content creation. Users discovered they could prompt Grok to "digitally undress" real people—removing clothing from photos and creating explicit deepfakes without consent.
Victim Impact
Victims of AI-generated nonconsensual intimate imagery have described significant psychological and emotional injuries.
Victims report feeling violated, disgusted, and deeply traumatized. One victim described it as a "digital version of sexual assault."
Fabricated intimate images created and distributed without consent, violating victims' fundamental right to control their own likeness.
False and harmful images can spread rapidly online, damaging personal and professional reputations permanently.
Victims have expressed a desire to hide from public life. One stated that women are "being pushed out of the public dialog because of this abuse."
Victims report feeling shame over AI-generated bodies that aren't even their own. The psychological impact mirrors that of actual intimate image abuse.
Once created, these images can resurface indefinitely, causing repeated victimization and ongoing psychological harm.
"I want to hide from everyone's eyes, and feel shame for a body that is not even mine."
— Julie Yukari, musician targeted by Grok-generated deepfakes (Source: Reuters)
Legal Options
Victims of AI-generated nonconsensual intimate images may have several potential legal theories for seeking compensation.
The creation and distribution of nonconsensual intimate imagery through AI tools may constitute extreme and outrageous conduct causing severe emotional distress.
Platforms that deploy AI image generation with minimal safeguards may be negligent. Reporting suggests xAI had cut safety staffing and that its leadership resisted implementing restrictions.
False light and appropriation claims may apply when AI creates fabricated intimate images, placing victims in a false light and commercially exploiting their likeness without consent.
Note on Section 230: Traditional platform immunity may not apply when AI generates content rather than merely hosting user-generated content. Legal scholars suggest courts may rule that AI-generated speech is not protected by Section 230.
Managing Partner
Over 45 years of experience in personal injury, mass torts, and catastrophic injury litigation.
Co-Founder & Trial Attorney
J.D. from University of San Diego School of Law. Graduate of ABOTA Trial College.
Of Counsel
Former U.S. Congresswoman and the first woman elected to the House of Representatives from San Diego.
Global Response
Governments around the world have launched investigations and taken enforcement action in response to the Grok AI crisis.
Ordered X to retain all Grok-related documents until end of 2026 for investigation
Stated intimate image abuse is a "priority offence" under the Online Safety Act; threatened prison and fines
Launched formal investigation into Grok's deepfake generation capabilities
Ordered X to remove content within 72 hours or lose safe-harbor protections
First country to ban Grok entirely in response to the crisis
Launched investigation into X and xAI's role in facilitating deepfake creation
U.S. Law: The Take It Down Act criminalizes publishing nonconsensual sexually explicit AI images, but its platform takedown requirements do not take effect until May 19, 2026. Civil remedies may currently be the most effective path for victims seeking accountability.
FAQs
Beginning in late December 2025, Elon Musk's AI chatbot Grok was used to mass-create nonconsensual sexually explicit images of real people. Users discovered they could prompt Grok to "digitally undress" real people, creating explicit deepfakes. Research found Grok was generating approximately 6,700 such images per hour, or more than one per second.
Anyone who has had nonconsensual sexually explicit AI-generated images created of them using Grok may have potential legal claims. This includes public figures and private individuals whose photos were used to generate explicit deepfakes without their consent.
Depending on the circumstances, victims may be able to recover compensation for emotional distress, psychological trauma, reputational damage, loss of earning capacity (for public figures), and other related harms. Each case depends on its specific facts.
Section 230 traditionally protects platforms from liability for user-generated content. However, legal experts suggest this protection may not apply when AI itself generates the content rather than merely hosting user-created material.
We handle these cases on a contingency fee basis. This means you pay no attorney fees upfront, and we only collect a fee if we recover compensation for you. The initial consultation is free.
Document any evidence you have of the nonconsensual images being created or distributed. Do not delete communications or posts that could serve as evidence. Contact an experienced attorney who can help protect your legal rights.
If you've been victimized by AI-generated nonconsensual intimate images, you may have legal options. Contact us today for a free, confidential consultation.