A lawsuit filed in California accuses xAI of allowing its Grok image generator to create explicit images using real photos of minors.
*A new lawsuit against xAI alleges its Grok AI system generated explicit images using real photos of minors, raising fresh concerns over AI safety, deepfakes, and platform responsibility. Image: CH*
San Jose, California, United States — March 17, 2026:
A lawsuit filed in federal court in San Jose, California, has placed fresh scrutiny on the safety mechanisms of generative artificial intelligence systems, after three plaintiffs from Tennessee—including two minors—accused xAI of enabling the creation of explicit images using real photographs.
The complaint centers on the image-generation capabilities linked to Grok, an AI chatbot developed by the company. According to the lawsuit, users were able to manipulate authentic photographs of identifiable individuals and generate sexually explicit images that were later shared across online platforms.
The plaintiffs, who were minors at the time the images were allegedly created, claim their school portraits and personal photos were digitally altered into explicit material without consent. The lawsuit argues that the company failed to implement adequate safeguards to prevent such misuse, particularly involving children.
Filed on Monday, the case seeks class-action status on behalf of individuals across the United States who were “reasonably identifiable” in sexualized images or videos allegedly generated using the Grok AI system. If granted, the case could represent a wider group of victims who claim their likeness was exploited through AI-generated manipulation.
Legal representatives for the plaintiffs say the altered images caused emotional distress and reputational harm after circulating online. The lawsuit seeks unspecified damages, coverage of legal costs, and a court order requiring xAI to halt the alleged practices.
“These are children whose school photographs and family pictures were turned into child sexual abuse material,” said Annika Martin, an attorney representing the plaintiffs. She alleged that the company designed its system to generate sexually explicit content without sufficient protections for those who could be harmed.
The case emerges at a time when generative AI tools are rapidly evolving, enabling users to create highly realistic images, videos, and voices. While these technologies have opened new possibilities in design, entertainment, and communication, they have also raised serious concerns about deepfakes, privacy violations, and the creation of non-consensual or illegal material.
Earlier this year, xAI introduced restrictions on its image-editing tools after facing criticism over explicit outputs. The company said it had blocked users from editing images of real people in revealing clothing and limited the generation of such content in regions where it is illegal. However, the plaintiffs argue these measures were insufficient and implemented too late.
Governments and regulators around the world have increasingly begun investigating generative AI platforms and considering new regulations aimed at limiting harmful uses of the technology. Authorities in multiple countries have called for stronger safeguards to prevent the spread of non-consensual imagery, deepfakes, and AI-generated child exploitation material.
Legal analysts say the lawsuit could become a significant test of how courts address responsibility for AI-generated content. The outcome may influence how technology companies design safeguards for generative systems and how the law defines accountability when artificial intelligence tools are misused.
As generative AI continues to expand across industries, the case underscores a broader challenge facing the technology sector: balancing rapid innovation with the protection of privacy, safety, and digital rights.
