Ireland’s data watchdog has opened a major EU investigation into X over intimate deepfake images allegedly generated by its AI chatbot Grok, raising fresh GDPR concerns.
*The EU’s lead regulator for X has launched a large-scale GDPR investigation into Grok’s alleged creation of intimate deepfake images involving Europeans. Image: CH*
Dublin, Ireland — February 17, 2026:
Ireland’s Data Protection Commission has launched a sweeping European Union investigation into X over allegations that its AI chatbot, Grok, was used to generate intimate deepfake images of real individuals, including minors.
The Dublin-based regulator, which acts as X’s lead supervisory authority in the EU because the company’s European headquarters is located in Ireland, said the “large-scale inquiry” will assess potential violations of the General Data Protection Regulation. The probe will examine whether personal data of EU and EEA residents was processed unlawfully in connection with the alleged creation and publication of non-consensual intimate imagery.
In a statement, the DPC said the purpose of the investigation is to determine whether X complied with its GDPR obligations concerning the processing, safeguarding and lawful basis for using personal data. Deputy Commissioner Graham Doyle confirmed the authority had been engaging with the company since media reports surfaced suggesting users could prompt Grok to produce explicit images of identifiable individuals.
The case highlights growing regulatory unease over generative AI systems capable of producing realistic but fabricated content. Deepfake technology, once largely confined to niche online communities, has become a mainstream policy concern as tools become more accessible and sophisticated. When such content involves intimate depictions or minors, it raises not only ethical and criminal questions but also serious data protection implications under EU law.
Under the GDPR, companies operating in the bloc must demonstrate a lawful basis for processing personal data, implement safeguards for children’s data, and mitigate risks to individuals’ rights and freedoms. Breaches can result in fines of up to 4% of a company’s global annual turnover, making the financial stakes significant for major platforms.
The investigation runs parallel to a separate EU-level inquiry under the Digital Services Act, which is assessing whether X has fulfilled its obligations to address systemic risks and harmful content on its platform. While the DSA focuses on content governance and platform accountability, the GDPR probe zeroes in on data processing practices and user protections, creating a multi-layered compliance challenge for the company.
X recently restricted Grok’s image generation and editing features to paying subscribers following public criticism. However, regulators may scrutinise whether such measures adequately address the underlying legal and technical concerns, particularly regarding safeguards against misuse and the sourcing of training data.
The Irish action also builds on an earlier investigation opened in April 2025 into X’s use of personal data to train AI models, including Grok. Together, the cases suggest sustained scrutiny of how large technology firms deploy artificial intelligence tools within the EU’s increasingly assertive regulatory framework.
Beyond the legal dimensions, the probe adds to broader transatlantic tensions over digital regulation. EU enforcement actions against major US technology companies have previously drawn criticism from Washington, where some policymakers argue that European rules disproportionately affect American firms. European officials, however, maintain that uniform enforcement is essential to protect privacy, dignity and fundamental rights in the digital age.
X had not publicly responded to the DPC’s notification of the investigation by Monday evening.
