DPC Concludes Court Proceedings Against X’s AI Tool ‘Grok’ Over Privacy Concerns

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/13/2024
In News

The Irish Data Protection Commission (DPC) has concluded its high-profile case against X, formerly known as Twitter. At issue was the company’s use of personal data to train its AI system “Grok.” The proceedings, brought before the Irish High Court in August 2024, were resolved a month later when X agreed to permanently comply with the DPC’s terms under a formal undertaking.

Concerns Over Data Use for AI Training

The DPC launched the case under urgent circumstances, citing concerns that X had used personal data from EU and EEA users’ public posts without proper consent. Regulators warned that this practice could infringe on individuals’ fundamental privacy rights. This marked the first time the DPC used its emergency powers under Section 134 of the Data Protection Act 2018, which allows the Commission to seek a court order to suspend or restrict data processing when immediate action is needed to protect data subjects. The DPC said X’s initial efforts to limit data processing risks were delayed and incomplete, prompting the intervention. Once X agreed to follow the DPC’s requirements permanently, the High Court struck out the case in September 2024.

A Landmark for Data Protection Enforcement

DPC Chairperson Des Hogan welcomed the court’s decision, calling it a victory for EU and EEA citizens. He noted that the outcome demonstrates the regulator’s commitment to protecting privacy in collaboration with its European counterparts. Hogan emphasized that the DPC’s intervention sends a clear message: AI developers must handle personal data lawfully during model training. The permanent agreement ensures that X’s AI activities now align with privacy standards under EU law.

Expanding Oversight of AI Data Use

The DPC’s action against X reflects a broader effort to regulate personal data use in AI development. The Commission has raised concerns about the legal basis for processing personal data at various stages of model creation and training. To clarify these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) under Article 64(2) of the General Data Protection Regulation (GDPR). This step aims to foster consistent EU-wide enforcement regarding AI systems that rely on both first-party and third-party data.

Deputy Commissioner Dale Sunderland said the DPC hopes the forthcoming EDPB opinion will support a proactive and coordinated approach to AI regulation across Europe. The guidance will also help the DPC handle several related complaints against other data controllers involved in AI model training.

Broader Implications for AI Developers

The DPC’s case against X highlights the growing regulatory focus on transparency and accountability in AI systems. As generative and predictive AI models rely heavily on data collection, privacy regulators are stepping up enforcement to ensure compliance with existing laws. For AI companies operating in Europe, the outcome reinforces the need to establish a valid legal basis for processing (such as explicit consent), maintain clear documentation of data sources, and conduct data protection impact assessments before training or deploying AI tools.

Need Help?

If you’re monitoring global AI regulations or seeking guidance on compliance, reach out to BABL AI. Their Audit Experts can help your organization understand its data protection obligations, conduct AI risk assessments, and stay compliant with evolving EU and international standards.
