European Data Protection Board Issues Guidance on AI and Personal Data

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/19/2024
In News

The European Data Protection Board (EDPB) has issued a detailed opinion on how organizations must apply the General Data Protection Regulation (GDPR) to artificial intelligence (AI) systems. The Board adopted the opinion on December 17, 2024, in response to a request from the Irish Data Protection Commission, Ireland's supervisory authority.

The request asked the EDPB to clarify how organizations should process and protect personal data during AI model development and deployment. The opinion outlines legal principles, safeguards, and the role of supervisory authorities in enforcing compliance.

The EDPB acknowledges AI’s transformative impact across sectors. However, it stresses that innovation must respect privacy and data protection rights. According to the Board, the GDPR already provides a strong legal foundation for balancing technological progress with fundamental rights.

Key Focus Areas in the Opinion

The opinion concentrates on three core data protection issues in AI systems:

• When an AI model can qualify as anonymous
• How organizations may rely on legitimate interest as a legal basis
• What happens when unlawful data processing occurs during model development

Each issue addresses different stages of the AI lifecycle.

When Is an AI Model Truly Anonymous?

The EDPB rejects the assumption that AI models trained on personal data automatically qualify as anonymous. Instead, organizations must demonstrate that the model no longer allows identification of individuals.

To qualify as anonymous:

• The likelihood of extracting personal data from the model must be insignificant.
• Model outputs must not reveal identifiable individuals linked to training data.

Supervisory authorities will examine documentation from data controllers to verify compliance. The EDPB recommends safeguards such as differential privacy and strong data filtering to reduce re-identification risks.
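
The opinion names differential privacy only at a high level, without prescribing an implementation. As an illustration (not part of the EDPB's text), the classic Laplace mechanism for a counting query can be sketched in a few lines; the helper name `laplace_count` and the example numbers here are ours:

```python
import random

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated for epsilon-DP.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so noise drawn with scale
    1/epsilon yields epsilon-differential privacy for this release.
    """
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponential samples is Laplace-distributed.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Illustrative release: how many training records match some attribute.
noisy = laplace_count(true_count=1000, epsilon=0.5)
```

Lower `epsilon` means more noise and stronger privacy; real deployments must also track the cumulative privacy budget across every release, not just a single query.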

Using Legitimate Interest as a Legal Basis

Organizations often rely on “legitimate interest” under Article 6 GDPR to process personal data. The EDPB clarifies that this requires a structured three-step assessment.

First, the organization must identify a lawful and clearly defined legitimate interest. Second, it must show that the processing is necessary to achieve that purpose. Third, it must balance its interest against the rights and freedoms of individuals.

The Board lists examples such as fraud detection and cybersecurity improvements. However, it warns that controllers must always protect data subject rights.

Consequences of Unlawful Processing

The opinion outlines three scenarios involving unlawful data processing during AI development.

If a model improperly retains personal data, later uses of that model will likely violate GDPR. If another controller deploys a model trained unlawfully, that controller must verify the legality of prior processing. Finally, if developers fully anonymize a model after unlawful processing, future uses may fall outside GDPR — but only if no new personal data enters the system.

Supervisory authorities retain discretion to impose fines, corrective measures, or operational limits depending on the severity of violations.

Accountability and Safeguards

The EDPB emphasizes accountability throughout the AI lifecycle. Organizations must document their processes carefully and demonstrate compliance.

The Board recommends:

• Conducting data protection impact assessments (DPIAs)
• Applying privacy-by-design principles
• Testing regularly for vulnerabilities such as data extraction or inference attacks
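
One common way to probe the inference-attack risk mentioned above is a loss-threshold membership-inference check: if a model is systematically more confident on its training records than on held-out ones, an attacker can guess who was in the training set. The following is a hypothetical minimal sketch, not a procedure from the opinion; the function name and threshold are illustrative:

```python
def membership_attack_accuracy(member_losses, nonmember_losses, threshold):
    """Guess 'member' whenever loss < threshold; return attack accuracy.

    member_losses: per-example losses for records in the training set.
    nonmember_losses: per-example losses for held-out records.
    """
    correct = sum(loss < threshold for loss in member_losses)
    correct += sum(loss >= threshold for loss in nonmember_losses)
    return correct / (len(member_losses) + len(nonmember_losses))
```

Attack accuracy near 0.5 (chance level) suggests little membership leakage; values well above 0.5 signal a re-identification risk worth documenting, for example in a DPIA.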

Transparency also remains essential. Controllers must provide clear and accessible information explaining how they use personal data in AI systems.

Looking Ahead

The opinion signals closer scrutiny of AI systems across the European Union. It reinforces that GDPR obligations apply fully to AI development and deployment. Organizations that invest in structured governance, documentation, and technical safeguards will be better positioned to manage regulatory risk.

Need Help?

If you have questions or concerns about the European Data Protection Board or any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.