European Data Protection Board Issues Guidance on AI and Personal Data

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/19/2024
In News

The European Data Protection Board (EDPB) has released a detailed opinion addressing the data protection challenges posed by artificial intelligence (AI), emphasizing the importance of compliance with the General Data Protection Regulation (GDPR) during AI model development and deployment.

Adopted on December 17, 2024, the opinion responds to a request from the Irish Data Protection Authority to clarify how personal data is processed and protected throughout the lifecycle of AI systems. The document highlights key legal principles, actionable safeguards, and the role of supervisory authorities in ensuring compliance.

The EDPB recognizes AI’s transformative potential across sectors but stresses the need for responsible innovation. The GDPR framework, according to the EDPB, offers a robust foundation for balancing technological advancement with fundamental rights like privacy and data protection.

The opinion focuses on three critical aspects of data protection in AI:

  1. Anonymity of AI Models: Defining when AI models can truly be considered anonymous.

  2. Legitimate Interest as a Legal Basis: Establishing how organizations can justify personal data use under GDPR.

  3. Consequences of Unlawful Data Processing: Addressing the impact of non-compliance in AI development.

The EDPB rejects the assumption that AI models trained on personal data are inherently anonymous. To qualify as anonymous, the model must ensure that:

  • The likelihood of extracting personal data from the model is insignificant.

  • Outputs generated do not identify individuals linked to the training data.

Supervisory authorities are tasked with scrutinizing the documentation provided by data controllers to verify compliance. Methods such as differential privacy and robust data filtering are suggested to minimize the risk of personal data being extracted from a model or individuals being identified through its outputs.
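To make the differential-privacy idea concrete, here is a minimal, hypothetical Python sketch of releasing an aggregate statistic with Laplace noise, the classic differential-privacy mechanism. The `dp_count` function, the `epsilon` value, and the sample data are illustrative assumptions, not anything specified in the EDPB opinion.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Return a differentially private count of values above a threshold.

    Laplace noise scaled to the query's sensitivity (1 for a counting
    query) is added, so any single individual's record has only a
    bounded influence on the released statistic.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: release a noisy count instead of the exact figure.
ages = [34, 27, 45, 52, 31, 60, 29]
print(dp_count(ages, threshold=40, epsilon=0.5))
```

Lower values of epsilon add more noise and give stronger privacy at the cost of accuracy; a production system would also need to track the cumulative privacy budget across queries.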

The opinion emphasizes the need for organizations to perform a three-step test when relying on “legitimate interest” to process personal data:

  1. Identify a Legitimate Interest: The interest must be lawful, clearly articulated, and grounded in reality.

  2. Necessity Test: Demonstrate that data processing is essential to achieve the intended purpose.

  3. Balancing Test: Ensure that the interest does not override the fundamental rights and freedoms of individuals.

The EDPB provides examples of legitimate interests, such as fraud detection and improving cybersecurity, but warns that these must always be balanced against data subject rights.

The EDPB outlines three scenarios to clarify the consequences of unlawful data processing during AI model development:

  1. Retention of Personal Data: If personal data is improperly retained in a model, subsequent uses are likely to be non-compliant.

  2. Transfer to Another Controller: Controllers deploying such models must verify the legality of prior processing activities.

  3. Post-Anonymization: If a model is anonymized after unlawful processing, subsequent uses may avoid GDPR scrutiny, provided no new personal data is involved.

Supervisory authorities have discretionary powers to impose corrective measures, including fines and operational restrictions, depending on the severity of the infringement.

The opinion underscores the importance of accountability, urging organizations to document their processes comprehensively. Recommended safeguards include:

  • Conducting data protection impact assessments.

  • Implementing privacy-by-design measures.

  • Regular testing for vulnerabilities such as data extraction or inference attacks.
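As an illustration of the kind of vulnerability testing mentioned in the last point, below is a hypothetical Python sketch of a simple membership-inference probe, assuming a scikit-learn-style classifier that exposes `predict_proba` and integer-encoded labels. It is not a method prescribed by the EDPB, only a rough example of how memorization of training records might be detected.

```python
import numpy as np

def membership_inference_gap(model, train_X, train_y, holdout_X, holdout_y):
    """Compare the model's average confidence in the true label on training
    data versus unseen holdout data. A large positive gap suggests the model
    has memorized training records, which can let an attacker infer whether
    a given individual's data was used for training."""
    def avg_true_label_confidence(X, y):
        probs = model.predict_proba(X)  # shape: (n_samples, n_classes)
        return float(np.mean(probs[np.arange(len(y)), y]))

    return (avg_true_label_confidence(train_X, train_y)
            - avg_true_label_confidence(holdout_X, holdout_y))

# Hypothetical usage: flag the model for review if the gap is large.
# gap = membership_inference_gap(model, X_train, y_train, X_holdout, y_holdout)
# if gap > 0.2:  # threshold chosen for illustration only
#     print("Possible memorization of training data; investigate before deployment.")
```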

The EDPB also highlights the importance of transparency, urging data controllers to provide clear, accessible information to data subjects about how their data is used in AI systems.

The EDPB’s opinion sets the stage for stricter scrutiny of AI systems, encouraging organizations to prioritize compliance and adopt innovative practices responsibly.

Need Help?

If you have questions or concerns about the European Data Protection Board's opinion, or about any other global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.