EU Tech Dispatch Explores Federated Learning as a Privacy-Preserving AI Approach

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 06/11/2025
In News

The latest edition of TechDispatch, published by the European Data Protection Supervisor (EDPS), spotlights federated learning (FL) as a promising method for developing artificial intelligence (AI) systems that align with EU data protection values.
Unlike traditional machine learning models that require centralized data collection, federated learning enables AI systems to be trained across multiple decentralized devices or servers, keeping personal data at its source. This approach minimizes the need for data transfers and aggregation, reducing privacy risks while still achieving powerful insights.
The EDPS says in the report that federated learning represents a technological response to the growing demand for privacy-friendly AI. By allowing local training and only sharing model updates—not raw data—FL addresses core concerns under the General Data Protection Regulation (GDPR), including data minimization and purpose limitation.
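The update-sharing workflow described above can be sketched in a few lines. The snippet below is a minimal illustration of federated averaging, not the EDPS's reference design: each client trains on its own data, and only the resulting model weights, never the raw records, leave the device. The linear model, client data, and `federated_round` helper are all hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training step: gradient descent on LOCAL data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w  # only the updated weights are shared, never X or y

def federated_round(weights, clients):
    """Server-side step: average the clients' locally trained weights."""
    updates = [local_update(weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Two hypothetical clients, each holding data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

The key property is in `local_update`: the training data `X, y` never appears in the value returned to the server, which is the data-minimization behavior the report highlights.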
The publication highlights real-world applications of federated learning, from smartphones that personalize user experiences without centralizing sensitive data, to healthcare collaborations where hospitals can train shared diagnostic models without exchanging patient records. These use cases illustrate FL’s potential to enable collaborative innovation while safeguarding privacy and ensuring regulatory compliance.
However, the EDPS notes that federated learning is not without its challenges. Technical issues, such as ensuring the integrity of local model updates, managing device heterogeneity, and mitigating vulnerabilities to inference attacks, must be handled carefully. Furthermore, compliance with GDPR obligations such as transparency, a lawful basis for processing, and safeguards against re-identification remains essential, even with decentralized learning.
The EDPS urges developers and organizations to apply data protection by design and by default when implementing federated learning, especially in sensitive domains such as health or biometrics. Complementary measures such as differential privacy, secure aggregation, and trusted execution environments may be necessary to reinforce protections.
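As a rough illustration of one such complementary measure, the sketch below applies differential-privacy-style protection to a client update before it is shared: the update is clipped to bound any one client's influence, then Gaussian noise is added to mask its exact contribution. The clipping norm and noise scale here are illustrative placeholders, not calibrated privacy parameters.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's model update and add Gaussian noise before sharing.

    Clipping bounds each client's influence on the aggregate; the noise
    masks the exact contribution, in the spirit of differentially
    private aggregation.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

rng = np.random.default_rng(0)
raw = np.array([3.0, 4.0])  # norm 5.0, exceeds the clip bound
noisy = privatize_update(raw, clip_norm=1.0, noise_std=0.1, rng=rng)
print(np.linalg.norm(noisy))  # close to the clip bound, not 5.0
```

In a real deployment these steps would be combined with secure aggregation so that the server only ever sees the sum of many noisy updates, never an individual one.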
The TechDispatch also underscores the importance of data protection impact assessments (DPIAs) for high-risk federated learning projects, particularly when deploying systems involving special categories of personal data or automated decision-making.
As EU institutions and regulators continue shaping the future of AI, federated learning offers a compelling model for innovation that respects fundamental rights. The EDPS calls for further research, transparency, and collaboration to ensure that privacy-preserving techniques like FL are implemented responsibly and effectively.
The publication forms part of the EDPS’s broader mission to inform and guide EU policymakers and the public on emerging technologies and their implications for privacy and data protection.
Need Help?
If you have questions or concerns about EU guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.