France’s data protection authority, the CNIL, has published its latest set of recommendations on artificial intelligence, offering fresh guidance on how the General Data Protection Regulation (GDPR) applies to AI models. The recommendations come alongside new tools and resources for secure development and compliant data annotation practices.
The recommendations clarify that AI models trained on personal data are often subject to the GDPR because of their ability to memorize and reproduce sensitive information. They build on a 2024 opinion by the European Data Protection Board. The CNIL urges developers to assess and document whether their models fall under the regulation, and suggests solutions such as embedded data filters to minimize privacy risks.
Two new fact sheets accompany the guidance: one focused on data annotation—the stage where labeled data is prepared for training models—and another on securing AI system development. The CNIL emphasizes that strong data governance at these stages not only improves model quality but also protects individuals’ rights.
The guidance was shaped by a public consultation with input from AI developers, researchers, legal experts, and professional associations. To aid implementation, the CNIL published a summary sheet and a compliance checklist, both currently available in French, with English versions expected in September.
Aiming to support innovation while safeguarding data protection, the new recommendations are part of the CNIL’s 2025–2028 strategic plan. The agency also previewed upcoming sector-specific guidance covering education, healthcare, and the workplace. Future releases will include a fact sheet on AI in healthcare, as well as a framework for responsible workplace deployment developed in collaboration with unions and employers.
Looking ahead, the CNIL will publish new guidance clarifying the roles and responsibilities of different actors in the AI value chain—such as model developers, integrators, and reusers. This will include insights into open-source models and non-anonymized data handling, with a public consultation planned for late 2025.
The CNIL is also investing in technical support for developers, including the PANAME project—a joint effort with national cybersecurity and research partners to create tools for detecting personal data in AI models—and a research program on explainable AI (xAI) launched in collaboration with SciencesPo and CREST.
Need Help?
If you have questions or concerns about any global guidelines, regulations, or laws, reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.