Spain’s Data Protection Authority Warns of ‘Visible and Invisible’ Risks From AI Image Systems

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/21/2026
In News

Spain’s data protection authority has issued a new guidance document warning that the use of artificial intelligence to generate or modify images of real people carries both “visible” and “less visible” risks, underscoring growing regulatory attention to the spread of deepfakes, synthetic media and generative image platforms.


The report, “El uso de imágenes de terceros en sistemas de inteligencia artificial y sus riesgos visibles e invisibles” (“The use of third-party images in artificial intelligence systems and their visible and invisible risks”), published in January by the Agencia Española de Protección de Datos (AEPD), outlines how uploading or transforming photos and videos with AI tools can constitute a form of personal data processing subject to data protection rules. The AEPD stresses that users frequently underestimate how such systems retain, reuse or analyze biometric details in ways that may not be transparent to the person depicted.

The guidance highlights a range of “visible” harms, such as false attributions, reputational damage, sexualized synthetic imagery and imagery involving minors, but places particular emphasis on less apparent forms of data exposure, including metadata generation, retention by service providers, and the creation of persistent identifiers in generative models that allow repeated recreation of a subject’s likeness without consent.
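
To make the “invisible” metadata point concrete, the short Python sketch below (illustrative only, and not taken from the AEPD report) uses the Pillow library to dump the EXIF metadata embedded in an ordinary photo. Device model, timestamps and GPS coordinates of this kind can accompany an image when it is uploaded to a generative AI service; the filename “photo.jpg” is a hypothetical placeholder.

```python
# Illustrative sketch: inspecting the EXIF metadata that travels with a
# photo. Device, timestamp and GPS fields like these are personal data
# that an AI image service may receive alongside the pixels themselves.
from PIL import Image  # pip install Pillow
from PIL.ExifTags import TAGS, GPSTAGS

def dump_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        # Translate numeric EXIF tag ids into readable names,
        # e.g. "Model", "DateTime", "Software".
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
    # 0x8825 is the standard GPSInfo sub-directory; it is empty
    # unless the camera recorded a location with the shot.
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")

if __name__ == "__main__":
    dump_exif("photo.jpg")  # hypothetical example file
```

Stripping such metadata before upload reduces, but does not eliminate, the exposure the AEPD describes, since the image content itself can still reveal biometric traits.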


The AEPD notes that even seemingly trivial uses—such as filters, avatars or humorous edits—may contribute to a wider ecosystem of image reuse, escalating the risk of harassment, impersonation, privacy violations or future misuse. The authority also highlights the difficulty individuals may face in exercising data protection rights when they do not know which system processed their image or how to request deletion.


While not every case falls under GDPR enforcement, the AEPD signals heightened scrutiny in situations involving minors, vulnerable groups, sexualized content, or highly realistic synthetic portrayals. The agency warns that in these contexts the impact can be equivalent to, or in some cases greater than, that of authentic imagery, particularly when content is widely shared beyond its original setting.

The guidance follows a wave of international concern over non-consensual AI “undressing” tools and deepfake abuse, and adds to a growing body of European policy efforts aimed at synthetic media, biometric processing and AI accountability.


Need Help?


If you have questions or concerns about any global guidelines, regulations and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.