DPC Launches Inquiry into Google’s AI Practices Over Potential GDPR Violations

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/19/2024
In News

The Data Protection Commission (DPC) has initiated a cross-border statutory inquiry into Google Ireland Limited regarding its compliance with European Union data protection laws. Announced in September, the inquiry will focus on whether Google conducted a Data Protection Impact Assessment (DPIA) as required by the General Data Protection Regulation (GDPR) before using personal data from EU and European Economic Area (EEA) residents in developing its foundational AI model, Pathways Language Model 2 (PaLM 2).

This inquiry aims to determine whether Google complied with its obligations under Article 35 of the GDPR. According to this provision, organizations must conduct a DPIA when data processing activities, particularly those involving new technologies like AI, pose a high risk to the fundamental rights and freedoms of individuals. The DPIA helps identify and mitigate potential data protection risks, ensuring that the processing is necessary, proportionate, and subject to appropriate safeguards.

The DPC emphasized that conducting a DPIA is essential when dealing with high-risk data processing, as it helps ensure the protection of individuals’ rights and freedoms. This inquiry into Google’s handling of personal data is part of a broader effort by the DPC and its peer regulators across the EU and EEA to oversee how companies process personal data in the development of AI models and systems.

Cross-border processing, which is the focus of the inquiry, typically involves either the processing of personal data by companies with establishments in multiple EU member states or processing that substantially affects individuals in multiple member states. Google, which operates across the EU, is under scrutiny over whether it met these requirements when developing its AI systems.

The DPC’s investigation of Google comes at a time when AI and data privacy are under increased scrutiny across the globe. PaLM 2, one of Google’s advanced AI models, has been used in a wide range of applications, raising concerns about how the tech giant handles sensitive data in the EU. The outcome of the inquiry could have significant implications for AI development practices in Europe.

The inquiry also reflects the growing regulatory focus on AI technologies, which often involve the use of personal data to train machine learning models. Regulators are increasingly concerned about ensuring these models are developed and deployed responsibly, with appropriate oversight regarding the use of personal data.

Need Help?

New AI regulations and bills emerge almost daily, and you may have questions or concerns about how they will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter.