EDPB Taskforce Report Highlights GDPR Compliance Issues for ChatGPT

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/29/2024
In News

On May 23, the European Data Protection Board (EDPB) released a comprehensive report detailing the work undertaken by its ChatGPT Taskforce, which was established to address data protection concerns related to the popular AI service, ChatGPT. The report outlines ongoing investigations, preliminary views on compliance, and strategic recommendations for ensuring adherence to the General Data Protection Regulation (GDPR).


In recent years, large language models (LLMs) such as OpenAI’s GPT series have become increasingly prevalent across many fields. These models, which power ChatGPT, are trained on vast amounts of data that often include personal information, making strict compliance with GDPR provisions essential. The EDPB created the ChatGPT Taskforce in April 2023 to coordinate investigations and enforcement actions across EU member states. Such coordination was necessary because OpenAI, the company behind ChatGPT, did not have an establishment in the EU until February 2024, which prevented the application of the GDPR’s One-Stop-Shop (OSS) mechanism.


Supervisory authorities across Europe are examining how OpenAI collects and uses personal data at each stage of ChatGPT’s operation: scraping training data from the web, pre-processing, training, and handling user prompts and model outputs. Now that the OSS mechanism applies, the lead supervisory authority coordinates any corrective action, while national authorities conclude the investigations they had already opened.


Preliminary Views on Lawfulness


The report stresses the need for compliance with GDPR Articles 6 and 9. Concerns center on training data scraped from publicly available sources, a practice that may endanger individuals’ fundamental rights. OpenAI has cited legitimate interest as its legal basis, but the taskforce argues that additional safeguards are essential. When users enter prompts containing personal data, OpenAI must show clear consent if it uses that information for training. The taskforce warns that relying on broad claims of legitimate interest is not enough.


Crucial Components for Compliance


The EDPB highlights three pillars:

  • Fairness: Data must not be used in ways that harm or discriminate against people.

  • Transparency: OpenAI must explain how it collects and processes data, especially when scraping from publicly available sources, and meet the information obligations of Articles 13 and 14.

  • Accuracy: AI outputs are probabilistic and may be wrong. OpenAI must take steps to reduce the risk of false or biased results influencing users.

The GDPR also gives individuals the right to access, correct, delete, or restrict the processing of their data. The taskforce urges OpenAI to make these rights easier to exercise. Privacy policies are already in place, but stronger mechanisms for user control are needed.


Conclusion


The report recommends continued cooperation among national authorities, more detailed guidance on AI data processing, and stronger safeguards for individuals. The taskforce also plans to facilitate dialogue between OpenAI and regulators to improve transparency and accountability under GDPR.


Need Help?


If you’re wondering how the EDPB, the GDPR, or other government regulations on AI could impact you, reach out to BABL AI. Their Audit Experts are ready to answer your questions and address your concerns while providing valuable assistance.
