UPDATE — AUGUST 2025: Meta has moved forward with plans to train AI on EU user data from Facebook and Instagram, citing “legitimate interest” as its legal basis. In response, noyb filed formal GDPR complaints with data protection authorities (DPAs) in 11 countries. Legal action is underway, including cease-and-desist letters and threats of class action lawsuits. Some DPAs, such as Norway’s, have questioned the legality of Meta’s approach. However, definitive rulings are still pending. The outcome could shape how GDPR is enforced in the age of AI.
ORIGINAL NEWS STORY:
noyb Urges Immediate Action Against Meta’s Use of Personal Data for AI
In a significant escalation of privacy concerns, the European digital rights group noyb has filed complaints with data protection authorities (DPAs) in 11 European countries to halt Meta’s latest privacy policy updates, which allegedly infringe on user rights under the General Data Protection Regulation (GDPR). This move highlights growing tensions between privacy advocates and tech giants over the use of personal data in AI technologies.
Meta, formerly known as Facebook, has recently notified millions of its users in Europe about changes to its privacy policy, which would allow the company to use vast amounts of personal data for broadly defined “AI technology.” This data includes personal posts, private images, and online tracking data accumulated since 2007. Critically, Meta plans to share this information with unspecified third parties without explicit consent from users, claiming a “legitimate interest” that purportedly overrides individual privacy rights.
GDPR Complaints Filed
noyb, led by privacy activist Max Schrems, argues that Meta’s approach violates multiple aspects of the GDPR, which is designed to protect personal data and ensure privacy for EU citizens. The group has lodged complaints in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain. These complaints urge the respective DPAs to initiate urgent procedures to prevent Meta’s policy from taking effect on June 26, 2024.
Meta defends its policy by stating that the use of AI technology to process user data is a legitimate business interest, which it believes does not require user consent. This stance has been controversial, as it appears to bypass the stringent consent requirements laid out in the GDPR. Meta also shifts responsibility onto users, offering a complex and burdensome opt-out option rather than obtaining affirmative consent.
Privacy Rights at Stake
The complaints raise serious concerns about user autonomy and the right to be forgotten. Once Meta incorporates personal data into AI systems, users may be unable to have it deleted, which conflicts directly with the GDPR’s rules on consent and erasure rights. The Norwegian DPA has already questioned whether Meta’s approach is legal, while other authorities have yet to issue formal positions, underscoring the uneven pace of enforcement across the EU.
Why it Matters
The outcome of these complaints could reshape privacy and AI governance in Europe. Strong enforcement might force Meta and other companies to change how they process personal data. Inaction could erode public trust in GDPR. As Schrems warns: “Meta’s current approach, if unchecked, could set a dangerous precedent. This is not just about privacy but about the fundamental rights of millions of Europeans.”
Need Help?
As this situation unfolds, it will be crucial to monitor the responses from DPAs across Europe and any subsequent legal challenges that may arise. New AI regulations and bills emerge almost daily, and you may have questions or concerns about how they will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.