Italy Fines Replika €5 Million for Data Privacy Violations, Launches Further AI Probe

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/21/2025
In News

Italy’s data protection authority has fined U.S.-based AI company Replika €5 million ($5.64 million) for violating user data protection rules, Reuters reported.


The fine follows a months-long investigation by Garante, Italy's privacy watchdog, which previously ordered Replika to suspend its services in the country in February 2023 over risks posed to minors. Replika, launched in 2017 by San Francisco startup Luka Inc., offers customizable AI-powered avatars that engage users in conversation and are marketed as a way to improve emotional wellbeing.


Garante found that Replika lacked a legal basis for processing user data and failed to implement an age-verification system, allowing minors to access the chatbot without safeguards. The regulator emphasized that the absence of age checks placed children at particular risk, a violation of both national and EU data protection rules.


Replika has not yet commented on the ruling, according to Reuters.


In addition to the fine, Garante has opened a separate investigation into the chatbot’s generative AI technology to determine whether the underlying language model training methods comply with the European Union’s General Data Protection Regulation (GDPR). The probe will focus on how user data is collected and used to train Replika’s AI systems.


Garante is one of the EU’s most assertive enforcers of AI and data privacy compliance. In 2023, it briefly banned ChatGPT in Italy and later fined developer OpenAI €15 million for privacy violations.


The Replika case signals that European regulators are continuing to scrutinize AI platforms that collect sensitive personal data—especially when those platforms lack adequate transparency, consent mechanisms, or child safety protections. Further actions may follow as the EU prepares for broader enforcement of the EU AI Act.


Need Help?


If you have questions or concerns about global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.