South Korea Pushes New Transparency Standards for Generative AI Data Practices

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/23/2026
In News

South Korea is pushing to strengthen transparency standards in generative AI, as regulators warn that unclear data practices could undermine user trust.


The Personal Information Protection Commission (PIPC) convened a meeting on March 4 in Seoul with major global and domestic AI companies, including Google, Microsoft, OpenAI, Naver and Kakao, to address gaps in how personal data is handled and disclosed in generative AI systems. 


The discussion comes as AI services rapidly expand in scope and complexity, raising new questions about how personal information is collected, used and communicated to users. Regulators emphasized that transparency in data processing policies is becoming a central standard in the AI era, particularly as systems increasingly make automated or assisted decisions.


The PIPC’s findings from its annual evaluation of personal information processing policies highlighted ongoing shortcomings in the generative AI sector. While overall policy quality improved from an average score of 57.9 in 2024 to 71 in 2025, the AI category lagged behind in clarity, readability and accessibility. 


Officials identified several recurring issues, including vague descriptions of what personal data is collected, insufficient disclosure of legal bases for processing, and unclear explanations of how long data is retained. Some companies also failed to specify third-party recipients, instead using broad terms such as “partners” or “service providers.” 


Additional concerns included limited accessibility of privacy policies, such as requiring users to log in to view them, and the use of complex or poorly translated language that makes policies difficult to understand.


Industry participants acknowledged the challenges, noting that generative AI systems involve complex data flows and must align with global corporate policies. However, companies agreed that clearer and more user-friendly disclosures—particularly around training data usage, retention periods and opt-out options—are necessary.


The PIPC said it will update its guidelines for drafting personal data policies and plans to release revised standards in April, alongside industry briefings. The effort aims to establish clearer expectations for AI companies and improve accountability as AI adoption accelerates.


Need Help?


If you have questions or concerns about global AI guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

