South Korea’s Revised AI Basic Act to Take Effect January 22 With New Oversight, Watermarking Rules

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/14/2026
In News

South Korea’s amended Artificial Intelligence Basic Act will take effect on January 22, 2026, ushering in a new regulatory framework that combines industrial promotion with trust, safety, and accountability requirements across both public and private deployments.


The National Assembly approved the revisions on December 30, establishing the Artificial Intelligence Development and Trust-Based Ecosystem Act as the country’s core AI statute. The law formalizes the Presidential Council on National Artificial Intelligence Strategy as Korea’s central coordinating authority for national AI policy, reflecting Seoul’s goal of becoming a top-three AI power globally.


According to the Ministry of Science and ICT, the law will be implemented alongside an enforcement decree that has largely been finalized. Industry will receive a one-year grace period focused on guidance rather than penalties, allowing companies time to prepare compliance documentation, conduct risk assessments, and deploy internal controls (Simmons & Simmons and Korea Times reporting).


The law includes mandatory watermarking and disclosure requirements for AI-generated content, aimed at combating misinformation and non-consensual deepfakes. It also introduces enhanced oversight for “high-impact” AI systems — including those using massive compute resources or affecting public operations, rights, or critical services. Operators of such systems must establish risk management plans, monitor social impacts, and may be subject to data requests and on-site inspections by the Strategy Council (ERP Today).


To accelerate public-sector adoption, civil servants receive limited liability shields for good-faith use of AI tools, while separate provisions expand research labs, innovation clusters, and digital accessibility subsidies for vulnerable populations.


Legal analyses (OneTrust, Korea Times) say the law will require companies to prepare governance procedures and documentation ahead of 2027 enforcement, particularly for content provenance, age-appropriate protections, and external disclosures.


While the core provisions survived the decree process with minimal change, officials signaled that further sector-specific rules may follow, including procurement guidance and standards for industrial innovation.


Need Help?


If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
