Hong Kong Releases AI Guideline to Boost Safe, Ethical Generative AI Adoption

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/18/2025
In News

The Digital Policy Office (DPO) of Hong Kong has released its first official Generative Artificial Intelligence Technical and Application Guideline, providing comprehensive instructions for the development, deployment, and responsible use of generative AI technologies. The guideline, issued April 15, aims to strike a balance between innovation and regulation as AI-generated content continues to permeate the public and private sectors.


Developed in partnership with the Hong Kong Generative AI Research and Development Center (HKGAI), the new guideline outlines governance principles and operational standards for stakeholders, including technology developers, service providers, and end users. It emphasizes five key dimensions of AI governance: personal data privacy, intellectual property, crime prevention, reliability and trustworthiness, and system security.


In his remarks at the World Internet Conference Asia-Pacific Summit, Commissioner for Digital Policy Tony Wong stated, “The Government hopes that the Guideline can facilitate the industry and the public in developing and applying generative AI technology in a safe and responsible manner.” He added that it would help foster widespread adoption while mitigating risks.


The guideline categorizes stakeholder responsibilities, urging developers to implement ethical model development practices, safeguard user data, and ensure robust testing protocols. Service providers are expected to maintain content integrity, ensure privacy protection, and establish user consent mechanisms. End users are encouraged to engage ethically, verify outputs, and understand potential risks.


To support compliance, the guideline recommends establishing data oversight teams, embedding traceability mechanisms in AI-generated outputs, conducting regular audits, and maintaining user feedback loops. High-risk content, such as deepfakes or financial documents, must carry tamper-proof identifiers. Independent evaluations are advised at all stages of development and deployment.


The DPO confirmed it will regularly update the guideline and continue collaborating with academic and industry groups to keep pace with evolving technologies. HKGAI, established in 2023 under the AIR@InnoHK initiative, played a central role in drafting the guideline by studying local AI applications and synthesizing global best practices.


Need Help?


If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

