Hong Kong’s PCPD Publishes Model Framework to Protect Personal Data in AI Systems
In a significant move to address the challenges posed by artificial intelligence (AI) to personal data privacy, the Office of the Privacy Commissioner for Personal Data (PCPD) has published the “Artificial Intelligence: Model Personal Data Protection Framework.” This new framework aims to provide comprehensive guidance and best practices for organizations in Hong Kong to procure, implement, and use AI technologies, including generative AI, in compliance with the Personal Data (Privacy) Ordinance (PDPO).
As AI technology continues to advance rapidly and its application becomes more widespread, concerns about AI’s impact on personal data privacy have intensified. To support the “Global AI Governance Initiative” of China, the PCPD’s Model Framework offers internationally recognized recommendations designed to ensure AI’s benefits are harnessed while safeguarding personal data.
Leadership and Regional Alignment
Privacy Commissioner Ada CHUNG Lai-ling emphasized that AI security is a component of national security and stressed that the framework provides practical steps to guide the responsible use of AI. Chung believes the framework will foster healthy AI development, strengthen Hong Kong’s role as an innovation hub, and support the Greater Bay Area’s digital economy. Prof. Hon William Wong Kam-fai, a member of the Legislative Council and of the PCPD’s Standing Committee on Technological Developments, highlighted the timeliness of the publication, explaining that the framework aligns with China’s “Artificial Intelligence +” strategy, which links AI innovation with industrial growth.
Broad Consultation and Support
The framework drew support from the Office of the Government Chief Information Officer and the Hong Kong Applied Science and Technology Research Institute. The PCPD also consulted public bodies, universities, technology firms, and AI suppliers during drafting. Chung later thanked these contributors for their input.
Core Areas of the Model Framework
The framework builds on the PCPD’s 2021 guidance, which set out three Data Stewardship Values and seven Ethical Principles for AI. It organizes recommendations into four focus areas:
- AI Strategy and Governance: Organizations should establish AI strategies, governance committees, and employee training. Clear policies and procedures are required for ethical and responsible AI use.
- Risk Management: Companies must conduct risk assessments and adopt proportionate mitigation measures. Human oversight should increase with higher-risk AI applications.
- Data and Model Management: Organizations need to prepare personal data responsibly, test and validate models, ensure security, and monitor systems continuously.
- Communication and Engagement: Regular dialogue with staff, suppliers, customers, and regulators helps build trust and transparency.
Building Trust in AI
The Model Framework provides organizations with a roadmap to adopt AI responsibly while protecting privacy. By following these practices, businesses can ensure compliance with the PDPO and earn public trust in AI systems. Chung emphasized that the guidance reflects a collective effort. She said the framework will strengthen AI governance and improve privacy protections in Hong Kong.
Need Help?
If you’re wondering how the PCPD, or any other regulatory body, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.

