Hong Kong’s Securities and Futures Commission Issues Circular on Generative AI Use in Finance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/19/2024
In News

The Securities and Futures Commission (SFC) of Hong Kong has released a detailed circular outlining expectations for licensed corporations (LCs) that use generative artificial intelligence (AI). Published on November 12, 2024, the circular recognizes the rapid adoption of generative AI across financial services while stressing the need for strong governance to manage emerging risks.

Rising Use of Generative AI in Financial Operations

Generative AI tools—such as chatbots, coding assistants, and data-summarization engines—are increasingly embedded in daily financial operations. Firms are using these models to support client interaction, analyze investment signals, and accelerate software development. Although these tools can improve efficiency, they also bring notable risks.

Key Risks Identified by the SFC

The regulator highlighted several areas of concern:

1. Hallucination Risks: Generative AI models may produce responses that sound plausible but are factually incorrect.

2. Bias and Inconsistency: The data used to train these models may introduce biases, leading to potentially discriminatory outputs.

3. Operational Challenges: Firms relying on external AI providers face risks related to data privacy, cybersecurity, and system availability.

Four Governance Principles for AI Adoption

To address these challenges, the SFC outlined four core expectations for licensed corporations.

1. Senior Management Oversight: Senior executives must ensure effective policies, procedures, and internal controls are in place throughout the AI lifecycle, from model development to ongoing monitoring.

2. AI Model Risk Management: LCs are expected to segregate roles related to AI development and validation, conduct comprehensive testing, and maintain detailed documentation of performance assessments.

3. Cybersecurity and Data Protection: Firms must implement robust cybersecurity measures to guard against adversarial attacks and data breaches. This includes encrypting sensitive data and regularly testing AI systems for vulnerabilities.

4. Third-Party Risk Management: LCs using third-party AI models must conduct due diligence to ensure these providers meet high standards of governance and operational resilience.

High-Risk AI Applications Require Extra Controls

The SFC placed particular attention on AI systems used for investment advice or recommendations. In these cases, firms must validate outputs rigorously and apply meaningful human oversight. They also need to disclose the limitations of AI-generated responses so clients understand how the technology works and where it may fall short.

Ongoing Communication With Regulators

Recognizing the rapid evolution of AI, the SFC encouraged early engagement from firms planning to adopt significant new AI capabilities. LCs must notify the regulator when they introduce or materially change high-risk AI use cases. They are also expected to ensure compliance with all existing regulatory requirements throughout the process.

Immediate Effect and Transitional Expectations

The circular took effect immediately. Although firms are urged to review and adjust their governance frameworks as soon as possible, the SFC signaled a pragmatic approach and acknowledged that some organizations may need time to fully implement the required controls.

Need Help?

If you have questions or concerns about Hong Kong's AI guidelines, or any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.
