Hong Kong’s Securities and Futures Commission Issues Circular on Generative AI Use in Finance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/19/2024
In News

The Securities and Futures Commission (SFC) of Hong Kong has issued a detailed circular outlining expectations for licensed corporations (LCs) regarding the use of generative artificial intelligence (AI) language models. Released on November 12, 2024, the circular emphasizes the growing adoption of generative AI in financial services while stressing the need for stringent governance to manage risks effectively.

Generative AI tools, such as chatbots and data summarization engines, are increasingly being integrated into financial operations. These models are being used for client interaction, investment signal analysis, and software development. While their utility is evident, the SFC highlighted several risks, including:

  1. Hallucination Risks: Generative AI models may produce responses that sound plausible but are factually incorrect.

  2. Bias and Inconsistency: The data used to train these models may introduce biases, leading to potentially discriminatory outputs.

  3. Operational Challenges: Firms relying on external AI providers face risks related to data privacy, cybersecurity, and system availability.

To address these challenges, the SFC outlined four core principles:

  1. Senior Management Oversight: Senior executives must ensure effective policies, procedures, and internal controls are in place throughout the AI lifecycle, from model development to ongoing monitoring.

  2. AI Model Risk Management: LCs are expected to segregate roles related to AI development and validation, conduct comprehensive testing, and maintain detailed documentation of performance assessments.

  3. Cybersecurity and Data Protection: Firms must implement robust cybersecurity measures to guard against adversarial attacks and data breaches. This includes encrypting sensitive data and regularly testing AI systems for vulnerabilities.

  4. Third-Party Risk Management: LCs using third-party AI models must conduct due diligence to ensure these providers meet high standards of governance and operational resilience.

The SFC placed special emphasis on high-risk AI applications, such as providing investment advice or recommendations. Firms using AI in these contexts are required to validate model outputs rigorously, implement human oversight to review AI-generated content, and disclose to clients the limitations of AI responses.

Acknowledging the fast-evolving AI landscape, the SFC encouraged LCs to engage with the regulator early in their AI adoption process. Firms must notify the SFC of significant changes in their operations related to high-risk AI use cases and ensure compliance with all relevant regulations.

The circular took effect immediately, with the SFC urging firms to review their existing frameworks to align with the outlined principles. However, the SFC noted that some firms might need additional time to implement the necessary changes and promised a pragmatic approach in assessing compliance.

Need Help?

If you have questions or concerns about Hong Kong's AI proposals and guidelines, or any other global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.
