MAS Unveils Draft AI Risk Management Guidelines for Financial Institutions

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/01/2025
In News

The Monetary Authority of Singapore (MAS) has released a sweeping consultation paper outlining proposed Guidelines on Artificial Intelligence Risk Management, seeking public feedback as the city-state moves to strengthen oversight of AI deployment across the financial sector. The consultation runs from 13 November 2025 to 31 January 2026. 

The draft guidelines set out MAS’ supervisory expectations for how banks, insurers, capital markets firms and other financial institutions should govern, test, monitor and manage risks arising from artificial intelligence, including generative AI and autonomous AI agents. According to the document, MAS aims to ensure that AI systems used in financial services remain safe, fair, transparent and accountable, and that institutions put in place “robust frameworks, policies, procedures, and controls” throughout the entire AI lifecycle. 

The proposed rules would apply to all financial institutions operating in Singapore, though MAS emphasizes a proportionate approach based on each firm’s size, business model, and level of AI integration. All institutions would be required to maintain clear AI usage policies, designate responsible oversight personnel, and identify and inventory all AI systems used in their operations. Higher-risk AI applications — such as those involved in credit decisions, risk management, or customer-facing advice — would be subject to stricter controls, including independent validation, stress testing, transparency obligations and enhanced human oversight. 

The draft also highlights emerging risks posed by generative AI, including hallucinations, data leakage, copyright and privacy concerns, and vulnerabilities to adversarial attacks. MAS notes that increasingly autonomous AI agents could introduce additional operational and security risks, requiring new safeguards. 

MAS proposes a 12-month transition period once the guidelines are finalized, acknowledging varying levels of AI maturity across the sector. The regulator is seeking industry views on risk assessment methods, governance structures — including whether high-exposure firms should form dedicated cross-functional AI committees — and appropriate lifecycle controls. 

Submissions will be published unless confidentiality is requested.

Need Help?

If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.