UPDATE — SEPTEMBER 2025:
Since the Financial Stability Board (FSB) published its November 2024 report on artificial intelligence in financial services, regulators worldwide have advanced its recommendations, with growing attention to systemic risk and cross-border coordination. In July 2025, the FSB presented a progress report to the G20 reaffirming its 2024 warnings on third-party dependencies, market correlations, and cyber risk. It acknowledged that data on AI adoption in finance remains incomplete and signaled that targeted global standards may be considered by 2026.
Other international institutions have echoed these concerns. At the Spring 2025 IMF–World Bank meetings, both bodies flagged AI as an emerging financial stability issue, stressing the danger of fragmented regulatory approaches. Similarly, the International Organization of Securities Commissions (IOSCO) launched a consultation in June 2025 on AI in securities markets, focusing on explainability, model risk, and vendor oversight.
Regionally, implementation steps are underway. In the European Union, the phased rollout of the EU AI Act has been paired with new guidance from the European Banking Authority (EBA) and the European Securities and Markets Authority (ESMA) clarifying how AI governance obligations intersect with prudential supervision. In the United States, the Federal Reserve, FDIC, and OCC issued a joint request for comment in May 2025 on AI-related risks in banking, while the Treasury’s Financial Stability Oversight Council (FSOC) explicitly cited AI as a “cross-cutting risk driver” in its 2025 annual report. The United Kingdom moved forward with a multi-year AI regulatory sandbox under the Bank of England and the Financial Conduct Authority (FCA), designed to test AI models in controlled settings. Meanwhile, the Monetary Authority of Singapore updated its FEAT (Fairness, Ethics, Accountability, Transparency) principles in August 2025 to explicitly account for generative AI in credit and anti-fraud applications.
ORIGINAL NEWS POST:
FSB Warns of AI’s Dual Potential for Finance: Opportunity and Risk
The Financial Stability Board (FSB) has released a detailed report examining how artificial intelligence is reshaping financial services and what that means for global financial stability. The report highlights AI’s growing role across the sector while warning that oversight and data collection have not kept pace.
Financial institutions now deploy AI tools for fraud detection, customer support, compliance monitoring, and risk analysis. Generative AI and large language models have accelerated this trend by lowering costs and expanding capabilities. Regulators themselves are also experimenting with AI to improve supervisory efficiency.
However, the FSB notes that while adoption is rising quickly, comprehensive data on how AI affects systemic risk remains limited.
Key Risks Identified by the FSB
The report warns that AI could amplify existing weaknesses in the financial system if left unchecked.
- Third-Party Dependencies: Increasing reliance on specialized hardware, cloud services, and pre-trained AI models concentrates risks within a few providers. Such dependency exposes financial institutions to operational disruptions.
- Market Correlations: The widespread use of shared AI models and data sources risks increasing market correlations, which could exacerbate liquidity crises and amplify market shocks.
- Cyber Threats: As AI tools become more accessible, malicious actors may exploit these technologies, heightening cybersecurity risks.
- Model Risk and Data Quality: The opaque nature of many AI models, combined with limited explainability, could increase risks for financial institutions that lack robust AI governance frameworks.
The FSB also flags long-term concerns tied to generative AI, including its potential role in large-scale fraud and financial misinformation.
Need for Global Coordination
The FSB stresses that no single regulator can manage these risks alone. AI systems often operate across borders, making coordination essential.
The report calls for closing data gaps to better track AI use in finance. It also urges regulators to assess whether current frameworks adequately cover AI-driven risks. Stronger cross-border cooperation would help align oversight and reduce regulatory fragmentation.
Balancing Innovation and Stability
Despite these concerns, the FSB recognizes AI’s significant upside. Tools such as document analysis, natural language processing, and predictive analytics can improve efficiency and customer experience across financial services.
The report concludes that the challenge lies in balance. Financial institutions, regulators, and technology providers must work together to capture AI’s benefits while limiting systemic harm. Strong governance, transparency, and international collaboration will be essential to ensuring AI supports a stable and resilient financial system.
Need Help?
If you have questions or concerns about global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and ensure you’re informed and compliant.