ASIC Report Warns of AI Governance Gaps in Australia’s Financial Services Sector

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/18/2024
In News

UPDATE — SEPTEMBER 2025: Since ASIC’s October 2024 report Beware the Gap: Governance Arrangements in the Face of AI Innovation, both the regulator and the Australian government have advanced efforts to close AI governance gaps in the financial services sector.

In March 2025, ASIC began supervisory follow-ups with several of the 23 licensees reviewed in the report, checking whether firms had updated risk management frameworks to explicitly address AI-specific risks such as bias, explainability, and data security. By June, ASIC had clarified that failures in AI governance would be treated as breaches of licensee obligations, effectively warning firms that weak oversight of AI systems could lead to the same penalties as broader compliance failures.

At the federal level, the Department of Industry, Science and Resources closed its consultation on mandatory “guardrails” for high-risk AI in January 2025 and circulated a draft Responsible AI in High-Risk Settings Bill in May. The bill targets transparency, human oversight, and risk assessments in areas like credit and insurance, with government officials indicating an introduction to Parliament by late 2025 and phased obligations starting in 2026.

Other agencies are also moving in parallel. The Office of the Australian Information Commissioner (OAIC) has tied forthcoming Privacy Act reforms directly to AI, with an emphasis on transparency and auditing of automated decision-making. In mid-2025, the government launched an “AI Impact Navigator” tool to help organizations classify and assess AI risk, modeled on international frameworks.

For financial service providers, this means dual compliance pressure: addressing ASIC’s supervisory expectations today while preparing for statutory requirements under the forthcoming AI legislation. Generative AI remains the fastest-growing area—ASIC’s follow-ups found that by early 2025, nearly one-third of reviewed firms had pilot projects involving customer-facing AI, such as chatbots and marketing automation—raising ongoing concerns about bias, accuracy, and consumer trust.

ORIGINAL NEWS POST:

ASIC Report Warns of AI Governance Gaps in Australia’s Financial Services Sector

The Australian Securities and Investments Commission (ASIC) has sounded an alarm over potential gaps in artificial intelligence (AI) governance within the nation’s financial services sector. Its newly released report, “Beware the Gap: Governance Arrangements in the Face of AI Innovation,” highlights critical findings from a review of 23 financial services and credit licensees. The review underscores the accelerating use of AI across the industry and the pressing need for governance frameworks to evolve in step with technological advancements.

ASIC’s review analyzed 624 AI use cases across banking, credit, insurance, and financial advisory services. It found that 57% of AI use cases were less than two years old or still in development, and a striking 92% of generative AI use cases were either deployed in 2023 or still under development. This rapid expansion raises alarms about whether licensees have adequate governance structures to mitigate risks to consumers.

While many organizations demonstrated a cautious approach to integrating AI—augmenting rather than replacing human decision-making—competitive pressures are driving faster adoption of complex AI models. ASIC warns that gaps between governance frameworks and AI deployment could magnify risks of consumer harm, including algorithmic bias, misinformation, and a lack of transparency.

ASIC identified several shortcomings in current governance practices:

  1. Lagging Risk Management: Only half of the licensees had updated their risk management frameworks to address AI-specific risks, such as algorithmic bias or consumer harm from opaque decision-making models.

  2. Transparency and Accountability: Few organizations disclosed their use of AI to consumers or implemented mechanisms for contesting AI-driven decisions. Such practices risk eroding consumer trust, especially when AI decisions impact sensitive areas like credit or insurance approvals.

  3. Reliance on Third Parties: Thirty percent of AI models were developed by third-party providers. In many cases, licensees lacked robust oversight of these external systems, raising concerns about data security, model accuracy, and alignment with Australian regulations.

Generative AI—a subset of AI focused on creating content such as text and images—was highlighted as an area of both promise and peril. While it accounted for only 5% of current use cases, it represented 22% of projects in development. Applications included drafting marketing materials, analyzing customer interactions, and enhancing operational efficiencies. However, ASIC emphasized that generative AI models introduce unique risks, including data privacy violations and the generation of misleading or inaccurate outputs.

To address these challenges, ASIC has issued a set of recommendations urging licensees to strengthen their governance frameworks. Key action points include:

  • Enhancing Human Oversight: Decision-making processes should incorporate meaningful human involvement to monitor and mitigate risks effectively.

  • Proactively Engaging with Regulation: With Australia’s AI regulatory landscape evolving, including proposed mandatory guardrails for high-risk AI, licensees must prepare to meet new compliance standards.

Need Help?

Keeping track of the growing AI regulatory landscape can be difficult, so if you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
