UPDATE — SEPTEMBER 2025:
Since ASIC’s October 2024 report Beware the Gap: Governance Arrangements in the Face of AI Innovation, both the regulator and the Australian government have advanced efforts to close AI governance gaps in the financial services sector.
In March 2025, ASIC began supervisory follow-ups with several of the 23 licensees reviewed in the report, checking whether firms had updated their risk management frameworks to explicitly address AI-specific risks such as algorithmic bias, lack of explainability, and data security. By June 2025, ASIC had clarified that failures in AI governance would be treated as breaches of licensee obligations, effectively warning firms that weak oversight of AI systems could attract the same penalties as other compliance failures.
At the federal level, the Department of Industry, Science and Resources closed its consultation on mandatory “guardrails” for high-risk AI in January 2025 and circulated a draft Responsible AI in High-Risk Settings Bill in May 2025. The bill targets transparency, human oversight, and risk assessments in areas such as credit and insurance, with government officials indicating the bill would be introduced to Parliament by late 2025 and phased obligations would begin in 2026.
Other agencies are also moving in parallel. The Office of the Australian Information Commissioner (OAIC) has tied forthcoming Privacy Act reforms directly to AI, with an emphasis on transparency and auditing of automated decision-making. In mid-2025, the government launched an “AI Impact Navigator” tool to help organizations classify and assess AI risk, modeled on international frameworks.
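The post does not detail the Navigator’s methodology, but risk-classification tools of this kind typically tier a use case by a handful of impact factors. The sketch below is a minimal, hypothetical illustration in Python; the factors, thresholds, and tier names are assumptions for demonstration, not the Navigator’s actual logic.

```python
from dataclasses import dataclass

# Hypothetical risk factors; the AI Impact Navigator's real inputs
# and scoring are not described in this post.
@dataclass
class AIUseCase:
    name: str
    affects_consumers: bool    # e.g. credit scoring, insurance pricing
    fully_automated: bool      # no human review before the decision takes effect
    uses_sensitive_data: bool  # health, financial, or biometric data

def risk_tier(uc: AIUseCase) -> str:
    """Assign an illustrative risk tier from three yes/no factors."""
    score = sum([uc.affects_consumers, uc.fully_automated, uc.uses_sensitive_data])
    if score >= 2:
        return "high"    # the tier that would attract the proposed mandatory guardrails
    if score == 1:
        return "medium"
    return "low"

print(risk_tier(AIUseCase("credit-limit model", True, True, True)))      # high
print(risk_tier(AIUseCase("internal doc search", False, False, False)))  # low
```

In practice a firm would map tiers like these onto its own control catalogue, with the "high" tier triggering the heaviest obligations under any forthcoming legislation.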
For financial services providers, this means dual compliance pressure: addressing ASIC’s supervisory expectations today while preparing for statutory requirements under the forthcoming AI legislation. Generative AI remains the fastest-growing area. ASIC’s follow-ups found that by early 2025 nearly one-third of reviewed firms were piloting customer-facing AI, such as chatbots and marketing automation, raising ongoing concerns about bias, accuracy, and consumer trust.
ORIGINAL NEWS POST:
ASIC Report Warns of AI Governance Gaps in Australia’s Financial Services Sector
The Australian Securities and Investments Commission (ASIC) has issued a warning about major gaps in artificial intelligence governance across the country’s financial services sector. In its new report, Beware the Gap: Governance Arrangements in the Face of AI Innovation, ASIC outlines concerns after reviewing 23 financial services and credit licensees. The review shows rapid AI growth and highlights the need for stronger oversight as the technology spreads through the industry.
Rapid AI Expansion Raises Oversight Concerns
ASIC analyzed 624 AI use cases across banking, credit, insurance, and financial advisory services. The report notes that 57% of these use cases were less than two years old or still under development. More strikingly, 92% of generative AI projects had either launched in 2023 or remained in development. This pace of growth heightens risks for consumers and raises questions about whether firms can govern AI responsibly.
Although many organizations still use AI to support rather than replace human judgment, competitive pressure is pushing firms toward faster adoption of complex systems. ASIC warns that weak governance during this growth phase could increase risks such as bias, misinformation, and a lack of transparency.
Key Shortcomings in AI Governance
ASIC identified several areas where firms need improvement:
- Lagging Risk Management: Only half of the reviewed licensees had updated their risk management frameworks to cover AI-specific risks. These gaps include limited safeguards against algorithmic bias and unclear processes for handling consumer harm from opaque models.
- Transparency and Accountability: Few organizations disclose their use of AI to customers. Even fewer offer ways to contest AI-generated decisions. ASIC warns that this lack of transparency may erode consumer trust, particularly in sensitive areas like credit and insurance.
- Reliance on Third Parties: Thirty percent of identified AI models were built by outside vendors, and in many cases licensees lacked strong oversight of these systems. This raises concerns about data security, model accuracy, and compliance with Australian law. A sketch of a simple model register that tracks these oversight fields follows this list.
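One practical way to operationalize these expectations is an internal AI model register that records, for every model, the governance fields ASIC’s review found missing. The Python sketch below is a minimal illustration; the field names, review rules, and example entries are assumptions about what such a register could track, not a prescribed ASIC format.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a hypothetical AI model register."""
    name: str
    vendor: str | None = None         # None for in-house models
    bias_tested: bool = False         # algorithmic-bias testing completed
    explainable: bool = False         # decisions can be explained to consumers
    disclosed_to_customers: bool = False
    contestable: bool = False         # customers can challenge outcomes
    vendor_reviewed: bool = False     # third-party due diligence done

def governance_gaps(m: ModelRecord) -> list[str]:
    """Flag, for one model, the kinds of gaps ASIC's review highlighted."""
    gaps = []
    if not m.bias_tested:
        gaps.append("no bias testing")
    if not m.explainable:
        gaps.append("opaque model; handling of consumer harm unclear")
    if not (m.disclosed_to_customers and m.contestable):
        gaps.append("AI use not disclosed or not contestable")
    if m.vendor and not m.vendor_reviewed:
        gaps.append("third-party model without vendor oversight")
    return gaps

register = [
    ModelRecord("credit-decision model", vendor="ExampleVendor", bias_tested=True),
    ModelRecord("claims triage model", bias_tested=True, explainable=True,
                disclosed_to_customers=True, contestable=True),
]
for m in register:
    print(m.name, "->", governance_gaps(m) or "no open gaps")
```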
Generative AI: Fast-Growing and High-Risk
Generative AI accounted for only 5% of current use cases but represented 22% of projects in development. Firms are exploring it for marketing, customer interaction analysis, and operational efficiency. However, ASIC cautions that generative models come with unique risks. These include privacy issues, the creation of misleading content, and unreliable or inaccurate outputs.
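Firms commonly mitigate output risks of this kind with simple pre-publication checks on generated content. The snippet below is a hypothetical sketch of such a gate for AI-generated marketing copy; the banned phrases and required disclaimer are invented placeholders, not ASIC requirements.

```python
import re

# Hypothetical compliance rules for generated marketing copy;
# the phrases and disclaimer below are invented examples.
BANNED_PATTERNS = [r"\bguaranteed returns?\b", r"\brisk[- ]free\b"]
REQUIRED_DISCLAIMER = "Consider the PDS before making a decision."

def review_generated_copy(text: str) -> list[str]:
    """Return a list of issues; an empty list means the draft passes this check."""
    issues = [f"banned phrase: {p}" for p in BANNED_PATTERNS
              if re.search(p, text, re.IGNORECASE)]
    if REQUIRED_DISCLAIMER not in text:
        issues.append("missing required disclaimer")
    return issues

draft = "Enjoy guaranteed returns with our new fund!"
print(review_generated_copy(draft))
# ['banned phrase: \\bguaranteed returns?\\b', 'missing required disclaimer']
```

A gate like this catches only the mechanical failures; accuracy and privacy risks still require human review of the generated output.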
ASIC’s Recommendations for Stronger Governance
To address these issues, ASIC urges firms to take several steps:
- Develop AI-Specific Policies: Organizations should align their AI strategies with ethical principles, focusing on fairness, inclusivity, and transparency.
- Enhance Human Oversight: Decision-making processes should incorporate meaningful human involvement to monitor and mitigate risks effectively (see the review-gate sketch after this list).
- Engage Proactively with Regulation: With Australia’s AI regulatory landscape evolving, including proposed mandatory guardrails for high-risk AI, licensees must prepare to meet new compliance standards.
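As flagged in the human-oversight item above, one common pattern is a review gate that routes uncertain or adverse AI decisions to a person rather than applying them automatically. The sketch below is a minimal illustration in Python; the confidence threshold and escalation rule are assumptions, not ASIC guidance.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    applicant_id: str
    outcome: str       # e.g. "approve" / "decline"
    confidence: float  # model's confidence in [0, 1]
    adverse: bool      # True if the outcome harms the consumer

# Assumed policy: auto-apply only confident, non-adverse outcomes.
CONFIDENCE_THRESHOLD = 0.9

def route(decision: AIDecision) -> str:
    """Send risky or uncertain decisions to a human reviewer."""
    if decision.adverse or decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto_apply"

print(route(AIDecision("A-1001", "decline", 0.97, adverse=True)))   # human_review
print(route(AIDecision("A-1002", "approve", 0.95, adverse=False)))  # auto_apply
```

The design choice here is that adverse outcomes always escalate regardless of confidence, which keeps a human accountable for every decision that could harm a consumer.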
Need Help?
Keeping track of the growing AI regulatory landscape can be difficult. If you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you stay informed and compliant.