The Australian Securities and Investments Commission (ASIC) has sounded an alarm over potential gaps in artificial intelligence (AI) governance within the nation’s financial services sector. Its newly released report, “Beware the Gap: Governance Arrangements in the Face of AI Innovation,” highlights critical findings from a review of 23 financial services and credit licensees. The review underscores the accelerating use of AI across the industry and the pressing need for governance frameworks to evolve in step with technological advancements.
ASIC’s review analyzed 624 AI use cases across banking, credit, insurance, and financial advisory services. It found that 57% of AI use cases were less than two years old or still in development, and that a striking 92% of generative AI use cases were either deployed in 2023 or still under development. This rapid expansion raises questions about whether licensees’ governance structures are keeping pace and can adequately mitigate risks to consumers.
While many organizations demonstrated a cautious approach to integrating AI—augmenting rather than replacing human decision-making—competitive pressures are driving faster adoption of complex AI models. ASIC warns that gaps between governance frameworks and AI deployment could magnify risks of consumer harm, including algorithmic bias, misinformation, and a lack of transparency.
ASIC identified several shortcomings in current governance practices:
- Lagging Risk Management: Only half of the licensees had updated their risk management frameworks to address AI-specific risks, such as algorithmic bias or consumer harm from opaque decision-making models.
- Transparency and Accountability: Few organizations disclosed their use of AI to consumers or implemented mechanisms for contesting AI-driven decisions. These omissions risk eroding consumer trust, especially when AI decisions affect sensitive areas like credit or insurance approvals.
- Reliance on Third Parties: Thirty percent of AI models were developed by third-party providers. In many cases, licensees lacked robust oversight of these external systems, raising concerns about data security, model accuracy, and alignment with Australian regulations.
Generative AI—a subset of AI focused on creating content such as text and images—was highlighted as an area of both promise and peril. While it accounted for only 5% of current use cases, it represented 22% of projects in development. Applications included drafting marketing materials, analyzing customer interactions, and improving operational efficiency. However, ASIC emphasized that generative AI models introduce unique risks, including data privacy violations and the generation of misleading or inaccurate outputs.
To address these challenges, ASIC has issued a set of recommendations urging licensees to strengthen their governance frameworks. Key action points include:
- Developing AI-Specific Policies: Organizations should align their AI strategies with ethical principles, focusing on fairness, inclusivity, and transparency.
- Enhancing Human Oversight: Decision-making processes should incorporate meaningful human involvement to monitor and mitigate risks effectively.
- Proactively Engaging with Regulation: With Australia’s AI regulatory landscape evolving, including proposed mandatory guardrails for high-risk AI, licensees must prepare to meet new compliance standards.
Need Help?
Keeping track of the growing AI regulatory landscape can be difficult. So if you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you stay informed and compliant.