UPDATE — AUGUST 2025: Australia’s plan to introduce mandatory guardrails for AI remains in the policy design phase rather than law. The government closed its consultation period on October 4, 2024, and has since been reviewing feedback to shape its next steps. Ministers have reaffirmed a commitment to a risk-based framework for high-risk uses of AI, with three options still on the table: embedding new obligations in existing laws, passing standalone framework legislation, or enacting a comprehensive AI Act. Of these, a growing number of agencies and stakeholders, including the eSafety Commissioner, have signaled support for framework legislation that sets national principles while allowing sector regulators to tailor implementation.
In the meantime, Australia is relying on voluntary standards, such as the Voluntary AI Safety Standard (VAISS), and sector-specific reviews to guide responsible practices. The debate has intensified in 2025, with business groups and the Productivity Commission cautioning against an EU-style approach and advocating instead for a lighter, gap-analysis model that minimizes regulatory burden. The government has stressed that it will pursue an “Australian approach” to AI guardrails, aiming to balance innovation with accountability, but binding rules for high-risk AI—covering testing, transparency, and accountability—are not expected until framework legislation is introduced, likely in 2026.
Australian Government Wants Responses on Proposals for Mandatory AI Guardrails in High-Risk Settings
The Australian Government is inviting public feedback on a new proposals paper outlining plans to introduce mandatory guardrails for artificial intelligence (AI) in high-risk settings. The initiative responds to growing concern about AI’s societal impact, especially in critical sectors such as healthcare, criminal justice, and public safety. The paper, “Introducing Mandatory Guardrails for AI in High-Risk Settings,” represents a key milestone in the country’s effort to ensure safe and responsible AI deployment while fostering innovation.
Growing Need for Regulation
AI has become deeply embedded in daily life, often without public awareness. Its use now spans essential services and infrastructure, but Australia’s current regulatory system has not kept up with the technology’s rapid evolution. While AI can deliver major benefits for social and economic development, its deployment—particularly in high-risk contexts—requires careful oversight. Governments worldwide are adopting risk-based approaches to AI regulation. Australia’s proposals aim to align with these global efforts while addressing domestic priorities. The plan seeks to build public trust and establish a clear regulatory foundation to ensure AI is used responsibly and safely in areas where the consequences of failure could be severe.
Defining “High-Risk” AI
A central feature of the proposals is determining which AI systems should fall under the mandatory guardrails. The paper suggests a principles-based definition that evaluates both an AI system's intended use and its potential for harm. AI systems used in healthcare to diagnose diseases or in criminal justice to predict reoffending are examples of technologies that could be classified as high-risk. The paper also addresses general-purpose AI (GPAI), which can perform many tasks beyond its original purpose. The government is exploring how GPAI should be treated under the high-risk classification, given that its capabilities and downstream uses are difficult to anticipate. Ensuring that GPAI systems meet appropriate safeguards is essential to preventing large-scale harm.
Proposed Guardrails: Testing, Transparency, and Accountability
The proposals outline three main categories of mandatory guardrails. First, testing would require rigorous assessments of AI systems during both development and deployment to confirm they perform safely and effectively. Second, transparency would obligate AI developers and deployers to disclose relevant details about how their systems work. These disclosures would extend to end users, regulators, and others across the AI supply chain. Finally, accountability would ensure that organizations developing or deploying AI take ownership of associated risks. Companies would need to establish governance structures, identify potential harms, and take corrective actions before AI systems reach the public.
Three Policy Pathways Under Consideration
The paper presents three possible legislative approaches for mandating AI guardrails:
- Adapting existing frameworks: integrating AI obligations into existing laws such as privacy or consumer protection statutes.
- Introducing new framework legislation: creating an umbrella law that allows sector-specific rules for AI.
- Enacting a comprehensive AI Act: a unified, economy-wide law setting clear standards across all sectors.
Each option aims to balance regulatory certainty with flexibility, ensuring Australia can keep pace with future AI developments.
Consultation and Next Steps
The proposals include several discussion questions inviting stakeholders to share evidence and perspectives on definitions, obligations, and implementation mechanisms. Submissions close on October 4, 2024, and responses can be lodged online. Stakeholders are encouraged to participate, as their feedback will help shape Australia’s final approach to AI governance—an approach designed to encourage innovation while managing risk in an increasingly AI-driven world.
Need Help?
You might be wondering how Australia’s AI policy, or any other government’s bills or regulations, could impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and concerns and provide valuable assistance.