Australian Government Publishes Interim Response on AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/02/2024
In News

UPDATE – FEBRUARY 2026:

Australia’s AI strategy has shifted from exploring mandatory guardrails toward a more innovation-focused national roadmap. In December 2025, the government released its National AI Plan, prioritizing economic growth, infrastructure investment, and AI adoption across industry rather than introducing comprehensive AI-specific legislation. The plan reinforces Australia’s reliance on existing, technology-neutral laws—such as privacy, consumer protection, and sectoral regulations—to address AI-related risks, instead of creating a standalone AI statute.

A major structural development is the establishment of the Australian AI Safety Institute (AISI), backed by approximately AUD 29.9 million in funding. The Institute, launching in early 2026, is tasked with monitoring systemic AI risks, testing advanced models, supporting government capability, and coordinating international engagement on AI safety. This marks a shift toward institutional oversight and risk evaluation rather than immediate legislative mandates.

While earlier consultation papers proposed mandatory guardrails for high-risk AI uses, the current approach emphasizes voluntary standards, capability building, and targeted regulatory adjustments. Privacy law reforms that account for AI-driven harms remain under consideration but have not yet been finalized.

As of early 2026, Australia continues to pursue a risk-based, non-legislative model centered on economic competitiveness, international coordination, and practical AI governance—without enacting a comprehensive AI law.


ORIGINAL NEWS POST:

Australia Releases Interim Response on Safe and Responsible AI

With so much attention on the EU AI Act and developments in the United States, another jurisdiction's approach to safe and responsible AI deserves notice. The Australian government released its interim response on January 17, following public consultations on its discussion paper held between June and August 2023. Submissions highlighted the opportunities of AI to improve wellbeing and grow the economy, but also raised concerns about potential harms. Views differed on the appropriate policy response.

Key Themes in Submissions

Opportunities: AI can create jobs, benefit consumers, transform healthcare, and support the transition to net zero emissions. It also has the potential to improve education through personalized learning.

Risks: Technical limitations can lead to inaccurate or biased outcomes. A lack of transparency makes it difficult to predict errors, ensure accountability, and explain outcomes. Interactions with existing harms or laws may create new risks like online abuse or discrimination. Powerful new AI models could lead to large-scale, rapid harms. The pace of change also risks unforeseen consequences.

Policy response: Most submissions called for government action to prevent and respond to harms, especially irreversible ones. At the same time, many applications are low risk and shouldn't face burdensome rules. Suggested approaches include strengthening existing laws or introducing new legislation focused on testing, transparency, and accountability for high-risk AI. Non-regulatory initiatives like standards and capability building are also seen as important.

The government’s interim response accepts that current laws likely don’t adequately address AI’s risks. It commits to considering mandatory safety requirements for legitimate but high-risk AI uses where harms are irreversible, whether through existing laws or new legislation developed transparently with industry. Immediate steps include an AI safety standard, an expert advisory group, and an assessment of labeling requirements for AI-generated content.

What Happens Next?

  • Testing, transparency, and accountability measures for high-risk AI
  • Clarifying privacy and related legal frameworks
  • International coordination, including participation in the Bletchley Declaration
  • Investments in capability and AI adoption

The response aims to ensure high-risk deployment of AI is safe and reliable while enabling low-risk innovation to flourish. It adopts guiding principles such as taking a risk-based and community-focused approach. Beyond the response itself, the Australian Government is monitoring regulatory responses to AI around the world, including the EU AI Act.

Need Help?

If you’re wondering how Australia’s interim response, or other AI laws around the world, could impact your organization, reach out to BABL AI. Their Audit Experts are ready to answer your questions and provide valuable assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.