Australia will establish a new national body dedicated to monitoring and mitigating risks from artificial intelligence: the Australian AI Safety Institute (AISI). The institute, unveiled during National AI Week, is slated to become operational in early 2026 and will serve as a central hub for evaluating advanced AI systems, advising government agencies, and coordinating national and international safety efforts.
According to the Albanese Government, the AISI will provide “trusted, expert capability” to test emerging AI technologies, identify harms, and support regulators as the technology rapidly evolves. Its mandate includes monitoring technical developments, assessing potential societal impacts, and sharing insights across government to enable timely action. The institute will also offer guidance to industry, civil society and the public through established channels such as the National AI Centre.
In a joint release, Industry and Innovation Minister Tim Ayres said Australia stands to benefit enormously from AI adoption but must ensure safeguards keep pace. “Adopted properly and safely, AI can revitalise industry, boost productivity and lift the living standards of all Australians,” Ayres said. “But… we need to make sure we are keeping Australians safe from any malign uses of AI.”
The AISI will complement existing laws covering consumer protection, online safety and competition, while helping government determine where updates to legislation may be necessary. The institute will also evaluate whether AI companies are complying with Australian legal standards on fairness and transparency.
Assistant Minister Andrew Charlton emphasised that the AISI will strengthen Australia’s preparedness. “The Institute will help identify future risks, enabling the government to respond to ensure fit-for-purpose protections for Australians,” he said.
Australia will integrate the AISI into the International Network of AI Safety Institutes, joining global partners working to develop shared testing methods and safety standards. The institute also aligns with the government’s broader agenda to restrict harmful AI tools, including deepfake pornography, nudify apps, and undetectable stalking technologies.
The new body forms a key pillar of Australia’s National AI Plan, expected before the end of 2025, and is designed to bolster public trust as AI becomes embedded across industries and public services.
Need Help?
If you have questions about navigating the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you stay informed and compliant.