The Australian Government has published new “Guidance for AI Adoption,” outlining six essential practices designed to ensure responsible, transparent, and human-centred use of artificial intelligence across the public and private sectors. Released by the National Artificial Intelligence Centre (NAIC) on 21 October 2025, the guidance replaces the 2024 Voluntary AI Safety Standard and provides a roadmap for organizations at every stage of AI maturity.
The Guidance for AI Adoption comes in two parts: Foundations, for organizations beginning their AI journey, and Implementation Practices, for technical and governance professionals. Together, the resources provide a unified framework for ethical governance, aligning with Australia’s AI Ethics Principles and global standards such as ISO/IEC 42001 and the U.S. NIST AI Risk Management Framework.
The guidance identifies six practices for responsible AI use: deciding who is accountable, understanding impacts, measuring and managing risks, sharing essential information, testing and monitoring systems, and maintaining human control. It applies equally to AI developers and deployers, with clear expectations for transparency, bias prevention, and human oversight.
A key feature is the emphasis on accountability throughout the AI lifecycle. Organizations are urged to designate responsible officers for AI oversight, ensure staff training, and maintain governance systems that align with legal obligations and ethical expectations. The framework also highlights stakeholder engagement, encouraging organizations to consult affected communities, monitor for harms, and establish accessible redress mechanisms.
Risk management sits at the centre of the new approach. The guidance advises proportionate oversight based on system complexity, data sensitivity, and potential harms. Organizations are encouraged to maintain AI registers, document testing processes, and disclose when AI systems influence decisions affecting individuals. For high-risk applications such as healthcare, finance, or employment, additional transparency and independent testing are recommended.
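The guidance does not prescribe a format for an AI register or a risk-tiering rule; as a minimal sketch only, an entry combining the factors the guidance names (data sensitivity, impact on individuals, high-risk domains such as healthcare, finance, or employment) might look like the following. All field names and tiering thresholds here are illustrative assumptions, not part of the guidance itself.

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One row in a hypothetical organizational AI register."""
    system_name: str
    purpose: str
    accountable_officer: str
    data_sensitivity: str   # e.g. "public", "personal", "sensitive"
    affects_individuals: bool
    domain: str             # e.g. "healthcare", "finance", "employment"

    def risk_tier(self) -> str:
        """Illustrative proportionate-oversight tiering (assumed rules)."""
        high_risk_domains = {"healthcare", "finance", "employment"}
        if self.domain in high_risk_domains or self.data_sensitivity == "sensitive":
            return "high"    # e.g. independent testing, extra transparency
        if self.affects_individuals:
            return "medium"  # e.g. disclose when AI influences decisions
        return "low"

# Example entry for a hypothetical hiring tool.
entry = AIRegisterEntry(
    system_name="resume-screener",
    purpose="shortlist job applicants",
    accountable_officer="Chief Data Officer",
    data_sensitivity="personal",
    affects_individuals=True,
    domain="employment",
)
print(entry.risk_tier())  # prints "high" (employment is a high-risk domain)
```

In practice, an organization would map its own risk criteria and disclosure obligations onto whatever register structure it adopts; the point is simply that oversight scales with the tier.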
The document adopts a distinctly human-centred philosophy—prioritizing safety, fairness, and inclusion. It reinforces Australia’s commitment to international declarations such as the Bletchley Declaration and calls for meaningful human oversight to preserve accountability and trust in automated systems.
The NAIC's Director-General noted that the updated guidance reflects feedback from hundreds of organizations across sectors, including small and medium-sized enterprises seeking more accessible, actionable advice. The Centre plans to expand its suite of resources over the next year, including AI policy templates, risk-screening tools, and register frameworks to help organizations align with emerging regulation.
By embedding these practices into everyday operations, the government aims to make Australia a global leader in responsible, human-centred AI—where innovation advances in step with trust, transparency, and accountability.
Need Help?
If you have questions or concerns about these or any other global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and ensure you're informed and compliant.