Australia Unveils National AI Assurance Framework to Foster Trust and Safety
In a groundbreaking move to ensure the safe and ethical use of artificial intelligence (AI) across all levels of government, the Australian, state, and territory governments have collectively released the National Framework for the Assurance of Artificial Intelligence in Government. This framework was agreed upon during the Data and Digital Ministers Meeting (DDMM) and marks a significant step in aligning AI practices nationwide.
AI technologies have been in use for decades, but general-use capabilities such as generative AI have recently surged and become integral to everyday tools. Recognizing the importance of public confidence and trust in AI, governments across Australia have collaborated for nearly a year to develop this unified framework.
In June 2023, Data and Digital Ministers committed to a nationally consistent approach to AI’s safe and ethical use by government entities. A cross-jurisdictional working group, co-chaired by the Commonwealth and New South Wales (NSW) Governments, spearheaded the effort to harmonize AI assurance practices while respecting the unique circumstances and structures of each jurisdiction. The NSW Artificial Intelligence Assurance Framework, one of the world’s pioneering AI frameworks, served as a baseline for this national initiative.
Collaboration and Commitment
Lucy Poole, co-chair of the working group and general manager of the Digital Transformation Agency’s Strategy, Planning, and Performance division, praised the joint effort. She said the framework reflects collaboration across all levels of government and will continue to evolve as lessons are shared. She also emphasized that the framework creates consistency in AI assurance across jurisdictions. According to Poole, this benefits both the public and businesses by providing clear and reliable standards.
Framework Practices and Principles
The framework sets out practices aligned with Australia’s AI Ethics Principles. These include:
- Maintaining reliable data and information assets.
- Ensuring compliance with anti-discrimination laws.
- Applying ethical principles in practical ways.
Moreover, the framework highlights five assurance cornerstones: governance, data governance, standards, procurement, and a risk-based approach. These elements already exist in many government structures but are now identified as central enablers for trustworthy AI.
Case Studies and Practical Examples
The framework also provides case studies to show how governments already manage AI responsibly. For example, projects on recordkeeping demonstrate how to handle sensitive data. Other examples highlight efforts to ensure transparency and explainability. These real-world insights give organizations concrete guidance on applying the framework in practice.
Implementation Across Jurisdictions
Each government will create its own assurance processes tailored to its structure and responsibilities. However, all will align with the national framework and Australia’s AI Ethics Principles. This approach ensures that AI adoption remains ethical and consistent nationwide. The Australian Government, through the Digital Transformation Agency (DTA), will develop and test its own AI assurance framework. The DTA serves as the main advisor on digital strategy, standards, and investments.
Need Help?
If you’re wondering how Australia’s framework, or any other government’s AI regulations or bills around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and answer your questions and concerns.