Australian Government Introduces New Policy for Responsible AI Use Across Public Service

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/26/2024
In News

The Australian Government has introduced a new policy to guide the responsible use of artificial intelligence (AI) across its public service agencies. This policy, which takes effect on September 1, 2024, aims to set a high standard for AI use within government while ensuring that the technology is deployed in a safe, ethical, and transparent manner. The policy is mandatory for all non-corporate Commonwealth entities (NCEs) and encourages corporate Commonwealth entities to voluntarily adopt its principles.


The policy applies to a broad range of government entities, with significant carveouts for national security. It does not apply to AI use within the Defence portfolio or the National Intelligence Community (NIC), which includes agencies such as the Office of National Intelligence (ONI), the Australian Security Intelligence Organisation (ASIO), and the Australian Signals Directorate (ASD). However, these entities may voluntarily adopt elements of the policy where feasible without compromising national security interests.


This strategic carveout keeps critical national security functions uncompromised while giving defence and intelligence agencies the flexibility to implement best practices aligned with the broader government AI framework.


The policy aims to create a coordinated and consistent approach across the Australian Public Service (APS). By establishing baseline requirements for governance, assurance, and transparency, the policy seeks to eliminate barriers to AI adoption while promoting responsible use. The Australian Government recognizes the transformative potential of AI but acknowledges that public trust is essential for its widespread adoption. Recent consultations revealed that the public remains wary of how AI is used by the government, particularly regarding data privacy, transparency, and the implications of AI-assisted decision-making.


To address these concerns, the policy mandates that government agencies provide clear, public-facing AI transparency statements within six months of the policy’s implementation. These statements must outline how AI is used, the measures in place to monitor its effectiveness, and the steps taken to mitigate any negative impacts. The statements will be reviewed annually, ensuring that they remain up-to-date and relevant as AI technologies and applications evolve.


The responsible AI policy is designed to work in tandem with existing frameworks and legislation. It emphasizes that agencies should not view the policy in isolation but rather integrate it with other regulations concerning data governance, cybersecurity, privacy, and ethical practices. For example, the policy aligns with the APS Code of Conduct, various data governance standards, and cybersecurity guidelines, ensuring a holistic approach to AI governance.


Moreover, the policy is flexible, allowing for adaptation as the regulatory environment and AI technologies continue to evolve. This adaptability is crucial given the rapid advancements in AI and the associated challenges that arise from its deployment at scale.


A key feature of the policy is its emphasis on accountability and capacity building. Agencies are required to designate accountable officials to oversee AI implementation within 90 days of the policy’s commencement. These officials will be responsible for coordinating AI-related activities, participating in government-wide AI forums, and ensuring compliance with evolving requirements. Additionally, the policy strongly recommends that agencies provide AI training for all staff, with specialized training for those involved in AI procurement, development, and deployment.


To further support responsible AI use, agencies are encouraged to participate in the pilot phase of the Australian Government’s AI assurance framework. This initiative will help refine best practices and provide insights that can be shared across government entities.


While the policy is comprehensive, the exclusion of national security agencies ensures that these entities retain the flexibility needed to safeguard Australia's critical interests. Defence and NIC members can adopt relevant elements of the policy without compromising their operational capabilities. This approach balances the need for robust AI governance with the unique requirements of national security.


Need Help?


If you’re wondering how Australia’s AI policy, or any other government’s bills or regulations, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and address your concerns.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter.