Australian Government Wants Responses on Proposals for Mandatory AI Guardrails in High-Risk Settings

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/25/2024
In News

The Australian Government is seeking comments and views on a significant proposals paper outlining a plan to introduce mandatory guardrails for the development and deployment of Artificial Intelligence (AI) in high-risk settings. The proposed regulations respond to growing concerns about AI’s impact on society, particularly in sectors where the risks are most pronounced, such as healthcare, criminal justice, and public safety. The paper, “Introducing Mandatory Guardrails for AI in High-Risk Settings,” marks a key step in the government’s effort to ensure that AI technologies are used safely and responsibly while promoting innovation.


The increasing integration of AI into daily life, often without public knowledge or awareness, has accelerated its use in various institutions, services, and infrastructure. However, the Australian Government’s consultations revealed that the existing regulatory framework is ill-equipped to handle the unique risks posed by AI. While AI offers immense potential to improve both social and economic outcomes, its use—especially in high-risk environments—requires careful oversight.


Internationally, governments are adopting risk-based approaches to AI regulation, placing preventative guardrails to mitigate potential harms. Australia’s new proposals aim to align with global efforts, while also addressing the country’s unique context. The proposals call for establishing a regulatory environment that builds public trust and ensures AI is developed and deployed safely, especially in high-risk settings where the consequences of failure could be severe.


A key element of the proposal is defining “high-risk” AI. The paper suggests a principles-based approach to determine which AI systems should fall under the mandatory guardrails. High-risk AI would be defined based on its intended use and the potential for harm to individuals, communities, or society. For instance, AI used in healthcare to diagnose diseases or in the criminal justice system to predict recidivism could be considered high-risk due to the potential for significant adverse effects if the AI systems are flawed or biased.


The paper also discusses general-purpose AI (GPAI), which is capable of performing a wide range of tasks beyond its initial design. The government is considering how to classify GPAI under high-risk settings, given its versatility and the unpredictable nature of its potential applications. Ensuring that GPAI is subject to appropriate guardrails is crucial, as its misuse or malfunction could result in widespread harm.


The Australian Government’s proposals focus on creating clear, mandatory guardrails that address risks while promoting responsible AI innovation. These guardrails include provisions for testing, transparency, and accountability throughout the AI lifecycle.


Testing is a key component, requiring AI systems to undergo rigorous assessments during both their development and deployment phases. This ensures that the systems perform as intended and meet safety standards. Transparency is also emphasized, with a requirement for developers and deployers of AI to disclose information about how their systems work, including providing details to end users, relevant authorities, and other actors in the AI supply chain.


Accountability is another crucial guardrail, requiring that those responsible for developing and deploying AI systems take ownership of the risks involved. This means implementing strong governance frameworks and ensuring that potential harms are identified and mitigated before the AI system is released to the public.


The proposals paper outlines three potential approaches to mandate the guardrails: adapting existing regulatory frameworks, introducing new framework legislation, or enacting a comprehensive Australian AI Act. The aim is to create a regulatory structure that can evolve alongside AI advancements while providing businesses with the certainty they need to innovate responsibly.


Option one involves incorporating AI-specific guardrails into existing regulatory systems, such as privacy or consumer protection laws. Option two would introduce new framework legislation, which could serve as an umbrella under which individual sectors develop AI-specific regulations. The third option, a whole-of-economy AI Act, would provide a broad legislative foundation for all AI-related activities across Australia, ensuring that AI systems comply with clear, standardized rules.


The proposals paper includes a series of discussion questions inviting stakeholders to weigh in on key aspects of the proposed guardrails, including the definition of high-risk AI and the regulatory mechanisms for mandating them. Submissions can be made online, and stakeholders are encouraged to provide evidence or supporting documents to inform the government’s final decision. The consultation closes on October 4.


Need Help?


If you’re wondering how Australia’s AI policy, or any other government’s bills or regulations, could impact you, don’t hesitate to reach out to BABL AI. Their audit experts are ready to answer your questions and provide valuable assistance.

