Australia to Expand Voluntary AI Safety Standards with Focus on Transparency and Developer Practices

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/09/2025
In News

The Australian government is set to enhance its Voluntary AI Safety Standard (VAISS), aiming to strengthen ethical and safe AI practices. The National Artificial Intelligence Centre (NAIC), which launched VAISS Version 1 (V1) in September 2024, announced plans to release Version 2 (V2) in 2025. The updated framework will incorporate new guidance on labeling and watermarking AI-generated content and refine the initial 10 guardrails established in the first version.  

 

V2 of VAISS will include several significant updates, addressing feedback from industry stakeholders to enhance the utility of the framework for developers, deployers, and policymakers. Key areas of focus include:

 

  • Labelling and Watermarking: The framework will guide developers and deployers in implementing content labelling and watermarking to improve transparency for end users. Consultation sessions will explore technical options, communication strategies, and alignment with global standards.

  

  • Developer Best Practices: The enhanced 10 guardrails will provide detailed guidance for AI model and system developers, emphasizing best practices for ethical development and deployment.

  

  • Procurement Guidance: Updates will include expanded details to assist organizations in sourcing AI technologies responsibly, supplemented by a standalone procurement guide.

 

To ensure the new standards reflect diverse perspectives, NAIC will host up to nine virtual consultation sessions from January 27 to February 14. The sessions will be organized into focused groupings to encourage robust and tailored discussions. Groups expected to participate include Australian AI start-ups, technology companies, AI-enabled product and service providers, government agencies, civil society organizations, not-for-profits, and academia.

 

NAIC has invited individuals and organizations to express their interest in participating, emphasizing the importance of collective input to create a comprehensive and effective framework.

 

The updates build on the success of VAISS V1, which introduced foundational guardrails for AI safety and ethical use. These principles emphasized transparency, accountability, and the prevention of harm. By extending the standard, NAIC aims to address emerging challenges in AI technology, such as the proliferation of deepfakes and synthetic media, and ensure that Australian developers and users are equipped with the tools needed for responsible AI innovation.

 

The VAISS framework is part of Australia’s broader strategy to position itself as a leader in ethical AI governance. By integrating international standards and prioritizing transparency, Australia seeks to support innovation while safeguarding societal interests.


Need Help?

 

If you’re wondering how Australia’s AI policy, or any other government’s bills or regulations, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and concerns and provide valuable assistance.

 

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.