UPDATE — SEPTEMBER 2025: Since the National Artificial Intelligence Centre (NAIC) launched VAISS V1 in September 2024, work on Version 2 (V2) has advanced significantly in 2025:
- Consultations Completed (Jan–Feb 2025): Nine stakeholder sessions gathered feedback from startups, civil society, academia, and major tech players. Participants stressed the need for practical watermarking options and global alignment with the EU AI Act and U.S. NIST guidance.
- Draft V2 Circulated (mid-2025): NAIC released a draft framework incorporating technical proposals for watermarking and labeling AI-generated text and images. The draft was supported by research collaboration with CSIRO’s Data61.
- Procurement Toolkit (July 2025): A new standalone guide was issued to help SMEs and public agencies integrate ethical AI considerations into sourcing decisions, modeled on OECD and UK frameworks.
- Integration with National AI Capability Plan (Aug 2025): VAISS V2 was explicitly tied into Australia’s broader AI policy, becoming recommended guidance for government-funded AI projects rather than a purely voluntary standard.
- Final Release Expected (late 2025): NAIC has confirmed the final VAISS V2 will be published before year-end. Some provisions—particularly labeling of AI-generated political or election-related content—are under review for possible mandatory enforcement through electoral and communications law.
Key anticipated additions in V2: stronger content provenance standards, developer guardrails aligned with global AI risk classifications, incident reporting practices, and SME-focused examples for practical adoption.
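Content provenance of the kind anticipated in V2 is commonly implemented by binding a signed, machine-readable manifest to the generated media. The sketch below is illustrative only and does not reflect VAISS's actual mechanism: the manifest fields, key handling, and use of a shared HMAC secret are assumptions for demonstration (production schemes such as C2PA use asymmetric signatures and certificate chains).

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance
# scheme would use asymmetric keys, not a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record binding metadata to content."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g. the model or tool that produced it
        "ai_generated": True,     # machine-readable disclosure label
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the content hash and the signature over the claim."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (claim["content_sha256"] == hashlib.sha256(content).hexdigest()
            and hmac.compare_digest(expected, manifest["signature"]))
```

Verification fails if either the content bytes or any manifest field is altered after signing, which is the core guarantee a provenance standard aims to provide.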
ORIGINAL NEWS STORY:
Australia to Expand Voluntary AI Safety Standards with Focus on Transparency and Developer Practices
The Australian government is set to enhance its Voluntary AI Safety Standard (VAISS), aiming to strengthen ethical and safe AI practices. The National Artificial Intelligence Centre (NAIC), which launched VAISS Version 1 (V1) in September 2024, announced plans to release Version 2 (V2) in 2025. The updated framework will incorporate new guidance on labeling and watermarking AI-generated content and refine the initial 10 guardrails established in the first version.
V2 of VAISS will include several significant updates, addressing feedback from industry stakeholders to enhance the utility of the framework for developers, deployers, and policymakers. Key areas of focus include:
- Labeling and Watermarking: Developers and deployers will be expected to implement content labeling and watermarking to improve transparency for end users. Consultations will explore technical options, communication strategies, and alignment with global standards.
- Developer Best Practices: The enhanced 10 guardrails will provide detailed guidance for AI model and system developers, emphasizing best practices for ethical development and deployment.
- Procurement Guidance: Updates will include expanded details to assist organizations in sourcing AI technologies responsibly, supplemented by a standalone procurement guide.
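To make the labeling and watermarking concept concrete, here is a deliberately simple sketch of an invisible text watermark using zero-width Unicode characters. This is not a technique endorsed by VAISS and is fragile (stripped by many text pipelines); production proposals under discussion, such as statistical token-level watermarks, are far more robust. All function names here are hypothetical.

```python
# Zero-width characters encode bits invisibly: ZWNJ = 0, ZWJ = 1.
ZWNJ, ZWJ = "\u200c", "\u200d"

def embed_label(text: str, label: str = "AI") -> str:
    """Append an invisible bit-encoded disclosure label to the text."""
    bits = "".join(f"{ord(c):08b}" for c in label)
    payload = "".join(ZWJ if b == "1" else ZWNJ for b in bits)
    return text + payload

def extract_label(text: str) -> str:
    """Recover the hidden label by decoding zero-width characters."""
    bits = "".join("1" if c == ZWJ else "0"
                   for c in text if c in (ZWNJ, ZWJ))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))
```

The watermarked string renders identically to the original, illustrating why standards bodies also want visible labels and signed provenance rather than invisible marks alone.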
To ensure the new standards reflect diverse perspectives, NAIC will host up to nine virtual consultation sessions from January 27 to February 14, 2025. The sessions will be organized into focused groupings to encourage robust and tailored discussions. Groups expected to participate include Australian AI start-ups, technology companies, AI-enabled product and service providers, government agencies, civil society organizations, not-for-profits, and academia.
NAIC has invited individuals and organizations to express their interest in participating, emphasizing the importance of collective input to create a comprehensive and effective framework.
The updates build on the success of VAISS V1, which introduced foundational guardrails for AI safety and ethical use. These principles emphasized transparency, accountability, and the prevention of harm. By extending the standard, NAIC aims to address emerging challenges in AI technology, such as the proliferation of deepfakes and synthetic media, and ensure that Australian developers and users are equipped with the tools needed for responsible AI innovation.
The VAISS framework is part of Australia’s broader strategy to position itself as a leader in ethical AI governance. By integrating international standards and prioritizing transparency, Australia seeks to support innovation while safeguarding societal interests.
Need Help?
If you’re wondering how Australia’s AI policy, or any other government’s bills or regulations, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and concerns and provide valuable assistance.