Australia has released new national guidance to help businesses identify and disclose AI-generated content, marking one of the country’s most detailed efforts yet to boost transparency in an era of rapidly advancing synthetic media.
The guide, titled “Being clear about AI-generated content: A guide for business,” was published by the National AI Centre (NAIC) and the Department of Industry, Science and Resources in November 2025. It outlines voluntary best-practice measures for organisations that create, modify or deploy AI-generated text, images, audio, and video.
The guidance warns that as AI systems become more capable and widespread, it is increasingly difficult for consumers to distinguish human-made material from synthetic content. The document notes that this blurring creates risks ranging from misinformation and fraud to reputational harm for companies using AI without disclosure.
To address these concerns, the guide recommends three primary transparency mechanisms: visible labelling, digital watermarking, and detailed metadata recording. According to the framework illustrated on page 8, businesses should apply one or more of these mechanisms depending on the risk level and the extent of AI's contribution to the content. For example, fully AI-generated images, legal drafts, or clinical decision-support materials call for stronger safeguards, combining labels, metadata, and robust watermarking.
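To make the metadata-recording mechanism concrete, here is a minimal Python sketch of an AI-disclosure record attached to a piece of content. The field names and structure are illustrative assumptions, not the schema from the NAIC guide or any provenance standard; the content hash simply ties the record to the exact bytes it describes so that tampering with either is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure_record(content: bytes, tool_name: str, ai_role: str) -> dict:
    """Build a simple AI-disclosure metadata record for a piece of content.

    Field names here are illustrative, not a published schema.
    ai_role might be e.g. "fully-generated" or "ai-assisted".
    """
    return {
        "ai_generated": True,
        "ai_role": ai_role,
        "generator": tool_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash binds this record to the exact content it describes,
        # so modifying the content without updating the record is detectable.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

record = build_disclosure_record(b"example image bytes", "ExampleModel v1", "fully-generated")
print(json.dumps(record, indent=2))
```

In practice, a record like this would be embedded in the file itself (for example in image metadata) or stored alongside it, and higher-risk content would add visible labels and watermarking on top.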
The guidance also emphasises legal responsibilities under Australian Consumer Law, privacy legislation, and online safety rules. It highlights that businesses must ensure AI-assisted content does not mislead consumers and that metadata containing personal information is handled securely.
Australia’s approach aligns with emerging global standards, including the EU’s requirement that AI outputs be machine-readable and detectable as artificial, and the U.S. National Institute of Standards and Technology’s work on synthetic content provenance. The guide positions transparency as both a regulatory expectation and a competitive advantage for companies seeking to build trust with customers.
The government says it will update the framework as international norms evolve and as AI Safety Institutes advance research into watermarking and content provenance.
Need Help?
If you have questions about navigating the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you’re informed and compliant.