Australian Regulator Warns Businesses of Privacy Risks From Workplace Generative AI Tools

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/23/2025
In News

Australia's privacy regulator is warning businesses that the growing use of generative artificial intelligence (AI) tools in the workplace carries significant privacy risks if not properly governed, even as such tools promise gains in efficiency and productivity.


The Office of the Australian Information Commissioner (OAIC) said publicly available generative AI tools such as ChatGPT, Copilot, Gemini and Grammarly can expose organizations to serious compliance and reputational risks when employees enter personal or sensitive information into them. Once data is uploaded to web-based AI systems, the regulator warned, it can be difficult or impossible to track, control, or remove, depending on the tool’s settings and data practices.


The OAIC said it had previously advised organizations regulated under the Privacy Act to avoid entering personal information into publicly accessible AI tools. While newer enterprise and on-premises AI options may offer greater control, the regulator stressed that privacy risks remain if organizations fail to actively manage how data is collected, used, disclosed and secured.


The guidance highlighted a fictional case study based on a real data breach notification to illustrate how problems can arise. In the example, an insurance employee uploaded a customer’s financial hardship application—including sensitive health and family details—into ChatGPT to generate a summary, despite company policy prohibiting such use. The resulting summary omitted key information, leading to a rejected application and significant harm to the customer. The incident amounted to an unauthorised disclosure of sensitive personal information and exposed the company to regulatory consequences.


According to the OAIC, risks linked to workplace GenAI use extend beyond data disclosure to include secondary uses of information, inaccurate outputs, security weaknesses, and poor decision-making based on flawed summaries or analyses. Organizations may also need to update privacy policies, collection notices, and customer communications to reflect AI use.


The regulator said strong privacy governance is essential, including privacy impact assessments, clear internal policies, staff training, and technical controls to prevent inappropriate data uploads. While AI tools can deliver real benefits, the OAIC said organizations must adopt a comprehensive, organization-wide approach to risk management to ensure compliance and protect individuals from harm.
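
The OAIC’s guidance does not prescribe particular tooling, but as a rough illustration of what a technical control against inappropriate data uploads might look like, the Python sketch below screens outbound text for PII-like patterns before it reaches an external GenAI API. The pattern set, the guarded_submit wrapper, and the send_to_ai callback are hypothetical names used for illustration; a production control would rely on a vetted data-loss-prevention or PII-detection service rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only -- a real control would use a vetted
# DLP / PII-detection service tuned to Australian identifier formats.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b"),
    "au_phone": re.compile(r"\b(?:\+61|0)\d(?:[ -]?\d){8}\b"),
    "tfn_like": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number shape
}

def scan_for_pii(text: str) -> list[str]:
    """Return labels of any PII-like patterns detected in the text."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

def guarded_submit(text: str, send_to_ai) -> str:
    """Refuse to forward text to an external GenAI tool if it looks like PII.

    `send_to_ai` stands in for whatever API client the organization uses;
    blocking outright, rather than silently redacting, keeps a human in the loop.
    """
    hits = scan_for_pii(text)
    if hits:
        raise ValueError(f"Upload blocked: possible personal information ({', '.join(hits)})")
    return send_to_ai(text)

if __name__ == "__main__":
    sample = "Customer hardship application, contact 0412 345 678, TFN 123 456 789."
    try:
        guarded_submit(sample, send_to_ai=lambda t: t)
    except ValueError as err:
        print(err)  # Upload blocked: possible personal information (au_phone, tfn_like)
```

Blocking a suspect upload outright, rather than silently redacting it, surfaces the attempt to the employee, which is one way such a control could reinforce the staff training and internal policies the OAIC describes.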


Need Help?


If you’re wondering how the OAIC’s guidance, or any other government’s AI bill or regulation, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and addressing your concerns.

