Australia Issues Guidance on Privacy Risks and Responsibilities with AI Products

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/30/2024
In News

UPDATE — SEPTEMBER 2025: Australia’s privacy and AI rules have continued to tighten since the OAIC’s October 2024 guidance. The government has endorsed most recommendations from the 2023 Privacy Act Review, with draft amendments expected to add a right to erasure, a direct right of action, and new transparency duties for automated decision-making, explicitly treating AI-generated or inferred data as personal information when it relates to an individual. In parallel, Canberra finalized a voluntary National AI Assurance Framework (mid-2024) to guide testing, auditing, and governance of foundation and generative AI systems across sectors.

A public consultation on mandatory guardrails for high-risk AI (health, finance, employment, law enforcement) concluded in September 2024; the government’s response is due in 2025 and is widely expected to foreshadow risk assessments and audit obligations aligned with the EU’s approach. The OAIC has continued to stress that organizations must apply Privacy Act duties both to data entered into AI tools and to AI outputs, and it has signaled support for expanded enforcement powers in forthcoming reforms. Sector regulators have also moved: federal agencies must now conduct risk assessments and human review before deploying generative AI; finance and health supervisors have reminded firms that AI use will be scrutinized under sector laws in addition to the Privacy Act.

ORIGINAL NEWS STORY:

Australia Issues Guidance on Privacy Risks and Responsibilities with AI Products

The Office of the Australian Information Commissioner (OAIC), Australia’s privacy regulator, recently released new guidance detailing privacy obligations for organizations using artificial intelligence (AI) products, with a focus on generative and general-purpose AI tools that collect or generate personal information.

Released on October 23, the guidance targets organizations deploying AI in public and private settings and emphasizes that privacy obligations apply both to personal information entered into an AI system and to any personal information generated by it. Businesses considering AI products are advised to conduct thorough due diligence, scrutinizing a system’s privacy, security, and governance measures before adoption. Organizations should confirm that any AI tools they use have been tested and are suitable for their intended application, with clear policies on human oversight and data management.

Transparency Requirements for AI Use

The guidelines stress that transparency is essential. Any use of AI, such as public-facing chatbots, should be clearly identified to users, with privacy policies updated to explain how AI tools operate, what data they handle, and who may access it. AI-generated personal data, whether factual or inferred, must be treated as personal information under Australia’s Privacy Act and handled accordingly.
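
A practical implication is that records produced or inferred by an AI tool should pass through the same handling rules as data collected directly from individuals. The sketch below illustrates one way to encode that principle; the record model, field names, and policy function are hypothetical illustrations, not structures from the OAIC guidance.

```python
from dataclasses import dataclass
from enum import Enum


class Origin(Enum):
    COLLECTED = "collected"        # supplied directly by the individual
    AI_GENERATED = "ai_generated"  # produced or inferred by an AI tool


@dataclass
class PersonalRecord:
    subject_id: str
    value: str
    origin: Origin


def handling_rules(record: PersonalRecord) -> dict:
    """Apply the same Privacy Act handling to AI-generated data as to
    directly collected data, plus an accuracy check for generated content."""
    rules = {"privacy_act_applies": True, "access_logged": True}
    if record.origin is Origin.AI_GENERATED:
        # Generative outputs may be wrong, so verify before relying on them.
        rules["requires_accuracy_check"] = True
    return rules
```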

Privacy by Design and Early Risk Assessments

The OAIC encourages a “privacy by design” approach. This includes conducting Privacy Impact Assessments (PIAs) to identify risks early and plan mitigation strategies. The guidance warns organizations not to use personal or sensitive information with publicly available generative AI tools unless they obtain explicit consent. Businesses should use AI only when necessary and ensure data processing aligns with the original purpose of collection.
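
One control that follows from this warning is a pre-flight screen that blocks prompts containing apparent personal information from reaching a public generative AI service unless consent has been recorded. Below is a minimal sketch assuming a regex-based check; the patterns and the consent flag are illustrative assumptions, and a real deployment would use a dedicated PII-detection library and a proper consent register.

```python
import re

# Illustrative patterns only; production systems need much broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "au_phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}


def detect_pii(prompt: str) -> list[str]:
    """Return the kinds of personal information apparently present in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def safe_to_send(prompt: str, has_explicit_consent: bool = False) -> bool:
    """Block prompts with apparent PII unless explicit consent is recorded."""
    return not detect_pii(prompt) or has_explicit_consent
```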

Key Privacy Risks Identified by the OAIC

The guidance highlights several risks associated with AI systems:

1. Bias and Discrimination: AI systems may reflect biases from training data. If not addressed, these biases can produce unfair outcomes related to gender, age, or race.

2. Transparency Limitations: Many AI models operate as opaque “black boxes.” This opacity makes it difficult for organizations to explain decisions to users and regulators.

3. Data Breach Risks: AI systems often rely on large datasets, increasing exposure to potential breaches and unauthorized access.

4. Inaccurate or Harmful Outputs: Generative AI can produce incorrect information, sometimes presented as factual. These errors can cause reputational or legal harm, especially when used in decision-making.

Due Diligence Before Using AI Products

Organizations must review an AI tool’s privacy and security features before deployment. This includes checking for vulnerabilities, evaluating cloud-hosting risks, and confirming whether the AI provider retains data for training. If the provider accesses data unnecessarily, businesses should disable those features or reconsider using the product.
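
This kind of review is easier to repeat across vendors when it is captured as a structured checklist. The sketch below shows one possible shape for such an assessment; the fields and the gating rule are assumptions for illustration, not a checklist published by the OAIC.

```python
from dataclasses import dataclass


@dataclass
class VendorAssessment:
    vendor: str
    retains_data_for_training: bool   # confirmed against the provider's terms
    retention_optout_available: bool  # can training/retention be disabled?
    hosting_region: str               # relevant to cross-border transfer risk
    security_review_completed: bool
    human_oversight_supported: bool


def acceptable(a: VendorAssessment) -> bool:
    """Gate deployment: reject tools that train on customer data with no
    opt-out, or that lack a security review or human-oversight support."""
    if a.retains_data_for_training and not a.retention_optout_available:
        return False
    return a.security_review_completed and a.human_oversight_supported
```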

Transparency, Oversight, and Staff Training

To comply with the Privacy Act, organizations must maintain clear, updated policies on AI-related data handling. Any AI-driven processes must allow for auditing and human oversight, particularly in high-stakes settings such as customer service or automated decision-making. The OAIC also emphasizes staff training. Employees should understand the system’s limitations so they can verify outputs and clearly explain AI-assisted decisions to individuals.
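
One way to make such oversight auditable is to log every AI-assisted decision alongside the human review that approved or rejected it. The following is a minimal sketch under that assumption; the log format, file name, and review workflow are illustrative, not prescribed by the guidance.

```python
import json
import time


def record_review(log_path: str, ai_output: str, reviewer: str,
                  approved: bool, rationale: str) -> None:
    """Append an auditable record of an AI-assisted decision and its review."""
    entry = {
        "timestamp": time.time(),
        "ai_output": ai_output,
        "reviewer": reviewer,
        "approved": approved,
        "rationale": rationale,  # lets staff explain the outcome to individuals
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def finalize_decision(ai_output: str, reviewer: str, approved: bool,
                      rationale: str) -> str:
    """Act on an AI recommendation only after explicit human sign-off."""
    record_review("ai_decisions.jsonl", ai_output, reviewer, approved, rationale)
    return ai_output if approved else "escalated for manual handling"
```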

Ongoing Monitoring and Review

The OAIC recommends regular reviews and audits to ensure AI tools remain appropriate for their intended use. As AI capabilities evolve, organizations must update their policies and controls to maintain compliance.

Need Help?


If you’re wondering how Australia’s AI strategy, or any other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.
