Australia’s Office of the Australian Information Commissioner (OAIC) recently released new guidance detailing privacy obligations for organizations using artificial intelligence (AI) products, with a focus on generative and general-purpose AI tools that collect or generate personal information.
Released on October 23 and aimed at organizations deploying AI in public- and private-sector settings, the OAIC’s guidance emphasizes that privacy obligations apply both to personal information entered into an AI system and to any personal information generated by it. Businesses considering AI products are advised to conduct thorough due diligence, scrutinizing the system’s privacy, security, and governance before adoption. Organizations should confirm that any AI tools they use have been tested and are suitable for their intended application, with clear policies on human oversight and data management.
The guidelines stress that transparency is essential. Any use of AI, such as public-facing chatbots, should be clearly identified to users, with privacy policies updated to explain how AI tools operate, what data they handle, and who may access it. AI-generated personal data, whether factual or inferred, must be treated as personal information under Australia’s Privacy Act and handled accordingly.
The OAIC advises organizations to adopt a ‘privacy by design’ approach and conduct Privacy Impact Assessments (PIAs) from the outset to evaluate how AI products might affect individuals’ data privacy. A PIA can help identify privacy risks early and offer strategies to minimize or eliminate them.
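The OAIC does not prescribe a PIA format, but keeping the resulting risk register as structured data makes findings easy to review and audit. The sketch below is purely illustrative: the `PiaRisk` fields, the `high_risks` helper, and the escalation threshold are invented for this example, not an official template.

```python
from dataclasses import dataclass

@dataclass
class PiaRisk:
    """One entry in a hypothetical PIA risk register."""
    description: str   # e.g. "chatbot logs may contain names and emails"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    mitigation: str    # planned control

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, for illustration only.
        return self.likelihood * self.impact

def high_risks(register: list[PiaRisk], threshold: int = 12) -> list[PiaRisk]:
    """Return risks whose score warrants escalation before deployment."""
    return [r for r in register if r.score >= threshold]

register = [
    PiaRisk("Chatbot transcripts retained indefinitely", 4, 4,
            "set 30-day retention and automated purge"),
    PiaRisk("Vendor may train on submitted prompts", 3, 5,
            "disable training retention; add contract clause"),
]
for risk in high_risks(register):
    print(f"ESCALATE (score {risk.score}): {risk.description}")
```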
Furthermore, the OAIC recommends that businesses avoid entering personal or sensitive information into publicly available generative AI tools, which may retain inputs and use them for model training. Organizations should be mindful to use AI only when necessary, and ensure that personal data use aligns with the purpose for which it was initially collected. If data input involves sensitive information, explicit consent should be obtained.
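One practical way to act on this advice is to screen prompts for obvious personal identifiers before they ever reach a public generative AI tool. The sketch below is a minimal illustration using deliberately simple regular expressions; the patterns catch only the most obvious identifiers (note the name “Jane” passes through untouched), so a real deployment would need a dedicated PII-detection library and human review.

```python
import re

# Simple patterns for illustration only; names, addresses, and most
# identifiers are far harder to detect than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[23478]\d{8}\b"),  # AU number shapes
    "TFN":   re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),    # tax-file-number shape
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders before the prompt
    is sent to any external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or 0412345678."))
# -> "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
```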
The OAIC identifies potential privacy risks, particularly with AI’s handling of personal data:
- Bias and Discrimination: AI systems may unintentionally embed biases from their training data, which could lead to unfair outcomes based on gender, race, or age. Businesses should assess whether AI systems have been designed with comprehensive and diverse data to mitigate these risks (a minimal fairness check is sketched after this list).
- Transparency Challenges: Complex AI models often function as ‘black boxes,’ making it difficult to explain how decisions are made. This opacity can keep organizations from being open with individuals about how their personal information is handled.
- Data Breach Risks: Given the large datasets that AI systems often handle, there is an elevated risk of data breaches, potentially exposing personal information stored or processed by the AI.
- Inaccurate and Unfair Inferences: Generative AI may produce incorrect information or ‘hallucinations’ that appear accurate. This can lead to reputational harm or legal issues, especially when AI is used in decision-making with significant implications for individuals.
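On the bias point above, a common first-pass check is to compare positive-outcome rates across demographic groups. This is not an OAIC requirement, just one widely used fairness heuristic; the function names and the audit sample below are invented for illustration, and the ‘four-fifths rule’ threshold is a rough red flag rather than a legal determination.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per group, from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Min/max selection-rate ratio; values below ~0.8 are commonly
    treated as a warning sign (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample of an AI screening tool's decisions.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = selection_rates(sample)
print(rates, disparate_impact_ratio(rates))  # A: 0.667, B: 0.333 -> ratio 0.5
```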
Organizations are urged to evaluate AI products’ privacy and security attributes before use, assessing whether AI is genuinely necessary and whether its application will meet privacy and accuracy obligations under Australia’s Privacy Act. Security reviews should examine any potential vulnerabilities, including unauthorized data access or disclosure risks, especially if an AI product is hosted in the cloud.
To comply with privacy rules, organizations must limit data sharing, ensuring only necessary data is processed by the AI tool. If AI providers retain access to data for training purposes, businesses must carefully consider the appropriateness of the AI product and take steps to disable unnecessary data-sharing features where possible.
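Where a vendor exposes controls over retention and training use, those settings are best pinned in configuration and validated rather than left at defaults. The snippet below is purely illustrative: the `AiClientConfig` class, its field names, and the endpoint are invented for this sketch, since each provider exposes different (or no) opt-out controls; check your vendor’s actual API and contract terms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AiClientConfig:
    """Hypothetical configuration for an external AI service client.
    Every field name here is invented; map them to whatever controls
    your actual provider exposes."""
    endpoint: str
    allow_training_on_inputs: bool = False  # opt out of vendor training
    retention_days: int = 0                 # 0 = request no retention
    send_user_identifiers: bool = False     # strip account IDs from requests

def validate(config: AiClientConfig) -> None:
    """Fail closed if the config would share more data than necessary."""
    if config.allow_training_on_inputs:
        raise ValueError("Training on inputs must stay disabled for personal data.")
    if config.send_user_identifiers:
        raise ValueError("User identifiers should not leave the organization.")

validate(AiClientConfig(endpoint="https://api.example-ai.invalid/v1"))
```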
To maintain transparency, organizations must have clear, updated policies on AI data handling, accessible to both customers and employees. Any AI-driven processes should be auditable and allow human oversight, especially when AI tools are used in high-stakes settings, such as customer service or decision-making.
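One straightforward pattern for making AI-driven steps auditable is to wrap every model call in a logger that records the input, the output, and whether a human has reviewed the result. The sketch below is a generic illustration, not a prescribed mechanism: `audited` and `call_model` are invented names, and the stand-in lambda takes the place of a real AI integration.

```python
import json
import time
import uuid
from typing import Callable, Optional

def audited(call_model: Callable[[str], str], log_path: str = "ai_audit.jsonl"):
    """Wrap an AI call so every invocation leaves an audit record that
    a human reviewer can later inspect and sign off on."""
    def wrapper(prompt: str, reviewed_by: Optional[str] = None) -> str:
        output = call_model(prompt)
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "prompt": prompt,
            "output": output,
            "human_reviewer": reviewed_by,  # None = not yet reviewed
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapper

# Stand-in model for the sketch; replace with your real integration.
echo_model = audited(lambda p: f"model answer to: {p}")
echo_model("Summarize this customer complaint", reviewed_by="j.smith")
```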
The OAIC highlights the importance of clear documentation and staff training. Employees should understand the AI system’s limitations, enabling them to verify its outputs accurately and clarify AI-driven decisions for affected individuals. Regular reviews, auditing, and updates are recommended to ensure that the AI product remains suitable for its intended use.
Need Help?
If you’re wondering how Australia’s AI strategy, or any other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.