Australian Inquiry Calls for Stronger AI Safeguards in Workplaces Amid Rapid Digital Transformation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/18/2025
In News

Australia is at a critical juncture in defining the role of artificial intelligence (AI) in the workplace, according to a new report. The report by the Australian House of Representatives Standing Committee on Employment, Education and Training, titled “The Future of Work,” highlights both the opportunities and risks associated with AI and automated decision-making (ADM) in the workplace. It calls for urgent regulatory reforms to address growing concerns about data privacy, job security, and the ethical use of AI in hiring, monitoring, and performance evaluations.
The inquiry, launched in April 2024, examined the rapid integration of AI and ADM into Australian workplaces. It found that while these technologies have the potential to enhance productivity and efficiency, they also raise significant challenges, particularly in terms of workplace fairness and transparency. The report makes 21 recommendations aimed at mitigating risks while ensuring that AI-driven workplace transformations are conducted in a responsible and equitable manner.
One of the report’s key findings is that AI-driven hiring, performance monitoring, and decision-making processes should be classified as high-risk, given their potential impact on workers’ livelihoods. To address this, the committee recommends adopting the mandatory regulatory guardrails proposed by the Department of Industry, Science and Resources. Under these guardrails, both AI developers and employers deploying AI-driven hiring and workplace management tools would face stricter oversight.
Another major area of concern is worker privacy. The report urges the Australian government to update the *Privacy Act 1988 (Cth)* and the *Fair Work Act 2009 (Cth)* to strengthen protections against excessive surveillance and data misuse. The committee found that many employers are collecting and using worker data without adequate transparency or consent. As a result, the report recommends banning the sale of worker data to third parties and requiring employers to provide clear disclosures about how AI systems monitor employees.
The report also highlights the need for greater transparency and accountability in AI-driven workplace decisions. It proposes amendments to the *Fair Work Act* to require employers to disclose when AI or ADM systems are being used in decision-making processes. The committee recommends implementing a legal right for workers to receive explanations for AI-driven employment decisions, similar to existing protections in the European Union. Additionally, it calls for prohibiting the use of AI in making final employment-related decisions without human oversight.
The rise of AI in Australian workplaces has also raised concerns about algorithmic bias and discrimination. The report recommends mandatory audits of AI systems to assess and mitigate bias. It also calls for stricter guidelines on the data used to train AI models, ensuring compliance with intellectual property and anti-discrimination laws. The committee suggests requiring AI developers to prove that their training data does not infringe on Australian copyright laws or reinforce harmful biases.
Recognizing that AI adoption could lead to job displacement, the report emphasizes the need for workforce upskilling and retraining. It urges the government to collaborate with employers and education providers to develop training programs that equip workers with digital skills needed for AI-integrated workplaces. The report also proposes financial incentives to help small and medium-sized enterprises implement AI responsibly while ensuring their employees are prepared for technological transitions.
Another critical recommendation is the establishment of a national code of practice for AI use in Australian workplaces. This would set clear standards for responsible AI deployment, ensuring that new technologies do not undermine worker rights or create unsafe working conditions. The report suggests that Safe Work Australia develop guidelines to mitigate the psychological and physical risks associated with AI-driven management systems.
In addition to regulatory reforms, the committee stresses the importance of public awareness and education. It recommends launching national information campaigns to help businesses and workers understand AI’s implications, rights, and best practices. By increasing transparency and promoting informed discussions, the government aims to build public trust in AI’s role in the workforce.
Need Help?
If you’re concerned or have questions about how to navigate the Australian or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and help ensure you stay informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and
AI governance news by subscribing to our newsletter.