U.S. Department of Labor Issues New Guidelines to Protect Workers as AI Integration Expands in Workplaces

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/07/2024
In News

The U.S. Department of Labor has released a new document titled "Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers," highlighting a set of guidelines intended to protect workers as artificial intelligence (AI) systems become increasingly integrated into workplaces. The principles underscore ethical AI development with a focus on worker empowerment, privacy, and responsible innovation.

 

Principles

 

The guidelines cover several critical principles:

 

  1. Worker Empowerment and Inclusion: The Department urges employers and AI developers to involve workers early in the design and deployment of AI systems. It stresses that workers, particularly from underserved communities, should have a meaningful say in how new technologies are introduced and used. This inclusion helps ensure that AI aligns with job quality goals and supports a fair workplace.

 

  2. Ethical and Responsible AI Development: The guidance calls for strong ethical standards in AI creation. Developers should routinely evaluate AI systems for accuracy, reliability, and unfair bias. According to the Department, these checks are essential to protect workers’ rights and ensure safe, equitable workplace practices. Regular assessments can also prevent AI tools from causing unintended harm.

 

  3. Human Oversight and Governance: To avoid over-reliance on automated tools, the Department emphasizes the need for clear human oversight. Employers should establish governance structures to review AI-influenced decisions, especially those tied to hiring, performance evaluation, job assignments, and other core employment conditions. This oversight ensures that people—not algorithms—remain accountable for high-stakes decisions.

 

  4. Transparency and Worker Awareness: The Department stresses the importance of transparency in workplace AI use. Employers should tell workers when AI systems monitor activities, what data is being collected, and how that data will be used. Clear communication can build trust and helps workers understand the technology shaping their jobs.

 

  5. Protecting Labor Rights: The principles reaffirm that AI must not weaken workers’ rights. Employers remain responsible for complying with labor laws, including those covering organizing, non-discrimination, and workplace health and safety. The Department notes that AI tools must operate within these existing protections.

 

  6. Using AI to Support Workers: The framework encourages employers to use AI to enhance job quality rather than replace workers. The Department advises companies to adopt AI tools that assist employees, improve safety, and support task performance. It also suggests that gains created by AI—such as productivity improvements—should translate into better wages or improved working conditions.

 

A Non-Binding Framework for Ethical AI Adoption

Although these principles do not carry legal force, they offer a practical blueprint for employers navigating the rapid expansion of workplace AI. The Department of Labor hopes the guidance will help organizations adopt AI in ways that protect employees, strengthen job quality, and promote responsible innovation.

 

 

Need Help?

 

If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and
AI governance news by subscribing to our newsletter.