How is the EEOC handling AI?
While new laws like the EU AI Act are being debated, several governments around the world are examining how existing legislation and governing bodies can regulate AI. In the United States, the Equal Employment Opportunity Commission (EEOC) has spent the past several years doing exactly that: examining how existing law applies to AI, launching new initiatives, and handling discrimination lawsuits.
In August of this year, the EEOC settled its first-ever discrimination lawsuit involving automated hiring software. The EEOC alleged that iTutorGroup, which comprises three companies providing English-language tutoring services to students in China, engaged in age discrimination: its application software automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. This led to hundreds of qualified U.S. candidates being rejected simply because of their age. Under the settlement, iTutorGroup will pay $365,000, to be distributed among the applicants who were rejected due to their age. Although iTutorGroup has ceased hiring in the U.S., if it ever resumes operations here, it will be required to implement training, policy changes and monitoring to prevent future discrimination. The EEOC also noted in its lawsuit that U.S. anti-discrimination laws apply to remote workers controlled by foreign companies, a principle echoed in AI laws emerging globally.
Beyond lawsuits, the EEOC is launching initiatives and has recently released its strategic plan for the coming years. The Strategic Enforcement Plan for Fiscal Years 2024-2028 recognizes the proliferation of AI and machine learning among employers, who increasingly use these tools for targeted job advertisements, recruiting, hiring and other employment decisions. Accordingly, the plan commits the EEOC to addressing technology-related employment discrimination and ensuring that the use of technology does not result in discriminatory practices. The EEOC also plans to scrutinize screening tools, whether AI, other automated systems, pre-employment assessments or background checks, that disproportionately impact workers. In addition, the EEOC will focus on employer practices that contribute to pay disparities, such as secret pay policies, discouraging or prohibiting workers from asking about or sharing pay information, and relying on past salary history or expectations to set pay. While the plan explicitly addresses AI, it also covers other priorities, including advancing equal pay and preserving access to the legal system, that could eventually be affected by AI.
This plan most likely builds upon the Artificial Intelligence and Algorithmic Fairness Initiative, launched back in 2021. The initiative was created to ensure that AI and other emerging technology tools used in hiring and other employment decisions comply with the civil rights laws the EEOC enforces. It also set out to establish an internal working group to guide the effort, launch a series of listening sessions, gather information about AI and other employment technologies, identify promising practices and issue technical assistance. That technical assistance has begun to appear in the form of guidance documents such as "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" and "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."
If you have questions about how the EEOC could affect your company, or would like an audit to ensure compliance, reach out to BABL AI and one of their Audit Experts can help.