ICO Audit Spurs Improvements in AI Recruitment Tools

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/20/2024
In News

The Information Commissioner’s Office (ICO) has issued new recommendations for developers and providers of AI-powered recruitment tools, aiming to strengthen data protection and transparency in the hiring process. The move marks an important step toward safeguarding job seekers’ rights as AI becomes more common in recruitment.

Audit Findings Reveal Bias, Excessive Data Collection, and Transparency Failures

Released on November 6, 2024, the ICO’s audit findings show that AI recruitment tools can improve efficiency and scale. However, they also reveal serious risks, including unfair exclusion and privacy violations. After reviewing several providers, the ICO issued nearly 300 recommendations focused on fairness, transparency, and data minimization.

The audit uncovered cases where AI systems inferred a candidate’s gender, ethnicity, or other sensitive traits from names or profile information. These inferences were often inaccurate, and in some tools, recruiters could filter candidates by protected characteristics—a practice the ICO says violates UK discrimination and data protection laws.

The ICO also found that some providers collected far more personal data than necessary, retaining large datasets without meaningful consent. Under UK GDPR, organizations must process personal data lawfully, fairly, and transparently. The audit showed that several tools failed to meet those standards.

Key Recommendations for Developers and Recruiters

To address these issues, the ICO outlined several priority actions:

  1. Fairness in Data Processing: Developers must ensure data is processed accurately and without bias. The ICO said inferred demographic data cannot be relied on for lawful processing.

  2. Data Minimization and Purpose Limitation: Companies were advised to restrict data collection to essential information and avoid repurposing it for unapproved uses.

  3. Transparency: Organizations must communicate clearly with candidates about how AI systems make decisions and how their data is used.

  4. Accountability in AI Use: The ICO urged recruiters to perform thorough data protection impact assessments and clarify roles and responsibilities in AI contracts.

Industry Reaction and Next Steps

All audited providers accepted or partially accepted the ICO’s recommendations. Some were already monitoring bias and improving data accuracy. One company, for example, offered bespoke AI models designed to reduce unnecessary data collection and increase transparency—an approach the ICO described as promising.

The findings signal the need for stronger safeguards across the recruitment ecosystem. Ian Hulme, the ICO’s Director of Assurance, said AI can streamline hiring, but poor governance risks harming candidates and eroding trust.

To help organizations make better procurement decisions, the ICO also released a set of questions for evaluating AI tools. This guidance supports the UK’s broader push to encourage responsible AI adoption while still supporting innovation.

The ICO will host a webinar on January 22, 2025, to walk developers and recruiters through its findings and discuss privacy risks in AI-driven hiring.

Need Help?

Keeping track of the growing AI regulatory landscape can be difficult, and you may have questions or concerns. Don't hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and help ensure you stay informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.