NYC AI Bias Law: One Year In and What to Consider | Lunchtime BABLing 38

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/01/2024
In Podcast

Welcome to the latest edition of “Lunchtime BABLing,” where we delve into the critical intersection of technology, regulation, and business. In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg take a comprehensive look at New York City’s Local Law 144, a groundbreaking regulation that aims to ensure fairness and equity in hiring practices through the auditing of AI tools for bias.

Understanding Local Law 144

New York City’s Local Law 144, also known as the Automated Employment Decision Tools (AEDT) law, requires employers using AI-based hiring tools to have these tools audited for bias. This regulation, which has been in effect for a year, is designed to address concerns about potential discrimination in automated hiring processes. AI tools are increasingly used to screen candidates and make hiring decisions, but without proper oversight, these tools can perpetuate or even exacerbate biases.


Key Takeaways from the First Year

As we mark the first year of Local Law 144, Shea Brown and Bryan Ilg share their insights and experiences from working with numerous organizations to ensure compliance. One of the main challenges identified is the need for clarity around the audit process and the types of data required. There are two primary data types considered in these audits: historical data (real-world usage data from employers) and test data (synthetic or collected data used for testing purposes).


Preparing for Year Two and Beyond

With year two on the horizon, organizations must understand their obligations under Local Law 144. One crucial aspect discussed in the episode is the difference between providing historical data versus relying on test data. Employers who provide historical data to their AI tool vendors can benefit from aggregated audits, potentially simplifying their compliance efforts. However, those who do not share this data will need to either start doing so or seek independent audits to remain compliant.


Practical Advice for Employers

To help employers navigate these requirements, Shea and Bryan outline a decision tree for compliance:

    1. Determine Data Sharing: Are you providing historical data to your AI tool vendor?
      • Yes: Confirm with your vendor that your data is included in their audit. If so, you are compliant.
      • No: Decide whether to start sharing this data or prepare to commission an independent audit.
    2. Assess Statistically Significant Data: Ensure you understand what constitutes a statistically significant amount of data for your organization. Typically, this involves having sufficient applicant data across various demographic categories to perform meaningful bias analysis.
    3. Stay Proactive: Regularly review your compliance status and be prepared for evolving regulations, both in New York and potentially in other jurisdictions.
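The decision tree above can be sketched as a small Python function. This is an illustrative model only, not legal advice; the input flags and return messages are hypothetical names chosen for this sketch and do not come from the law or the episode:

```python
def compliance_next_step(shares_historical_data: bool,
                         vendor_audit_includes_data: bool) -> str:
    """Sketch of the Local Law 144 compliance decision tree.

    Hypothetical inputs (illustrative, not legal advice):
    - shares_historical_data: the employer provides historical data
      to its AI tool vendor.
    - vendor_audit_includes_data: the vendor has confirmed that the
      employer's data is included in the vendor's aggregated audit.
    """
    if shares_historical_data:
        if vendor_audit_includes_data:
            # Covered by the vendor's aggregated audit.
            return "compliant via vendor's aggregated audit"
        # Sharing data is not enough on its own; confirm inclusion.
        return "confirm inclusion with vendor or commission an independent audit"
    # Not sharing data: either start sharing or audit independently.
    return "start sharing data with vendor or commission an independent audit"
```

For example, an employer that shares historical data and has vendor confirmation lands on the compliant branch, while one that shares nothing must choose between the two remaining options.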

The Importance of Continuous Compliance

As Shea and Bryan emphasize, compliance is not a one-time task but an ongoing responsibility. Regular audits, continuous monitoring, and proactive adjustments are essential to ensure that AI tools remain fair and unbiased. This not only helps in meeting legal requirements but also builds trust and integrity within your organization and with your stakeholders.


Looking Ahead

The landscape of AI regulation is rapidly evolving. Other states, including California and Illinois, are considering similar measures, and the European Union’s AI Act is poised to introduce stringent requirements for AI systems. Staying informed and prepared is critical for any organization using AI in its hiring processes.


Listen Now

We invite you to listen to this informative episode of “Lunchtime BABLing” and gain valuable insights into navigating NYC’s AI bias law. Whether you are an HR professional, an employer, or an AI developer, this episode provides the knowledge and strategies you need to stay compliant and foster fairness in your hiring practices.


Find all Lunchtime BABLing episodes on YouTube, Simplecast, and all major podcast streaming platforms.


Need Help?

Stay tuned for more episodes of “Lunchtime BABLing” as we continue to explore the dynamic world of AI, ethics, and regulation. If you have any questions or need assistance with your compliance efforts, please do not hesitate to contact us at BABL AI. We’re here to help you navigate these complex challenges and ensure your AI tools are both effective and fair.

What’s New?

Stay up to date with our latest news.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.