Interview with Khoa Lam on AI Auditing

024. Interview with Khoa Lam on AI Auditing – Lunchtime BABLing

On this week’s Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam.

They discuss a wide range of topics including:

1: How Khoa got into the field of Responsible AI

2: His work at the AI Incident Database

3: His thoughts on generative AI and large language models

4: The technical aspects of AI and Algorithmic Auditing

Available on YouTube, Simplecast, and all major streaming platforms. 

Final Rules for NYC Local Law 144

022. Final Rules for NYC Local Law 144 – Lunchtime BABLing

We review the new rules for NYC’s Local Law No. 144, which requires bias audits of automated employment decision tools.

Enforcement has been pushed back to July 5th, 2023 to give companies time to engage independent auditors (which remains a requirement).

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab

021. Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab – Lunchtime BABLing

This week on Lunchtime BABLing, we discuss:

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research.

3: In-context learning in large language models and the problems it poses for auditing.

4: NIST’s AI Risk Management Framework and its influence on public policy like California’s Assembly Bill No. 331.

5: Updates on The Algorithmic Bias Lab’s new training program for AI auditors.

Sign up for courses here:

Available on YouTube, Simplecast, and all major podcast streaming platforms.

AI Governance Report & Auditor Training

020. AI Governance Report & Auditor Training – Lunchtime BABLing

This week we discuss our recent report “The Current State of AI Governance”, which is the culmination of a year-long research project looking into the effectiveness of AI governance controls.

Full report here:  

We also discuss our new training program, the “AI & Algorithm Auditor Certificate Program“, which starts in May 2023.

This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general:

1: Algorithms, AI, & Machine Learning

2: Algorithmic Risk & Impact Assessments

3: AI Governance & Risk Management

4: Bias, Accuracy, & the Statistics of AI Testing

5: Algorithm Auditing & Assurance

Early pricing can be found here:   

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Interrogating Large Language Models with Jiahao Chen

019. Interrogating Large Language Models with Jiahao Chen

On this week’s Lunchtime BABLing (#19) we talk with Jiahao Chen; data scientist, researcher, and founder of Responsible Artificial Intelligence LLC.

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?

2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process?

3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering)?

4: Black-box vs. white-box testing of LLMs for algorithm auditing.

5: Classical assessments of intelligence and their applicability to LLMs.

6: Re-thinking education and assessment in the age of AI.

Jiahao Chen Twitter  Responsible AI LLC

Available on YouTube, Simplecast, and all major podcast streaming platforms.

The 5 Skills you NEED for AI Auditing

018. The 5 Skills you NEED for AI Auditing – Lunchtime BABLing

You need way more than “five skills” to be an AI auditor, but there are five areas of study that auditors need basic competency in if they want to do the kinds of audits that BABL AI performs.

This episode comes from our weekly webinar/podcast, which ran long, so we’ve cut out much of the Q&A. It covered many questions we’ll address in future videos, like:

What kind of training do I need to become an AI or algorithm auditor?

Do I need technical knowledge of machine learning to do AI ethics?

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Criteria-Based Bias Audit

017. Criteria-based Bias Audit – Lunchtime BABLing

On this week’s Lunchtime BABLing, Shea goes over the difference between a direct engagement audit and an attestation engagement audit, and gives examples from our criteria-based attestation audit for NYC Local Law No. 144.

Available on YouTube, Simplecast, and all major podcast streaming platforms.


Breaking into AI Ethics (Part 2)

016. Lunchtime BABLing – Breaking into AI Ethics (Part 2)

In this Q&A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic.

Questions include:

1. Do I need an advanced degree to work in responsible AI?

2. How do I know what topics to focus on?

3. Do I need programming skills to work in responsible AI?

4. Where can I find training in AI ethics?

Available on YouTube, Simplecast, and all major podcast streaming platforms.