National Conference on AI Law, Ethics, and Compliance

026. National Conference on AI Law, Ethics, and Compliance | Lunchtime BABLing 26

In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington, D.C., focused on AI law, ethics, and compliance. He shares valuable insights from the BABL AI workshop and his interactions with legal experts in the field of AI governance.



Key Discussions:

 

- Understanding AI and the risks involved.

- Governance frameworks for AI deployment.

- The implications of the recent U.S. Executive Order on AI.

- Global initiatives for AI safety and governance.

 

Industry Spotlight:

 

- The surge of generative AI in corporate strategy.

- The evolving landscape of AI policy, privacy concerns, and intellectual property.

 

Engage with Us:

 

Lunchtime BABLing viewers and listeners can use the coupon code below to receive 20% off all our online courses:

 

Coupon Code: “BABLING”

 

Link to the full AI and Algorithm Auditing Certificate Program is here:

 

All Lunchtime BABLing episodes are available on YouTube, Simplecast, and all major podcast streaming platforms. 

AI and Algorithm Auditing Certificate | Lunchtime BABLing 25

025. AI and Algorithm Auditing Certificate | Lunchtime BABLing 25

Lunchtime BABLing is back with an all-new season!

In this episode, Shea briefly talks about what to expect in the coming weeks for Lunchtime BABLing and dives into some detail about our AI and Algorithm Auditing Certificate Program.

Lunchtime BABLing viewers and listeners can use the coupon code below to receive 20% off all our online courses:

Coupon Code: “BABLING”

Link to the full AI and Algorithm Auditing Certificate Program is here:

Available on YouTube, Simplecast, and all major streaming platforms. 

Interview with Khoa Lam on AI Auditing

024. Interview with Khoa Lam on AI Auditing | Lunchtime BABLing

On this week’s Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam.

They discuss a wide range of topics including:

1: How Khoa got into the field of Responsible AI

2: His work at AI Incident Database

3: His thoughts on generative AI and large language models

4: The technical aspects of AI and Algorithmic Auditing

Available on YouTube, Simplecast, and all major streaming platforms. 

AI Audits: Uncovering Risks in ML Systems

BABL AI CEO Shea Brown, PhD, joined the MLSecOps Podcast to talk about the “W’s” and security practices related to AI and algorithm audits.

  1. What is included in an AI audit? 
  2. Who is requesting AI audits and, conversely, who isn’t requesting them but should be?
  3. When should organizations request a third party audit of their AI/ML systems and machine learning algorithms?
  4. Why should they do so? What are some organizational risks and potential public harms that could result from not auditing AI/ML systems?
  5. What are some next steps to take if the results of your audit are unsatisfactory or noncompliant? 

Final Rules for NYC Local Law 144

022. Final Rules for NYC Local Law 144 – Lunchtime BABLing

We review the new rules for NYC’s Local Law No. 144, which requires bias audits of automated employment decision tools.

The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement).

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab

021. Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab – Lunchtime BABLing

This week on Lunchtime BABLing, we discuss:

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research.

3: In-context learning in large language models and the problems it poses for auditing.

4: NIST’s AI Risk Management Framework and its influence on public policy, such as California’s Assembly Bill No. 331.

5: Updates on The Algorithmic Bias Lab’s new training program for AI auditors.

Sign up for courses here:

Available on YouTube, Simplecast, and all major podcast streaming platforms.

AI Governance Report & Auditor Training

020. AI Governance Report & Auditor Training – Lunchtime BABLing

This week we discuss our recent report “The Current State of AI Governance”, which is the culmination of a year-long research project looking into the effectiveness of AI governance controls.

Full report here:  

We also discuss our new training program, the “AI & Algorithm Auditor Certificate Program”, which starts in May 2023.

This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general:

1: Algorithms, AI, & Machine Learning

2: Algorithmic Risk & Impact Assessments

3: AI Governance & Risk Management

4: Bias, Accuracy, & the Statistics of AI Testing

5: Algorithm Auditing & Assurance

Early pricing can be found here:   

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Interrogating Large Language Models with Jiahao Chen

019. Interrogating Large Language Models with Jiahao Chen

On this week’s Lunchtime BABLing (#19) we talk with Jiahao Chen; data scientist, researcher, and founder of Responsible Artificial Intelligence LLC.

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?

2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process?

3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering)?

4: Black-box vs. white-box testing of LLMs for algorithm auditing.

5: Classical assessments of intelligence and their applicability to LLMs.

6: Re-thinking education and assessment in the age of AI.

Jiahao Chen on Twitter | Responsible AI LLC

Available on YouTube, Simplecast, and all major podcast streaming platforms.