Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab

Written by Jeffery Recker

Co-Founder and Chief Operating Officer of BABL AI.
Posted on 04/04/2023
In Podcast

Episode 021 – Lunchtime BABLing

This week on Lunchtime BABLing, we discuss:

1: The power, hype, and dangers of large language models like ChatGPT.

2: The recent open letter asking for a moratorium on AI research.

3: In-context learning in large language models and the problems it poses for auditing.

4: NIST’s AI Risk Management Framework and its influence on public policy like California’s Assembly Bill No. 331.

5: Updates on The Algorithmic Bias Lab’s new training program for AI auditors.

Sign up for courses here:

Available on YouTube, Simplecast, and all major Podcast streaming platforms.
