021. Large Language Models, Open Letter Moratorium on AI, NIST’s AI Risk Management Framework, and Algorithmic Bias Lab – Lunchtime BABLing
This week on Lunchtime BABLing, we discuss:
1: The power, hype, and dangers of large language models like ChatGPT.
2: The recent open letter asking for a moratorium on AI research.
3: In-context learning in large language models and the problems it poses for auditing.
4: NIST’s AI Risk Management Framework and its influence on public policy, such as California’s Assembly Bill No. 331.
5: Updates on The Algorithmic Bias Lab’s new training program for AI auditors.
Sign up for courses here:
Available on YouTube, Simplecast, and all major Podcast streaming platforms.