Lunchtime BABLing

Catch up on the latest in AI Auditing and Compliance with the BABL AI podcast.

Ensuring LLM Safety: A Guide to Evaluation and Compliance

As the adoption of large language models (LLMs) accelerates across sectors, so too do the questions surrounding their safety, reliability, and compliance. In this solo episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown offers a deep dive into how to evaluate LLM-powered systems with a risk-aware mindset—especially in light of new regulations like the EU AI Act, Colorado’s AI law, and pending legislative efforts in California and New York.
