What Does Explainable AI Really Mean?

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/31/2025
In Podcast

In the latest episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to take on one of the most complex and misunderstood topics: explainable AI.

Topics Discussed:

  • Explainability vs. Interpretability: The conversation begins by untangling the often-interchanged terms “explainability” and “interpretability.” While both relate to how we understand what an AI system is doing, they serve different stakeholder needs. Interpretability typically focuses on the technical inner workings, while explainability aims to communicate outcomes in a human-understandable way. And as the team discusses, context matters—especially when AI decisions affect real lives.

  • Why a “Good Enough” Explanation Depends on Who’s Asking: Whether it’s a patient wondering why their doctor’s office is using AI or a regulator reviewing a high-risk system, expectations for what counts as a “useful” explanation vary widely. The group stresses that explainability should be use-case specific and grounded in stakeholder relevance. An explanation isn’t good enough simply because it sounds convincing—it must be meaningful to the person impacted.

  • When Even Humans Can’t Explain Themselves: Dr. Brown draws parallels between AI and human cognition, noting that humans frequently generate post hoc explanations that don’t reflect the true reasons behind our decisions. In that light, expecting AI systems—especially large language models—to give flawless, trustworthy explanations might be asking the impossible. That’s why oversight structures, testing, and certification matter more than we think.

  • Can You Trust an AI to Explain Itself?: Referencing the latest report from the Center for Security and Emerging Technology (CSET), the team explores the gap between explainability research and real-world impact. As the report points out, current evaluation methods focus more on internal correctness than external effectiveness—meaning explanations may satisfy researchers but still fall short for users, regulators, or affected communities.

  • The Need for Literacy, Certification, and Human Oversight: Whether it’s a receptionist explaining a waiver for AI-powered documentation tools or a developer deploying a recommender system, the group agrees: baseline AI literacy is essential. At BABL AI, that means building clear standards, training programs, and certifications to ensure systems don’t just work—but work responsibly.

Where to Find Episodes

Lunchtime BABLing can be found on YouTube, Simplecast, and all major podcast streaming platforms.

Need Help?

For more information and resources on explainable AI, be sure to visit BABL AI's website and stay tuned for future episodes of Lunchtime BABLing.
