What Companies Need To Consider When Implementing AI | Lunchtime BABLing 30

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/22/2024
In Podcast

In the latest episode of Lunchtime BABLing, Shea Brown, CEO of BABL AI, and Bryan Ilg, VP of Sales, discuss AI compliance, governance, ethical considerations, and the importance of building trust in AI technologies. Their conversation sheds light on the evolving landscape of artificial intelligence and its implications for businesses.


The Core of AI Governance

The dialogue opens with an exploration of AI’s potential and the necessity of addressing bias and ethical risks. Shea emphasizes that while AI offers immense benefits, it also presents significant challenges, particularly in terms of regulatory and reputational risks. The discussion underscores the importance of a robust governance strategy to navigate these challenges effectively.

Avoiding Blind Spots

Bryan and Shea explore how business leaders often overlook regulatory risk, reputational damage, and societal impact when deploying AI. They stress the importance of evaluating how AI affects different communities and the need for responsible deployment.

Steps to Build a Governance Strategy

Shea outlines a foundational approach to AI governance:

  • Assign clear ownership and accountability

  • Inventory AI systems across the business

  • Conduct risk assessments and implement mitigations

  • Integrate these processes into broader risk management systems

Leveraging Existing Governance Frameworks

For companies with governance models already in place, Shea recommends a federated approach. This means embedding AI-specific responsibilities into local teams and existing compliance functions, while reinforcing education and alignment at the organizational level.

Why You Need an AI Compliance Team

The episode discusses how AI compliance teams can help maintain oversight, ensure policy adherence, and manage complex systems responsibly. Still, Shea notes the limits of internal teams—highlighting the value of external audits for independent assurance.

Third-Party Audits: Trust in Action

Bryan and Shea underscore that third-party AI audits are no longer a nice-to-have—they’re a strategic tool for compliance, trust, and stakeholder confidence. BABL AI’s audit methodology is designed to help organizations build responsible AI programs that meet regulatory expectations and stand up to public scrutiny.

Implementing AI Compliance as a Competitive Edge

The episode closes with a forward-looking take on the strategic benefits of early regulatory alignment, particularly with the EU AI Act. Shea draws parallels to the sustainability movement—companies that move first build stronger reputations and avoid costly last-minute scrambles.


Lunchtime BABLing listeners can use coupon code “BABLING” to receive a 20% discount on our AI and Algorithm Auditor Certification Program.


All Lunchtime BABLing episodes are available on YouTube, Simplecast, and all major podcast streaming platforms.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.