BABL AI to Contribute to AI Governance Workshop and Panel at IASEAI’26 at UNESCO House

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/20/2026
In Press

BABL AI is proud to announce that Dinah Rabe will represent the organization at IASEAI’26, the second annual conference of the International Association for Safe & Ethical AI (IASEAI), taking place February 24–26, 2026, at UNESCO House in Paris.

Dinah Rabe will participate in a full-day governance workshop titled:

“Who Watches the Watchers? Designing Trustworthy Public-Private Frameworks for AI Governance”

The workshop, proposed by Dr. Gillian Hadfield (Johns Hopkins University) and Bri Treece (Fathom), is designed to move beyond high-level AI governance principles and focus on the operational architecture of trustworthy oversight systems. It will explore emerging outcome-based regulatory models—sometimes described as “regulatory markets”—and examine the practical mechanisms required to make such frameworks credible, scalable, and accountable.

Dinah Rabe will take part in Panel 2: The State of AI Assurance, alongside leading experts in AI verification and oversight. The session will evaluate the current technical capabilities of AI testing, monitoring, and assurance systems, as well as the roadmap for strengthening independent AI verification.

The afternoon session will feature an interactive tabletop exercise simulating the negotiation of a public-private oversight partnership in a high-stakes domain such as healthcare or finance. Participants will define accountability mechanisms, transparency requirements, conflict-of-interest safeguards, and reporting structures between government entities and third-party expert organizations.

The exercise will introduce real-world stressors, including system failures and governance conflicts, to test the resilience of proposed frameworks. A structured debrief will extract practical lessons for policymakers, regulators, and assurance providers.

“Designing effective AI oversight requires more than principles. It requires operational clarity, credible assurance mechanisms, and trust between public and private actors,” said Dinah Rabe. “Workshops like this help move the field from abstract debate to implementable governance architecture.”

BABL AI continues to support the development of rigorous AI auditing and assurance frameworks globally, contributing expertise at the intersection of technical evaluation, regulatory alignment, and ethical risk management.

About BABL AI:

BABL AI is a global leader in independent AI and algorithmic auditing, governance, and risk assurance. The company works with governments, enterprises, and certification bodies to design and evaluate responsible AI systems aligned with emerging regulatory frameworks.

About IASEAI:

IASEAI is an independent nonprofit organization committed to ensuring AI systems operate safely and ethically for the benefit of humanity. IASEAI brings together experts from academia, policy, civil society, industry, and international institutions to translate AI safety principles into actionable governance strategies.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.