BABL AI Senior Consultant Borhane Blili-Hamelin Participates in NIST’s AI Safety Institute Consortium Plenary

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/09/2024
In Press

BABL AI is proud to announce that Senior Consultant Borhane Blili-Hamelin participated in the recent plenary event hosted by the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Safety Institute Consortium (AISIC). This member-only hybrid event, held on December 3, 2024, brought together representatives from 280 organizations, including BABL AI, to reflect on accomplishments and set the agenda for advancing AI safety and evaluation practices in the coming year.

The AISIC is a cornerstone of the U.S. government’s efforts to engage external organizations in shaping the future of AI safety. Since its inception in the winter of 2024, the consortium has been at the forefront of tackling critical issues such as synthetic content, generative AI risk management, evaluation, red-teaming, and frontier AI safety and security. The December plenary served as an opportunity for members to collaborate with NIST staff, share insights, and plan the next steps in addressing the challenges of emerging AI technologies.

Borhane Blili-Hamelin’s Key Takeaways:

  • Engaging with AI Leaders: Reflecting on his experience, Blili-Hamelin highlighted NIST’s leadership in evidence-based AI policy and its innovative approaches to AI testing and evaluation. “I’m in awe of the phenomenal work NIST and AISI staffers are doing,” he said. “It’s inspiring to see how the consortium provides a persistent space for outside organizations to collaborate on pushing the science of AI evaluation and safety forward.”

  • The Challenge of Foundation Models: Addressing the reliability, safety, and security challenges posed by general-purpose AI technologies, Blili-Hamelin noted the complexity of converging on effective practices. However, he expressed optimism, saying, “The plenary left me feeling hopeful. NIST, the US AISI, and consortium members like BABL AI are bringing the urgency, skill, and dedication needed to meet the moment.”

About BABL AI:

Since 2018, BABL AI has been auditing and certifying AI systems, consulting on responsible AI best practices, and offering online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing.

About NIST’s Artificial Intelligence Safety Institute Consortium:

The Artificial Intelligence Safety Institute Consortium, established under the U.S. Artificial Intelligence Safety Institute by the National Institute of Standards and Technology (NIST), is a groundbreaking initiative designed to advance the development of safe and trustworthy AI systems.
