BABL AI Senior Consultant Borhane Blili-Hamelin Participates in NIST’s AI Safety Institute Consortium Plenary

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/09/2024
In Press

BABL AI announced that Senior Consultant Borhane Blili-Hamelin participated in a recent plenary event hosted by the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Safety Institute Consortium (AISIC). The member-only hybrid event took place on December 3, 2024.

The plenary brought together representatives from 280 organizations, including BABL AI. Participants reflected on progress made in 2024 and helped set priorities for advancing AI safety and evaluation work in the year ahead.

AISIC’s Role in Advancing AI Safety

AISIC plays a central role in the U.S. government’s efforts to collaborate with external organizations on AI safety. Since its launch in winter 2024, the consortium has focused on issues such as synthetic content, generative AI risk management, evaluation methods, red-teaming, and frontier AI safety and security.

During the December plenary, members worked directly with NIST staff. Together, they shared lessons learned and discussed next steps for addressing emerging risks tied to advanced AI systems.

Key Takeaways from the Plenary

Reflecting on the event, Blili-Hamelin emphasized NIST’s leadership in evidence-based AI policy and evaluation. “I’m in awe of the phenomenal work NIST and AISI staffers are doing,” he said. “It’s inspiring to see how the consortium provides a persistent space for outside organizations to collaborate on pushing the science of AI evaluation and safety forward.”

He also highlighted the growing challenges posed by foundation models and general-purpose AI. These systems raise complex questions around reliability, safety, and security.

Despite these challenges, Blili-Hamelin expressed optimism. “The plenary left me feeling hopeful,” he said. “NIST, the US AISI, and consortium members like BABL AI are bringing the urgency, skill, and dedication needed to meet the moment.”

BABL AI’s Ongoing Engagement in AI Safety

BABL AI’s participation in AISIC reflects its ongoing commitment to advancing trustworthy AI practices. Through audit work, research collaboration, and engagement with public institutions, the company continues to contribute to the development of rigorous AI evaluation and governance approaches.


About BABL AI:


Since 2018, BABL AI has been auditing and certifying AI systems, consulting on responsible AI best practices, and offering online education on related topics. BABL AI’s overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing.


About NIST’s Artificial Intelligence Safety Institute Consortium:


The Artificial Intelligence Safety Institute Consortium, established under the U.S. Artificial Intelligence Safety Institute by the National Institute of Standards and Technology (NIST), is a groundbreaking initiative designed to advance the development of safe and trustworthy AI systems.
