UPDATE — JULY 2025: The U.S. AI Safety Institute has undergone major changes under the Trump administration.
The institute has been rebranded as the Center for AI Standards and Innovation (CAISI), with the word “Safety” removed from its name. Alongside the rebranding, its mission has shifted: the new focus emphasizes U.S. competitiveness, standards-setting, and national security rather than risk mitigation and ethical guardrails.
Several 2025 executive orders reversed prior Biden-era directives. These changes eliminated references to “AI fairness” and “AI safety” from official federal and NIST documents. The current approach prioritizes rapid AI deployment and global influence in AI standards.
The leadership appointments below were accurate as of April 2024. However, the policy context and institutional identity have evolved significantly since then.
ORIGINAL NEWS STORY:
New Leadership Team Announced for U.S. AI Safety Institute
In April 2024, U.S. Secretary of Commerce Gina Raimondo unveiled the latest additions to the executive leadership team of the U.S. AI Safety Institute (AISI), marking a significant step forward in the nation’s commitment to responsible AI governance. The institute, housed at the National Institute of Standards and Technology (NIST), is tasked with advancing the safe and trustworthy development and deployment of artificial intelligence technologies.
Leadership Team Appointments
The new leadership included:
- Paul Christiano – Head of AI Safety
- Adam Russell – Chief Vision Officer
- Mara Campbell – Chief Operating Officer and Chief of Staff (Acting)
- Rob Reich – Senior Advisor
- Mark Latonero – Head of International Engagement
They joined previously appointed AISI Director Elizabeth Kelly and Chief Technology Officer Elham Tabassi.
Each leader brought specific expertise to guide the institute’s work, from national security testing of frontier models to international collaboration on AI safety standards.
Statements from Federal Leaders
Secretary Raimondo expressed her support for the new team. She emphasized that attracting top experts was crucial to keeping the U.S. at the forefront of responsible AI. “We’ve selected the best in their fields to lead this effort,” she said.
Bruce Reed, White House Deputy Chief of Staff, echoed that view. He underscored the value of drawing from civil society, academia, and the private sector to “shape AI in accordance with our values.”
NIST Director Laurie E. Locascio also praised the new leaders. She noted their collective experience would help the institute build “a solid foundation for AI safety going into the future.”
Core Functions and Roles
- Paul Christiano leads efforts to test advanced AI systems, especially those with national security implications.
- Adam Russell shapes the institute’s long-term vision and strategic direction.
- Mara Campbell oversees day-to-day operations, acting as a central organizer.
- Rob Reich builds bridges with civil society and ensures public concerns are addressed.
- Mark Latonero leads global engagement efforts to align U.S. initiatives with international standards.
Together, they were tasked with building a comprehensive, stakeholder-informed approach to AI governance.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions or concerns. Don’t hesitate to reach out to BABL AI; their Audit Experts are ready to provide valuable assistance.

