An international coalition of AI Safety Institutes has rebranded to reflect a growing focus on scientific measurement and evaluation of advanced artificial intelligence systems, as governments seek to keep oversight methods aligned with the rapid pace of AI development.
The group, previously known as the International Network of AI Safety Institutes, will now operate under the name International Network for Advanced AI Measurement, Evaluation and Science. The network includes institutes from Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States.
The rebrand signals a shift away from broader, and sometimes ambiguous, notions of “AI safety” toward a more technical emphasis on how advanced AI models are measured, tested, and evaluated. Officials involved in the network say robust scientific evaluation is essential for building trust in increasingly powerful AI systems and for ensuring that governance frameworks are grounded in evidence rather than assumptions.
As part of the transition, the United Kingdom has been named coordinator of the international network. In this role, the UK will help shape global efforts to improve methodologies for evaluating AI capabilities, risks, and real-world behavior across borders.
“Trust in AI isn’t optional – it’s critical,” said UK AI Minister Kanishka Narayan. He said the UK’s leadership role would focus on uniting countries around shared approaches to evaluation and research, helping unlock the benefits of AI while managing risks.
The international network was originally established in November 2024 following the UK-hosted AI Safety Summit, which brought together governments to coordinate responses to the emergence of advanced foundation models. Since then, participating countries have increasingly emphasized the need for standardized testing and measurement as AI systems grow more capable and widely deployed.
The rebrand also follows changes at the national level in the UK. In February, the UK’s own AI Safety Institute renamed itself the AI Security Institute, underscoring a sharper focus on national security concerns tied to advanced AI.
Adam Beaumont, interim director of the UK’s AI Security Institute, said global coordination is essential. “Advanced AI systems are being developed and deployed globally, so our approach to evaluating them has to be global too,” he said, adding that the network will prioritize practical, rigorous testing methods that can be applied across jurisdictions.