UPDATE — SEPTEMBER 2025: Since its launch in November 2024, the Canadian Artificial Intelligence Safety Institute (CAISI) has moved from announcement into active operations, establishing leadership, initiating research, and expanding its role in global AI safety collaboration. In February 2025, the government appointed Dr. Elissa Strome, formerly of CIFAR, as CAISI’s first Executive Director, providing the institute with a clear governance structure. By spring 2025, CAISI issued its first calls for investigator-led AI safety research projects through CIFAR, focusing on interpretability, robustness testing, and the misuse of AI in cybersecurity. These projects are expected to begin in 2026.
Internationally, CAISI took part in the inaugural convening of the International Network of AI Safety Institutes in San Francisco in November 2024 and has deepened that engagement through 2025, working alongside counterparts in the U.S., U.K., Japan, and Singapore. Through this forum, Canada has been coordinating frontier model evaluations, developing technical safety benchmarks, and sharing information with global partners. NRC-led initiatives have also been aligned with trilateral commitments between the U.S., U.K., and Canada, including early projects that target AI use in critical infrastructure and election disinformation.
At the policy level, CAISI has become increasingly linked to Canada’s pending Artificial Intelligence and Data Act (AIDA). After stalling in 2024, AIDA was revived in spring 2025 with amendments, and CAISI has been positioned to serve as the technical body responsible for testing and assurance once the law is enacted. The institute has also contributed to consultations on the updated AI in the Public Service Strategy, published in June 2025, applying its expertise to government procurement and risk assessments.
CAISI has also begun building its public role. In summer 2025, it hosted workshops and expert roundtables in Toronto and Montréal to engage stakeholders and outline its priorities. These efforts suggest CAISI is on track to serve as a national evaluation hub for large AI models, comparable to the role the U.S. AI Safety Institute plays within NIST. While still in its early build-out phase, CAISI now stands as a focal point for Canada’s AI safety ecosystem, advancing research, aligning with federal legislation, and contributing to international cooperation on responsible AI governance.
ORIGINAL NEWS POST:
Canada Launches National AI Safety Institute to Address Risks and Build Trust
The Canadian government has officially launched the Canadian Artificial Intelligence Safety Institute (CAISI), a landmark initiative designed to position Canada at the forefront of safe and responsible artificial intelligence (AI) development and deployment. This institute will serve as the cornerstone of the government’s strategy to mitigate AI-related risks while fostering innovation.
Announced by François-Philippe Champagne, Minister of Innovation, Science and Industry, CAISI is a direct response to growing concerns about the misuse of AI technologies in areas such as disinformation, cybersecurity, and election interference. Backed by an initial $50 million budget over five years, the institute is part of a broader $2.4 billion investment from the 2024 federal budget.
CAISI aims to tackle the complexities of AI safety by leveraging Canada’s globally renowned AI research ecosystem, including partnerships with institutions like the National Research Council of Canada (NRC), CIFAR, and three leading national AI institutes—Amii in Edmonton, Mila in Montréal, and the Vector Institute in Toronto. The institute will conduct research through two streams:
- Applied and Investigator-Led Research: CIFAR will oversee research projects addressing fundamental AI safety questions with input from Canadian and international experts.
- Government-Directed Projects: The NRC will focus on initiatives aligned with government priorities, such as cybersecurity and collaborations with other international AI safety institutes.
A core aspect of CAISI’s mission is fostering international collaboration. The institute will work closely with global AI safety organizations, building on Canada’s commitments under the Bletchley Declaration, which emphasizes global AI safety coordination. Later this month, CAISI representatives will participate in the inaugural meeting of the **International Network of AI Safety Institutes** in San Francisco.
CAISI is a key component of Canada’s comprehensive strategy for AI governance, which includes the proposed “Artificial Intelligence and Data Act” and a “Voluntary Code of Conduct” on managing advanced AI systems responsibly. These measures aim to protect Canadians while enabling businesses to innovate.
Canada’s AI sector is a critical driver of economic growth, employing over 140,000 professionals and attracting $8.6 billion in venture capital in 2022 alone. The country also ranks first globally in year-over-year growth of women in AI and leads the G7 in AI research output per capita. With the establishment of CAISI, Canada seeks to build on these achievements while addressing the ethical and safety challenges posed by rapid AI development.
Need Help?
If you’re wondering how CAISI, or any other AI institutes, strategies, regulations, or laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.


