UPDATE — SEPTEMBER 2025: Since Canada announced in November 2024 that 10 new organizations had signed its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, the country has moved forward significantly on AI governance. The voluntary code, which stood at 40 signatories after that announcement, has grown steadily. By mid-2025, more than 55 organizations had joined, expanding beyond large technology and finance companies to include SMEs and public-sector institutions. The government has also begun promoting the code internationally as a pre-regulatory safeguard that complements Canada’s legislative efforts.
The most important legislative development has been progress on the Artificial Intelligence and Data Act (AIDA). Bundled within Bill C-27, AIDA advanced through the Standing Committee on Industry and Technology, which in June 2025 adopted amendments narrowing the definition of “high-impact systems,” strengthening the powers of the proposed AI and Data Commissioner, and clarifying obligations for general-purpose AI models. The bill was reported back to the House of Commons in July 2025 and now awaits third reading. If passed before year’s end, AIDA would mark Canada’s first binding national AI law, though many enforcement details will depend on forthcoming regulations.
Meanwhile, the Canadian AI Safety Institute (CASI), created under Budget 2024’s $2.4 billion AI package, formally launched operations in early 2025. Its first work program, released in the spring, outlined priorities including frameworks for third-party testing and auditing of frontier models, red-teaming protocols, and incident reporting guidance. CASI has also begun participating in international coordination efforts, engaging with OECD forums, the G7 Hiroshima AI Process, and the International Network of AI Safety Institutes.
ORIGINAL NEWS POST:
Canadian Organizations Sign On to Voluntary AI Code of Conduct
The Government of Canada has taken another major step in fostering the safe and responsible development of artificial intelligence (AI). Ten new organizations recently signed Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, expanding the list of signatories committed to ethical AI practices.
The announcement, made by the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, highlights Canada’s leadership in promoting trust and safety in AI. With the addition of these organizations—including TELUS Digital, SAP Canada, and Interac Corp.—the number of signatories has now grown to 40. The voluntary code encourages adherence to key principles in developing and managing generative AI systems, aiming to mitigate risks while unlocking AI’s vast potential.
The new signatories represent a diverse range of sectors, including technology, finance, and telecommunications. By signing the code, organizations commit to measures ensuring transparency, accountability, and ethical AI governance.
Tobias Dengel, President of TELUS Digital Solutions, affirmed the company’s dedication: “We are dedicated to upholding the standards outlined in Canada’s voluntary AI code of conduct to help guide our approach to AI design, implementation, deployment, and governance.”
Debbie Gamble, Chief Strategy and Marketing Officer at Interac Corp., added, “By embracing Canada’s voluntary generative AI code of conduct, we underscore our commitment to the responsible and ethical use of AI.”
Hewlett Packard Enterprise’s Trish Damkroger emphasized the importance of collaboration: “We believe in the power of public-private partnerships to enable future AI-driven innovations where data privacy, ethics and sustainability are integral to AI’s design, deployment, and use.”
The voluntary code is part of Canada’s broader strategy to secure its leadership in AI innovation while ensuring public safety. The government has allocated $2.4 billion in Budget 2024 to advance AI development, including establishing the Canadian AI Safety Institute, enhancing AI infrastructure, and supporting workforce training.
Canada has also introduced the Artificial Intelligence and Data Act (AIDA), a legislative effort to regulate high-impact and general-purpose AI systems. AIDA, currently under review by the House of Commons Standing Committee on Industry and Technology, aims to ensure that AI systems prioritize human rights, safety, and transparency.
Minister Champagne praised the growing coalition of AI leaders: “It is excellent news that so many organizations have signed on to the voluntary code of conduct to help build trust and safety as our AI industry grows.”
Need Help?
If you’re wondering how Canada’s AI strategy, or other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.