Canadian Organizations Sign On to Voluntary AI Code of Conduct

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/25/2024
In News

UPDATE — SEPTEMBER 2025:

Since Canada announced in November 2024 that 10 new organizations had signed its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, the country has moved forward significantly on AI governance. The voluntary code, which stood at 40 signatories after that announcement, has continued to grow. By mid-2025, more than 55 organizations had joined, expanding beyond large technology and finance companies to include SMEs and public-sector institutions. The government has also begun promoting the code internationally as a pre-regulatory safeguard that complements Canada's legislative efforts.

The most important legislative development has been progress on the Artificial Intelligence and Data Act (AIDA). Bundled within Bill C-27, AIDA advanced through the Standing Committee on Industry and Technology in June 2025, where members adopted amendments narrowing the definition of "high-impact systems," strengthening the powers of the proposed AI and Data Commissioner, and clarifying obligations for general-purpose AI models. The bill was reported back to the House of Commons in July 2025 and now awaits third reading. If passed before year's end, AIDA would mark Canada's first binding national AI law, though many enforcement details will depend on forthcoming regulations.

Meanwhile, the Canadian AI Safety Institute (CASI), created under Budget 2024’s $2.4 billion AI package, formally launched operations in early 2025. Its first work program, released in the spring, outlined priorities including frameworks for third-party testing and auditing of frontier models, red-teaming protocols, and incident reporting guidance. CASI has also begun participating in international coordination efforts, engaging with OECD forums, the G7 Hiroshima AI Process, and the International Network of AI Safety Institutes.


ORIGINAL NEWS POST:


Canadian Organizations Sign On to Voluntary AI Code of Conduct


Canada has taken another step toward responsible artificial intelligence development. Ten new organizations have signed the country’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Their participation signals broader industry support for ethical AI practices.

The announcement came from the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry. With the newest additions—including TELUS Digital, SAP Canada, and Interac Corp.—the total number of signatories has reached 40. The code encourages organizations to follow core principles such as transparency, accountability, and safe deployment of generative AI.

Growing Coalition Across Industries

The new participants represent a wide range of sectors, including technology, telecommunications, and finance. Each organization has committed to using AI in ways that reduce risk while supporting innovation.

TELUS Digital Solutions President Tobias Dengel noted the company’s focus on responsible design and deployment of AI systems. Interac Corp.’s Chief Strategy and Marketing Officer, Debbie Gamble, echoed this commitment to ethical use. Hewlett Packard Enterprise executive Trish Damkroger emphasized the role of public-private collaboration in shaping trustworthy AI ecosystems.

Part of a Broader National Strategy

The voluntary code fits into Canada’s larger effort to guide AI development. Through Budget 2024, the government allocated $2.4 billion to strengthen the country’s AI ecosystem. Investments include the creation of the Canadian AI Safety Institute, major upgrades to digital infrastructure, and new funding for workforce training.

Canada is also advancing the Artificial Intelligence and Data Act (AIDA). The proposed law focuses on high-impact and general-purpose AI systems and aims to protect human rights, safety, and transparency. AIDA is currently under review by the House of Commons Standing Committee on Industry and Technology.

Building Public Trust in AI

Minister Champagne welcomed the growing list of code participants and stressed the importance of trust as AI adoption increases across industries. The voluntary code offers a framework for organizations to act responsibly while preparing for future regulatory requirements.


Need Help?

If you're wondering how Canada's AI strategy, or any other AI strategies and laws worldwide, could impact you and your business, don't hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.
