The Government of Canada has taken another major step in fostering the safe and responsible development of artificial intelligence (AI). Ten new organizations recently signed Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, expanding the list of signatories committed to ethical AI practices.
The announcement, made by the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, highlights Canada’s leadership in promoting trust and safety in AI. With the addition of these organizations—including TELUS Digital, SAP Canada, and Interac Corp.—the number of signatories has now grown to 40. The voluntary code encourages adherence to key principles in developing and managing generative AI systems, aiming to mitigate risks while unlocking AI’s vast potential.
The new signatories represent a diverse range of sectors, including technology, finance, and telecommunications. By signing the code, organizations commit to measures ensuring transparency, accountability, and ethical AI governance.
Tobias Dengel, President of TELUS Digital Solutions, affirmed the company’s dedication: “We are dedicated to upholding the standards outlined in Canada’s voluntary AI code of conduct to help guide our approach to AI design, implementation, deployment, and governance.”
Debbie Gamble, Chief Strategy and Marketing Officer at Interac Corp., added, “By embracing Canada’s voluntary generative AI code of conduct, we underscore our commitment to the responsible and ethical use of AI.”
Hewlett Packard Enterprise’s Trish Damkroger emphasized the importance of collaboration: “We believe in the power of public-private partnerships to enable future AI-driven innovations where data privacy, ethics and sustainability are integral to AI’s design, deployment, and use.”
The voluntary code is part of Canada’s broader strategy to secure its leadership in AI innovation while ensuring public safety. The government has allocated $2.4 billion in Budget 2024 to advance AI development, including establishing the Canadian AI Safety Institute, enhancing AI infrastructure, and supporting workforce training.
Canada has also introduced the Artificial Intelligence and Data Act (AIDA), a legislative effort to regulate high-impact and general-purpose AI systems. AIDA, currently under review by the House of Commons Standing Committee on Industry and Technology, aims to ensure that AI systems prioritize human rights, safety, and transparency.
Minister Champagne praised the growing coalition of AI leaders: “It is excellent news that so many organizations have signed on to the voluntary code of conduct to help build trust and safety as our AI industry grows.”
Need Help?
If you’re wondering how Canada’s AI strategy, or any other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.