Canadian Government Launches Consultation on AI Bill

While the United States, from New York City to Washington, D.C., remains immersed in deliberations over several AI measures, including the NO FAKES Act, another North American country is actively seeking public input on its own AI proposal. In 2022, the Canadian government introduced the Artificial Intelligence and Data Act as part of Bill C-27. Since its introduction, the bill has undergone several updates, and a parliamentary committee has launched a study of C-27. On Thursday, October 12, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, and the Honourable Pascale St-Onge, Minister of Canadian Heritage, launched a consultation process to gather public input.

As per the consultation paper, Canadians have until Monday, December 4, 2023, to provide online feedback. The consultation will delve into questions surrounding the use of copyrighted material in the training of AI systems, authorship and ownership rights of AI-generated content, and liability in cases where AI-generated content infringes on copyrighted material. This isn’t the first time the Canadian government has sought public input or outlined future AI regulations.

This year’s consultation follows a previous attempt in 2021. On September 27, 2023, the Canadian government introduced the country’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative Artificial Intelligence Systems. In a press release, Minister Champagne remarked, “As developments in AI intensify, our government is seizing every opportunity to stimulate innovation and explore the possibilities offered by this revolutionary technology. Canada’s copyright framework needs to remain balanced and able to facilitate a functional marketplace, and that’s why we’re studying the best way forward to protect the rights of Canadians while ensuring the safe and ethical development of AI.”

To participate in Canada’s consultation, submit feedback through the Government of Canada’s online consultation page before the deadline.

For insights into how Canada’s Artificial Intelligence and Data Act could impact your business, contact BABL AI. Their Audit Experts can address your questions and concerns related to this and other AI regulations.

NYC introduces a new framework for evaluating AI

With Local Law 144, also known as the NYC Bias Law, now in full effect, New York City Mayor Eric Adams, alongside Chief Technology Officer Matthew Fraser and others, has introduced a new framework to evaluate AI. On Monday, October 16, Mayor Adams presented the comprehensive New York City Artificial Intelligence Action Plan, outlining the city’s commitment to harnessing AI’s potential to enhance services and processes for all New Yorkers. The plan is specifically crafted to ensure the responsible and equitable use of AI for the entire community.

The 51-page plan, collaboratively developed by the Office of Technology and Innovation, engaged 50 city employees from 18 agencies and gathered insights from leaders in industry, civil society, and academia. In the introductory segment of the plan, Mayor Adams addressed New Yorkers, underscoring the significance and possibilities AI holds for safety, opportunity, and efficiency citywide. Acknowledging both the rewards and the risks associated with AI, Mayor Adams emphasized the paramount importance of its responsible use. This sentiment was echoed in a subsequent letter from CTO Fraser, who stressed community engagement throughout the development and deployment of AI systems.

The plan encompasses the city’s ongoing AI-related initiatives and delineates seven key initiatives, featuring 37 specific actions to address a broad spectrum of issues:

  1. Establishing a responsible use framework for AI in city government.

  2. Creating a coordinated approach to AI governance and management.

  3. Developing a process to identify and assess high-risk AI systems.

  4. Ensuring transparency, fairness, and explainability of AI systems.

  5. Promoting responsible data sharing and collaboration across multiple agencies.

  6. Investing in the city’s AI ecosystem.

  7. Building public understanding and awareness of AI and its impacts.

Each action item includes timelines, responsible parties, and expected outcomes, with officials aiming to complete 27 of the 37 actions within the next year.

As a preliminary step, the plan calls for the establishment of an “AI Steering Committee” comprising stakeholders from across the city. Concurrent with the plan’s unveiling, the government announced the testing of the first citywide AI-powered chatbot. This chatbot, available in beta on the MyCity Business site, assists business owners with operations and growth in the city. Leveraging Microsoft’s Azure AI services, the chatbot was trained on information from over 2,000 NYC business web pages. Overall, the plan is a comprehensive and ambitious effort to leverage AI’s power for the collective benefit of New Yorkers. It not only establishes a blueprint for responsible AI use but also fosters transparency, collaboration, talent development, and community engagement. New York City aims to ensure that AI is employed ethically, equitably, and to the benefit of all.

For businesses seeking insights into the impact of this framework or in need of a New York City Bias Audit, contact BABL AI. Their Audit Experts can address your queries related to this regulation and more.

Norwegian Government Announces New Minister for Digitalisation and Governance with Responsibility for Artificial Intelligence

While the European Union (EU) refines the details of its Harmonised Rules on Artificial Intelligence, or the EU AI Act, Norway, a European country outside the EU, is gearing up to explore AI regulations in the coming year. On Monday, October 16, the Norwegian government announced the appointment of Karianne Tung as the new Minister of Digitalisation and Governance. The 39-year-old Tung will spearhead a newly established ministry tasked with overseeing Norway’s digital strategy and governance reforms.

Prime Minister Jonas Gahr Støre highlighted the pivotal role AI will play in Tung’s responsibilities. Tung’s mandate includes enhancing coordination on digitalization across Norway’s government departments and advancing the use of AI to enhance public services. In a press release from Trondheim Tech Port, Tung expressed her enthusiasm for the crucial role she is about to undertake: “Digitization and new technology are primarily an opportunity to make our lives better and easier. This can mean creating new jobs, finding solutions to climate challenges, and renewing public services for the people – at school, in the healthcare system, and in elderly care.”

While the new ministry does not begin operations until next year, Tung will collaborate with the current Ministry of Local Government and Regional Development in the interim. This transitional period will encompass the shift of three existing departments—Government Services, National IT Policy and Public Governance, and Employer Policy—into the new ministry. Additionally, Tung is expected to oversee state personnel policy, public administration reforms, and the development of a new government headquarters.

Tung’s appointment underscores Norway’s commitment to AI governance and responsible AI development. If you’re curious about the potential impact on your organization, consider reaching out to BABL AI. Their Audit Experts can provide tailored guidance based on your specific needs.

X could be punished by the Digital Services Act in the EU

On Thursday, October 12, the European Union (EU) revealed that it had formally requested information from X, scrutinizing the company’s handling of violent content and disinformation related to the Israel-Hamas conflict. The EU’s formal request seeks to evaluate X’s compliance with the recently enacted Digital Services Act (DSA). This inquiry has the potential to trigger a comprehensive investigation into the company’s adherence to the DSA. A failure to respond accurately or provide misleading information may expose X to fines of up to 5% of its daily global turnover. Continued violations under the DSA could escalate these fines to as much as 6% of the global turnover, translating to potential penalties in the hundreds of millions. Persistent violations could also result in the suspension of X in the EU.

The EU’s request followed an open letter from EU Commissioner Thierry Breton to X owner Elon Musk, posted on X just two days prior. In the letter, Breton emphasized the need for transparency in content policies, timely action in response to notices of illegal content in the EU, and the implementation of effective measures to address disinformation risks to public security and civic discourse.

As one of the 17 designated Very Large Online Platforms (VLOPs) under the DSA, X faces heightened scrutiny. The company is mandated to furnish details about the activation and functioning of its crisis response protocol by Wednesday, October 18, and has until Tuesday, October 31, to address the other concerns raised. Non-compliance could lead to the first major fine imposed under the DSA. Importantly, X is not the only social media company receiving warnings: in similar letters, Breton reminded Mark Zuckerberg of Meta and Shou Zi Chew of TikTok of their obligations to combat misinformation under the DSA, much of it likewise related to the Israel-Hamas conflict.

Since the inception of the DSA, major tech companies have been actively striving for compliance. If you’re uncertain about your company’s compliance under the DSA or other AI regulations, consider reaching out to BABL AI. Their Audit Experts possess specific expertise in handling DSA compliance and can assist you with any questions or concerns.

Ukrainian Minister Releases AI Roadmap

While many nations deliberate their strategies for AI regulation, Ukraine, a country grappling with a conflict, has unveiled its own roadmap. Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, introduced the country’s AI roadmap on October 7, emphasizing Ukraine’s ambition to lead the global AI trend. “Understanding current developments, reacting swiftly, and having our own strategy is essential,” stated Fedorov in a formal press release.

Ukraine’s AI roadmap adopts a phased approach with the overarching goal of balancing societal and business interests while safeguarding human rights. The first stage, spanning two to three years, concentrates on developing future regulators and preparing the business sector for impending regulation. The subsequent stage involves implementing their plan in alignment with the Harmonised Rules on Artificial Intelligence, known as the EU AI Act.

Although the next phase of Ukraine’s AI roadmap is slated for 2024, its implementation awaits the adoption of the EU AI Act to ensure regulatory alignment. This strategic timeline aims to enhance the competitiveness of Ukrainian businesses, facilitating global market access and positioning the country for seamless integration into the EU in the future. However, specific dates for implementation are not outlined in the roadmap.

Ukraine’s AI roadmap is characterized as a comprehensive and balanced framework designed to regulate AI within the country. Fedorov highlights the collaborative nature of the roadmap’s development, involving relevant businesses, scientists, and educators through an expert committee on AI under the Ministry. Fedorov asserts, “AI is actively used in various domains in Ukraine today, with particular significance in military technologies. Developing regulatory frameworks for AI is crucial for the country’s progress and enables us to move faster in this direction.”

For companies seeking insights into the implications of this announcement or assistance in preparing for EU AI Act Conformity Assessment, BABL AI stands ready to help. Contact them today and one of their Audit Experts can provide support.

Bipartisan Group of U.S. Lawmakers Introduce NO FAKES Act

The U.S. entertainment industry is lauding a newly introduced bipartisan discussion draft bill in Washington D.C. Titled the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act,” or NO FAKES Act, this initiative emerged on Thursday, October 12, with sponsorship from Senators Chris Coons of Delaware, Marsha Blackburn of Tennessee, Amy Klobuchar of Minnesota, and Thom Tillis of North Carolina. It is the latest addition to a series of bills addressing related concerns at the federal level.

The primary objective of the NO FAKES Act is to safeguard the image, voice, and visual likeness of individuals, whether living or deceased. It explicitly prohibits the production, publication, distribution, or transmission of unauthorized digital replicas of individuals without their explicit consent. Furthermore, the act bars the use of an individual’s visual likeness in a manner likely to cause confusion or deceive people. Violations could incur civil penalties, including damages and injunctive relief.

While stringent, the NO FAKES Act includes specific exceptions allowing the use of an individual’s image or voice without explicit consent. For instance, it exempts digital replicas used as part of news, public affairs, sports broadcasts, or reports. Similarly, an individual’s image or voice can be utilized in documentaries, docudramas, or historical/biographical works, provided the representation is factual and accurate.

Senator Coons emphasized the need for clear policies regulating the use and impact of generative AI, stating in a press release, “Congress must strike the right balance to defend individual rights, abide by the First Amendment, and foster AI innovation and creativity.”

Senator Tillis added, “While AI presents extraordinary opportunities for technological advancement, it also poses new problems, including the voice and likeness of artists being replicated to create unauthorized works.”

SAG-AFTRA President Fran Drescher underscored the importance of consent in comments to Deadline, stating, “A performer’s voice and appearance are all part of their unique essence, and it’s not okay when those are used without their permission.”

The NO FAKES Act is the latest in a series of legislative measures in the U.S. Over the past month, state Governors Phil Murphy, J. Kevin Stitt, Josh Shapiro and Glenn Youngkin issued executive orders related to AI, while lawmakers in D.C. urged the White House to adopt the Blueprint for an AI Bill of Rights.

Companies seeking insights into the potential impact of these diverse legislative efforts are encouraged to reach out to BABL AI. Their team of Audit Experts is ready to provide tailored guidance and support.

UNESCO announces new AI project

Aiming to address the ethical governance of AI, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Dutch Authority for Digital Infrastructure, with financial support from the European Commission’s Technical Support Instrument (TSI), have jointly unveiled the “Supervising AI by Competent Authorities” project. Launched on October 5, this collaborative initiative seeks to collect data and establish a comprehensive framework for the ethical supervision of AI in European countries.

The project aligns with the EU AI Act and UNESCO’s Recommendation on the Ethics of AI, issued in November 2021. Focusing on societal challenges and risks associated with AI, the project emphasizes the imperative for effective and ethical governance frameworks. According to Gabriela Ramos, the Assistant Director-General for Social and Human Sciences of UNESCO, “This is not a technological discussion. It is a societal one. We are talking about the kind of world we want to live in. To shape the technological development of AI, we need effective governance frameworks underpinned by the ethical and moral values we all hold dear.”

UNESCO is adopting a multi-faceted strategy to achieve its objectives:

  1. A comprehensive global report will delve into the current state of AI across the world.

  2. A series of case studies will offer insights into real-world scenarios and their governance approaches.

  3. The project will formulate best practices for AI supervision, covering regulatory frameworks, risk assessments, and ethical compliance and deployment.

  4. Organized training sessions will equip authorities with the knowledge and skills needed to navigate the complexities of AI.

  5. Ongoing support will be provided to authorities beyond the initial project phase.

  6. The collaborative effort extends beyond the EU, with the project aspiring to contribute to global ethical practices.

While the project is in its early stages, it holds the potential to influence the EU AI Act and have repercussions on global AI-related legislation. For insights into how UNESCO’s project might impact your company or to ensure compliance with existing laws, consider reaching out to BABL AI. Their Audit Experts can provide guidance tailored to your specific needs.

New Jersey Establishing AI Task Force

New Jersey Governor Phil Murphy has joined a growing list of U.S. Governors, including J. Kevin Stitt, Josh Shapiro and Glenn Youngkin, in acknowledging the potential benefits of AI. On October 10, 2023, Governor Murphy signed Executive Order No. 346, a landmark decision that recognizes the transformative potential of AI and establishes a dedicated task force. The primary aim of this task force is to explore the responsible development and utilization of AI within the state of New Jersey.

Under the provisions of Executive Order No. 346, the task force will comprise representatives from various state agencies, academic institutions, and industry partners. Its overarching goal is to identify potential risks and challenges associated with AI, including concerns related to data privacy and security, and to devise comprehensive strategies for effectively addressing these issues.

The six-page Executive Order extends beyond the establishment of the task force and outlines several additional initiatives. The Economic Development Authority is tasked with exploring avenues through which AI can catalyze economic growth and foster job creation. Simultaneously, the Office of the Secretary of Higher Education will undertake a comprehensive review of AI as a research opportunity for the state’s colleges and universities. The Office of Information Technology is entrusted with developing a policy governing the use of AI within state government, coupled with an evaluation of tools and strategies to enhance government services through AI. Furthermore, the Office of Innovation is tasked with formulating a training program aimed at fostering responsible and effective use of AI.

In essence, the Executive Order is a robust document that meticulously examines the potential benefits and challenges associated with AI across diverse public and private sectors. It notably underscores the critical importance of collaboration among government bodies, industry stakeholders, and academic institutions. To spearhead these efforts, the AI task force will be led by prominent state figures, including the Chief Technology Officer, the Chief Innovation Officer, the Chief Executive Officer, the Director of the Office of Diversity, Equity, Inclusion, and Belonging, the Commissioner of the Department of Education, the Secretary of Higher Education, the Commissioner of the Department of Labor and Workforce Development, the Director of the New Jersey Office of Homeland Security and Preparedness, the Attorney General, and any additional members appointed by the Governor.

The task force is charged with compiling and releasing a comprehensive report within 12 months of the order. This report, shedding light on the state’s AI landscape, will be made accessible to the public and presented to the state Legislature simultaneously.

Companies seeking clarity on how New Jersey’s executive order, alongside other state laws and executive orders, may impact them are encouraged to reach out to BABL AI. Their team of Audit Experts stands ready to provide tailored support and guidance.

U.S. House and Senate Democrats Urge President to Issue AI Executive Order

As the U.S. House remains without a Speaker, Democrats in both the House and Senate are urging President Joe Biden to issue an executive order on AI. In a letter dispatched to the White House on Wednesday, Senator Ed Markey of Massachusetts and Representative Pramila Jayapal of Washington implore the Biden administration to adopt the Blueprint for an AI Bill of Rights as the foundational framework for this executive order.

The AI Bill of Rights, released by the White House Office of Science and Technology Policy (OSTP) in October 2022, serves as a comprehensive framework delineating principles and guidance surrounding AI. Its primary objective is to steer the development and deployment of AI systems in accordance with democratic values while safeguarding the civil rights, civil liberties, and privacy of American citizens.

Five core principles underscore the AI Bill of Rights:

  1. Safe and Effective Systems: AI systems should be developed with consultation from diverse communities and undergo pre-deployment testing and risk mitigation.

  2. Algorithmic Discrimination Protections: AI systems should be designed and used in an equitable way, without contributing to unjustified discrimination.

  3. Data Privacy: People should be protected from abusive data practices and have agency over how their data is collected and used.

  4. Notice and Explanation: People should know when an automated system is being used and understand how and why it contributes to outcomes that affect them.

  5. Human Alternatives, Consideration, and Fallback: People should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems.

The AI Bill of Rights is complemented by a technical companion offering concrete steps, applicable to diverse organizations both public and private, across seven sections:

  1. Overview: Providing an initial understanding of the companion’s content.

  2. Safe and Effective AI Systems: Outlining steps for testing, validation, and risk management to ensure the safety and efficacy of AI systems.

  3. Algorithmic Discrimination Protections: Detailing steps concerning data collection, analysis, and monitoring to prevent algorithmic discrimination.

  4. Data Privacy: Encompassing steps such as data minimization, de-identification, secure storage, and transmission.

  5. Notice and Explanation: Providing clear explanations of how AI systems work and transparent disclosure of data collection and usage.

  6. Human Alternatives, Consideration, and Fallback Options: Outlining steps for human oversight, interview processes, and alternative considerations.

  7. Appendix: Illustrating the principles and steps outlined throughout the technical companion.

Amidst ongoing discussions in D.C. regarding AI regulation, Democrats express in their letter to President Biden, “Your Administration has the opportunity to establish these protections as government-wide policy by incorporating the AI Bill of Rights into your upcoming executive order on AI, or subsequent executive orders. In particular, these principles should apply when a federal agency develops, deploys, purchases, funds, or regulates the use of automated systems that could meaningfully impact the public’s rights.”

For companies seeking insights into the potential implications of an executive order on AI in the U.S., reach out to BABL AI. One of their Audit Experts can provide valuable support tailored to specific needs.

UK’s ICO issues notice against Snapchat’s AI chatbot

While the United Kingdom currently lacks comprehensive legislation specifically regulating AI systems, existing laws, regulations, and oversight bodies remain vigilant. On October 6, 2023, the UK’s Information Commissioner’s Office (ICO) issued a preliminary enforcement notice against Snap, Inc., and Snap Group Limited (Snap), the parent company of the popular social media app Snapchat. The ICO’s action stems from concerns that Snap may have inadequately assessed the privacy risks associated with its “My AI” feature, particularly concerning UK users aged 13-17. Introduced in February 2023, “My AI” is a chatbot integrated into the Snapchat platform.

According to the ICO’s investigation, Snap’s initial risk assessment failed to sufficiently address the data protection risks posed by AI technology, especially in the context of minors. The enforcement notice suggests that Snap might be compelled to temporarily halt the processing of data related to “My AI” in the UK, effectively suspending the service for users in the region. However, this suspension would be temporary, contingent upon Snap conducting a comprehensive risk assessment that adequately identifies and evaluates potential harms arising from “My AI.” Once this evaluation is complete, and any necessary mitigations are implemented, the ban on data processing would be lifted.

Importantly, the ICO is not mandating a permanent suspension of “My AI” but is emphasizing the need for Snap to conduct a thorough assessment to address privacy risks before resuming its use. Snap will have an opportunity to respond to the notice before any final enforcement action is taken. The ICO underscores that this proactive stance is aimed at safeguarding consumers’ privacy rights in the UK concerning AI technology. The enforcement notice follows the ICO’s earlier guidance to companies developing or using generative AI systems, reminding them of their existing data protection obligations under the UK GDPR and the Data Protection Act 2018.

Want to know if your company’s AI could be impacted by laws already on the books in the UK? Contact BABL AI and one of their Audit Experts will be able to answer all your questions related to this, AI audits, and more.