Dozens of Countries Declare Cooperation on AI


While the United States grappled with AI in its own way last week, 28 countries, including the U.S., together with the European Union, reached an agreement of their own. On November 1, the United Kingdom published the “Bletchley Declaration” on the opening day of the AI Safety Summit. The Declaration aims to boost global cooperation on AI safety.


The Declaration affirms the need for safe and responsible development of AI so that its benefits can be realized and its risks managed. It emphasizes that AI presents enormous opportunities to enhance human well-being and achieve sustainability goals, but underlines that the risks must be examined and addressed through international cooperation, cooperation that aims to promote inclusive growth, protect rights, and build public trust.


The Declaration recognizes that AI is already being deployed across many aspects of life and that this is a unique moment in human history to act on the opportunities and risks before it’s too late. It highlights particular safety concerns around frontier AI: systems whose capabilities could match or exceed those of today’s most advanced models, posing a risk of harm from misuse or loss of control. The Declaration calls for urgently building a shared understanding of frontier AI risks and for addressing them through existing forums and initiatives, with signatories resolving to work together to ensure human-centric, trustworthy, and responsible AI for the benefit of all.


The Declaration urges a pro-innovation governance approach that maximizes benefits while accounting for the risks along the way. It notes the relevance of common principles and codes of conduct in the field of AI, and to that end calls for inclusive engagement with partners to build AI capacity on a global scale while ensuring safety through evaluations, testing, and transparency. The Declaration sets an agenda to identify shared risks, build scientific understanding of them, collaborate on evaluation and research, and support an international network of research on frontier AI safety.


The AI Safety Summit is set to reconvene in 2024. While no date has been announced, France is expected to host.


For assistance in navigating the changing global landscape of laws, regulations, and executive orders, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support.

U.S. Lawmakers Introduce Bill Complementing Recent Executive Order


Following the White House’s recent Executive Order, United States lawmakers appear to be next in line to empower the federal government to tackle AI issues. On Thursday, November 2, U.S. Senators Jerry Moran of Kansas and Mark Warner of Virginia introduced the Federal Artificial Intelligence Risk Management Act of 2023. The bill aims to regulate the use of AI by federal agencies.


The bill would require federal agencies to use the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology (NIST) when employing AI. It specifies that the framework is designed to help organizations manage the risks associated with AI use through a process encompassing the identification, assessment, mitigation, and monitoring of risks. The framework also outlines guidelines to ensure the transparency, explainability, and accountability of AI.
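

For readers who think in code, here is a minimal sketch of the identify-assess-mitigate-monitor cycle described above. It is an illustration only: the class names, the likelihood-times-impact scoring, and the review threshold are assumptions made for this example, not anything NIST publishes.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str
    likelihood: int   # assumed scale: 1 (rare) to 5 (frequent)
    impact: int       # assumed scale: 1 (minor) to 5 (severe)
    mitigation: str = ""
    status: str = "identified"

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-register convention
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def identify(self, description: str, likelihood: int, impact: int) -> AIRisk:
        risk = AIRisk(description, likelihood, impact)
        self.risks.append(risk)
        return risk

    def mitigate(self, risk: AIRisk, plan: str) -> None:
        risk.mitigation = plan
        risk.status = "mitigating"

    def monitor(self, threshold: int = 15) -> list[AIRisk]:
        # Flag high-scoring risks for ongoing review
        return [r for r in self.risks if r.score >= threshold]

register = RiskRegister()
risk = register.identify("Model produces biased eligibility decisions", 4, 5)
register.mitigate(risk, "Add bias testing before each deployment")
print([r.description for r in register.monitor()])
```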


The Federal Artificial Intelligence Risk Management Act of 2023 would also require the Office of Management and Budget (OMB) to issue guidance directing agencies to integrate the framework into their risk management efforts. The OMB would further be obligated to establish a workforce initiative facilitating federal agencies’ access to diverse AI expertise. The bill appears to be a direct response to President Joe Biden’s Executive Order, and if signed into law, it would have more staying power than an executive order, which a future presidential administration could rescind. According to a press release from Senator Moran’s office, U.S. Representative Ted W. Lieu of California will introduce companion legislation in the U.S. House of Representatives.


In the press release, Senator Moran stated, “AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector…However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data.”


Senator Warner added, “It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”


For assistance in navigating U.S. laws, regulations, and executive orders, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support.

U.S. Vice President Announces AI Initiatives


Just a few days after United States President Joe Biden signed an Executive Order on AI, Vice President Kamala Harris unveiled new U.S. initiatives on AI safety during her visit to the United Kingdom for the AI Safety Summit. On November 1, VP Harris announced several bold actions that, in her statement, she says build upon President Biden’s Executive Order.


In the release, VP Harris emphasizes that the U.S. is collaborating with the private sector, other governments, and civil society to uphold rights and safety as AI innovation progresses. She is committed to establishing international rules and norms for AI that reflect democratic values and believes that her participation in this week’s Summit will advance that agenda. Her press release outlines seven initiatives and actions.


The first initiative involves the creation of the U.S. AI Safety Institute (US AISI) within the National Institute of Standards and Technology (NIST) by the Department of Commerce. The US AISI will operationalize NIST’s AI Risk Management Framework by developing guidelines, tools, benchmarks, and best practices to evaluate and mitigate risks from AI. It will conduct evaluations, provide technical guidance for regulators, and facilitate collaboration and information sharing with peer institutions internationally.


The second initiative introduces the first-ever draft policy guidance on the use of AI by the U.S. government. Building on prior efforts like the AI Bill of Rights and NIST’s AI Risk Management Framework, the policy outlines steps to advance AI innovation, increase transparency and accountability, create safeguards for AI, and require federal agencies to conduct AI impact assessments. The draft has been released for public comment through the Office of Management and Budget.


The third initiative involves a political declaration in which 31 other nations join the U.S. in endorsing responsible military use of AI and autonomy. This declaration establishes norms for the responsible development, deployment, and use of military AI capabilities.


The fourth initiative sees 10 leading foundations committing over $200 million to advance AI in the public interest, covering areas such as protecting rights, providing transparency and accountability, empowering workers, and supporting international AI rules.


The fifth initiative focuses on detecting and blocking scammers who use AI-generated voice models to target and steal from vulnerable individuals through fraudulent phone calls.


The sixth initiative calls on all nations to support the development and implementation of international norms on authentic government-produced digital content and AI-generated or manipulated content.


The final initiative is a pledge to incorporate responsible and rights-respecting practices in government development, procurement, and use of AI.


For those curious about how the VP’s statement and other global laws could impact their company, reaching out to BABL AI is recommended. One of their audit experts will gladly provide assistance.

U.N. Launches AI Advisory Body


While the European Union and the United States pursue their unique approaches to AI, the United Nations (U.N.) has introduced a new advisory board dedicated to AI. The U.N. AI Advisory Body, unveiled on October 26, aims to scrutinize the risks, opportunities, and international governance of AI. Secretary-General Antonio Guterres emphasized the urgent need to address the transformative potential of AI.


With 39 members, the body includes tech executives like Sony Chief Technology Officer Hiroaki Kitano and OpenAI CTO Mira Murati, along with government officials from Mexico to South Africa and academics from countries such as the U.S. and Japan. The diverse composition of the body, representing six continents, underscores its commitment to global collaboration. The body is expected to deliver a preliminary report by the end of 2023 and a final report in 2024, with recommendations to be discussed at a U.N. summit in September 2024.


Upon the body’s announcement, the Secretary-General expressed the view that AI could contribute to addressing many global challenges. “But all this depends on AI technologies being harnessed responsibly, and made accessible to all – including the developing countries that need them most. As things stand, AI expertise is concentrated in a handful of companies and countries. This could deepen global inequalities and turn digital divides into chasms.” He highlighted that the advisory body marks the beginning of efforts to responsibly leverage the benefits of AI.


For those contemplating the potential impact of the U.N. and other governmental bodies worldwide, feel free to contact BABL AI. One of their audit experts can provide valuable guidance and support.

European Commission Adopts Rules on Independent Audits


While the European Union continues to finalize the details of the Harmonised Rules on Artificial Intelligence, or the EU AI Act, a significant development has occurred with the European Commission adopting a delegated regulation under the Digital Services Act (DSA). This Commission Delegated Regulation complements the regulations in place for digital services by establishing rules for conducting audits on very large online platforms (VLOPs) and very large online search engines (VLOSEs). The DSA is presently applicable to VLOPs and VLOSEs.


The Commission Delegated Regulation outlines rules concerning procedural steps, auditing methodologies, and reporting templates for audits performed on VLOPs and VLOSEs, ensuring comprehensive compliance with EU standards. The audit report template includes sections on the audit scope, methodology, findings, and opinion, while the audit implementation report template covers the implementation of audit recommendations, their status, and the reasons for any non-implementation. The audit methodology should be selected to suit the specifics of the audited obligation or commitment, remain adaptable, and take into account other information supplied by the provider, such as a risk analysis where the provider has conducted one. A written agreement, incorporating contractual terms, must be established between the audited provider and the auditing organization.
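

As a loose structural sketch of the two templates described above, the required sections might be laid out as follows. The keys and summaries paraphrase the sections named in the regulation; the actual templates are annexes to the Delegated Regulation and are considerably more detailed.

```python
# Loose structural sketch of the two DSA audit report templates;
# keys paraphrase the sections described above, not official field names.

audit_report_sections = {
    "audit_scope": "Services, obligations, and commitments covered by the audit",
    "methodology": "Method chosen per audited obligation, adaptable, and informed "
                   "by provider materials such as an existing risk analysis",
    "findings": "Evidence gathered and the auditor's assessment of compliance",
    "opinion": "The auditor's overall conclusion on the audited obligations",
}

audit_implementation_report_sections = {
    "implementation_of_recommendations": "How each audit recommendation was addressed",
    "implementation_status": "Whether each recommendation is complete or in progress",
    "reasons_for_non_implementation": "Justification for any item not implemented",
}

for section, summary in audit_report_sections.items():
    print(f"{section}: {summary}")
```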


Nineteen services were identified by the European Commission in April 2023 as the first to undergo audits. These services, including Alibaba AliExpress, Amazon Store, Apple AppStore, Bing, Booking.com, Facebook, Google Search, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando, have until the end of August 2024 to submit their initial audits to the Commission. It’s crucial to emphasize that the delegated act aims to establish a harmonized legal framework for all online intermediary services in the EU, fostering a safer digital environment that upholds the fundamental rights of all users.


For assistance in navigating DSA compliance, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support before the DSA goes into full effect.

Dissecting the White House’s Executive Order on AI


Early on Monday, October 30, the White House released an Executive Order on AI. The order focuses on monitoring and regulating AI risks while leveraging the technology’s potential, but details were sparse at first. Later in the day, President Joe Biden provided more comprehensive information than was initially disclosed.


The executive order outlines guiding principles for AI policy, emphasizing the safety and security of AI systems, the promotion of responsible innovation and competition, support for workers affected by AI implementation, advancement of equity and civil rights during AI proliferation, protection of U.S. residents and their privacy, and the strengthening of U.S. leadership abroad.


Within 30 to 365 days, various U.S. agencies are directed to take specific measures to manage AI risks. Notable directives include the development of AI safety guidelines and standards by the Commerce Department, guidance on AI use in public benefits programs by the Department of Health and Human Services, and the evaluation of AI risks to national security by the Defense Department. Other directives cover the integration of AI into critical infrastructure guidelines by Homeland Security, guidance on AI use in transportation from the Department of Transportation, and guidance and resources on AI use in education from the Department of Education. The State Department is tasked with streamlining visas to attract AI talent to the U.S., while the Labor Department is expected to develop AI best practices for employers.


Simultaneously, the executive order urges the Consumer Financial Protection Bureau to use its authorities to compel financial institutions to assess AI models for bias. The Federal Trade Commission is likewise encouraged to exercise its authorities to promote competition and protect consumers from AI-related harms. The order advocates for new research programs and funding for the National Science Foundation, the Department of Energy, and the National Institute of Standards and Technology. It also directs the United States Patent and Trademark Office to address IP policy reviews, patent eligibility, inventorship issues, and other patent-related subjects. Regarding competition, the Federal Trade Commission will explore new opportunities for small businesses while ensuring larger companies do not disadvantage smaller businesses and competitors.


The executive order includes directives to support workers affected by AI adoption and to strengthen civil rights protections concerning AI’s use in criminal justice, government benefits, and housing/lending. The Labor Department will provide guidance on how AI could impact workers, and the director of the National Science Foundation will explore ways to foster a diverse, AI-ready workforce, including educational resources and workforce development.


Furthermore, the executive order directs the Office of Management and Budget to issue guidance on federal AI use, risks, and talent needs. Government agencies are also asked to designate Chief AI Officers who will coordinate their agency’s use of AI, promote AI innovation, manage risks, and report on relevant issues. Overseeing all these efforts will be the White House AI Council, comprising key officials from various federal agencies. This council will ensure the effective and ethical formulation, development, communication, engagement, and implementation of AI-related policies.


If you have any inquiries or need assistance preparing for the executive order’s potential impact or any other AI laws globally, reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.

White House Issues Long-Awaited AI Executive Order


Following weeks of speculation and discussions in Washington D.C., the White House has officially introduced a comprehensive executive order addressing AI. President Joe Biden’s administration unveiled the order, which focuses on monitoring and regulating AI risks while harnessing the technology’s potential, on Monday, October 30. The order has garnered early praise from some within the AI community, including the Center for Data Innovation, which stated, “Too often, proposals for AI exclusively focus on how AI might go wrong and leave out policies for how to ensure the technology goes right. The EO rightly includes steps to harness AI’s potential in education and healthcare, but achieving AI adoption at scale requires much more significant investment and detailed policy initiatives than the EO currently envisions.”


President Biden’s executive order establishes new standards for AI safety and security, mandating that developers of high-risk AI systems share safety test results with the government. Simultaneously, it directs actions to safeguard Americans’ privacy and civil rights from AI-related risks. Although the order does not explicitly define high-risk AI systems, it specifies that companies developing foundation models posing serious risks to national and economic security, public health, or safety must notify the government and share testing results.


The order introduces standards for rigorous testing of AI systems before public release, guarding against potential misuse for fraud or dangerous biological engineering while enhancing privacy protections. Additionally, it aims to prevent AI-based discrimination in justice, healthcare, and housing: the order addresses algorithmic discrimination, seeks to ensure fairness in criminal justice AI, and advances responsible AI in healthcare and education. Although not explicitly outlined, the order commits to developing principles and best practices to protect workers from AI-related harms, covering issues such as job displacement, labor standards, workplace equity and safety, and data collection. The objective is to provide guidance to prevent unfair treatment of workers by AI systems in hiring, compensation, and organizing.


The order promotes a fair AI ecosystem by supporting small developers and ensuring oversight by the Federal Trade Commission (FTC). It also addresses immigration concerns, seeking to modernize and streamline visas to retain AI talent. The executive order plans to accelerate research through new AI resources and data access, emphasizing collaboration by expanding international cooperation on AI safety and ethics. The administration aims to establish crucial standards with partner nations to promote rights-affirming AI worldwide. Collaboration with Congress is also highlighted for legislation on responsible leadership in AI, although it does not specify recent legislation introduced in Congress or mention collaboration at the state and city levels.


The administration is encouraging people to weigh in at ai.gov. In essence, President Biden’s executive order outlines vital steps for the United States’ approach to safe, secure, and trustworthy AI.


For more information and assistance in preparing for this executive order and other global regulations, contact BABL AI. Their Audit Experts are ready to provide valuable assistance.

White House Executive Order on AI Expected Next Monday


While state lawmakers in various parts of the United States, such as Michigan and Colorado, work on crafting regulations for AI, the White House is contemplating an executive order. The administration of U.S. President Joe Biden is anticipated to introduce an AI executive order next Monday, as reported by several American media outlets. On October 11, House and Senate Democrats had urged President Biden to issue this long-awaited executive order.


According to Axios, officials were set to announce the executive order at 2:30 p.m. EST next Monday. The Washington Post later specified that the executive order would mandate “advanced AI models to undergo assessments before they can be used by federal workers.” Additionally, the order might facilitate the immigration of highly skilled technology workers to the U.S., aiming to enhance the country’s competitiveness.


The White House has scheduled a briefing on the Biden Administration’s commitment to advancing the safe, secure, and trustworthy development and use of AI on the same Monday at 10 a.m. EST, according to an email statement. Interested individuals can register for the webinar here. The briefing will feature Stephen Benjamin, Director of the Office of Public Engagement; Dr. Arati Prabhakar, Director of the Office of Science and Technology Policy; and Dr. Ben Buchanan, White House Special Advisor for Artificial Intelligence.


BABL AI will closely monitor all developments on Monday and provide a subsequent post detailing how the Executive Order might impact AI in America. For any inquiries or assistance regarding preparation for the potential executive order or any other AI laws globally, feel free to reach out to BABL AI. Their Audit Experts are prepared to offer valuable assistance.

Michigan State Lawmakers Introduce AI Bills Involving Political Ads


Amid the diverse approaches to AI regulation being taken at the city and state levels across the United States, a bipartisan group of lawmakers in Michigan is taking a distinctive stance by addressing the use of AI in political campaigns. Michigan state Representatives Penelope Tsernoglou, Matthew Bierlein, Ranjeev Puri, and Noah Arbit introduced a comprehensive legislative package known as the “Regulate Use of Artificial Intelligence for Political Campaigns,” comprising House Bills 5141-5145. The package, introduced on October 12 and approved on October 17, aims to regulate and restrict the use of AI in political campaigns and to impose disclosure requirements.


House Bill 5141 specifically targets the prohibition of manipulated media in political campaigns and mandates the disclosure of AI-generated media use. The bill introduces a private right of action for candidates adversely affected by violations. House Bill 5142 requires political ads and other political communication forms utilizing AI-generated media to disclose this fact and prohibits the use of AI-generated media impersonating a candidate or public official. House Bill 5143 defines AI for the bills’ purposes, while House Bill 5144 prohibits the distribution of deepfakes, accompanied by a definition of deepfakes. House Bill 5145 amends several codes, making it a crime to distribute materially deceptive media with the intent to influence an election. Representative Tsernoglou presented a PowerPoint on AI-generated content.


The proposed penalties under the legislative package are stringent. A first violation could result in up to 93 days in prison and/or a fine of up to $1,000. A second violation escalates the top fine to $1,500. A third violation is deemed a felony, attracting up to two years in prison and/or a fine of up to $2,000. Each publicly aired or distributed ad incurs a separate fine. The bills explicitly exclude news broadcasts and satire. Notably, Michigan is one of 10 U.S. states with a full-time legislature, convening throughout the year, as opposed to the part-time legislatures in the majority of states.
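

For a quick sense of the fine structure, the ladder can be restated as a small function. The figures simply repeat those above; the statutory text controls, and the function itself is only an illustration.

```python
# Illustrative restatement of the penalty ladder in the Michigan package;
# prison terms appear as comments, and the statutory text is authoritative.

def max_fine(violation_number: int) -> int:
    """Top fine in dollars for the nth violation."""
    if violation_number == 1:
        return 1_000   # plus up to 93 days in prison
    if violation_number == 2:
        return 1_500
    return 2_000       # third violation and beyond: a felony, up to two years in prison

# Each publicly aired or distributed ad incurs a separate fine.
for n in (1, 2, 3):
    print(f"Violation {n}: fine of up to ${max_fine(n):,}")
```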


For insights into how Michigan’s House Bills 5141-5145 may impact your business, consider reaching out to BABL AI. Their team of Audit Experts can provide guidance on this regulation and other AI-related regulations, addressing your specific questions and concerns.


Listen to BABL AI Chief Ethics Officer Jovana Davidovic speak about this topic on NPR’s Detroit Today here.

Colorado Commissioner Approves Regulation 10-1-1 for Insurers


While New York City positions itself as a focal point for AI regulation in the United States, another state takes the lead in regulating AI within the insurance industry. The Commissioner of the Colorado Division of Insurance, operating under the Department of Regulatory Agencies, has officially adopted Regulation 10-1-1, outlining Governance and Risk Management Framework Requirements for Life Insurers’ Utilization of External Consumer Data and Information Sources, Algorithms, and Predictive Models. This regulation mandates that life insurers authorized to operate in Colorado establish and maintain a risk-based governance structure and risk management framework when employing external consumer data and information sources, algorithms, and predictive models.


Under Regulation 10-1-1, insurers are compelled to document governing principles that articulate values and objectives while demonstrating how algorithms and predictive models, utilizing external consumer data and information sources, are reasonably designed to prevent unfair discrimination. Oversight of this framework must rest with the board of directors or a board committee. Senior management holds responsibility and accountability for the overall strategy, providing direction governing the use of algorithms and predictive models, ensuring clear lines of communication, and delivering regular reporting. A documented cross-functional governance group, comprising representatives from key areas such as legal, compliance, risk management, product development, underwriting, actuarial, data science, marketing, and customer services, is also required.


Insurers must meticulously document policies, processes, and procedures governing the design, development, testing, deployment, use, and ongoing monitoring and testing of algorithms and predictive models. They are further obligated to establish and maintain a risk management framework for continuous monitoring and testing of models and data sources, as well as implementing controls to mitigate identified risks. Providing training and education to employees involved in the design, development, use, and ongoing monitoring and testing of data, algorithms, and predictive models is another stipulation.


Crucial to Regulation 10-1-1 are its reporting requirements. Insurers must submit a narrative report summarizing their progress toward compliance by June 1, 2024, followed by an annual narrative report outlining compliance with the regulation’s requirements. If an insurer cannot attest to compliance, it must submit a corrective action plan. Insurers that do not utilize external consumer data and information sources, algorithms, or predictive models are exempt from these requirements but must submit an exemption report annually on December 1 attesting to that fact. The regulation takes effect on November 14, 2023.
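

As a hypothetical sketch of the filing logic described above, the branching might look like the following. The function name, parameters, and structure are illustrative assumptions, not anything published by the Division of Insurance.

```python
from datetime import date

# Hypothetical helper mapping an insurer's situation to its Regulation
# 10-1-1 filing; the rule summary follows the article, the logic is illustrative.

def required_filing(uses_ecdis_or_models: bool, can_attest_compliance: bool,
                    today: date) -> str:
    if not uses_ecdis_or_models:
        # Exempt insurers attest annually that they use no covered
        # data, algorithms, or predictive models
        return "Exemption report, due annually on December 1"
    if today <= date(2024, 6, 1):
        return "Narrative report on progress toward compliance, due June 1, 2024"
    if can_attest_compliance:
        return "Annual narrative report outlining compliance"
    return "Annual narrative report plus a corrective action plan"

print(required_filing(True, False, date(2024, 9, 1)))
```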


For insights into how Colorado’s Regulation 10-1-1 might impact your business, reach out to BABL AI. Their team of Audit Experts can address your questions and concerns related to this regulation and other AI-related regulations.