Preparing for the EU AI Act


While the United States has dominated recent headlines on AI laws, European Union lawmakers have been working steadily, largely behind closed doors, on their groundbreaking legislation: the Harmonised Rules on Artificial Intelligence, known as the EU AI Act. According to EU lawmaker Brando Benifei, the EU AI Act is expected to serve as a global blueprint, shaping the regulatory landscape for AI in other countries. Draft rules could gain approval as soon as next month, which means companies should begin preparing now.


A crucial step in preparing is understanding where a company falls among the risk levels outlined in the EU AI Act: minimal risk, limited risk, high risk, and unacceptable risk. The EU AI Act applies not only to providers of AI systems based in the EU but also to providers in third countries placing AI systems on the EU market, to deployers of AI systems within the EU, and to providers and deployers in third countries whenever the output of their AI systems is used in the EU. Given its breadth, the EU AI Act is likely to affect the majority of AI systems.


Providers of high-risk AI systems must establish a quality management system equipped with a robust monitoring system and up-to-date technical documentation. Before a high-risk AI system enters the market, it must undergo a conformity assessment procedure; once it is on the market, the provider must retain the logs the system generates to demonstrate ongoing compliance. Relevant national competent authorities, distributors, importers, and deployers must be informed of risks related to the AI system and of any corrective actions taken.
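The record-keeping obligation is easiest to picture as structured, append-only logging of every decision the system produces. The sketch below is a minimal illustration in Python, not a format prescribed by the Act; the field names (model_version, input_hash, and so on) are assumptions chosen for clarity.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceLogEntry:
    """One record of the kind a provider might retain for ongoing compliance.

    Field names are illustrative assumptions, not terms from the Act."""
    timestamp: float        # when the prediction was made
    model_version: str      # ties the output to specific technical documentation
    input_hash: str         # fingerprint of the input, avoids storing raw personal data
    output: str             # the decision or score produced
    human_reviewed: bool    # whether a human overseer checked the result

def log_inference(path: str, model_version: str, raw_input: str,
                  output: str, human_reviewed: bool = False) -> None:
    """Append a single inference record to a JSON-lines log file."""
    entry = InferenceLogEntry(
        timestamp=time.time(),
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        output=output,
        human_reviewed=human_reviewed,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: record one decision produced by a hypothetical screening model.
log_inference("inference_log.jsonl", "screening-model-1.2", "applicant #1042", "shortlist")
```

A structure like this also makes it straightforward to hand regulators or auditors a complete, tamper-evident history of what the system did and when.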


Deployers of high-risk AI systems are obligated to implement appropriate technical and organizational measures to ensure compliance. Human oversight and control over input data are mandatory, with providers or distributors being promptly informed of any risks associated with the system. Generated logs must be retained, and data protection impact assessments are required when applicable.


AI systems designed to interact with individuals must disclose, as appropriate, which functions are AI-enabled, whether human oversight is in place, who is responsible for decision-making, and end-users' rights to object. End-users must be informed that they are interacting with an AI system. For any biometric system, consent must be obtained before biometric or personal data is processed. Artificially created or manipulated content must be labeled as inauthentic and, where possible, the label must identify the person who generated or manipulated it.


Providers and deployers will encounter a myriad of questions as they prepare their AI systems for the market. Key considerations include the intent and type of AI, the sourcing of information, the validation processes for gathered data, and the origin of the code. Establishing an inventory of AI systems, regardless of their current deployment status, is recommended. This allows organizations to define the intended purpose and capabilities of their AI systems, including detailed information on the AI architecture, infrastructure, and underlying foundation. Organizations should also establish transparent procedures and guidelines for their AI systems, make employees aware of EU AI Act requirements, and ensure compliance with monitoring, data protection, and other essential requirements to avoid potentially dire consequences.
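One practical way to start that inventory is a simple structured record per AI system that captures intended purpose, architecture, and data sources in one place. The sketch below is a hypothetical minimal schema in Python; none of the field names come from the Act itself.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for a single AI system; fields are assumptions."""
    name: str
    intended_purpose: str          # what the system is meant to do
    risk_level: str                # e.g. "minimal", "limited", "high" per internal assessment
    architecture: str              # model family or foundation the system is built on
    data_sources: list[str] = field(default_factory=list)   # where training data came from
    deployed: bool = False         # the inventory should cover non-deployed systems too
    owner: str = ""                # team accountable for monitoring and documentation

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-screener",
        intended_purpose="Rank incoming job applications",
        risk_level="high",
        architecture="gradient-boosted trees on structured features",
        data_sources=["historical applications", "job descriptions"],
        deployed=True,
        owner="HR analytics",
    ),
]
```

Even a lightweight inventory like this gives compliance teams a single place to answer "what AI do we run, why, and who owns it" when regulators or auditors ask.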


For assistance in navigating EU AI Act compliance, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support before the EU AI Act goes into full effect.

Navigating Potential Penalties under the EU AI Act


Amidst the deliberations of the European Parliament on the European Union’s Harmonised Rules on Artificial Intelligence, known as the EU AI Act, stakeholders and companies are proactively scrutinizing existing regulations. They are evaluating their positions within the EU AI Act’s risk level framework and gaining insights into Conformity Assessment. However, a crucial aspect drawing considerable attention is the spectrum of penalties and fines outlined in Article 71.


Article 71 of the EU AI Act delineates penalties and fines, employing its own classification of AI systems and their associated risk levels. The most substantial fine is reserved for AI systems deemed to carry unacceptable risks—systems explicitly prohibited under the EU AI Act due to their potential to pose risks violating human values and rights. Violations could result in fines of up to 40 million Euros or 7% of annual worldwide turnover.


The subsequent tier of fines pertains to violations involving data, data governance, and transparency. AI systems found in violation of these aspects could face fines of up to 20 million Euros or 4% of annual worldwide turnover. Another category of fines concerns non-compliance of AI systems or foundational models, encompassing instances where bias or potential harm to safety, livelihoods, and rights are identified. Entities in violation could incur fines of up to 10 million Euros or 2% of annual worldwide turnover.


Another potential fine involves an entity supplying incorrect, incomplete, or misleading information to authorities, carrying a penalty of up to 500,000 Euros or 1% of annual worldwide turnover. Beyond companies, European Union agencies, bodies, and institutions can face fines of up to 1.5 million Euros for non-compliance with prohibitions outlined in the EU AI Act. They may also be fined up to 1 million Euros for non-compliance with Article 10 and up to 750,000 Euros for non-compliance with obligations other than those laid down under Articles 5 and 10. While providers will likely be the primary recipients of fines, the EU AI Act allows for the penalization of others, including users, importers, distributors, and notified bodies.
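Because each tier pairs a fixed cap with a percentage of annual worldwide turnover, it can help to see the arithmetic. The sketch below assumes the applicable ceiling is the higher of the two figures, which is how such tiers are generally read (that reading is an assumption here, not a quote from the Act); the tier table simply mirrors the amounts discussed above.

```python
def max_fine_eur(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Return an illustrative upper bound on a fine for a given tier.

    Assumes the ceiling is whichever is higher: the fixed amount or the
    turnover percentage. Tiers mirror the amounts discussed above."""
    tiers = {
        "prohibited_practices":   (40_000_000, 0.07),  # unacceptable-risk violations
        "data_and_transparency":  (20_000_000, 0.04),
        "other_non_compliance":   (10_000_000, 0.02),
        "misleading_information": (500_000,    0.01),
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * annual_worldwide_turnover_eur)

# A company with EUR 2 billion in turnover: 7% (EUR 140m) exceeds the EUR 40m floor.
print(max_fine_eur("prohibited_practices", 2_000_000_000))  # 140000000.0
```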


When setting penalties and fines, the EU AI Act takes several factors into account, including the nature and duration of the violation. It also considers whether the violation was intentional or negligent, actions taken to mitigate damaging effects, previous violations and fines, the company's market share, any financial gains derived from the violation, and whether the AI system is used for professional or personal activities. Currently, there is no central authority for imposing fines; instead, EU member states are tasked with incorporating the infringement provisions into national law.


For any inquiries or assistance with preparing for the EU AI Act, reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.

Things to know when preparing for the Digital Services Act (DSA)


While the European Union’s Digital Services Act (DSA) is currently in effect for very large online platforms (VLOPs) and very large online search engines (VLOSEs), it is only a few months away from being fully applicable to other entities. The DSA will unleash its complete regulatory impact on February 17, 2024, encompassing services, marketplaces, and online platforms offering services in the EU, irrespective of their physical location.

 

This implies a global impact of the DSA, affecting companies providing diverse digital services, such as cloud services, data centers, content delivery, search engines, social media, app stores, and more. Online platforms and service providers will be required to designate a contact person to engage with authorities in EU member states, the European Commission, and the European Board for Digital Services. Furthermore, platforms must establish procedures to address illegal content as defined by national laws. Upon notification of such content, platforms are obligated to promptly inform the relevant authority about the actions they intend to take to resolve the issue.
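In practice, that notice-and-action duty implies some machinery for receiving reports of allegedly illegal content, deciding on an action, and notifying the relevant authority. The sketch below is only one possible shape for that workflow; the function names, field names, and action labels are hypothetical, not wording from the DSA.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContentNotice:
    """A report that a piece of content may be illegal under a member state's law."""
    notice_id: str
    content_url: str
    jurisdiction: str   # which member state's law the report invokes
    reason: str         # why the reporter believes the content is illegal

def authority_notification(notice: ContentNotice, confirmed_illegal: bool) -> dict:
    """Build the update owed to the relevant authority once the platform decides.

    The action labels and message shape are illustrative assumptions."""
    action = "content removed" if confirmed_illegal else "no action; notice rejected"
    return {
        "notice_id": notice.notice_id,
        "content_url": notice.content_url,
        "jurisdiction": notice.jurisdiction,
        "intended_action": action,
        "notified_at": datetime.now(timezone.utc).isoformat(),
    }

notice = ContentNotice("N-001", "https://example.com/post/123", "DE",
                       "counterfeit goods listing")
print(authority_notification(notice, confirmed_illegal=True))
```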

 

To prepare for the implementation of the DSA, organizations should undertake several key steps. First, determine if the DSA applies to your organization and analyze its potential impact on operations and stakeholders. Second, review current policies and procedures against DSA requirements to identify and address any gaps. Third, scrutinize existing data protection measures for both organizational and third-party data to ensure alignment with DSA standards. Fourth, identify key regulatory authorities and establish open communication channels for reporting and notifications. Fifth, educate your employees on DSA requirements, emphasizing their importance and best practices for compliance. Sixth, conduct risk assessments and audits to ensure compliance with DSA standards while establishing an effective complaints mechanism.

 

The DSA introduces several other requirements, including transparency obligations, providing information on online advertisements, dispute report submissions, crisis response measures, etc. Compliance is paramount, especially since non-compliance could result in a penalty of up to 6% of annual worldwide revenue. Companies and platforms may also face civil suits and liabilities.


For assistance in navigating DSA compliance, reach out to BABL AI. One of their audit experts can provide valuable guidance and support before the DSA goes into full effect.

What is the NYC AI Bias Audit Law?


While the European Union is at the forefront of global AI regulation with the Harmonised Rules on Artificial Intelligence, known as the EU AI Act, New York City is setting the pace in the United States on bias in AI-based hiring. Local Law 144, dubbed the NYC Bias Law, specifically addresses Automated Employment Decision Tools (AEDTs). The law was adopted by the New York City Council in late 2021; the NYC Department of Consumer and Worker Protection (DCWP) issued its final rule in April 2023, and enforcement began on July 5, 2023. The primary goal is to safeguard the rights of workers and citizens.


Over the years, companies using AEDT have grappled with issues ranging from bias and invalid correlations to legal and ethical risks. For instance, during the early stages of the COVID-19 pandemic in March 2020, HireVue discontinued its facial analysis component due to accusations of bias and lack of transparency. Similarly, companies like Arctic Shores faced criticism from disability advocates for creating games that posed challenges for individuals with certain disabilities. More recently, a Massachusetts man filed a lawsuit against CVS over its AI interview process, once again implicating HireVue, adding to a growing list of companies employing potentially flawed AI systems.


The NYC Bias Law specifically targets AEDT, described as a process that utilizes machine learning, data analytics, statistical modeling, or AI to generate scores, classifications, or recommendations. Essentially, it’s an AI-based tool automating assessments of job candidates. For example, an AEDT could analyze your application, focusing on key phrases and keywords to rank you against other candidates. However, it can go further, incorporating standardized tests, video interviews analyzing facial and speech patterns, and even assessments in games or images, as well as VR simulations of job tasks.
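As a concrete (and deliberately simplified) picture of the kind of tool the law covers, here is a hypothetical keyword-weighting scorer of the sort the keyword-ranking example above describes. It is an illustration only, not an endorsement of the approach or a claim about how any real vendor's product works.

```python
def score_application(application_text: str, keywords: dict[str, float]) -> float:
    """Toy AEDT-style scorer: sum the weights of keywords found in an application."""
    text = application_text.lower()
    return sum(weight for phrase, weight in keywords.items() if phrase in text)

# Hypothetical keyword weights an employer might configure.
keywords = {"project management": 2.0, "python": 1.5, "budget": 1.0}
candidates = {
    "A": "Led project management for a Python data platform.",
    "B": "Handled budget planning for regional office.",
}
# Rank candidates by score, as a real AEDT might before a recruiter ever sees them.
ranking = sorted(candidates, key=lambda c: score_application(candidates[c], keywords),
                 reverse=True)
print(ranking)  # ['A', 'B']
```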


Crucially, the NYC Bias Law mandates independent audits to determine AEDT compliance. An independent auditor is unbiased, objective, and has no affiliation with the company. This ensures the auditor was not involved in the AEDT’s creation, development, or distribution, has no ties to the employer or employment agency utilizing the AEDT, and holds no financial interest in these entities.
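The DCWP's audit rules center on selection rates and impact ratios by demographic category: roughly, each category's selection rate is compared with the rate of the most selected category. The sketch below shows that calculation under the simplifying assumption of a single protected characteristic and a simple selected/not-selected outcome; it is an illustration of the impact-ratio idea, not the DCWP's full methodology.

```python
from collections import defaultdict

def impact_ratios(results: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute selection rate per category and divide by the highest rate.

    `results` is a list of (category, selected) pairs. Simplified illustration
    only; a real audit follows the DCWP's published rules."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [selected, total]
    for category, selected in results:
        counts[category][1] += 1
        if selected:
            counts[category][0] += 1
    rates = {cat: sel / total for cat, (sel, total) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical data: group_1 selected 40% of the time, group_2 25% of the time.
data = [("group_1", True)] * 40 + [("group_1", False)] * 60 \
     + [("group_2", True)] * 25 + [("group_2", False)] * 75
print(impact_ratios(data))  # {'group_1': 1.0, 'group_2': 0.625}
```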


Want to know if your company needs a New York City Bias Audit done? Contact BABL AI and one of their Audit Experts will be able to answer all your questions related to this regulation and more.

What is an EU AI Act Conformity Assessment?


Negotiations on the European Union’s Harmonised Rules on Artificial Intelligence, known as the EU AI Act, are anticipated to continue later this year. In the interim, various stakeholders are scrutinizing the legislation to anticipate potential implications once the bill receives approval from the European Parliament. While many are assessing their positions within the EU AI Act’s risk level framework, others are contemplating their status under the Conformity Assessment segment of the bill.


The EU AI Conformity Assessment pertains to the mandatory testing and certification process outlined in the EU AI Act, ensuring that AI systems adhere to regulations before entering the market. Although the assessment predominantly targets high-risk AI systems, there are also conformity assessment requirements for limited-risk AI systems. Minimal-risk AI systems, while exempt from these assessments, remain subject to distinct regulations and transparency obligations. Essentially, the level of conformity assessments aligns with the risk level presented by an AI system, maintaining the principle of proportionality under the EU AI Act.


Conformity assessments can be conducted either by the AI system provider or by third-party conformity assessment bodies. However, certain high-risk AI systems require a third-party conformity assessment. Providers performing their own assessments must adhere to the rigorous processes outlined in the EU AI Act; independent third-party assessments offer an additional layer of oversight. Comprehensive documentation and records of testing processes and results must be maintained and provided to the authorities validating the assessments. Additionally, post-market monitoring mechanisms must be established to assess ongoing compliance.


Conformity assessments play a pivotal role in determining whether an AI system meets the stipulated requirements under the EU AI Act. The evaluation covers various aspects, including risk management, datasets, documentation, transparency, human oversight, accuracy, cybersecurity, and more. Testing aims to assess whether the AI system’s actual performance aligns with its intended purpose, ultimately minimizing risks by verifying reliability and accuracy.


Upon the conclusion of a conformity assessment, the AI provider must generate a legal declaration document affirming that their AI systems have fulfilled all requirements under the EU AI Act. This ensures that when the AI system enters the market, it carries a CE marking, signifying that it has been assessed by the provider and deemed compliant with EU standards. Providers are also obligated to maintain comprehensive technical documentation, encompassing details about the AI system’s design, development process, risk management, evaluation results, metrics, and other relevant technical aspects. This documentation must be made available to supervisory authorities upon request, contributing to transparency efforts.
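At bottom, the declaration and the technical documentation are structured records that must stay current and be producible on request. The sketch below shows one hypothetical way to hold that information together in code; the field names are assumptions for illustration, not the Act's required contents or an official template.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative container for documentation kept alongside a declaration."""
    system_name: str
    design_summary: str
    risk_management_summary: str
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

@dataclass
class DeclarationOfConformity:
    """Hypothetical record of a provider's declaration; fields are assumptions."""
    provider: str
    system_name: str
    assessment_route: str            # "internal" or "third-party"
    ce_marking_affixed: bool
    documentation: TechnicalDocumentation

declaration = DeclarationOfConformity(
    provider="Example Provider GmbH",
    system_name="credit-decision-assistant",
    assessment_route="third-party",
    ce_marking_affixed=True,
    documentation=TechnicalDocumentation(
        system_name="credit-decision-assistant",
        design_summary="Gradient-boosted model over applicant financial features",
        risk_management_summary="Quarterly bias and drift review; human sign-off on declines",
        evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    ),
)
```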


In essence, conformity assessments aim to instill user trust in AI systems on the market, assuring consumers that these systems have met rigorous safety standards and passed reliability tests before deployment. Conformity is central to the EU AI Act, as the public needs assurance that AI systems are trustworthy and compliant. Supervisory authorities will oversee the entire conformity process through AI audits, document checks, post-market monitoring, and more.


If you have questions or need help preparing for an EU AI Act Conformity Assessment, reach out to BABL AI. One of their Audit Experts can provide valuable assistance.

What are the different risk levels in the EU AI Act?


Even though the European Union Parliament is still in the negotiation phase regarding the Harmonised Rules on Artificial Intelligence, commonly referred to as the EU AI Act, numerous questions persist about various aspects of this extensive legislation. The EU AI Act aims to regulate AI systems based on the level of risk they pose, categorizing them into minimal-risk, limited-risk, high-risk, and unacceptable risk. The classification into these categories determines the obligations and restrictions applied under the EU AI Act, targeting regulation at the highest risk AI applications while imposing fewer regulations, if any, on minimal-risk AI systems.


According to the EU AI Act, minimal-risk AI systems must comply with transparency obligations, such as declaring their AI nature. In most cases, this entails providing information to users when interacting with an AI system, along with documentation and human oversight. Examples of minimal-risk AI systems may include AI-enabled spam filters in emails, and a classic example of a no-risk AI system would be a video game with non-playable characters, or NPCs, which may face no regulation under the EU AI Act.


Moving to limited-risk AI systems, the EU AI Act stipulates that these systems must undergo conformity assessments before being placed on the market. This involves testing and certification by manufacturers or third-party bodies. Additional obligations include risk management, record-keeping, transparency, and human oversight. Limited-risk AI systems encompass applications in credit scoring, HR recruitment, and product recommendations, with requirements designed to be proportionate to the associated risk.


For high-risk AI systems, the EU AI Act imposes strict obligations before their use, encompassing rigorous testing, risk management procedures, high-quality datasets, extensive documentation, cybersecurity measures, human oversight, and detailed user information, among other requirements. Examples of high-risk AI systems include those used in critical infrastructures such as energy, AI systems for law enforcement officials, court systems, medical diagnoses, safety components in transportation, employment and employee monitoring, and education practices.


The final risk level under the EU AI Act is unacceptable risk. This category is reserved for AI systems that are outright prohibited due to posing unacceptable risks and violating human values and rights. Examples include AI systems that exploit vulnerable groups, engage in mass surveillance of the public, utilize deepfakes for public harm, are implemented in lethal autonomous weapons, create social scores, or employ subliminal disinformation tactics.
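To make the four tiers concrete, a first-pass triage can be expressed as a simple rule-based check against an internal description of the system. The categories below follow the tiers just described; the specific trigger lists are illustrative assumptions drawn from the examples above, not the Act's full annexes, and any real classification needs legal review.

```python
PROHIBITED_USES = {"social scoring", "mass surveillance", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "law enforcement", "employment",
                     "education", "medical diagnosis"}

def triage_risk_level(uses: set[str], domains: set[str],
                      interacts_with_people: bool) -> str:
    """Rough first-pass mapping of a system description to the four tiers.

    A triage aid only; the final text of the Act governs real classification."""
    if uses & PROHIBITED_USES:
        return "unacceptable risk"
    if domains & HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_people:
        return "limited risk"
    return "minimal risk"

# Example: an AI tool used in hiring decisions lands in the high-risk tier.
print(triage_risk_level(set(), {"employment"}, True))  # high-risk
```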


While most AI systems fall into these four risk levels, exceptions exist. For instance, AI systems developed exclusively for national security purposes may be exempt under the EU AI Act, as may those developed for research and innovation, provided they are not put into public service or placed on the market. Start-ups creating high-risk AI systems and small-scale providers may qualify for lighter obligations or a delayed compliance timeline. While the EU AI Act provides a broad risk-based framework, it also leaves room for special cases to be assessed individually, with flexibility built in to accommodate competing interests and innovation. However, it's crucial to note that since the details of the EU AI Act are still being refined, this information is subject to change.


If you have questions about where your AI system falls within the four risk levels or need assistance preparing for an EU AI Act Conformity Assessment, reach out to BABL AI. One of their Audit Experts can provide valuable assistance.

How is the EEOC handling AI?


While new laws such as the EU AI Act are being discussed, several governments around the world are looking at how existing legislation and governing bodies can regulate AI. In the United States, the Equal Employment Opportunity Commission (EEOC) has spent the past several years doing exactly that. While examining how existing legislation applies, the EEOC has launched new initiatives and handled several discrimination lawsuits.


Just this August, the EEOC settled its first-ever AI-related discrimination lawsuit. In that case, iTutorGroup, a group of three companies providing English-language tutoring services to students in China, was accused of age discrimination: its application software automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. This led to hundreds of qualified U.S. candidates being rejected simply because of their age. iTutorGroup must pay $365,000, which will be distributed to applicants who were rejected because of their age. While iTutorGroup has ceased hiring in the U.S., should it ever resume operations here, it would be required to implement training, policy changes, and monitoring to prevent future discrimination. The EEOC noted in the lawsuit that anti-discrimination laws apply to remote workers controlled by foreign companies, echoing the extraterritorial reach we have seen in other AI laws globally.


On top of lawsuits, the EEOC is launching initiatives and has recently released its future strategic plan. In its Strategic Enforcement Plan for Fiscal Years 2024-2028, the EEOC recognizes the proliferation of AI and machine learning among employers, including its use in targeted job advertisements, recruiting, hiring, and other employment decisions. The plan therefore commits the EEOC to addressing technology-related employment discrimination and ensuring that the use of technology does not result in discriminatory practices. The EEOC also wants to focus on screening tools, whether AI, other automated systems, pre-employment tools, or background checks, that disproportionately impact workers. In addition, the EEOC will focus on employer pay practices that drive disparities, such as secret pay policies, discouraging or prohibiting workers from asking about or sharing pay information, and reliance on past salary history or expectations to set pay. While the plan explicitly addresses AI, the EEOC includes other priorities, including advancing equal pay and solidifying access to the legal system, that could eventually be affected by AI as well.


This plan most likely builds on the Artificial Intelligence and Algorithmic Fairness Initiative, launched back in 2021. The Initiative was created to ensure that AI and other emerging technology tools used in hiring and other employment decisions comply with the civil rights laws the EEOC enforces. The Initiative also planned to establish an internal work group to guide the EEOC's efforts, launch a series of listening sessions, gather information about AI and other employment technologies, identify promising practices, and issue technical assistance. Some of that technical assistance arrived in 2023, with guidance such as "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" and "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."


If you have questions on how the EEOC could affect your company or would like to get audited to ensure compliance with the EEOC, reach out to BABL AI and one of their Audit Experts can help.

What is the EU AI Act?


The European Union is once again leading the way in digital regulation with its latest piece of legislation, the Harmonised Rules on Artificial Intelligence, or the EU AI Act. The EU has long been on the cutting edge of digital rights and regulation, whether through the General Data Protection Regulation (GDPR), passed in April 2016, which deals with information privacy, or the Digital Services Act, passed in July 2022, which moderates online information and social media content. Now, the EU is working on standards for managing AI systems, looking to minimize potential risks and harms while ensuring people's safety and rights.


There have been numerous incidents of illegal, unethical, and biased uses of AI. Companies, journalism outlets, academics, nonprofits, and governmental bodies around the world have documented bias in AI over the years. Examples abound: in 2018, Microsoft acknowledged that the use of AI in its offerings could result in reputational harm or liability. In 2019, Denmark found that its tax fraud detection AI was incorrectly flagging low-income and immigrant groups more often than native Danes. Even AI-powered tools deployed during the COVID-19 pandemic to help save lives raised red flags about privacy and accuracy. AI's use has only accelerated since then, and the problems have grown across the landscape.

The EU AI Act was proposed in April 2021, and the Council of the EU adopted its general approach in December 2022. Over the past year there have been several amendments and revisions, with the European Parliament approving its negotiating position in June 2023. A final version of the EU AI Act is expected to be approved before the end of 2023, just ahead of the 2024 European Parliament elections. Even after approval, there will likely be a two-year implementation period, so don't expect all of the regulations to take effect until 2026 at the earliest.


That’s why now is the time to understand who this massive piece of legislation applies to. First and foremost, AI systems established within the EU must comply. However, the EU AI Act applies not only to AI systems developed and used within the EU, but also to providers outside the EU whose AI systems are introduced or used within the EU market. So just because your AI system is based in America doesn’t mean you’re free of this law if it reaches the EU marketplace. That’s not all, though: even AI providers and users located outside the EU come under the jurisdiction of the EU AI Act if their AI systems’ outcomes or results are used or have an impact within the EU. In short, most companies will have to adhere to the EU AI Act in some way.


However, some AI systems are exempt under the EU AI Act. AI systems that are still being researched and tested before being sold are exempt, so long as they respect fundamental rights and applicable laws and are not tested in real-life situations. For example, a pharmaceutical company developing an AI system to assist in the discovery of new drugs might use AI to analyze vast datasets of chemicals and their interactions; as long as that AI is used in a controlled research environment, it is exempt from the EU AI Act until it is considered safe and effective. Also exempt are public authorities from other countries and international organizations working under international agreements, and the Act does not apply to AI systems made only for military use. In addition, AI components released for free under open-source licenses do not need to follow the regulation, except for large general AI models like ChatGPT or DALL-E.


If you have questions on how this could affect your company or would like help preparing for an EU AI Act Conformity Assessment, reach out to BABL AI and one of their Audit Experts can help. 

What is the Digital Services Act?


As the European Union puts the final touches on its AI legislation, the Harmonised Rules on Artificial Intelligence, or the EU AI Act, we look at a regulation that already has service providers scrambling to comply before next year. The Digital Services Act (DSA) was submitted to the European Parliament in December 2020. After a year and a half of discussion, the Council of the European Union gave its final approval on October 4, 2022, and the DSA becomes directly applicable across the EU on February 17, 2024.


Simply put, the DSA regulates digital services, marketplaces, and online platforms operating within the EU, with the aim of creating a safer and more open digital landscape, protecting the fundamental rights of users, and establishing clear responsibilities and accountability for online platforms. As long as a company offers a service in the EU, regardless of its place of establishment, it is covered by the DSA. This means that companies providing digital services such as cloud services, data centers, content delivery, search engines, social media, and app stores will be affected, including platforms like Google, Meta, Amazon, Apple, TikTok, and more. So, while this is a European Union law, it will resound globally.


The DSA's core obligations require platforms to assess and mitigate the risks their systems create. It also requires platforms to remove illegal content, protect children, suspend users offering illegal services, ensure the traceability of online traders, and empower consumers through various transparency measures. Platforms must also publicly report how they use automated content moderation tools and disclose all instances of illegal content flagged by content moderators or by those automated systems.
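That public reporting obligation lends itself to straightforward aggregation over moderation actions. Below is a minimal sketch of counting actions by detection source and outcome; the field names and categories are assumptions for illustration, not the DSA's required report format.

```python
from collections import Counter

def summarize_moderation(actions: list[dict]) -> dict:
    """Aggregate moderation actions into figures a transparency report might cite.

    Each action dict is assumed to carry 'detected_by' ('automated' or 'moderator')
    and 'outcome' (e.g. 'removed', 'restricted'); these keys are illustrative."""
    by_source = Counter(a["detected_by"] for a in actions)
    by_outcome = Counter(a["outcome"] for a in actions)
    return {"total": len(actions),
            "by_source": dict(by_source),
            "by_outcome": dict(by_outcome)}

actions = [
    {"detected_by": "automated", "outcome": "removed"},
    {"detected_by": "moderator", "outcome": "removed"},
    {"detected_by": "automated", "outcome": "restricted"},
]
print(summarize_moderation(actions))
```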


The DSA sets out additional requirements for large platforms, referred to as very large online platforms (VLOPs). A VLOP faces additional requirements in risk management, external and independent auditing, transparency reporting, access to data and algorithms, advertising transparency, and user choice for recommendation algorithms. The threshold for a VLOP is 45 million or more monthly active EU users, which captures the platforms mentioned above as well as several large EU firms. To catch other potential VLOPs, the DSA also aims these obligations at fast-growing start-ups approaching similar scale and risk profiles.


Unlike other entities, VLOPs designated under the DSA had just four months from their designation to comply with obligations such as risk assessment, transparency reporting, and data access, well ahead of next February's deadline for everyone else. That means there are staggered timelines based on platform size before the final date, when the European Commission and national Digital Services Coordinators will oversee enforcement. The DSA establishes oversight and enforcement cooperation between the European Commission and EU countries. As for penalties, non-compliance can bring fines of up to 6% of global turnover, which means some VLOPs could face hundreds of millions in fines if they are found to be non-compliant.


If you have questions about how to stay compliant with the Digital Services Act, reach out to BABL AI.

AI Risk Management Framework for Life Insurance

Managing the risks associated with the use of artificial intelligence (AI) and machine learning (ML) has been an urgent topic in recent years. The potential for these algorithms to discriminate, limit access to important life opportunities, and otherwise harm individuals and organizations has motivated the need for companies to implement deliberate AI Risk Management Frameworks.

Recent activity includes:

  • The release of NIST’s AI Risk Management Framework 1.0, which outlines four core functions for a robust framework: Govern, Map, Measure, and Manage.
  • Article 9 of the proposed EU AI Act, which outlines general risk management requirements for “high-risk” AI systems.
  • ISO’s recently released ISO/IEC 23894:2023, Guidance on risk management.
  • Articles 34 and 35 of the Digital Services Act (DSA) require large online platforms to assess and manage “systemic risks”, including those posed by algorithms such as recommender systems or targeted advertisements.

What is Colorado’s Senate Bill 21–169?

The increased use of external data and predictive algorithms in the insurance industry has given rise to worries about unfair discrimination and the need for insurers to manage the unique risks that AI/ML may entail. This is why Senate Bill 21–169 was enacted by the General Assembly of the State of Colorado. The law recognizes the increasing use of what it calls “external consumer data and information sources” (ECDIS), as well as algorithms and predictive models using external consumer data, in insurance rating, underwriting, claims, and other business practices. These tools have the potential to benefit insurers and consumers; however, the accuracy and reliability of external consumer data can vary greatly, and some algorithms and predictive models may lack a sufficient rationale for use in insurance practices. Their use could therefore have a negative impact on the availability, affordability, and utilization of insurance.

To address these issues, the Colorado Division of Insurance, part of the Department of Regulatory Agencies, has released draft regulations for SB21–169 that focus on underwriting practices in life insurance and require insurers to adopt a governance and risk management framework. The framework is designed to control the use of external consumer data, algorithms, and predictive models to prevent any unfair discrimination based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.

What are the Governance and Risk Management Framework Requirements?

While the framework proposed by NIST allows for a lot of flexibility depending on a company’s size and risk profile, the governance and risk management framework proposed for SB21-169 is quite prescriptive, and must include the following components:

  1. Documented governing principles outlining the values and objectives of the insurer that ensure that ECDIS, algorithms, and predictive models using ECDIS are designed, developed, used, and monitored transparently and accountably and do not lead to unfair discrimination.
  2. Board of directors and senior management responsibility and accountability for setting and monitoring the overall strategy, and providing direction for governance on the use of ECDIS, algorithms, and predictive models. This includes establishing clear lines of communication and regular reporting to senior management on the performance and potential risks of ECDIS, algorithms, and predictive models.
  3. Cross-functional algorithm and predictive model governance committee composed of representatives from key functional areas including legal, compliance, risk management, product development, underwriting, actuarial, data science, marketing, and customer service, as applicable.
  4. Clearly assigned and documented roles and responsibilities of key personnel involved in the design, development, use, and oversight of ECDIS, algorithms, and predictive models using ECDIS.
  5. Established written policies and processes for the design, development, testing, deployment, use, and ongoing monitoring of ECDIS and algorithms and predictive models that use ECDIS and to ensure that they are documented, tested, and validated.
  6. Development and implementation of an ongoing supervision and training program for relevant personnel on the responsible and compliant use of ECDIS, algorithms, and predictive models including issues related to bias and potential unfair discrimination.
  7. Implementation of controls to prevent unauthorized access to algorithms or predictive models.
  8. Processes and protocols in place for addressing consumer complaints and inquiries about the use of ECDIS, algorithms, and predictive models in a manner that provides consumers with sufficiently clear information necessary for consumers to take meaningful action in the event of an adverse decision.
  9. Plan for responding to and recovering from any unintended consequences.
  10. Engagement of outside experts to perform audits when internal resources are insufficient.

Additionally, if an insurer uses third-party vendors and other external resources with respect to ECDIS and predictive models, the insurer is responsible for ensuring regulatory requirements are met.

What are the Documentation Requirements?

Life insurers must also maintain comprehensive documentation for their use of all ECDIS and of all algorithms and/or predictive models that use ECDIS, including those supplied by third parties. Documentation must include an up-to-date inventory of all ECDIS, algorithms, and predictive models in use, including a detailed description of each, along with the results and timing of annual reviews of the inventory. Insurers must also maintain a system for tracking and managing changes, a description of testing conducted to detect unfair discrimination, a description of the inputs and outputs of each algorithm and/or predictive model, and a description of any limitations of the algorithm and/or predictive model.
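In practice, this amounts to keeping a living inventory with, for each model, its data sources, inputs, outputs, testing, and limitations. The sketch below shows one hypothetical shape for such an entry; the field names are assumptions for illustration, not wording from the draft regulation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryEntry:
    """Illustrative inventory entry for an algorithm or predictive model using ECDIS."""
    name: str
    description: str
    ecdis_sources: list[str]            # external consumer data the model relies on
    inputs: list[str]
    outputs: list[str]
    discrimination_testing: str         # summary of testing for unfair discrimination
    known_limitations: list[str] = field(default_factory=list)
    last_annual_review: Optional[date] = None

entry = ModelInventoryEntry(
    name="accelerated-underwriting-model",
    description="Recommends a risk class for simplified-issue life applications",
    ecdis_sources=["credit attributes", "public records"],
    inputs=["age", "credit attributes", "prescription history"],
    outputs=["risk class recommendation"],
    discrimination_testing="Annual proxy analysis across protected characteristics",
    known_limitations=["sparse data for applicants under 25"],
    last_annual_review=date(2023, 10, 1),
)
```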

Insurers must also conduct regular reviews and updates to the documentation to ensure its continued accuracy and relevance, and all documentation must be easily accessible to appropriate insurer personnel and available upon request by the Division.

What are the Reporting Requirements?

Beyond making documentation easily accessible, the draft regulations have a number of reporting requirements.

  • Insurers currently using ECDIS and algorithms/predictive models with ECDIS must submit a progress report to the Division within six months of the effective date of the regulation (TBD), outlining current compliance with risk management and documentation requirements, areas under development, any difficulties, and expected completion date.
  • Insurers must also submit a final report demonstrating compliance within one year of the effective date of the regulation, including details of their completed compliance with the risk management and documentation requirements.
  • Insurers must submit a report every two years following the report required above, containing an up-to-date inventory of all ECDIS and algorithms/predictive models, results and timing of reviews, any material changes to the governance and risk management framework, and any risks detected and steps taken to mitigate them.
  • Insurers not using ECDIS or algorithms/predictive models must submit an attestation within one month of the regulation’s effective date and annually thereafter, signed by an officer indicating that the insurer does not use ECDIS or algorithms/predictive models.
  • Insurers not using ECDIS or algorithms/predictive models but planning to use them in the future must first submit the progress report specified above and then comply with the full reporting requirements upon adoption.

Conclusion

The new AI risk management framework required by Senate Bill 21–169 is a particular example of a more general approach that companies can and should adopt when developing or deploying AI/ML business solutions. For insurers, and the vendors they rely on for AI/ML solutions, considering these requirements alongside other global standards, such as the NIST AI Risk Management Framework and the requirements of the EU AI Act, will ensure good coverage in a rapidly changing regulatory environment.

Need help? Contact us for a free consultation.