Colorado Governor Signs Landmark AI Consumer Protection Bill into Law

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/20/2024
In News

In an update to a story we brought to you earlier this month, state lawmakers in Colorado have passed, and the Governor has signed into law, a bill aimed at protecting consumers in their interactions with artificial intelligence (AI) systems. The bill, titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems,” establishes regulations and requirements for developers and deployers of high-risk AI systems in Colorado to safeguard consumers from algorithmic discrimination.

Algorithmic discrimination is defined as any instance in which the use of an AI system results in unlawful differential treatment or impact that disfavors individuals based on protected characteristics such as race, gender, age, and other traits. A high-risk AI system is any AI system that makes, or is a substantial factor in making, consequential decisions, meaning decisions with a material legal or similarly significant effect on areas such as education, employment, financial services, healthcare, housing, and more. The bill outlines distinct requirements for developers, who create or substantially modify AI systems, and deployers, who use or deploy high-risk AI systems.

For developers of high-risk AI systems, the bill mandates that they exercise reasonable care to avoid algorithmic discrimination arising from the intended uses of their systems. They must provide documentation to deployers detailing the AI system’s capabilities, limitations, purpose, intended uses, training data, evaluations for discrimination risks, and more. Developers must also publicly disclose the types of high-risk AI systems they have developed and how they manage risks of algorithmic discrimination. Additionally, developers must disclose any known risks of algorithmic discrimination to the Colorado Attorney General and deployers within 90 days of discovery.

As for deployers of high-risk AI systems, the key requirements are that they use reasonable care to avoid algorithmic discrimination and implement a risk management program to identify and mitigate discrimination risks. Deployers must conduct impact assessments of their deployed high-risk AI systems at least annually. When using a high-risk AI system to make consequential decisions about consumers, deployers must notify those consumers. For any adverse decision, affected consumers must be given the reasons for the decision and an opportunity to correct their data or appeal. Deployers must also publicly disclose the types of high-risk AI systems they deploy and how discrimination risks are managed, as well as notify the Attorney General of any discovered algorithmic discrimination within 90 days.

The bill outlines exemptions covering compliance with other laws, research activities, national security applications, regulated financial institutions that follow equivalent AI governance frameworks, and more. Enforcement is designated as an unfair trade practice violation under the authority of the Colorado Attorney General, with a transitional notice period in the first year for violators to correct issues before enforcement action. Affirmative defenses are available to developers and deployers that adhere to designated AI risk management frameworks and self-correct violations through feedback mechanisms, testing, or internal reviews. The Attorney General can require documentation from developers and deployers, and can promulgate rules to implement the bill’s provisions related to documentation, notices, impact assessments, risk management programs, and the requirements for rebuttable presumptions and affirmative defenses.

Need help?

If you’re wondering how Colorado’s new law, or any other AI legislation around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and address your questions and concerns.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.