Colorado’s Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/21/2024
In Blog

UPDATE — AUGUST 2025: This blog remains accurate as of August 2025. Colorado’s Artificial Intelligence Act (CAIA), also known as Senate Bill 24-205, was signed into law by Governor Jared Polis on May 17, 2024. The law takes effect on February 1, 2026, and remains the most comprehensive state-level AI regulation in the U.S. Lawmakers attempted to delay and amend the statute through SB 25-318 in 2025, but the effort failed in May. Colorado continues to refine rules and gather feedback before implementation. The CAIA requires developers and deployers of high-risk AI systems to complete impact assessments, notify consumers, reduce discrimination risks, and disclose system use. Enforcement rests with the Colorado Attorney General.

ORIGINAL BLOG POST:

Colorado’s Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law

In a landmark move, Colorado has enacted the “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” bill, now officially signed into law by the Governor. This pioneering legislation, also known as the Colorado AI Act (CAIA), is the first of its kind in the United States to adopt a comprehensive and risk-based approach to artificial intelligence (AI) regulation. With the law set to take effect on February 1, 2026, it aims to protect consumers from algorithmic discrimination while ensuring transparency and accountability in the development and deployment of high-risk AI systems.

Scope and Definitions

The Act applies to developers and deployers of high-risk AI systems. A high-risk AI system is one that makes, or is a substantial factor in making, a consequential decision: a decision with a material effect on a consumer’s access to, or the cost or terms of, education, employment, financial services, healthcare, housing, insurance, or legal services.

Developers:

Individuals or entities that create or substantially modify AI systems, including both general-purpose and high-risk AI systems.

Deployers:

Individuals or entities that use high-risk AI systems in their operations.

Key Provisions for Developers

Developers of high-risk AI systems have several critical responsibilities under the new law:

  • Duty of Care: Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of their AI systems.

  • Documentation and Disclosure: Developers must provide deployers with comprehensive documentation of the AI system’s capabilities, limitations, intended uses, training data, and any evaluations of discrimination risk. They must also maintain a publicly available statement summarizing their high-risk AI systems and the measures taken to manage risks of algorithmic discrimination.

  • Incident Reporting: Developers must notify the Colorado Attorney General and all known deployers within 90 days of discovering, through ongoing testing and analysis or a credible report from a deployer, that a system has caused or is reasonably likely to cause algorithmic discrimination.
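The 90-day incident-reporting window is a hard deadline, so a compliance team may want to compute it mechanically. The sketch below is purely illustrative (the function names and workflow are our assumptions, not anything prescribed by the statute, and this is not legal advice):

```python
from datetime import date, timedelta

# CAIA gives developers 90 days from discovery to notify the
# Colorado Attorney General and all known deployers.
NOTIFICATION_WINDOW_DAYS = 90

def notification_deadline(discovery_date: date) -> date:
    """Latest date on which notice can still be timely."""
    return discovery_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)

def is_overdue(discovery_date: date, today: date) -> bool:
    """True once the notification window has closed without notice."""
    return today > notification_deadline(discovery_date)
```

For example, a discovery on March 1, 2026 would put the deadline at May 30, 2026.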

 

Deployer Obligations

Deployers must also exercise reasonable care to avoid algorithmic discrimination. Their core obligations are:

  • Risk Management Policy: Maintain a policy that outlines staff roles and processes for identifying and mitigating risks.

  • Impact Assessments: Conduct an assessment each year and after any significant modification, covering the system’s purpose, data sources, discrimination risks, and post-deployment monitoring plans.

  • Consumer Notice: Notify consumers when AI plays a role in a consequential decision. If the decision is adverse, provide an explanation and allow consumers to correct their data or appeal to a human reviewer.

  • Public Statement: Issue a public statement describing the deployer’s use of high-risk AI systems.
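The annual-or-on-modification assessment cadence lends itself to a simple scheduling check. A minimal sketch, assuming a 365-day interval and hypothetical field names (none of this is statutory language):

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual cadence

def assessment_due(last_assessment: date,
                   last_significant_modification: Optional[date],
                   today: date) -> bool:
    """A fresh impact assessment is needed if a year has elapsed since
    the last one, or the system was significantly modified since then."""
    if today - last_assessment >= REVIEW_INTERVAL:
        return True
    if (last_significant_modification is not None
            and last_significant_modification > last_assessment):
        return True
    return False
```

A tracker like this only flags that a reassessment is due; what the assessment must contain is governed by the Act itself.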

 

 

Consumer Rights

The Colorado AI Act provides strong protections for consumers:

  • Right to Pre-Use Notice: Consumers must be told when AI makes or substantially influences consequential decisions about them.

  • Right to Explanation: Adverse decisions require a clear explanation, including the system’s role in the decision.

  • Right to Correct and Appeal: Consumers may correct inaccurate personal data and request human review of an adverse decision.

  • Right to Opt-Out: Consumers may opt out of profiling as provided under the Colorado Privacy Act.

Exemptions and Safe Harbors

Small businesses with fewer than 50 employees are exempt from some duties, such as impact assessments and public disclosures, provided they do not train the systems with their own data. They must still provide consumer notices and honor consumer rights. Developers and deployers can claim an affirmative defense if they cure violations promptly and follow a recognized framework such as the NIST AI Risk Management Framework. Impact assessments performed to meet other legal requirements may also count if they cover substantially similar issues.

Enforcement and Rulemaking

The Colorado Attorney General holds exclusive enforcement authority. The office may create rules on documentation, risk policies, assessments, consumer notices, and standards for affirmative defenses.

Conclusion

Colorado’s AI Act sets a precedent for state-level AI regulation. It balances innovation with consumer protection and could shape future federal policy. By establishing clear duties for developers and deployers, the law positions Colorado as a national leader in AI governance.

Need Help?

Businesses operating in Colorado should prepare now. Early compliance will help avoid penalties, build trust, and promote responsible AI use. BABL AI’s Audit Experts can guide you through the requirements and ensure readiness for February 2026.
