Colorado Governor Signs Landmark AI Consumer Protection Bill into Law

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/20/2024
In News

UPDATE – FEBRUARY 2026:

Since the August 2025 update, Colorado’s Artificial Intelligence Act (SB 24-205) has undergone one important timeline change but otherwise remains largely intact. During a special legislative session in August 2025, lawmakers passed SB25B-004, which delayed the law’s effective date from February 1, 2026, to June 30, 2026. Governor Jared Polis signed the measure on August 28, 2025, responding to concerns from businesses and industry groups about implementation complexity and compliance readiness.

Despite the delay, the substance of the law has not changed: the same first-in-the-nation obligations for developers and deployers of high-risk AI systems remain in place, including risk management programs, annual impact assessments, consumer notification requirements, transparency disclosures, and safeguards designed to prevent algorithmic discrimination. Notably, the delay was framed as an implementation adjustment rather than a policy reversal.

As of February 2026, no additional amendments have been enacted. The Colorado Attorney General’s Office continues rulemaking and stakeholder consultations to clarify enforcement expectations ahead of the new June 30 rollout. These discussions are expected to further define practical compliance requirements, including documentation standards, impact assessment formats, and consumer notice procedures.

The law also sits within a rapidly changing national landscape. Federal actions in early 2026 aimed at limiting or preempting certain state-level AI regulations have introduced uncertainty about how state frameworks like Colorado’s may interact with future federal policy. However, no federal action has altered Colorado’s authority to proceed at this time. The state continues preparing for enforcement under the existing framework.

Overall, Colorado’s AI Act remains one of the most significant state-level AI accountability laws in the United States. The key change since the prior update is the delayed effective date. This extension gives organizations additional time to prepare while preserving the law’s original structure and compliance expectations.

ORIGINAL NEWS STORY:

Colorado Governor Signs Landmark AI Consumer Protection Bill into Law

In an update to a story we brought to you earlier this month, Colorado lawmakers have passed, and the Governor has signed into law, a bill aimed at protecting consumers in their interactions with artificial intelligence (AI). The bill, titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems,” establishes regulations and requirements for developers and deployers of high-risk AI systems in Colorado to safeguard consumers from algorithmic discrimination.

Algorithmic discrimination is defined as an instance where an AI system results in unlawful differential treatment or impacts that disfavor individuals based on protected characteristics such as race, gender, or age. A high-risk AI system is any AI system that plays a substantial role in making consequential decisions, meaning decisions with material legal or similarly significant effects in areas such as education, employment, financial services, healthcare, and housing. The bill outlines distinct requirements for developers, who create or substantially modify AI systems, and for deployers, who use high-risk AI systems.


Developer Duties

Developers must take reasonable care to prevent algorithmic discrimination and must provide deployers with detailed documentation, including the system’s purpose, intended uses, data sources, and the results of discrimination testing. Public disclosure is also required: developers must identify the high-risk systems they build and explain how they manage risks. If they discover discrimination risks, they must notify the Colorado Attorney General and affected deployers within 90 days.


Deployer Duties


Deployers of high-risk AI also face strict requirements. They must create risk management programs, perform annual impact assessments, and use reasonable care to avoid algorithmic discrimination. When AI makes a consequential decision about a consumer, the deployer must notify the individual. If the decision is adverse, the consumer has a right to know why and to appeal or correct the data used. Deployers must also disclose the high-risk systems they use, explain risk management practices, and report discrimination to the Attorney General within 90 days.


Exemptions and Enforcement


The law provides exemptions for research, national security, and regulated financial institutions that already follow equivalent frameworks. Enforcement falls to the Colorado Attorney General. In the first year, violators will receive a notice period to fix issues before penalties apply. The law also allows affirmative defenses: developers and deployers can avoid liability if they show compliance with approved risk frameworks and correct problems identified through testing, feedback, or internal review. The Attorney General has rulemaking authority to shape requirements on documentation, notices, impact assessments, and risk management.


Need Help?


If you’re wondering how Colorado’s new law, or any other AI legislation around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.
