Colorado’s Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/21/2024
In Blog

In a landmark move, Colorado has enacted Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems,” now officially signed into law by the Governor. This pioneering legislation, also known as the Colorado AI Act (CAIA), is the first of its kind in the United States to adopt a comprehensive, risk-based approach to artificial intelligence (AI) regulation. With the law set to take effect on February 1, 2026, it aims to protect consumers from algorithmic discrimination while ensuring transparency and accountability in the development and deployment of high-risk AI systems.

 

Scope and Definitions

 

The Colorado AI Act targets both developers and deployers of AI systems, focusing on those classified as high-risk. According to the law, a high-risk AI system is any artificial intelligence system that, when deployed, makes or is a substantial factor in making consequential decisions. These decisions have significant material effects on areas such as education, employment, financial services, healthcare, housing, insurance, and legal services.

 

Developers:

Individuals or entities that create or substantially modify AI systems, including both general-purpose and high-risk AI systems.

 

Deployers:

Individuals or entities that use high-risk AI systems in their operations.

 

Key Provisions for Developers

 

Developers of high-risk AI systems have several critical responsibilities under the new law:

 

  • Duty of Care: Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended use of their AI systems. 

 

  • Documentation and Disclosure: Developers are required to provide deployers with comprehensive documentation about the AI system’s capabilities, limitations, intended uses, training data, and any evaluations for discrimination risks. This includes maintaining a publicly available statement summarizing high-risk AI systems and the measures taken to manage risks of algorithmic discrimination.

 

  • Incident Reporting: Developers must notify the Colorado Attorney General and all known deployers within 90 days of discovering, through ongoing testing and analysis or a deployer’s report, that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination.

 

Key Provisions for Deployers

 

Deployers of high-risk AI systems also face stringent requirements:

 

  • Duty of Care: Similar to developers, deployers must use reasonable care to protect consumers from algorithmic discrimination risks.

 

  • Risk Management Policy: Deployers must establish and maintain a risk management policy that governs the use of high-risk AI systems. This policy should detail the processes and personnel involved in identifying and mitigating discrimination risks.

 

  • Impact Assessments: Deployers are required to conduct impact assessments annually and upon any substantial modifications to the AI systems. These assessments must document the purpose, intended use, risk of algorithmic discrimination, data used, performance, transparency measures, and post-deployment monitoring efforts.

 

  • Consumer Notifications and Rights: When a high-risk AI system is used to make consequential decisions about consumers, deployers must inform the consumers about the AI system’s use. Consumers must also be given a statement explaining the principal reasons for any adverse decisions, the type of data used, and the data sources. Additionally, consumers have the right to correct any inaccurate personal data and to appeal decisions for human review.

 

  • Public Disclosure: Deployers must make a public statement regarding the use of high-risk AI systems and the measures taken to manage discrimination risks.

Consumer Rights

 

The Colorado AI Act significantly enhances consumer rights concerning AI interactions:

 

  • Right to Pre-Use Notice: Consumers must be informed if a high-risk AI system is used to make consequential decisions about them, including the system’s purpose and nature.

 

  • Right to Explanation: If an adverse decision is made using a high-risk AI system, consumers must receive an explanation of the decision, including how the AI system contributed to the decision.

 

  • Right to Correct and Appeal: Consumers can correct inaccurate personal data used by the AI system and appeal decisions for human review.

 

  • Right to Opt-Out: Consumers must be informed of their right to opt out of profiling for automated decisions under the Colorado Privacy Act.
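The disclosure obligations behind these rights could be modeled in a compliance system as a simple record. The sketch below is purely illustrative; the field names are hypothetical and are not drawn from the statute itself:

```python
# Illustrative sketch (not legal advice): the information a CAIA
# adverse-decision notice must convey to a consumer, per the rights
# summarized above. All field names here are hypothetical.
from dataclasses import dataclass


@dataclass
class AdverseDecisionNotice:
    principal_reasons: list[str]      # why the decision was adverse
    data_categories_used: list[str]   # types of personal data processed
    data_sources: list[str]           # where that data came from
    correction_instructions: str      # how to correct inaccurate data
    appeal_instructions: str          # how to request human review
```

A deployer's system would populate such a record for every adverse consequential decision in which a high-risk AI system was a substantial factor.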

 

Exemptions and Safe Harbors

 

The law provides certain exemptions and safe harbors to facilitate compliance and encourage innovation:

 

  • Small Businesses: Deployers with fewer than 50 full-time equivalent employees that do not use their own data to train high-risk AI systems are exempt from maintaining a risk management policy, conducting impact assessments, and publishing a public statement. However, they remain subject to the duty of care and must still provide the relevant consumer notices and rights.

 

  • Affirmative Defense: Developers and deployers who discover and cure violations through internal testing or red-teaming and comply with recognized AI risk management frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, will have an affirmative defense against enforcement actions by the Attorney General.

 

  • Interoperability with Other Laws: Impact assessments conducted to comply with other relevant laws or regulations can satisfy the requirements of the Colorado AI Act if they are reasonably similar in scope and effect.
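The small-business exemption above is essentially a conditional rule over a deployer's obligations. As a minimal sketch, assuming the headcount threshold and duty names described in this post (they are illustrative labels, not statutory terms):

```python
# Illustrative sketch (not legal advice): which CAIA deployer duties apply,
# given the small-deployer exemption summarized above.

def deployer_duties(full_time_equivalents: int, trains_with_own_data: bool) -> set[str]:
    """Return the set of duties that apply to a deployer of a high-risk AI system."""
    # Duties every deployer retains, even when the exemption applies.
    duties = {"duty_of_care", "consumer_notices_and_rights"}
    # The exemption covers deployers with fewer than 50 FTEs that do not
    # use their own data to train the high-risk system.
    exempt = full_time_equivalents < 50 and not trains_with_own_data
    if not exempt:
        duties |= {"risk_management_policy", "impact_assessments", "public_statement"}
    return duties
```

For example, a 30-person deployer that never trains on its own data keeps only the baseline duties, while training on its own data (or crossing the 50-FTE threshold) brings the full set of obligations back into scope.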

 

Enforcement and Rulemaking

 

The Colorado Attorney General has exclusive authority to enforce the Colorado AI Act and may promulgate rules to implement and enforce the law. This includes requirements for developer documentation, risk management policies, impact assessments, consumer notices, and establishing standards for rebuttable presumptions and affirmative defenses.

 

Conclusion

 

The Colorado AI Act represents a significant step forward in the regulation of artificial intelligence, setting a precedent for other states and potentially influencing future federal regulations. By establishing clear responsibilities for developers and deployers of high-risk AI systems and enhancing consumer protections, Colorado is positioning itself at the forefront of AI governance.

Need help?

 

For businesses operating in Colorado, it is crucial to begin preparing for the implementation of these regulations. Ensuring compliance with the Colorado AI Act will not only help avoid legal pitfalls but also foster consumer trust and promote ethical AI practices. If you have any questions or need assistance navigating the new regulatory landscape, BABL AI’s Audit Experts are ready to provide valuable support and guidance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.