UPDATE — AUGUST 2025: This blog post accurately reflects Utah’s original Artificial Intelligence Policy Act (S.B. 149), signed into law in March 2024 and effective May 1, 2024. However, readers should note that significant amendments were enacted in early 2025 that reshape the law’s scope and requirements.
During its 2025 general session, the Utah Legislature passed follow-up legislation (SB 226, SB 332, HB 452, and SB 271) that narrowed disclosure obligations, refined key definitions, and extended the law’s sunset date to July 1, 2027; the amendments took effect in May 2025. Among the most notable updates:
- Disclosure is now required only in high-risk interactions, such as those involving legal, financial, or medical advice, or when a consumer explicitly asks whether they are interacting with AI (a minimal sketch of this trigger logic appears at the end of this update).
- The definition of “generative AI” was narrowed to cover systems designed to simulate human conversation in consumer interactions.
- A new safe harbor provision protects compliant entities from certain enforcement actions if they follow approved guidelines and best practices.
- HB 452 adds specific regulations for mental health chatbots, including disclosure, advertising limits, and privacy rules.
- SB 271 creates protections against the unauthorized use of a person’s name, image, or voice in AI-generated content.
- The Division of Consumer Protection retains enforcement authority, with continued powers to impose fines, pursue injunctions, and seek other legal remedies.
These amendments reflect a more targeted approach to AI governance in Utah—emphasizing consumer protection in high-risk contexts while reducing overly broad compliance burdens and promoting responsible innovation.
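For teams thinking about implementation, here is a minimal sketch in Python of how the narrowed disclosure trigger summarized above might be encoded. It is illustrative only: the category labels, phrase list, and function name are assumptions for this example, not terms from the statute, and determining what actually counts as a high-risk interaction under the amended law is a legal judgment.

# Hypothetical sketch of the amended disclosure trigger (illustrative only).
# Category labels and the phrase check below are assumptions, not statutory text.

HIGH_RISK_CATEGORIES = {"legal_advice", "financial_advice", "medical_advice"}

AI_QUERY_PHRASES = (
    "are you an ai",
    "are you a bot",
    "am i talking to a human",
)

def disclosure_required(interaction_category: str, user_message: str) -> bool:
    """Return True if either sketched trigger condition is met:
    a high-risk interaction category, or an explicit question about AI."""
    if interaction_category in HIGH_RISK_CATEGORIES:
        return True
    normalized = user_message.lower()
    return any(phrase in normalized for phrase in AI_QUERY_PHRASES)

# Example usage:
#   disclosure_required("financial_advice", "Should I refinance?")  -> True
#   disclosure_required("retail_support", "Are you an AI?")         -> True
#   disclosure_required("retail_support", "Where is my order?")     -> False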
ORIGINAL BLOG POST:
Utah’s Artificial Intelligence Policy Act: Key Takeaways and Impacts
On March 13, 2024, Utah Governor Spencer Cox signed the Artificial Intelligence Policy Act, commonly referred to as S.B. 149 or the AI Law, into law. It created Utah’s first regulatory framework for artificial intelligence, effective May 1, 2024. The law introduced liability rules, consumer protections, and a new Office of Artificial Intelligence Policy. It also established the AI Learning Laboratory Program to test technologies before widespread deployment.
Background and Purpose of Utah’s AI Law
AI is now embedded in industries from healthcare and finance to transportation and entertainment. These tools offer efficiency and growth but raise concerns over privacy, fairness, and security. Utah lawmakers responded with S.B. 149, aiming to balance benefits with safeguards. The law’s purpose is simple: promote responsible AI while protecting consumers. It requires oversight that matches existing professional standards and ensures AI systems align with legal and ethical norms.
Key Provisions of the AI Law
Definitions and Scope
The AI Law begins by defining key terms essential for understanding its scope and application. It defines “generative artificial intelligence” as an AI system trained on data that interacts with users through text, audio, or visual communication, generating non-scripted outputs with limited or no human oversight. This definition encompasses a wide range of AI technologies, from chatbots and virtual assistants to more complex AI systems used in various applications.
The law also defines “regulated occupation” and “state certification,” setting the stage for how AI technologies will be regulated in different professional contexts. A “regulated occupation” refers to professions that require state-granted licenses or certifications, such as healthcare providers, lawyers, and financial advisors. The law stipulates that AI technologies used in these professions must comply with the same regulatory standards as their human counterparts, ensuring that AI does not circumvent existing legal and ethical requirements.
Liability and Consumer Protection
One of the central aspects of the AI Law is establishing liability for uses of AI that violate consumer protection laws. The law requires individuals and organizations using generative AI to disclose that use to consumers: in general consumer transactions, disclosure must be made clearly and conspicuously when a consumer asks, and those working in regulated occupations must disclose proactively and prominently, verbally at the start of an oral exchange and through electronic messaging before a written exchange. This transparency is crucial for maintaining consumer trust and ensuring that individuals know when they are interacting with AI systems rather than humans.
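As a concrete illustration of the timing requirement for regulated occupations, the sketch below (in Python) prepends a disclosure before any written exchange and voices it at the start of an oral one. The notice wording, function names, and the speak callable are assumptions for this example; compliant disclosure language should be reviewed by counsel.

# Hypothetical sketch of delivering the up-front disclosure described above.
# The notice text and function names are illustrative assumptions.

AI_DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human professional."
)

def start_written_exchange(first_ai_reply: str) -> list[str]:
    """Deliver the disclosure by electronic message before the written exchange begins."""
    return [AI_DISCLOSURE, first_ai_reply]

def start_oral_exchange(speak) -> None:
    """Voice the disclosure at the start of an oral interaction.
    speak is a caller-supplied text-to-speech or audio playback callable."""
    speak(AI_DISCLOSURE)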
Furthermore, the law holds users accountable for any violations committed by the AI, ensuring that reliance on AI does not absolve individuals or entities from their legal responsibilities. This provision addresses concerns about the “black box” nature of AI, where the decision-making processes of AI systems are often opaque. By holding users accountable, the law ensures that AI systems are used responsibly and ethically.
The AI Law also includes specific provisions for addressing violations. The Division of Consumer Protection is empowered to impose fines, bring court actions, and seek various remedies, including injunctions, disgorgement of funds, and payment of damages to affected individuals. These enforcement mechanisms provide a robust framework for ensuring compliance and addressing any harm caused by AI systems.
Office of Artificial Intelligence Policy
The AI Law establishes the Office of Artificial Intelligence Policy within the Department of Commerce. This office is tasked with overseeing the management and regulation of AI technologies in the state. It is responsible for creating and administering an AI Learning Laboratory Program, consulting with businesses and stakeholders on regulatory proposals, and making rules to govern participation in the learning laboratory. The office also reports annually to the Business and Labor Interim Committee on its findings, participation, and recommended legislation.
By centralizing the oversight of AI technologies, the office can ensure consistent application of regulations and foster collaboration among various stakeholders. This approach facilitates the development of best practices and standards for AI, contributing to the overall goal of safe and responsible AI deployment.
AI Learning Laboratory Program
Objectives and Functions
The AI Learning Laboratory Program is a cornerstone of the AI Law. Its primary purpose is to analyze and research the risks, benefits, impacts, and policy implications of AI technologies. The program aims to inform the state regulatory framework, encourage the development of AI technologies, evaluate the effectiveness of current and potential regulations, and produce recommendations for legislation and regulation of AI.
The learning laboratory provides a structured environment for testing and evaluating AI technologies. By simulating real-world scenarios, the program can assess how AI systems perform in practice, identify potential risks, and develop strategies to mitigate these risks. This approach ensures that AI technologies are thoroughly vetted before widespread deployment, reducing the likelihood of unforeseen negative consequences.
The AI Learning Laboratory Program also fosters collaboration between the public and private sectors. By involving industry leaders, academic institutions, and other stakeholders, the program can leverage diverse perspectives and expertise to address the complex challenges associated with AI. This collaborative approach is essential for developing innovative solutions that balance the benefits and risks of AI technologies.
Participation and Regulatory Mitigation
The program invites applications from individuals and organizations interested in participating in the learning laboratory. Participants must demonstrate technical expertise, financial capability, and the potential to provide substantial consumer benefits. The office may grant regulatory mitigation on a temporary basis, allowing participants to test AI technologies with reduced regulatory burdens while ensuring safeguards are in place to protect consumers. This approach fosters innovation while maintaining oversight to prevent harm.
The process for selecting participants is rigorous and transparent. Applicants must undergo a thorough evaluation to ensure they meet the eligibility criteria, including the technical capability to develop and test AI technologies responsibly. The office works closely with participants to establish benchmarks and assess the outcomes of their participation in the learning laboratory. This continuous monitoring and evaluation process helps ensure that AI technologies meet the required standards and contribute positively to society.
Regulatory mitigation agreements are a key feature of the AI Learning Laboratory Program. These agreements outline the terms and conditions under which participants can test their AI technologies, including limitations on scope, safeguards, and reporting requirements. By providing a controlled environment for experimentation, the program enables participants to refine their technologies and address any issues before full-scale deployment.
Ensuring Compliance and Enforcement
Role of the Division of Consumer Protection
The Division of Consumer Protection plays a crucial role in enforcing the AI Law. It is responsible for administering the provisions related to generative AI and ensuring compliance with consumer protection statutes. The division has the authority to impose fines, bring court actions, and seek various remedies, including injunctions, disgorgement of funds, and payment of damages to affected individuals. This robust enforcement mechanism ensures that violations are addressed promptly and effectively.
The division’s enforcement powers are essential for maintaining the integrity of the AI regulatory framework. By holding individuals and organizations accountable for their use of AI, the division ensures that AI technologies are developed and used in accordance with legal and ethical standards. This enforcement capability also serves as a deterrent, discouraging irresponsible or harmful use of AI technologies.
Administrative and Court Actions
The AI Law outlines specific procedures for administrative and court actions related to violations. In cases where generative AI is used to commit an offense, the law provides for administrative fines and civil penalties. Courts can also award attorney fees, court costs, and investigative fees to the Division of Consumer Protection, further strengthening the enforcement framework.
These enforcement mechanisms are designed to provide swift and effective remedies for violations. The law empowers the Division of Consumer Protection to take immediate action in response to any breaches, ensuring that individuals and organizations are held accountable for their actions. This proactive approach helps prevent harm and promotes responsible use of AI technologies.
Broader Implications and Future Directions
Impact on AI Development and Deployment
The AI Law’s comprehensive regulatory framework is expected to have a significant impact on the development and deployment of AI technologies in Utah. By establishing clear guidelines and accountability measures, the law provides a structured environment for innovation. Companies and developers can leverage the AI Learning Laboratory Program to test and refine their technologies, ensuring they meet regulatory standards before widespread deployment.
The law’s emphasis on transparency and accountability is likely to enhance public trust in AI technologies. By requiring clear disclosures and holding users accountable for violations, the law addresses some of the key concerns associated with AI, such as the “black box” nature of AI decision-making and the potential for biased or discriminatory outcomes. This increased transparency and accountability can foster greater acceptance and adoption of AI technologies.
Industry Influence and the Law’s Evolution
Utah’s law shapes how companies build and deploy AI: it offers a clear regulatory path while fostering innovation through controlled testing, and its transparency and liability provisions aim to build public trust. With the 2025 amendments, Utah shifted to a narrower focus; the law now targets high-risk scenarios, avoiding broad mandates while still prioritizing consumer protection. This evolution reflects the state’s effort to adapt regulation as AI matures.
Conclusion
Utah’s AI Policy Act marked a pioneering move in 2024, and the 2025 amendments refined its scope. Together, they show how states can protect consumers while supporting innovation. Utah’s approach may serve as a model for future AI governance across the U.S.
Need Help?
For businesses operating in Utah, it is crucial to ensure ongoing compliance with these regulations. Meeting the requirements of the Utah AI Policy Act will not only help avoid legal pitfalls but also foster consumer trust and promote ethical AI practices. If you have questions or need assistance navigating the regulatory landscape, BABL AI’s Audit Experts are ready to provide support and guidance.