Utah’s Artificial Intelligence Policy Act: Key Takeaways and Impacts

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/31/2024
In Blog

On March 13, 2024, Utah Governor Spencer Cox signed the Artificial Intelligence Policy Act, commonly referred to as S.B. 149 or the AI Law, into law. This groundbreaking legislation, effective May 1, 2024, establishes a regulatory framework for the development, deployment, and use of artificial intelligence (AI) technologies within Utah. The AI Law addresses several critical areas, including liability and consumer protection, the creation of an Office of Artificial Intelligence Policy, and the establishment of an AI Learning Laboratory Program. This blog post explores the key provisions, objectives, and implications of the AI Law, offering a comprehensive understanding of its impact on the AI landscape in Utah.


Background and Purpose of Utah’s AI Law


The rapid advancement of AI technologies has led to significant societal and economic benefits. However, these advancements also pose potential risks, particularly concerning privacy, security, and fairness. Recognizing the need for a balanced approach that maximizes benefits while mitigating risks, the Utah Legislature introduced S.B. 149. The AI Law aims to ensure that AI technologies are developed and used responsibly, with regulatory oversight to protect consumers and promote innovation.


AI technologies have become integral to various industries, from healthcare and finance to transportation and entertainment. These technologies promise to improve efficiency, enhance decision-making, and create new opportunities for economic growth. However, their deployment also raises concerns about data privacy, security, and ethical implications. The AI Law is a proactive measure to address these concerns, ensuring that AI systems are developed and used in a manner that aligns with societal values and legal standards.


Key Provisions of the AI Law


Definitions and Scope

The AI Law begins by defining key terms essential for understanding its scope and application. It defines “generative artificial intelligence” as an AI system that is trained on data, interacts with users through text, audio, or visual communication, and generates non-scripted outputs with limited or no human oversight. This definition encompasses a wide range of AI technologies, from chatbots and virtual assistants to more complex AI systems used in various applications.


The law also defines “regulated occupation” and “state certification,” setting the stage for how AI technologies will be regulated in different professional contexts. A “regulated occupation” refers to professions that require state-granted licenses or certifications, such as healthcare providers, lawyers, and financial advisors. The law stipulates that AI technologies used in these professions must comply with the same regulatory standards as their human counterparts, ensuring that AI does not circumvent existing legal and ethical requirements.


Liability and Consumer Protection

One of the central aspects of the AI Law is establishing liability for the use of AI that violates consumer protection laws. The law requires that anyone using generative AI to interact with consumers must clearly and conspicuously disclose that fact when asked or prompted by the consumer, while those providing services in a regulated occupation must disclose it prominently and proactively: verbally at the start of any oral exchange and through electronic messaging before any written exchange. This transparency is crucial in maintaining consumer trust and ensuring that individuals are aware when they are interacting with AI systems rather than humans.
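To make this concrete for teams building consumer-facing AI, the sketch below shows one way a chatbot might surface these disclosures in code. It is a minimal, hypothetical illustration rather than language from the statute or a compliance-approved implementation: the function names, the disclosure wording, and the regulated_occupation flag are all assumptions, and actual disclosure language should be reviewed against the law and with counsel.

```python
# Illustrative sketch only: one way a consumer-facing chatbot might surface the
# generative AI disclosures described above. Function names, message text, and
# the "regulated_occupation" flag are assumptions, not language from the statute.

AI_DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human representative."
)

def start_chat(regulated_occupation: bool) -> list[str]:
    """Open a transcript, disclosing AI use up front for regulated occupations."""
    transcript: list[str] = []
    if regulated_occupation:
        # Regulated occupations: disclose prominently before the exchange begins.
        transcript.append(f"Assistant: {AI_DISCLOSURE}")
    return transcript

def handle_user_message(transcript: list[str], message: str) -> None:
    """Record a user message and disclose AI use if the consumer asks about it."""
    transcript.append(f"User: {message}")
    asked_about_ai = any(
        phrase in message.lower()
        for phrase in ("are you an ai", "are you a bot", "am i talking to a human")
    )
    already_disclosed = any(AI_DISCLOSURE in line for line in transcript[:-1])
    if asked_about_ai and not already_disclosed:
        # General consumer interactions: disclose clearly when asked or prompted.
        transcript.append(f"Assistant: {AI_DISCLOSURE}")

if __name__ == "__main__":
    chat = start_chat(regulated_occupation=False)
    handle_user_message(chat, "Hi, am I talking to a human?")
    print("\n".join(chat))
```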


Furthermore, the law holds users accountable for any violations committed by the AI, ensuring that reliance on AI does not absolve individuals or entities of their legal responsibilities. This provision addresses concerns about the “black box” nature of AI, where the decision-making processes of AI systems are often opaque. By holding users accountable, the law encourages the responsible and ethical use of AI systems.


The AI Law also includes specific provisions for addressing violations. The Division of Consumer Protection is empowered to impose fines, bring court actions, and seek various remedies, including injunctions, disgorgement of funds, and payment of damages to affected individuals. These enforcement mechanisms provide a robust framework for ensuring compliance and addressing any harm caused by AI systems.


Office of Artificial Intelligence Policy

The AI Law establishes the Office of Artificial Intelligence Policy within the Department of Commerce. This office is tasked with overseeing the management and regulation of AI technologies in the state. It is responsible for creating and administering an AI Learning Laboratory Program, consulting with businesses and other stakeholders on regulatory proposals, and making rules to govern participation in the learning laboratory. The office also reports annually to the Business and Labor Interim Committee on its findings, participation, and recommended legislation.


By centralizing the oversight of AI technologies, the office can ensure consistent application of regulations and foster collaboration among various stakeholders. This approach facilitates the development of best practices and standards for AI, contributing to the overall goal of safe and responsible AI deployment.


AI Learning Laboratory Program


Objectives and Functions

The AI Learning Laboratory Program is a cornerstone of the AI Law. Its primary purpose is to analyze and research the risks, benefits, impacts, and policy implications of AI technologies. The program aims to inform the state regulatory framework, encourage the development of AI technologies, evaluate the effectiveness of current and potential regulations, and produce recommendations for legislation and regulation of AI.


The learning laboratory provides a structured environment for testing and evaluating AI technologies. By simulating real-world scenarios, the program can assess how AI systems perform in practice, identify potential risks, and develop strategies to mitigate these risks. This approach ensures that AI technologies are thoroughly vetted before widespread deployment, reducing the likelihood of unforeseen negative consequences.


The AI Learning Laboratory Program also fosters collaboration between the public and private sectors. By involving industry leaders, academic institutions, and other stakeholders, the program can leverage diverse perspectives and expertise to address the complex challenges associated with AI. This collaborative approach is essential for developing innovative solutions that balance the benefits and risks of AI technologies.


Participation and Regulatory Mitigation

The program invites applications from individuals and organizations interested in participating in the learning laboratory. Participants must demonstrate technical expertise, financial capability, and the potential to provide substantial consumer benefits. The office may grant regulatory mitigation on a temporary basis, allowing participants to test AI technologies with reduced regulatory burdens while ensuring safeguards are in place to protect consumers. This approach fosters innovation while maintaining oversight to prevent harm.


The process for selecting participants is rigorous and transparent. Applicants must undergo a thorough evaluation to ensure they meet the eligibility criteria, including the technical capability to develop and test AI technologies responsibly. The office works closely with participants to establish benchmarks and assess the outcomes of their participation in the learning laboratory. This continuous monitoring and evaluation process helps ensure that AI technologies meet the required standards and contribute positively to society.


Regulatory mitigation agreements are a key feature of the AI Learning Laboratory Program. These agreements outline the terms and conditions under which participants can test their AI technologies, including limitations on scope, safeguards, and reporting requirements. By providing a controlled environment for experimentation, the program enables participants to refine their technologies and address any issues before full-scale deployment.


Ensuring Compliance and Enforcement


Role of the Division of Consumer Protection

The Division of Consumer Protection plays a crucial role in enforcing the AI Law. It is responsible for administering the provisions related to generative AI and ensuring compliance with consumer protection statutes. As noted above, the division may impose fines, bring court actions, and seek remedies such as injunctions, disgorgement of funds, and payment of damages to affected individuals, so violations can be addressed promptly and effectively.


The division’s enforcement powers are essential for maintaining the integrity of the AI regulatory framework. By holding individuals and organizations accountable for their use of AI, the division ensures that AI technologies are developed and used in accordance with legal and ethical standards. This enforcement capability also serves as a deterrent, discouraging irresponsible or harmful use of AI technologies.


Administrative and Court Actions

The AI Law outlines specific procedures for administrative and court actions related to violations. In cases where generative AI is used to commit an offense, the law provides for significant penalties, including fines and civil penalties. Courts can also award attorney fees, court costs, and investigative fees to the Division of Consumer Protection, further strengthening the enforcement framework.


These enforcement mechanisms are designed to provide swift and effective remedies for violations. The law empowers the Division of Consumer Protection to take immediate action in response to any breaches, ensuring that individuals and organizations are held accountable for their actions. This proactive approach helps prevent harm and promotes responsible use of AI technologies.


Broader Implications and Future Directions


Impact on AI Development and Deployment

The AI Law’s comprehensive regulatory framework is expected to have a significant impact on the development and deployment of AI technologies in Utah. By establishing clear guidelines and accountability measures, the law provides a structured environment for innovation. Companies and developers can leverage the AI Learning Laboratory Program to test and refine their technologies, ensuring they meet regulatory standards before widespread deployment.


The law’s emphasis on transparency and accountability is likely to enhance public trust in AI technologies. By requiring clear disclosures and holding users accountable for violations, the law addresses some of the key concerns associated with AI, such as the “black box” nature of AI decision-making and the potential for biased or discriminatory outcomes. This increased transparency and accountability can foster greater acceptance and adoption of AI technologies.


Collaboration and International Influence

The establishment of the AI Office and its engagement with stakeholders, including industry leaders, academic institutions, and international partners, positions Utah as a leader in AI governance. The state’s proactive approach to regulating AI could serve as a model for other jurisdictions, influencing the broader discourse on AI policy and regulation.


International collaboration can reinforce this approach. By working with global partners, Utah can share best practices, learn from the experiences of other regions, and contribute to the development of international standards for AI. This collaborative approach helps ensure that AI technologies are developed and used in a manner that aligns with widely shared norms and values.


Conclusion


Utah’s Artificial Intelligence Policy Act represents a landmark effort to balance the benefits and risks of AI technologies through comprehensive regulation. By establishing clear definitions, enforcing liability and consumer protection, creating the Office of Artificial Intelligence Policy, and launching the AI Learning Laboratory Program, the law provides a robust framework for responsible AI development and use. As AI continues to evolve, Utah’s approach may offer valuable insights and best practices for other regions seeking to navigate the complex landscape of AI regulation.


Need Help?


For businesses operating in Utah, it is crucial to begin preparing for the implementation of these regulations. Ensuring compliance with the Utah AI Policy Act will not only help avoid legal pitfalls but also foster consumer trust and promote ethical AI practices. If you have any questions or need assistance navigating the new regulatory landscape, BABL AI’s Audit Experts are ready to provide valuable support and guidance.
