The Utah Office of Artificial Intelligence Policy (OAIP) is implementing a new framework to guide the evaluation of AI tools, signaling a major step forward in the state’s push for responsible and transparent innovation. Developed by Fellows of the Aspen Institute’s Science and Technology Policy Academy, the PIONR Framework—short for Prosperity, Integrity and Innovation, Openness, Natural Resource Stewardship, and Respect for Culture and Values—offers a comprehensive rubric for assessing whether AI solutions align with Utah’s ethical and economic priorities.
The PIONR Framework was born from a real-world policy challenge assigned to Aspen Fellows Jordan Loewen-Colón, Ayodele Odubela, and Jeanette Jordan in the summer of 2024. Their goal: create a structured, transparent method to help the OAIP evaluate applicants for its AI Learning Lab, a regulatory sandbox that tests AI tools developed in the state. Until now, unclear evaluation standards and limited public insight had created barriers for AI startups seeking to engage with the Office and contributed to growing public skepticism about AI’s role in society.
By integrating PIONR into its vetting process, the OAIP aims to address both of these concerns. The framework includes guiding questions that assess an AI tool’s economic potential, ethical safeguards, transparency, environmental impact, and cultural sensitivity. These categories mirror the state’s AI policy focus areas, ensuring that evaluation remains rooted in community values while embracing emerging technology.
The PIONR Framework is designed to be both flexible and rigorous. Companies participating in the Learning Lab will be scored across the five PIONR dimensions, using a mix of qualitative and quantitative metrics, from job creation and sustainability to data security and fairness. Developers will also be able to revise and resubmit their evaluations as their tools evolve, making the process iterative and adaptive to changing regulations.
To further bolster public trust, the OAIP has launched a new webpage listing current Learning Lab participants and detailing the evaluation framework. This transparency initiative is intended to reduce regulatory uncertainty, attract new partnerships, and offer Utahns a clearer view of how AI systems are being tested and deployed in their communities.
Utah’s AI evaluation approach stands in contrast to one-size-fits-all federal proposals and offers a potential model for other states seeking to balance AI innovation with civic accountability. The OAIP’s adoption of the PIONR Framework could signal a broader shift toward value-driven governance in the age of artificial intelligence.
Need Help?
If you’re concerned or have questions about how to navigate the AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.