Arizona Sets Guardrails for Government Use of Generative AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/18/2024
In News

Arizona has adopted a groundbreaking statewide policy to guide the use of generative artificial intelligence (Gen AI) across government agencies, emphasizing transparency, accountability, and data security. The Generative Artificial Intelligence Policy and accompanying Use of Generative AI Procedure establish a comprehensive framework for how state employees, agencies, and vendors can responsibly implement Gen AI tools in government operations. The measures aim to strike a balance between fostering innovation and mitigating risks associated with emerging AI technologies.

The Arizona Department of Administration, tasked with statewide IT coordination, highlights AI’s immense potential to improve government services while underscoring its limitations. The policy stresses that AI is “a tool, not a substitute” for the responsibilities of public servants, reinforcing that human oversight is essential.

According to the policy, generative AI systems can streamline tasks like drafting memos, translating materials, and summarizing lengthy documents, but accuracy, privacy, and fairness must remain priorities. The document warns employees: “Gen AI outputs should not be assumed to be truthful, credible, or accurate.”

Arizona’s framework is built on seven guiding principles:

  • Empowerment: AI should enhance efficiency, allowing employees to deliver better services.

  • Transparency and Accountability: Agencies must disclose when and how AI tools are used, including identifying which Gen AI systems contributed to public-facing content.

  • Fairness: AI adoption must align with Arizona’s values of inclusion, ensuring equitable outcomes, particularly for marginalized communities.

  • Security: Agencies must safeguard state data and perform risk assessments when using AI tools.

  • Privacy: No confidential data may be entered into publicly accessible AI systems without authorization.

  • Data Readiness: Agencies are responsible for maintaining high-quality, bias-free data that is ready for AI deployment.

  • Training: Employees must complete mandatory training before engaging with Gen AI systems to ensure responsible use.

The inclusion of robust data governance and cybersecurity requirements reflects growing concerns over the misuse of AI and its potential risks to sensitive information.

The Use of Generative AI Procedure provides clear guidance on AI’s applications in government work, alongside critical “do’s and don’ts.” Examples include:

  • Drafting Communications: AI tools like ChatGPT can help generate memos, letters, or policy briefs, provided outputs are fact-checked and edited.

  • Simplifying Public Information: Agencies can use AI to rewrite policies or websites into plain language for accessibility.

  • Translation Services: Tools like Meta’s SeamlessM4T may assist in translating public documents, but human verification remains essential.

  • Summarizing Data: AI can condense lengthy reports for decision-makers, though agencies must still review full texts to avoid mischaracterizations.

However, the guidelines prohibit using AI for sensitive communications, creating content about controversial topics, or inputting confidential data into public tools.

The policy imposes strict conditions on AI procurement. Any new AI solutions must be reviewed and approved by the State Chief Information Officer and Chief Information Security Officer before deployment. Vendors must disclose the use of generative AI in products and guarantee proper licensing for AI model training data.

Additionally, agencies must ensure all AI-generated works adhere to copyright standards. Public-facing content produced with AI must include annotations disclosing the specific tools and processes used.

To ensure readiness, Arizona will mandate annual AI training for all relevant state employees. Training programs will cover AI risks, ethical considerations, and proper application of generative tools.

The policy acknowledges that generative AI will evolve, calling for periodic reviews to incorporate lessons learned and adapt to new risks.

Need Help?

If you’re wondering how Arizona’s AI laws and regulations, or any other AI legislation around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to answer your questions and address your concerns.
