The New York State Information Technology Policy NYS-P24-001, titled “Acceptable Use of Artificial Intelligence Technologies,” was announced in early 2024. It establishes comprehensive guidelines for state entities to adopt AI systems ethically and responsibly, but how comprehensive is it? The policy draws heavily from the NIST AI Risk Management Framework to guide agencies in leveraging AI to enhance services and efficiency while prioritizing privacy, accountability, and public oversight.
The policy encompasses all public-facing AI systems utilizing machine learning, natural language processing, computer vision and other AI capabilities. It permits the careful integration of AI where appropriate to further agency missions, provided rigorous governance controls are implemented. These include:
- Requiring human review and approval of all AI-aided decisions affecting the public, with regular audits of outputs, decisions and methodologies.
- Conducting mandatory AI risk assessments evaluating security, privacy, fairness, compliance and other risks, and mitigating any identified risks.
- Following strict protocols to safeguard personal/confidential data used in AI systems, including data minimization, retention limits, accuracy verification, and subject transparency.
- Adhering to state cybersecurity standards, including encryption, access controls and integrity safeguards to prevent AI misuse.
- Promoting algorithmic fairness, equity and explainability to avoid biased or opaque outcomes.
- Appointing an accountable Information Owner for each AI system, with legal/ethics approvals required prior to adoption.
- Disclosing the use of AI systems that interact directly with the public.
- Maintaining a public inventory of in-scope AI systems, with new systems reported within 180 days (a hypothetical inventory record is sketched after this list).
- Requiring a formal, approved exception for any deviation from the policy’s mandatory requirements.
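To make the inventory and reporting requirements more concrete, here is a minimal sketch of what an agency-side record for an in-scope AI system might look like. The policy does not prescribe a data format, so the `AISystemRecord` class, its field names, and the deadline check below are illustrative assumptions rather than anything defined in NYS-P24-001; only the 180-day reporting window, the accountable Information Owner, the risk assessment, and the public-disclosure trigger are drawn from the policy summary above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Reporting window drawn from the policy's 180-day requirement for new systems.
REPORTING_WINDOW_DAYS = 180


@dataclass
class AISystemRecord:
    """Illustrative inventory entry for an in-scope AI system.

    The schema is an assumption for demonstration purposes; NYS-P24-001
    does not define a record format.
    """
    system_name: str
    information_owner: str            # accountable Information Owner for the system
    deployed_on: date                 # date the system went into use
    interacts_with_public: bool       # triggers the public disclosure requirement
    risk_assessment_completed: bool   # security/privacy/fairness/compliance review done
    human_review_required: bool = True  # human approval of decisions affecting the public
    reported_to_inventory_on: Optional[date] = None

    def reporting_deadline(self) -> date:
        """Date by which the system should appear in the public inventory."""
        return self.deployed_on + timedelta(days=REPORTING_WINDOW_DAYS)

    def is_reported_on_time(self) -> bool:
        """True if the system was reported within the 180-day window."""
        return (
            self.reported_to_inventory_on is not None
            and self.reported_to_inventory_on <= self.reporting_deadline()
        )


if __name__ == "__main__":
    # Example usage with made-up values.
    record = AISystemRecord(
        system_name="Benefits eligibility chatbot",
        information_owner="Jane Doe, Program Director",
        deployed_on=date(2024, 3, 1),
        interacts_with_public=True,
        risk_assessment_completed=True,
        reported_to_inventory_on=date(2024, 6, 15),
    )
    print(record.is_reported_on_time())  # True: reported within 180 days
```

Whatever form an agency’s tracking actually takes, the essential pattern is the same: each system is tied to an accountable owner, a completed risk assessment, and a deadline-driven reporting obligation.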
Advanced AI capabilities like machine learning and computer vision are increasingly being adopted by government agencies, but ethical risks remain around privacy, bias, and transparency. New York’s policy exemplifies a responsible governance approach that lets agencies realize AI productivity gains without compromising public trust.
Specific innovations include mandated human oversight of all automated decisions affecting the public, rigorous AI risk analysis, and an emphasis on AI fairness principles. By incorporating strong safeguards, transparency, and accountability requirements, New York provides a comprehensive model for ethically integrating AI capabilities into public sector missions.
New York State isn’t alone: several states around the U.S. have released AI bills, regulations, and initiatives. If you have any questions about the ever-changing AI regulatory landscape, consider contacting BABL AI; their team of audit experts can offer valuable insight.