DiNapoli Audit Warns of Gaps in AI Oversight Across New York State Agencies

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/15/2025
In News

New York State agencies are adopting artificial intelligence (AI) across a wide range of services, but without strong central oversight or clear internal policies, they risk misusing the technology, according to a new audit released in April by State Comptroller Thomas P. DiNapoli.


The audit found that agencies are largely on their own when it comes to regulating AI, with each taking a different approach to oversight. It examined AI use at four agencies—the Office for the Aging (NYSOFA), Department of Corrections and Community Supervision (DOCCS), Department of Motor Vehicles (DMV), and Department of Transportation (DOT)—and found major gaps in compliance, training, and risk management.


“This audit is a wake-up call,” DiNapoli said. “State agencies are using AI to monitor inmates, detect license fraud, and assist older adults, but there’s no inventory, limited training, and inadequate safeguards. Without stronger governance, these tools could create unintended harm.”


The state’s current AI Policy, issued by the Office of Information Technology Services (ITS) in January 2024, outlines responsible use but lacks concrete implementation guidance. Agencies are left to interpret the policy on their own, and in some cases—such as NYSOFA’s AI companion device for seniors—they weren’t even sure whether their tools fell under the policy’s scope.


Only the DMV had internal AI oversight policies, but even it exempted its facial recognition system from review—despite ITS guidelines clearly defining such systems as AI.


The audit also found that no agency had conducted regular audits of its AI systems for accuracy, bias, or reliability. While some staff had been trained to use AI tools, none had been trained on potential risks such as algorithmic bias or data misuse.


DiNapoli’s report makes seven recommendations, including that ITS provide more robust guidance and training, and that agencies establish governance structures and coordinate with ITS. He also announced plans to advance legislation requiring independent audits of AI use across state agencies.


“AI can help deliver services more efficiently, but the public deserves assurance it’s being used responsibly, ethically, and with proper oversight,” DiNapoli said.


Need Help?


If you’re concerned or have questions about how to navigate the New York or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.