UPDATE (AUGUST 2025): Federal agencies have already submitted their 2024 AI use case inventories. The Office of Management and Budget (OMB) consolidated the data, revealing 2,133 AI use cases across 41 agencies, including 227 safety- or rights-impacting applications. The Department of Health and Human Services (HHS) alone reported 271 use cases, a 66% year-over-year increase. Meanwhile, OMB released new 2025 guidance (M-25-21) in April, emphasizing faster innovation, stronger AI governance, and public trust. Departments such as Homeland Security (DHS) are now preparing their 2025 inventories under this updated direction.
ORIGINAL NEWS STORY:
Biden Administration Finalizes Comprehensive AI Reporting Guidelines for Federal Agencies
The Biden administration has finalized its guidance for federal agencies’ 2024 artificial intelligence (AI) use case inventories. The guidance introduces a more structured process for cataloging AI applications across the federal government, with inventories due December 16, 2024. It builds on earlier governance efforts but adds new requirements to increase consistency, transparency, and accountability.
Expanding Oversight Beyond Defense and Intelligence
The guidance applies primarily to agencies outside the Department of Defense (DoD) and the Intelligence Community. The reporting framework was first created under a December 2020 executive order (Executive Order 13960) and later codified into law through the Advancing American AI Act. The Biden administration has updated it to address growing risks and complexities in AI use across government operations. The final version narrows exclusions for certain use cases and adds a process for agencies to request extensions when they cannot yet meet risk management standards.
Standardized Reporting and Greater Transparency
Under the new rules, federal agencies must disclose their AI use cases using a standardized format through an OMB-managed platform. Each agency must also publish a machine-readable CSV file of all publicly releasable use cases on its website. This effort increases public transparency by showing how AI supports government functions and what safeguards exist to prevent misuse.
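To make the format concrete, here is a minimal Python sketch of how an agency might generate such a machine-readable CSV. The filename, column names, and example record are illustrative assumptions, not OMB's official inventory schema.

```python
import csv

# Hypothetical inventory columns -- illustrative only, not the official OMB schema.
FIELDS = ["use_case_name", "bureau", "purpose", "stage_of_development"]

# One invented record for a publicly releasable use case.
use_cases = [
    {
        "use_case_name": "Benefits inquiry chatbot",
        "bureau": "Example Bureau",
        "purpose": "Answer routine public questions about benefits",
        "stage_of_development": "In production",
    },
]

# Write the inventory as a machine-readable CSV suitable for posting on an
# agency website.
with open("ai_use_case_inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(use_cases)
```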
Agencies must include detailed information about their AI systems, especially those that are safety-impacting or rights-impacting. They must specify whether each system handles personally identifiable information (PII), uses custom code, or disseminates public information. In addition, instead of deleting retired systems, agencies must now mark them as “no longer in use.” This requirement helps preserve a full historical record of AI system lifecycles across the government.
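Continuing the hypothetical inventory above, the sketch below shows how those disclosure fields and the "no longer in use" rule might be represented. The field names and values are assumptions for illustration, not the official schema.

```python
# Hypothetical per-use-case record with the disclosure fields described above.
# All field names and values are illustrative assumptions.
record = {
    "use_case_name": "Benefits inquiry chatbot",
    "is_safety_or_rights_impacting": False,
    "handles_pii": True,               # personally identifiable information
    "uses_custom_code": False,
    "disseminates_public_info": True,
    "status": "in use",
}

def retire(record: dict) -> dict:
    """Mark a retired system as 'no longer in use' instead of deleting it,
    preserving a full historical record of the system's lifecycle."""
    record["status"] = "no longer in use"
    return record

retire(record)
```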
Flexibility and Accountability
The updated guidance also gives agencies limited flexibility. They can request waivers for certain risk management requirements or determine that specific use cases initially marked as high-risk no longer meet that definition. However, agencies must publicly document and justify these decisions, ensuring accountability remains intact.
Meanwhile, OMB tightened the definitions for excluded categories of AI applications. National security and intelligence use cases remain exempt, but routine or repetitive tasks that rely on commercial AI tools must now be reported. This change ensures that even small, everyday applications of AI are visible in public inventories.
Collaboration and Accessibility
As agencies prepare to meet the December 16 deadline, they are encouraged to use plain language and avoid excessive acronyms in their inventories, making the reports accessible to the public and helping citizens understand how AI is being deployed in government. The guidance also instructs agencies to work with the Chief Artificial Intelligence Officers (CAIO) Council to share best practices and improve interagency collaboration. By standardizing reporting methods, OMB hopes to strengthen oversight and foster cooperation across departments.
Need Help?
Navigating the evolving AI policy landscape can be challenging. If you have questions about OMB’s AI guidance or other U.S. and global AI regulations, reach out to BABL AI. Their Audit Experts can help you understand compliance obligations, assess risks, and develop responsible AI strategies tailored to your organization.