Biden Administration Finalizes Comprehensive AI Reporting Guidelines for Federal Agencies

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/28/2024
In News

The Biden administration has finalized its guidance for federal agencies’ 2024 artificial intelligence (AI) use case inventories, introducing a more comprehensive and structured process for cataloging AI applications across government. The guidance lays out the requirements for federal agencies to inventory their AI use cases, with a submission deadline of December 16, 2024. This new directive builds on previous AI governance efforts but introduces significant updates aimed at improving consistency, transparency, and accountability across all participating agencies.


The guidance primarily targets non-Department of Defense (DoD) and non-intelligence community agencies, as it refines how AI use cases should be reported, categorized, and disclosed to the public. This reporting requirement, first established under a 2020 Trump-era executive order and later codified into law, has evolved under the Biden administration to address the growing complexities and risks associated with AI deployment in government operations. The final version of the guidance incorporates several key changes from earlier drafts, including a narrowing of exclusions for certain use cases and provisions for requesting deadline extensions for compliance with risk management practices.


Under the new guidelines, federal agencies must disclose their AI use cases using a standardized format managed by the Office of Management and Budget (OMB). Each agency is required to submit an inventory through an OMB-managed platform and subsequently publish a machine-readable CSV file of all publicly releasable use cases on their respective websites. This effort is aimed at enhancing transparency by providing the public with detailed information about how AI is being used across government, and the safeguards in place to ensure ethical and responsible AI applications.
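To make the machine-readable requirement concrete, the sketch below shows how an agency might generate such a CSV of publicly releasable use cases. The field names here are illustrative assumptions, not the official OMB schema; they simply mirror details the guidance asks agencies to report (development stage, PII handling, and retired status).

```python
import csv
import io

# Hypothetical column names -- assumptions for illustration,
# not the official OMB reporting format.
FIELDS = ["use_case_name", "agency", "stage_of_development",
          "contains_pii", "is_rights_impacting", "status"]

rows = [
    {"use_case_name": "Document triage assistant",
     "agency": "Example Agency A",
     "stage_of_development": "Operation and Maintenance",
     "contains_pii": "No",
     "is_rights_impacting": "No",
     "status": "In use"},
    # Retired systems remain in the inventory, marked rather than removed.
    {"use_case_name": "Benefits eligibility screener",
     "agency": "Example Agency B",
     "stage_of_development": "Retired",
     "contains_pii": "Yes",
     "is_rights_impacting": "Yes",
     "status": "No longer in use"},
]

# Write the inventory to an in-memory buffer (an agency would
# publish this as a CSV file on its website instead).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Because the file is plain CSV with a header row, the public (or OMB) can load it with any standard tooling and filter on fields such as rights impact or retirement status.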


A notable change in the final guidance is the requirement for agencies to include additional detailed information about their AI use cases, especially those that are safety-impacting or rights-impacting. Agencies must report granular details about AI applications based on their development stage, including whether the system handles personally identifiable information (PII), whether it involves custom-developed code, and whether it disseminates information to the public. Additionally, agencies are now required to mark AI use cases that have been retired as “no longer in use” rather than removing them from the inventory altogether. This change aims to provide a clearer historical record of AI applications and their lifecycle within government.


The guidance also introduces flexibility for agencies to modify their inventory practices. For example, agencies can now request waivers for certain risk management requirements under OMB’s AI governance memo. They can also determine that some use cases, initially presumed to be rights- or safety-impacting, do not actually meet those criteria. These determinations must be publicly reported, adding another layer of accountability to the process.


Moreover, the final guidance refines the categories of use cases that are excluded from inventory reporting. Research and development (R&D) use cases and those involving national security systems or intelligence community operations remain excluded. However, the finalized document tightens the definitions for other excluded categories, ensuring that more AI applications fall within the reporting scope. For example, the previous draft had excluded one-time or routine tasks carried out using commercial off-the-shelf AI products; the final version now requires agencies to inventory such use cases if they are performed repeatedly or involve related tasks.


As agencies prepare to meet the December 16 deadline, they are also advised to be transparent in their public inventories. The guidance encourages agencies to use plain language and minimize acronyms so the information is accessible to the general public. Additionally, agencies are expected to coordinate with the Chief Artificial Intelligence Officers (CAIO) Council to share best practices and improve interagency collaboration on common AI use cases.


Need Help?


Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don't hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter