UPDATE — SEPTEMBER 2025: Since OMB published its AI acquisition guidance (M-24-18) in October 2024, federal agencies have shifted into the implementation phase. By spring 2025, major departments like DoD, HHS, and DHS began updating procurement processes, adding AI risk assessment checklists, and training acquisition staff. Smaller agencies, however, reported falling behind due to limited resources.
In June 2025, OMB released a progress report showing that more than two-thirds of large agencies were already aligning their acquisitions with M-24-18. To support consistency, OMB announced it would publish an AI Acquisition Playbook later in 2025, developed with GSA’s AI Center of Excellence and NIST, to give contracting officers detailed, practical instructions.
Meanwhile, NIST updated its AI Risk Management Framework (RMF 1.1) in May 2025, which many agencies are now using as the benchmark for risk controls in procurements. The GSA also began piloting model AI contract clauses on explainability, auditability, and vendor data-sharing, with the draft clauses expected to be rolled out government-wide before the end of 2025. DoD issued its own aligned AI acquisition guidance in July 2025, including an “AI Impact Level” classification system to guide contracting decisions.
On Capitol Hill, lawmakers have held hearings on AI procurement accountability and civil liberties, with some pushing to codify OMB’s guidance into law. While no statute has passed yet, oversight pressure is rising as agencies scale up AI acquisitions.
ORIGINAL NEWS STORY:
OMB Issues New Guidance for Responsible AI Acquisition in U.S. Government
The Office of Management and Budget (OMB) has released new guidance to ensure that federal agencies responsibly acquire artificial intelligence (AI) technologies, advancing innovation while managing associated risks. The guidance, titled *Advancing the Responsible Acquisition of Artificial Intelligence in Government* (M-24-18), is part of the Biden-Harris Administration’s broader strategy to strengthen AI safety, protect privacy, and promote civil rights, as outlined in President Biden’s Executive Order on AI.
Building Accountability Into AI Procurement
The U.S. government spends over $750 billion annually on contracts, including more than $100 billion on IT products and services. This purchasing power gives agencies a major role in shaping how AI is developed and deployed. M-24-18 expands on the March 2024 M-24-10 memo, which introduced the first government-wide rules for federal AI use. The new guidance focuses on acquisition, ensuring agencies manage risks while encouraging innovation. AI offers significant benefits, but it also presents new challenges related to data handling, bias, and accountability. M-24-18 directs agencies to adopt best practices for identifying, evaluating, and mitigating risks before awarding contracts for AI technologies.
Stronger Safeguards and Collaboration
The memo requires agencies to involve privacy officials early in the acquisition process to reduce potential harm. It also instructs them to collaborate with vendors to understand system design, data sources, and potential bias. Contracts must include clear requirements for transparency, explainability, and data protection, giving agencies the ability to audit systems and ensure accountability throughout an AI system’s lifecycle. OMB urges agencies to use outcome-based acquisitions that reward measurable results rather than static deliverables. This approach helps agencies adapt to evolving risks while maintaining performance and oversight.
Encouraging Competition and Innovation
A major goal of M-24-18 is to create a diverse, competitive AI marketplace. Agencies are encouraged to seek multiple vendors to avoid overreliance on a single provider. They must also consider interoperability and open standards when evaluating bids to promote transparency and long-term value. The guidance calls for agencies to apply innovative acquisition techniques that improve contractor performance and ensure mission alignment. This includes pilot projects, phased testing, and shared performance benchmarks.
Cross-Agency Collaboration
To manage AI’s complexity, M-24-18 promotes coordination among technical, legal, and ethical experts. It encourages cross-functional teams—bringing together specialists in AI, cybersecurity, privacy, and civil rights—to guide procurement and implementation. These teams help agencies identify the right use cases, set measurable goals, and share lessons across government. By pooling expertise, agencies can build more resilient and trustworthy AI systems while maintaining public confidence in federal technology use.
Need Help?
For those curious about how the Biden Administration’s AI policies, along with other AI laws around the world, could impact their company, reaching out to BABL AI is recommended. One of their audit experts will gladly provide assistance.