UPDATE — SEPTEMBER 2025: Since OMB published its AI acquisition guidance (M-24-18) in October 2024, federal agencies have shifted into the implementation phase. By spring 2025, major departments like DoD, HHS, and DHS began updating procurement processes, adding AI risk assessment checklists, and training acquisition staff. Smaller agencies, however, reported falling behind due to limited resources.
In June 2025, OMB released a progress report showing that more than two-thirds of large agencies were already aligning their acquisitions with M-24-18. To support consistency, OMB announced it would publish an AI Acquisition Playbook later in 2025, developed with GSA’s AI Center of Excellence and NIST, to give contracting officers detailed, practical instructions.
Meanwhile, NIST updated its AI Risk Management Framework (RMF 1.1) in May 2025, and many agencies now use it as the benchmark for risk controls in procurements. GSA also began piloting model AI contract clauses covering explainability, auditability, and vendor data-sharing, with the draft language expected to be adopted government-wide before the end of 2025. DoD issued its own aligned AI acquisition guidance in July 2025, including an “AI Impact Level” classification system to guide contracting decisions.
On Capitol Hill, lawmakers have held hearings on AI procurement accountability and civil liberties, with some pushing to codify OMB’s guidance into law. While no statute has passed yet, oversight pressure is rising as agencies scale up AI acquisitions.
ORIGINAL NEWS STORY:
OMB Issues New Guidance for Responsible AI Acquisition in U.S. Government
The Office of Management and Budget (OMB) has released new guidance to ensure that federal agencies responsibly acquire artificial intelligence (AI) technologies, advancing innovation while managing associated risks. The guidance, titled *Advancing the Responsible Acquisition of Artificial Intelligence in Government* (M-24-18), is part of the Biden-Harris Administration’s broader strategy to strengthen AI safety, protect privacy, and promote civil rights, as outlined in President Biden’s Executive Order on AI.
With over $750 billion in annual federal spending, including $100 billion on IT products and services in 2023, the federal government’s purchasing power has significant influence over technological advancements, including AI. This new memo builds on previous guidance, M-24-10, issued in March 2024, which set the first binding government-wide requirements for AI use. M-24-18 focuses on providing agencies with specific guidelines for the acquisition of AI, ensuring that risks are mitigated and that AI technologies deliver reliable and ethical outcomes.
AI presents both opportunities and challenges, particularly given the complexity of how AI systems are built, trained, and deployed. To address these challenges, M-24-18 introduces best practices and strict requirements for managing AI risks. These guidelines are especially important for AI use cases that affect individual rights or safety.
The OMB memo emphasizes the importance of involving privacy officials early in the AI acquisition process to identify and mitigate privacy risks, ensuring compliance with applicable laws. Agencies are instructed to collaborate with vendors to assess the specific AI technologies being acquired and to apply additional risk management procedures when dealing with rights- or safety-impacting AI.
The OMB also directs agencies to negotiate clear contractual terms to ensure vendors provide detailed information that allows for thorough risk assessment and performance evaluation. These contracts should safeguard government data and intellectual property while ensuring that AI systems used in decision-making processes that affect the public are safe and transparent. In addition, the memo promotes the use of outcome-based acquisition techniques, which focus on driving performance while continuously managing risks.
A key component of the guidance is promoting a competitive AI marketplace that encourages innovation and diversity among suppliers. This ensures that agencies have access to the most advanced AI solutions from a wide range of vendors, helping to reduce the risks associated with vendor lock-in and to drive better value for the government.
To achieve this, agencies are encouraged to incorporate acquisition principles that minimize dependency on a single vendor and to prioritize interoperability and transparency during market research and vendor selection. The guidance also suggests leveraging innovative acquisition strategies that incentivize strong contractor performance and support the agency’s mission objectives.
Recognizing the rapidly evolving nature of AI technology and the novel risks it presents, the guidance calls for increased collaboration among federal agencies. Cross-functional teams, including experts in AI, acquisition, cybersecurity, privacy, and civil liberties, are essential to informing strategic AI acquisition and ensuring that the technologies are deployed effectively and responsibly.
These collaborations will help agencies identify and prioritize AI investments that align with their missions and share lessons learned across the federal government. By pooling knowledge and resources, agencies can better navigate the challenges associated with AI procurement and improve the effectiveness of future AI policies and procedures.
Need Help?
For those curious about how the Biden Administration’s AI policies, or other AI laws around the world, could impact their company, reaching out to BABL AI is recommended. One of their audit experts will gladly provide assistance.