UPDATE — JULY 2025: This article remains accurate as a record of the policy described below, but U.S. federal AI policy has since changed. While the Biden-era OMB AI guidance is still publicly available, the Trump administration repealed it in January 2025. Still, the policy’s goals and structure continue to influence ongoing debates about AI governance in the United States.
ORIGINAL NEWS STORY:
White House Directs Federal Agencies to Prioritize Responsible AI Governance and Innovation
U.S. Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) has released its first government-wide policy aimed at mitigating the risks associated with AI while harnessing its benefits. The announcement marks a pivotal step in fulfilling a key requirement of President Joe Biden’s landmark AI Executive Order, which mandated comprehensive actions to bolster AI safety and security, safeguard privacy, promote equity and civil rights, ensure consumer and worker protection, foster innovation and competition, and elevate American leadership worldwide.
Federal agencies have completed all of the tasks outlined in the Executive Order to date, including the 150-day actions, further reinforcing the Biden-Harris Administration’s commitment to responsible AI innovation. The unveiling of OMB’s policy underscores that commitment.
Federal Agencies Must Implement Safeguards
The OMB policy gave federal agencies a deadline of December 1, 2024, to adopt safeguards for their use of AI. These safeguards include:
- Risk and impact assessments
- Testing protocols
- Ongoing monitoring
The goal was to protect Americans’ rights and safety in areas like healthcare, education, housing, and employment.
Public Transparency and Oversight
The policy emphasized greater transparency. Agencies must:
- Publish detailed inventories of their AI use
- Identify systems that affect public rights or safety
- Explain how they manage those risks
They must also report on metrics tied to sensitive AI systems and clearly disclose any exemptions, along with the reasons for them.
To ensure internal oversight, each agency must:
- Appoint a Chief AI Officer
- Establish an AI Governance Board
These roles and structures were designed to strengthen accountability and responsible leadership in how agencies use AI.
Building Capacity and Encouraging Innovation
To support these efforts, the federal government pledged to hire 100 AI professionals by summer 2024. Agencies were encouraged to explore how AI could help address major challenges in areas such as climate change and public health, but only with strong safeguards in place.
The OMB also aimed to remove outdated policies that made it harder for agencies to use AI tools effectively.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI; their Audit Experts are ready to provide valuable assistance.

