UPDATE — JULY 2025: This article's account of the policy remains accurate following the latest developments in federal AI policy. Although the Biden-era OMB policy remains publicly available, it was repealed by the Trump administration in January 2025. The policy's original goals, structure, and implications nonetheless continue to shape ongoing debates about U.S. AI governance.
ORIGINAL NEWS STORY:
White House Directs Federal Agencies to Prioritize Responsible AI Governance and Innovation
U.S. Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) has released its first government-wide policy aimed at mitigating the risks of AI while harnessing its benefits. The announcement marks a pivotal step in fulfilling a key component of President Joe Biden's AI Executive Order, which mandated comprehensive actions to bolster AI safety and security, safeguard privacy, advance equity and civil rights, protect consumers and workers, foster innovation and competition, and strengthen American leadership worldwide.
Federal agencies have completed all of the tasks outlined in the Executive Order to date, including the 150-day actions, further reinforcing the Biden-Harris Administration's commitment to responsible AI innovation. The release of OMB's policy underscores that commitment.
The OMB’s Approach
The newly released policy takes a multifaceted approach, focusing on addressing risks from AI use, expanding transparency around AI usage, promoting responsible AI innovation, growing the AI workforce, and strengthening AI governance. Under the directive, federal agencies must implement concrete safeguards by December 1, 2024, to ensure that AI applications uphold Americans' rights and safety. These safeguards include rigorous assessments, testing protocols, and ongoing monitoring to mitigate the risks of algorithmic discrimination and to promote transparency in AI deployment across sectors ranging from healthcare and education to employment and housing.
The policy also places a premium on public transparency, requiring federal agencies to release expanded inventories of their AI use cases. These inventories will identify the AI applications that affect rights or safety and explain the measures taken to mitigate the associated risks. In addition, agencies must report metrics on sensitive AI use cases and notify the public of any AI exemptions they grant, along with justifications.
Conclusion
OMB's policy also seeks to eliminate unnecessary barriers to federal agencies' AI initiatives. It underscores AI's transformative potential for addressing societal challenges, such as the climate crisis and public health emergencies, and encourages agencies to explore innovative AI applications backed by robust safeguards. The administration has also pledged to recruit 100 AI professionals by Summer 2024.
Moreover, to strengthen AI governance, agencies are directed to designate Chief AI Officers and establish AI Governance Boards to oversee AI deployment and governance within their respective domains. These measures aim to ensure accountability, leadership, and oversight of AI implementation across federal agencies, bolstering the government's capacity to navigate the complexities of AI adoption while safeguarding the public interest.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions or concerns about how it will affect you. Don't hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.