U.S. Advances AI Governance with New Safety Guidelines, Talent Initiatives, and Global Leadership Efforts

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/31/2024
In News

In a significant stride toward managing the opportunities and challenges posed by artificial intelligence (AI), President Joe Biden’s administration announced a series of comprehensive actions and developments following the issuance of a landmark Executive Order (EO) nine months ago. The EO aims to ensure that the United States leads in AI innovation while effectively managing the associated risks. The latest update includes new commitments from Apple and highlights federal agencies’ achievements in meeting the EO’s requirements.
President Biden’s Executive Order built on voluntary commitments previously secured from 15 leading U.S. AI companies. The inclusion of Apple in this initiative further solidifies these commitments as foundational elements of responsible AI innovation. The administration reported that federal agencies had successfully completed all 270-day actions mandated by the EO, demonstrating a coordinated and robust approach to AI governance.
Significant progress has been made across various domains, addressing safety, security, privacy, equity, and consumer protection concerns. The National Institute of Standards and Technology (NIST) has been at the forefront, releasing critical safety guidelines and frameworks. These include managing the risks associated with generative AI and dual-use foundation models, which are AI systems capable of being used for both beneficial and harmful purposes. These guidelines aim to prevent the misuse of AI technologies in ways that could endanger individuals, public safety, or national security.
The Department of Energy (DOE) has also expanded its AI testbeds and model evaluation tools, focusing on safeguarding critical infrastructure and exploring innovative AI systems. These efforts are part of a broader initiative to enhance the United States’ AI capabilities in energy security and national security sectors. The National Science Foundation (NSF) has launched initiatives to fund researchers in designing AI-ready testbeds, further bolstering the country’s AI research infrastructure.
A notable aspect of the administration’s approach is the emphasis on public engagement and transparency. The AI Safety Institute has released guidelines for managing misuse risks associated with dual-use AI models, seeking public feedback to refine these recommendations. This initiative reflects a commitment to open dialogue and collaborative governance in the rapidly evolving field of AI.
In line with promoting responsible AI development, the administration has launched a government-wide AI Talent Surge. This initiative aims to bring hundreds of AI professionals into government service, enhancing capacity for both national security and non-national security missions. The AI Talent Surge has already resulted in over 200 hires, including specialists in AI policy and technology.
The Department of Commerce’s report on dual-use foundation models, set for release soon, will provide a comprehensive analysis of the benefits, risks, and policy implications associated with widely available AI models. This report is part of a broader effort to develop a strategic framework for managing AI technologies that have significant potential for both societal benefit and harm.
To further support innovation, the administration has invested in various programs, including a $23 million initiative to promote privacy-enhancing technologies. The NSF’s Privacy-preserving Data Sharing in Practice program and the Experiential Learning in Emerging and Novel Technologies program are key components of this strategy, aimed at fostering inclusive learning and expanding AI research capabilities.
On the international front, the U.S. has taken significant steps to lead global efforts in AI governance. This includes a comprehensive plan for U.S. engagement on global AI standards, developed by NIST, and the launch of a global network of AI Safety Institutes. Additionally, the U.S. spearheaded a landmark United Nations General Assembly resolution promoting the safe and secure use of AI to address global challenges.
Need Help?
For those curious about how the Biden administration’s actions and other laws around the world could impact their company, reaching out to BABL AI is recommended. One of their audit experts will gladly provide assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter.