Biden Issues First National Security Memorandum on AI to Strengthen U.S. Leadership and Global Standards for Safe, Ethical AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/05/2024
In News

UPDATE — SEPTEMBER 2025: The U.S. AI Safety Institute (USAISI), housed at NIST, has released draft guidance on AI red-teaming and model evaluation, positioning itself as the lead for interagency security coordination. NIST also updated its AI Risk Management Framework in mid-2025, tailoring it to national security contexts, including dual-use and mission-critical systems. Meanwhile, the Departments of Commerce and Defense strengthened export controls on advanced AI chips in April 2025 to address vulnerabilities in the semiconductor supply chain, a core focus of the NSM. Defense agencies also launched pilot AI assurance programs, while intelligence agencies embedded adversarial threat assessments into procurement rules.

The first global meeting of AI safety institutes was held in San Francisco in spring 2025, bringing together partners from the EU, U.K., Canada, Japan, Singapore, and South Korea. The U.S. also built on the G7’s 2023 Hiroshima AI Process by adopting a shared AI Systems Use Code, and promoted the Political Declaration on Responsible Military AI, which gained over 65 signatories at a July 2025 U.N. session.

To ensure accountability, the Office of Management and Budget (OMB) issued follow-up guidance in summer 2025 requiring agencies to track AI deployments under the NSM’s governance framework. The Department of Commerce also delivered its competitiveness report in June, stressing workforce pipelines, compute access, and small-business participation in defense AI.


ORIGINAL NEWS STORY:


Biden Issues First National Security Memorandum on AI to Strengthen U.S. Leadership and Global Standards for Safe, Ethical AI


President Joe Biden issued the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI), outlining a detailed approach to ensure AI technology supports U.S. national security goals responsibly. The NSM directs the U.S. government to lead global efforts in developing safe, secure, and trustworthy AI. This directive builds upon Biden’s previous executive actions aimed at promoting responsible AI innovation.


The NSM outlines three key objectives: establishing the U.S. as a leader in responsible AI, utilizing cutting-edge AI for national security while respecting democratic values, and advancing global consensus on AI governance. Biden’s initiative signals an understanding of AI’s dual role as both a national security tool and a potential risk if not managed carefully.


A significant focus of the NSM is the strengthening of U.S. AI capabilities. Recognizing that powerful AI models require substantial computational resources, the NSM mandates improved security for semiconductor supply chains and the development of advanced AI-enabled supercomputers. These actions are designed to protect the U.S. AI ecosystem against foreign interference and ensure the country maintains an edge in AI innovation.


The memorandum assigns high priority to counterintelligence efforts, directing relevant agencies to assist AI developers in safeguarding their technologies from espionage and cyber threats. It formally designates the U.S. AI Safety Institute as the primary point of contact for the AI industry in matters of security, underscoring the Institute’s central role in working with agencies like the Department of Defense and Department of Energy to evaluate emerging AI technologies.


The NSM emphasizes that any AI deployment by the federal government must align with democratic values. To operationalize this, the memorandum directs the creation of an AI Governance and Risk Management Framework, establishing guidelines for ethical AI use in national security contexts. The framework provides a structured approach for agencies to monitor and mitigate AI risks, including privacy invasions, bias, and discrimination.


The memorandum also advocates for AI’s role in enhancing public safety, tasking each agency with assessing and managing potential AI risks, particularly those involving personal privacy, discrimination, or transparency. This commitment to accountability and human rights aims to lay a foundation for responsible AI use in government operations.


Acknowledging AI’s international implications, the NSM emphasizes the importance of global AI governance. Over the past year, the Biden administration has spearheaded several initiatives, such as developing an International Code of Conduct for AI alongside G7 allies and championing the Political Declaration on Responsible Military AI. The NSM instructs U.S. agencies to work with allies to develop a cohesive AI governance framework that upholds human rights and aligns with international law.


To further strengthen these efforts, the U.S. plans to host the inaugural meeting of a global network of AI safety institutes in San Francisco, fostering a unified approach to AI standards and safety assessments worldwide. Through these diplomatic efforts, the Biden administration aims to lead a coalition that establishes norms for safe AI development.


The NSM directs various agencies to take immediate actions, including comprehensive evaluations of AI-related risks and updates to their internal guidelines to ensure alignment with the memorandum’s objectives. The administration plans to conduct an economic assessment of the AI sector’s competitiveness, reinforcing the importance of a robust AI ecosystem that balances innovation with security.


Need Help?


If you’re wondering how AI regulations could impact you in this ever-changing landscape, don’t hesitate to reach out to BABL AI. Their team of Audit Experts is ready to offer valuable insight and answer any questions or concerns you may have.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter.