UPDATE — JULY 2025: The NSA's Cybersecurity Information Sheet (CSI) "Deploying AI Systems Securely," released April 15, 2024, remains a cornerstone for secure AI deployment. It continues to guide National Security System owners and Defense Industrial Base stakeholders on zero trust, secure design, lifecycle validation, and operational hardening.
In May 2025, the NSA expanded its AI security guidance with a second CSI, "AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems." This document focuses on the AI data lifecycle, emphasizing provenance, integrity, access controls, and supply chain security, and it explains how organizations can apply layered safeguards across the entire process.
Together, these two CSIs provide comprehensive and up-to-date recommendations. As a result, organizations now have federal guidance to protect both AI systems and the data they rely on.
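To make the data-lifecycle recommendations concrete, one common approach is to record provenance for each training file at ingestion time: where it came from, when it arrived, and a digest that later stages can re-check. The sketch below is illustrative only; the file names, field names, and JSON layout are assumptions, not a format defined in either CSI.

```python
# Minimal sketch: record provenance for a training dataset so downstream
# stages can verify where the data came from and that it has not changed.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def provenance_record(dataset_path: str, source: str) -> dict:
    """Build a provenance entry: source, ingestion time, and SHA-256 digest."""
    data = Path(dataset_path).read_bytes()
    return {
        "dataset": dataset_path,
        "source": source,  # e.g. an internal system or a vendor feed
        "sha256": hashlib.sha256(data).hexdigest(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Hypothetical file name and source URL, used only for illustration.
    record = provenance_record("train.csv", "https://example.org/dataset")
    Path("train.csv.provenance.json").write_text(json.dumps(record, indent=2))
```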
ORIGINAL NEWS STORY:
NSA Unveils Comprehensive Guidance for Secure Deployment of AI Systems
The National Security Agency (NSA) has unveiled a Cybersecurity Information Sheet (CSI) titled "Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems," offering comprehensive guidance for organizations deploying Artificial Intelligence (AI) systems. Released on April 15, 2024, the CSI serves as a roadmap for National Security System owners and Defense Industrial Base companies, especially those deploying and operating AI systems developed by external entities.
The CSI aims to strengthen the security of AI systems by focusing on confidentiality, integrity, and availability. It addresses known vulnerabilities and outlines defenses against malicious activity. Its scope includes machine learning-based systems deployed on-premises or in private clouds—particularly in high-threat environments—but does not extend to third-party AI deployments.
Organizations Must Remain Proactive
To secure deployment environments, organizations are urged to define clear roles, align with IT standards, and design robust architectures. Best practices include encrypting sensitive data, patching vulnerabilities quickly, and applying zero trust principles. Strong authentication and access controls are also essential.
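As a concrete illustration of the encryption side of those practices, the sketch below encrypts a sensitive file at rest using the open-source `cryptography` package. The package choice, file names, and key handling shown here are assumptions for illustration; the CSI does not prescribe specific tooling.

```python
# Minimal sketch: encrypt a sensitive file at rest with Fernet (symmetric
# authenticated encryption) from the `cryptography` package.
from cryptography.fernet import Fernet

# In practice the key would come from a hardware security module or a managed
# key vault, never from a file stored next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("model_config.json", "rb") as f:  # hypothetical file name
    plaintext = f.read()

ciphertext = fernet.encrypt(plaintext)

with open("model_config.json.enc", "wb") as f:
    f.write(ciphertext)

# Only services that hold the key can decrypt, which is how encryption and
# access controls reinforce each other.
assert fernet.decrypt(ciphertext) == plaintext
```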
Throughout the AI lifecycle, the guidance stresses validation steps like version control, integrity checks, supply chain security, and API protection. Continuous monitoring through logging and detection of unauthorized changes helps ensure system reliability. Protecting model weights through hardware safeguards and isolated storage is also advised.
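One way to apply the integrity-check and model-weight-protection ideas is to verify each weight file against a digest manifest before loading it. The sketch below is a minimal illustration under that assumption; the manifest format and file names are not part of the NSA guidance.

```python
# Minimal sketch: verify model weight files against a SHA-256 manifest
# recorded at release time, and refuse to load anything that does not match.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(weights_dir: str, manifest_file: str) -> bool:
    # Hypothetical manifest layout: {"model.bin": "<hex digest>", ...}
    manifest = json.loads(Path(manifest_file).read_text())
    for name, expected in manifest.items():
        if sha256_of(Path(weights_dir) / name) != expected:
            print(f"Integrity failure: {name}")
            return False
    return True

if __name__ == "__main__":
    ok = verify_weights("weights/", "weights_manifest.json")
    print("Weights verified" if ok else "Do not load these weights")
```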
Conclusion
For operations and maintenance, the CSI highlights role-based access, multi-factor authentication, user training, audits, and penetration testing. Regular updates, high availability planning, and disaster recovery steps like immutable backups are critical.
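A deny-by-default, role-based check like the one sketched below captures the spirit of the role-based access recommendation when gating an administrative action such as restoring from a backup. The role names and permission table are purely illustrative assumptions, not taken from the CSI.

```python
# Minimal sketch: deny-by-default role-based access control for
# operations-and-maintenance actions.
ROLE_PERMISSIONS = {
    "ml-engineer": {"deploy_model", "view_logs"},
    "sre": {"view_logs", "restore_backup"},
    "auditor": {"view_logs"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Example: an auditor may read logs but cannot trigger a backup restore.
assert is_authorized("auditor", "view_logs")
assert not is_authorized("auditor", "restore_backup")
```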
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don't hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.