NSA, CISA, and Allies Release Joint Guidance on Securing Data for AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/30/2025
In News

In a sweeping effort to address emerging cybersecurity risks in artificial intelligence, six major intelligence and cybersecurity agencies have jointly released detailed guidance on securing data used to train and operate AI systems. The document, titled “AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems,” was published by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and counterparts in Australia, New Zealand, and the United Kingdom.

The guidance lays out critical best practices for organizations using AI systems, with a focus on safeguarding sensitive, proprietary, or mission-critical data across the AI lifecycle—from planning and data collection to deployment and ongoing monitoring. According to the document, maintaining data integrity and provenance is essential for ensuring reliable AI outcomes and preventing model manipulation through data poisoning or drift.

The report warns that AI models are only as trustworthy as the data they’re trained on, highlighting three primary risks: vulnerabilities in the data supply chain, maliciously modified (or “poisoned”) data, and data drift. These threats can compromise system accuracy and result in unpredictable or harmful outputs.

The guidance emphasizes technical strategies to mitigate these threats, including encryption, cryptographic hashing, digital signatures, access controls, anomaly detection, and secure deletion protocols. It also calls for rigorous metadata management, regular audits for bias and duplication, and the use of secure, certified storage solutions.
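
To make these controls concrete, the short Python sketch below shows how a data provider might hash a training file and sign the digest, and how a consumer could verify both before use. The file name and key handling are illustrative, and the third-party "cryptography" package is an assumption; the guidance recommends these controls but does not prescribe any particular implementation.

# Minimal sketch: hash a training-data file and sign the digest so a
# consumer can verify integrity and provenance before using the data.
# Assumes the third-party package: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_file(path: str) -> bytes:
    """Stream a file through SHA-256 so large datasets never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.digest()


# Provider side: hash the dataset and sign the digest.
provider_key = Ed25519PrivateKey.generate()  # in practice, a managed long-term key
digest = sha256_file("train_split.parquet")  # illustrative file name
signature = provider_key.sign(digest)

# Consumer side: recompute the digest and verify it against the provider's
# published public key before training on the data.
public_key = provider_key.public_key()
try:
    public_key.verify(signature, sha256_file("train_split.parquet"))
    print("Dataset digest and signature verified.")
except InvalidSignature:
    print("WARNING: dataset may have been modified in transit or at rest.")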

Notably, the document sheds light on how inexpensive and low-effort it can be for bad actors to poison widely used web-scale datasets, sometimes for as little as $60. To counteract this, the authors recommend dataset verification through hash checks, regular re-scraping by curators, and formal certification by both data and model providers.
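
As an illustration of the hash-check mitigation, the sketch below verifies downloaded dataset shards against a curator-published manifest of SHA-256 digests. The manifest file name and JSON format are assumptions made for the example; the guidance describes the practice but not a specific format.

# Minimal sketch: verify downloaded dataset shards against a curator-published
# manifest of digests (assumed format: {"shard-0001.jsonl": "<sha256 hex>", ...}).
import hashlib
import json
import sys
from pathlib import Path


def sha256_hex(path: Path) -> str:
    """Compute a file's SHA-256 digest, streaming in 64 KiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_against_manifest(manifest_path: str) -> bool:
    """Flag any shard whose digest no longer matches the curator's manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    ok = True
    for filename, expected in manifest.items():
        if sha256_hex(Path(filename)) != expected:
            print(f"MISMATCH (possible poisoning or corruption): {filename}")
            ok = False
    return ok


if __name__ == "__main__":
    # Exit non-zero so a data pipeline can halt before training on bad shards.
    sys.exit(0 if verify_against_manifest("manifest.json") else 1)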

The report also aligns with existing frameworks such as the NIST AI Risk Management Framework and references Executive Order 14179, which calls for U.S. AI systems to be developed free from ideological bias.

As AI adoption accelerates across critical infrastructure and cybersecurity, the intelligence community underscores the importance of robust, proactive data security measures. The report concludes that every phase of the AI lifecycle must be secured to safeguard both the technology and the missions it supports.

Need Help?

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
