Singapore’s Cyber Security Agency Releases Comprehensive Guides to Enhance AI System Security Amid Rising Cyber Threats

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/06/2024
In News

UPDATE — SEPTEMBER 2025: Singapore’s Cyber Security Agency (CSA) continues to position itself as a global leader in AI security, following the release of its Guidelines on Securing AI Systems and the Companion Guide in October 2024. While the main Guidelines remain unchanged, the Companion Guide—intended as a living document—has already received its first incremental updates in 2025.

In May 2025, CSA issued clarifications and expanded examples around adversarial robustness testing and secure model retraining, two areas where industry pilots had flagged gaps. The agency also added references to newer international frameworks, including the updated NIST AI Risk Management Framework (January 2025) and draft standards from ISO/IEC JTC 1/SC 42 on AI security. These additions mark a deliberate effort to align Singapore’s approach with global best practices.

The CSA has also confirmed that sector-specific pilots are underway. Over the course of 2025, the Smart Nation and Digital Government Group has been applying the guidelines in healthcare AI and smart-city deployments, generating new use-case-driven recommendations. CSA has signaled that lessons learned from these pilots will be folded into the next major revision of the Companion Guide, expected in 2026.

At the ASEAN Digital Ministers’ Meeting in July 2025, Singapore announced that its AI security framework would serve as a reference point for a forthcoming ASEAN AI Governance and Security framework. This move elevates the CSA’s guidelines from a national initiative to a potential regional benchmark for AI cybersecurity.


ORIGINAL NEWS STORY:


Singapore’s Cyber Security Agency Releases Comprehensive Guides to Enhance AI System Security Amid Rising Cyber Threats


Singapore’s Cyber Security Agency (CSA) has published an extensive set of guidelines and a companion guide aimed at securing artificial intelligence (AI) systems, a move signaling the nation’s proactive stance on managing cybersecurity risks associated with AI. The new resources, titled “Guidelines on Securing AI Systems” and the “Companion Guide on Securing AI Systems,” were developed in consultation with AI and cybersecurity experts across sectors. The CSA emphasizes the need for both system owners and developers to adopt a lifecycle approach to AI security to address potential vulnerabilities effectively.


AI systems, while beneficial across sectors, introduce unique cybersecurity risks. According to the CSA, these systems are vulnerable to a range of novel attacks, including adversarial machine learning, which maliciously alters AI model behavior. The CSA’s publications focus on enhancing trust in AI by encouraging organizations to integrate robust security practices and risk management strategies throughout the AI system lifecycle, from planning and development to deployment and end-of-life.
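
To make that threat concrete, here is a minimal sketch of an evasion-style attack in the spirit of the fast gradient sign method, run against a toy logistic-regression "model" with hand-picked weights. Everything here (the weights, the input, the attack budget) is an illustrative assumption, not an example from the CSA guides:

```python
# A toy evasion attack: nudge each input feature against the gradient of the
# model's score so a confidently positive input gets classified as negative.
import numpy as np

# Hypothetical "model": p(y=1 | x) = sigmoid(w . x + b)
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def predict(x: np.ndarray) -> float:
    return sigmoid(w @ x + b)

x = np.array([0.5, -0.2, 0.3])        # benign input
p = predict(x)
print(f"clean score:       {p:.3f}")  # ~0.80, classified positive

# For a sigmoid score, dp/dx = p * (1 - p) * w, so stepping each feature
# against the sign of the gradient lowers the score fastest (FGSM-style).
grad = p * (1.0 - p) * w
epsilon = 0.5                          # max change allowed per feature
x_adv = x - epsilon * np.sign(grad)
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.35, the label flips
```

Even on this toy model, a small bounded change to each feature flips the decision; scaled up to deep networks, the same principle underpins the adversarial attacks the guidelines warn about.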


Key Principles in the AI Security Guidelines


The “Guidelines on Securing AI Systems” outline several key principles for AI security. They call for AI systems to be designed and configured with security in mind from the outset, an approach known as “secure by design” and “secure by default.” The CSA notes that this helps organizations preemptively manage security risks before they evolve into significant threats. Unlike traditional software, which is generally rule-based, AI systems depend on machine learning models such as neural networks, making them more adaptive yet also more vulnerable to manipulation.

A foundational element of the guidelines is conducting a thorough risk assessment before implementing an AI solution. Risk assessments allow organizations to evaluate potential threats, prioritize security resources, and customize defense strategies based on specific use cases. The guidelines recommend continuous monitoring and feedback loops to maintain security as AI systems evolve.
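
As a sketch of what the output of such an assessment might look like, the snippet below scores a few hypothetical AI-specific threats with a simple likelihood-times-impact scheme. The threat names and the 1-to-5 scales are assumptions chosen for illustration, not CSA's methodology:

```python
# A minimal risk register: score hypothetical threats so that defensive
# effort can be prioritized before the AI system is deployed.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Training-data poisoning via third-party dataset", 3, 5),
    Risk("Prompt injection against the deployed model API", 4, 4),
    Risk("Model theft through unrestricted inference access", 2, 4),
]

# Highest-scoring risks get security resources first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.threat}")
```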


A Full AI Lifecycle Approach


CSA breaks AI security into five stages:


  1. Planning and Design: Organizations should build security awareness across all teams involved in AI development. This stage includes running detailed risk assessments to spot vulnerabilities as early as possible.

  2. Development: Because AI depends heavily on external data, APIs, and third-party models, supply-chain security is a major concern. CSA recommends strong supplier vetting, software bills of materials (SBOMs), and routine vulnerability scans to catch weaknesses introduced through external components (see the inventory sketch after this list).

  3. Deployment: Before deployment, organizations should secure their infrastructure and prepare incident-response procedures. These steps help teams react quickly when unexpected model behavior appears.

  4. Operations and Maintenance: CSA highlights the need to continuously monitor AI inputs, outputs, and system performance. Teams should watch for signs of adversarial attacks, data drift, or model degradation (a minimal drift check follows this list). Secure update processes are also critical to prevent new weaknesses during routine maintenance.

  5. End of Life: When retiring an AI system, organizations must securely dispose of models and datasets, particularly when handling sensitive information. Proper disposal prevents data leaks and unauthorized reuse.
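
To illustrate the supply-chain concern raised in the Development stage, here is a bare-bones dependency inventory for a Python-based AI service. Real SBOMs use standard formats such as SPDX or CycloneDX; this sketch only shows the kind of component record they capture:

```python
# Enumerate installed Python packages as a minimal, machine-readable
# component list that a vulnerability scanner could diff against advisories.
import json
from importlib import metadata

inventory = [
    {"name": dist.metadata["Name"], "version": dist.version}
    for dist in metadata.distributions()
]

print(json.dumps(sorted(inventory, key=lambda c: str(c["name"]).lower()), indent=2))
```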

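For the continuous monitoring described in the Operations and Maintenance stage, the sketch below checks a production feature for data drift using the population stability index (PSI). The synthetic data and the 0.2 alert threshold, a common rule of thumb, are assumptions rather than CSA-prescribed values:

```python
# Compare the live distribution of a feature against a reference sample
# retained from training time; a large PSI suggests the inputs have drifted.
import numpy as np

def population_stability_index(reference, live, bins=10):
    edges = np.histogram_bin_edges(reference, bins=bins)
    live = np.clip(live, edges[0], edges[-1])  # keep outliers in the end bins
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Guard against empty bins before taking logs
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 5000)  # feature values seen during training
live = rng.normal(0.6, 1.2, 5000)       # shifted distribution in production

psi = population_stability_index(reference, live)
print(f"PSI = {psi:.3f}")
if psi > 0.2:
    print("ALERT: significant drift; investigate inputs and consider retraining")
```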

Companion Guide Offers Practical Controls and Case Studies


The Companion Guide supplements the main Guidelines with detailed controls and real-world examples. It includes both traditional cybersecurity practices—such as secure coding, access control, and logging—and AI-specific measures like adversarial testing, model hardening, and input validation. CSA designed the guide as a living document, meaning it will be updated as new threats, tools, and best practices emerge. It also references tools such as the MITRE ATLAS framework to help organizations better understand the threat landscape.
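
As a taste of the AI-specific controls the guide covers, here is a simple input-validation gate for a hypothetical model endpoint that takes numeric features. The feature names and bounds are invented for illustration; in practice such checks would sit alongside adversarial testing and rate limiting:

```python
# Reject malformed or out-of-range inputs before they ever reach the model.
from typing import Mapping

FEATURE_BOUNDS = {
    "age": (0.0, 120.0),
    "transaction_amount": (0.0, 1_000_000.0),
}

def validate_input(features: Mapping[str, float]) -> list[str]:
    """Return a list of violations; an empty list means the input may proceed."""
    problems = []
    for name, (lo, hi) in FEATURE_BOUNDS.items():
        if name not in features:
            problems.append(f"missing feature: {name}")
        elif not lo <= float(features[name]) <= hi:
            problems.append(f"{name}={features[name]} outside [{lo}, {hi}]")
    unexpected = set(features) - set(FEATURE_BOUNDS)
    if unexpected:
        problems.append(f"unexpected features: {sorted(unexpected)}")
    return problems

print(validate_input({"age": 34, "transaction_amount": 250.0}))  # []
print(validate_input({"age": -5, "amount": 250.0}))              # three violations
```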

Supporting a Secure AI Future


Together, the two documents aim to help organizations adopt AI with confidence. By combining classical cybersecurity practices with AI-specific controls, CSA hopes to raise security standards across sectors and build trust in increasingly complex AI systems.


Need Help?


If you’re wondering how Singapore’s Cyber Security Agency’s AI guidelines, or any other government’s guidelines, bills, or regulations could impact you, reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and address your questions and concerns.
