UPDATE – JULY 2025: While Singapore’s 2024 Advisory Guidelines under the PDPA remain active and foundational, new developments announced at the PDP Summit 2025 have significantly expanded the country’s AI governance ecosystem. These include the launch of a Global AI Assurance Sandbox, a national Privacy Enhancing Technologies (PET) Guide, and the elevation of the Data Protection Trustmark (DPTM) to a formal national standard (SS 714:2025). These initiatives reflect Singapore’s push toward more structured, risk-based, and internationally aligned AI and data protection practices. Organizations should now reference both the 2024 guidelines and these 2025 frameworks for a complete view of compliance and responsible AI deployment in Singapore.
ORIGINAL NEWS STORY:
Singapore Releases Advisory Guidelines on Personal Data Protection
While Singapore has not yet issued AI-specific regulation, it recently unveiled guidance on AI and personal data. On March 1, 2024, the Personal Data Protection Commission (PDPC) released Advisory Guidelines for organizations that use personal data to develop, test, and deploy AI systems making recommendations, predictions, or decisions under the Personal Data Protection Act (PDPA). The guidelines aim to clarify how personal data can be used compliantly for AI while assuring consumers about how their data is used through transparency measures such as consent notifications and written policies.
Key Highlights of the Guidelines:
1. Developing/Testing AI Systems Using Personal Data:
Organizations may rely on the Business Improvement Exception or the Research Exception under the PDPA to develop and test AI systems without consent, subject to certain conditions. These exceptions apply when personal data is used to enhance products or services, understand consumer behavior, or conduct commercial research.
2. Deploying AI Systems Collecting/Using Personal Data:
Consent is generally required to collect or use personal data, unless an exception such as the Legitimate Interests Exception (for example, fraud detection) applies. Organizations must clearly notify individuals about the personal data collected, how it is processed, and the specific data features that influence AI outcomes.
3. Ensuring Accountability and Trustworthy AI:
Organizations must demonstrate accountability through written data protection policies, ensuring fairness, data quality, and bias assessment. Technical safeguards and human oversight are essential for trustworthy AI, especially for high-impact use cases.
4. Procurement of AI Systems from Vendors:
Vendors acting as data intermediaries must adhere to the PDPA's Protection and Retention Obligations. Recommended practices include data mapping and labeling, as well as maintaining records on data lineage and transformations. Vendors should also support customers in meeting their PDPA obligations by understanding customers' information needs and providing training on AI system usage.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions or concerns about how it will impact you. Don't hesitate to reach out to BABL AI; their Audit Experts are ready to provide valuable assistance.