Philippine National Privacy Commission Issues Guidelines on AI and Data Privacy

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/27/2024
In News

The Philippine National Privacy Commission (NPC) has issued an official advisory clarifying how the Data Privacy Act of 2012 (DPA) applies to AI systems that process personal data. The announcement, released on December 19, 2024, signals a stronger regulatory focus on AI governance in the country.

Importantly, the advisory confirms that the DPA and its Implementing Rules and Regulations apply to AI systems during training, testing, and deployment. As a result, organizations developing or using AI must treat these activities as personal data processing under Philippine law.

DPA Applies Across the AI Lifecycle

According to the NPC, personal information controllers (PICs) must follow core privacy principles throughout the AI lifecycle. These principles include transparency, accountability, fairness, accuracy, and data minimization.

Therefore, organizations cannot treat AI training datasets or automated outputs as exempt from privacy obligations. Instead, they must ensure compliance at every stage of development and deployment.

Transparency and Data Subject Rights

The advisory requires PICs to clearly inform data subjects about AI-driven processing. Specifically, organizations must disclose the purpose, scope, and nature of processing. They must also explain potential risks, expected outputs, and available dispute mechanisms.

In addition, the NPC stresses the importance of upholding data subject rights. Individuals retain the right to object to processing and to have their personal data corrected or erased. Organizations must provide clear processes for exercising these rights, even when data has been incorporated into AI datasets.

Governance and Human Oversight

Beyond transparency, the advisory calls for stronger governance measures. PICs must conduct Privacy Impact Assessments and integrate privacy-by-design principles into AI systems.

Furthermore, the NPC encourages the formation of AI ethics boards. These bodies can guide responsible deployment and monitor compliance. Organizations must also ensure meaningful human intervention in automated decision-making systems.

Regular monitoring is equally critical. PICs must continuously evaluate AI outputs to confirm that systems remain lawful and ethical.

Bias Mitigation and Fairness Controls

To address discrimination risks, the NPC requires PICs to monitor systemic, human, and statistical biases in AI systems. Organizations must implement safeguards to prevent manipulative or oppressive outcomes.

Consequently, fairness is not optional. It is a regulatory expectation tied directly to compliance under the DPA.
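
The advisory does not prescribe specific metrics or tooling, but in practice bias monitoring is often operationalized with quantitative checks on model outputs. The short Python sketch below illustrates one such check, a demographic parity difference computed over a batch of binary decisions; the metric choice, threshold, and sample data are assumptions for illustration, not requirements drawn from the NPC advisory.

```python
# Illustrative sketch only: one way to quantify statistical bias in a batch
# of binary model decisions. The metric, threshold, and data are assumed for
# illustration and are not mandated by the NPC advisory.

from collections import defaultdict


def demographic_parity_difference(predictions, groups):
    """Return the gap between the highest and lowest positive-decision
    rates across groups (0.0 means all groups are treated alike)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Hypothetical batch of binary decisions and a protected attribute.
    preds = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
    group = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap = demographic_parity_difference(preds, group)
    print(f"Demographic parity difference: {gap:.2f}")

    # Illustrative internal tolerance; an organization would set its own
    # threshold through its risk assessment, not from the advisory itself.
    if gap > 0.2:
        print("Flag for review: positive-decision rates diverge across groups.")
```

A check like this would typically run on a schedule as part of the regular monitoring the advisory calls for, with flagged results escalated to human reviewers or an ethics board.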

Looking Ahead

The NPC plans to release further guidance and training materials to support compliance. In the meantime, organizations using AI should review their data protection programs and update governance structures before 2025.

Need Help?

If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
