Turkey’s Personal Data Protection Authority has published a new report examining “agentic AI,” outlining how these emerging systems could reshape digital services while also creating new risks for privacy, accountability, and personal data protection. The February 2026 document, titled “Etken Yapay Zekâ (Agentic AI),” presents a broad overview of agentic AI systems and highlights the need for a human-centered approach as the technology develops.
According to the report, agentic AI systems differ from more traditional artificial intelligence tools because they are designed to pursue goals, adapt to changing conditions, and initiate actions with varying levels of autonomy. Rather than simply responding to prompts, these systems may plan tasks, coordinate multiple steps, and adjust their behavior based on environmental inputs. The authority notes that this makes agentic AI more dynamic than conventional AI, but also more difficult to monitor and govern.
The document describes AI agents as the software components that carry out perception, decision-making, and action within broader agentic AI systems. In some cases, multiple agents may work together in coordinated structures to complete complex workflows. The report gives examples ranging from travel planning and conference organization to smart-city traffic management, research support, customer service, finance, healthcare decision support, and incident response.
While the authority says these systems may improve efficiency and help manage complex processes, it also warns of significant risks, including high levels of autonomy, limited transparency, and system behavior that may be difficult to explain. The report says that in multi-step or multi-agent environments, errors, bias, or flawed assumptions may propagate through later stages of a workflow, making harmful outcomes harder to detect and correct. It also notes that the “black box” nature of some AI systems may make it harder to understand why certain actions were taken or who should be held responsible.
A major focus of the report is the impact on personal data protection. The authority warns that agentic AI systems may process personal data across multiple stages, combine information from different sources, and generate new inferences about individuals. That, the report says, can raise concerns around purpose limitation, data minimization, lawful basis, transparency, accountability, and the handling of sensitive personal data. It also warns that systems built on generative AI models may introduce accuracy risks, including hallucinated or misleading outputs involving personal information.
The publication frames agentic AI as an evolving field that should not be treated separately from broader AI debates, but as a development requiring more careful attention to rights, oversight, and privacy throughout the full lifecycle of system design and deployment.
Need Help?
If you’re wondering how AI policies, or any government’s AI bill or regulation, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.