BABL AI Chief Ethics Officer Jovana Davidovic Addresses Agentic AI Risks at REAIM 2026 Summit

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/06/2026
In Press

Dr. Jovana Davidovic, Chief Ethics Officer at BABL AI, delivered a featured presentation at the Third REAIM Summit (Responsible Artificial Intelligence in the Military Domain) held February 4–5, 2026, in A Coruña, Spain. The global summit convened states, international organizations, defense experts, academics, and civil society leaders to advance practical measures for responsible military AI governance.

Dr. Davidovic’s talk, titled “Relocating Initiative and Interpretation: Agentic AI and the End of Human Judgment in Lethal Force,” examined the growing deployment of agentic AI systems in military contexts — particularly in intelligence analysis, data fusion, and battlefield management.

While AI agents are often promoted as force multipliers, Davidovic argued that certain forms of agentic AI — especially large language model (LLM)-based agents — pose profound ethical risks in lethal settings. She emphasized that the very features that make these systems operationally attractive — initiative, internal goal management, and dynamic task orchestration — can undermine context-appropriate human judgment and meaningful oversight.

“In lethal contexts, relocating initiative and interpretive authority to the system itself changes the human role in fundamental ways,” Davidovic explained. “There is a subset of agentic AI systems that are fundamentally incompatible with ethically permissible use in lethal force.”

Her analysis distinguishes between agentic applications that may remain compatible with meaningful human control and those that are not. In particular, she argued that some emerging systems reconfigure the “kill chain” in ways that displace human judgment rather than support it — creating distinctive ethical and governance risks that extend beyond earlier AI-enabled tools.

The presentation concluded by outlining implications for military doctrine, governance frameworks, and international regulatory efforts, including potential restrictions or prohibitions on certain types of warfighting AI.

The REAIM Summit aims to translate previously agreed principles on responsible military AI into concrete and practical measures. Spain, host of REAIM 2026, emphasized strengthening multi-stakeholder collaboration to address the technological, ethical, legal, and social dimensions of military AI.

Dr. Davidovic’s participation underscores BABL AI’s continued leadership in advancing rigorous ethical analysis and governance solutions for high-risk AI systems. In addition to her role at BABL AI, she serves as a senior researcher at the Peace Research Institute Oslo and as an associate professor of philosophy at the University of Iowa.

The full summit sessions are available via the official REAIM YouTube channel: www.youtube.com/@reaim2026

About BABL AI:

Since 2018, BABL AI has been auditing and certifying AI systems, consulting on responsible AI best practices, and offering online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing.

REAIM Summit:

The REAIM Summits have established a global consensus that AI must be developed and applied in ways that uphold international peace, security, and stability. These summits emphasize compliance with international law, human accountability for military AI systems, and maintaining human judgment in the use of force.
