BABL AI is proud to announce that its Chief Ethics Officer, Professor Jovana Davidovic, has published a new paper titled “Bridging Responsibility Gaps for Warfighting AI” in Oxford Intersections: AI in Society. The paper examines how to assign and maintain human responsibility in the use of artificial intelligence for military operations—a question that has become increasingly urgent as AI systems assume more autonomous roles in conflict.
Davidovic’s research confronts one of the central ethical and legal dilemmas of AI in warfare: when an autonomous system makes a life-or-death decision, who is accountable for its actions? The paper identifies distinct forms of “responsibility gaps” that emerge in military AI use, from algorithmic opacity to distributed decision-making, and offers strategies to address them through organizational design, human oversight, and technical transparency.
The article calls for clearer allocation of responsibility across the lifecycle of AI systems used in warfighting—from development and deployment to post-action review—emphasizing that responsibility must be embedded at both the human and institutional levels. Davidovic argues that ethical frameworks and legal standards must evolve in parallel with advances in AI capabilities to ensure accountability and compliance with humanitarian norms.
This publication continues BABL AI’s mission to align AI technology with human rights, governance, and accountability.
About BABL AI:
Since 2018, BABL AI has been auditing and certifying AI systems, consulting on responsible AI best practices, and offering online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing.