AI in Combat: CSET Report Warns Military Commanders Against Blind Reliance on Decision Support Systems

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/18/2025 in News

As combat forces around the world embrace artificial intelligence (AI) to gain strategic advantages, a new report from the Center for Security and Emerging Technology (CSET) urges caution. The April 2025 issue brief, AI for Military Decision-Making, explores how AI-enabled decision support systems (DSS) can enhance operational effectiveness but also carry significant risks if misapplied.

Co-authored by Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner, the report offers a framework for evaluating when and how to deploy AI systems that assist military commanders in life-and-death decisions on the battlefield. The authors emphasize that while AI systems can synthesize vast amounts of data quickly and generate operational insights, commanders must remain vigilant to avoid overreliance.

“Commanders must weigh the scope, data quality, and human-machine interaction before trusting AI systems in combat scenarios,” the report advises. Context shifts, poor or biased training data, and human cognitive biases can all lead to flawed decisions if not addressed.

The authors highlight several concerns, including:

  • Scope Misalignment: Systems trained on data from one environment (e.g., urban warfare) may fail in another (e.g., jungle or mountainous terrain).

  • Faulty Predictions: AI systems are often deployed to anticipate enemy movements or societal unrest, but such projections are notoriously hard to validate.

  • Human Bias and Automation Overconfidence: Users might trust DSS outputs too much, even when those outputs are misleading or incorrect.

To address these challenges, the report recommends five key mitigation strategies:

  1. Set context- and risk-based criteria for when and how AI-DSS are deployed.

  2. Train and qualify operators, especially for systems involved in targeting and lethal operations.

  3. Establish continuous certification cycles to ensure systems and teams remain effective.

  4. Designate Responsible AI officers in military units to oversee ethical use and incident reporting.

  5. Document incidents and system harms to build institutional learning and public trust.

Ultimately, CSET stresses that while AI can be a powerful tool for military decision-making, it is not a substitute for human judgment. “AI should support, not replace, the ethical and strategic thinking of commanders,” the report concludes.

Need Help?

If you have questions or concerns about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
