The U.S. Department of War is reviewing its relationship with artificial intelligence company Anthropic and could designate the firm a “supply chain risk,” a move that would force military contractors to sever ties with the company, according to reporting by Axios.
The potential designation reflects escalating tensions between the Pentagon and Anthropic over how the company's AI model, Claude, may be used in military operations. Defense officials have pushed AI developers—including Anthropic, OpenAI, Google, and xAI—to allow their systems to be used for "all lawful purposes," including intelligence, weapons development, and battlefield operations.
Anthropic, however, has maintained restrictions intended to prevent uses such as fully autonomous weapons targeting or mass domestic surveillance. The company has said its discussions with the government have focused on ethical safeguards and policy boundaries rather than specific operations.
According to Axios, the Pentagon views these restrictions as limiting operational flexibility and is weighing whether to penalize the company by classifying it as a supply chain risk—a designation typically reserved for adversarial or untrusted entities. Such a move could require defense contractors to certify that they do not rely on Anthropic’s AI tools in their own systems.
The dispute is particularly significant because Claude is already embedded within the military. The AI model was reportedly used during a U.S. military operation targeting Venezuelan leader Nicolás Maduro, highlighting its growing role in national security applications.
Anthropic has emphasized its commitment to supporting national security while ensuring responsible deployment of its technology. The company has indicated willingness to negotiate but remains cautious about loosening safeguards designed to prevent misuse.
The standoff underscores broader tensions between defense agencies seeking maximum operational freedom and AI developers attempting to balance commercial partnerships with ethical constraints. As militaries increasingly rely on AI for intelligence analysis and operational planning, the outcome of negotiations with Anthropic could shape future standards governing the use of advanced AI systems in national defense.
Need Help?
If you have questions or concerns about global AI guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.