OpenAI Revises Pentagon AI Deal After Altman Calls Rollout ‘Opportunistic and Sloppy’

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/04/2026
In News

OpenAI is revising its newly announced Pentagon contract after chief executive Sam Altman acknowledged the agreement appeared “opportunistic and sloppy,” according to reporting by the Guardian.

The San Francisco-based company, which operates ChatGPT, said it will amend the deal with the US Department of War to explicitly prohibit its artificial intelligence from being used for domestic mass surveillance or by defence department intelligence agencies such as the National Security Agency. The Guardian reported that the contract was struck shortly after the Pentagon dropped its existing AI contractor, Anthropic.

The agreement triggered public backlash and raised concerns that OpenAI’s technology could be deployed in ways reminiscent of the mass surveillance practices revealed in the 2013 Snowden disclosures. Despite OpenAI’s initial assurances that the contract contained “more guardrails than any previous agreement for classified AI deployments,” critics questioned whether the company had moved too quickly.

In a message to employees reposted on X, Altman admitted the rollout had been rushed. “We shouldn’t have rushed to get this out on Friday,” he wrote, adding that the issues were “super complex” and required clearer communication. He said the company had been attempting to avoid a worse outcome but conceded that the move appeared poorly handled.

The controversy has also sparked internal dissent. Nearly 900 employees at OpenAI and Google signed an open letter urging their companies to refuse demands related to surveillance and autonomous weapons. OpenAI has previously stated that one of its “red lines” is prohibiting use of its technology to direct autonomous weapons systems.

The Guardian also reported that Anthropic’s chatbot, Claude, surged in app store rankings amid calls on social media to “delete ChatGPT.” Meanwhile, several cabinet-level agencies have begun phasing out Anthropic’s tools following the Pentagon’s designation of the company as a supply chain risk.

Need Help?

If you have questions or concerns about global guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and ensure you stay informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.