OpenAI Details Pentagon AI Deal After U.S. Orders Agencies to Stop Using Anthropic

OpenAI outlines safeguards in its Pentagon AI deal after the U.S. orders agencies to stop using Anthropic, allowing a reported six-month transition

Deepali Jain
02-3-26

OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models across classified Pentagon networks, the company said in a blog post titled Our Agreement with the Department of War, published on March 2.

In the post, OpenAI outlined three written restrictions governing how its systems can be used in defence settings: the models cannot be used for mass domestic surveillance of U.S. citizens, cannot operate autonomous weapons without human oversight, and cannot be deployed for high-stakes automated decision-making such as social credit scoring.

The company said the deployment will follow a cloud-only model, with OpenAI staff maintaining a safety layer while cleared government personnel remain responsible for consequential decisions. It added that the agreement references existing U.S. laws and policies governing intelligence activities and civil liberties. OpenAI also said it retains the right to terminate the agreement if its terms are violated and that it has urged the Pentagon to extend similar contractual safeguards to all AI developers.

How the Anthropic Contract Ended

The announcement came hours after President Donald Trump ordered all federal agencies to stop using technology from Anthropic and Defence Secretary Pete Hegseth designated the company a national security “supply chain risk.” The designation, previously applied to foreign entities rather than U.S. companies, effectively barred federal agencies and contractors from working with Anthropic.

Anthropic had been the only AI company with a model deployed on the Pentagon’s classified networks under a reported $200 million contract signed in July 2024. Renewal negotiations had been under way for months. According to media reports, the Pentagon sought a commitment that Anthropic’s Claude model be available for “all lawful purposes” without company-imposed restrictions.

Anthropic declined. Chief executive Dario Amodei said the company could not agree to terms that might allow its technology to be used for domestic surveillance or integrated into fully autonomous lethal systems. In an interview with CBS News, Amodei said, “Disagreeing with the government is the most American thing in the world.” The company has said it will challenge the supply chain risk designation in court.

Hegseth described Anthropic as “sanctimonious,” while Pentagon official Emil Michael said Amodei was “putting the nation’s safety at risk.” White House AI adviser David Sacks, posting on X ahead of the deadline, referred to Anthropic as “woke AI” and accused its leadership of using safety concerns to pursue regulatory advantage. Anthropic rejected those characterisations.

OpenAI Steps In

Within hours of Anthropic’s contract being cancelled, OpenAI chief executive Sam Altman announced the new agreement on X, writing, “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome...”

The timing prompted scrutiny. The safeguards outlined by OpenAI, particularly the prohibitions on domestic surveillance and autonomous weapons, are broadly similar to the restrictions Anthropic had defended. Altman acknowledged that “the optics don’t look good” and described the agreement as “definitely rushed,” but argued that engagement with written guardrails was preferable to withdrawing from government collaboration.

OpenAI’s head of national security partnerships said in a public post that deployment architecture plays a critical role in how safeguards function. A cloud-only API model, the company argues, limits the ability to embed AI systems directly into weapons platforms, placing practical constraints beyond contract language.

One detail that complicated Anthropic’s position emerged from earlier reporting: in late 2024, the company had entered a separate arrangement via Palantir and Amazon Web Services that allowed U.S. intelligence and defence agencies to access Claude. That arrangement drew questions about the consistency of its approach to defence use, though Anthropic maintained that its core restrictions remained in place.

Public Response and Legal Questions

The episode quickly spilled into the public sphere. Anthropic’s Claude app moved to the top position on Apple’s U.S. App Store rankings, overtaking OpenAI’s ChatGPT. The company reported a surge in new user registrations and said its free user base had grown by more than 60% since January. On social media platforms including X and Reddit, some users called for cancelling ChatGPT subscriptions, while others expressed support for Anthropic’s stance.

Legal experts have questioned whether the supply chain risk designation will withstand judicial review, noting that such actions typically require a formal risk assessment process. A broad interpretation of Hegseth’s order, potentially barring contractors from engaging in any commercial activity with Anthropic, could affect companies such as Amazon, Google and Nvidia, which have invested in the startup.

OpenAI, in its blog post, described the sequence of events as a poor beginning to government-AI collaboration and said it had asked the Pentagon to seek a resolution with Anthropic while offering equivalent terms to other developers.

Anthropic’s classified Pentagon contract has been cancelled, and federal agencies have been directed to stop using its technology, with a reported six-month transition period. The company’s legal challenge is pending. OpenAI’s agreement is now in effect.
