Anthropic, the maker of the Claude AI models, has come under scrutiny following a public back-and-forth with the United States Department of War (the Department of Defense, renamed under the Trump administration) over the company’s role in supplying AI tools to national security agencies.
The development comes after the department informed Anthropic that it had been designated a “supply chain risk” to U.S. national security, a move the company said it plans to challenge in court.
In a statement released on March 5, 2026, Anthropic said it had received a formal letter from the Department of War (DoW) confirming the designation. The company argued that the action is not legally sound and said it would pursue legal avenues to contest it.
At the same time, Anthropic said it would continue supporting the U.S. national security community during the transition period. The company said it is prepared to provide access to its AI models to the Department of War and related agencies at a nominal cost, along with engineering support, for as long as it is permitted to do so.
“Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations,” CEO Dario Amodei said in a statement, adding that the company sees significant alignment with the department in advancing U.S. national security and the responsible use of AI across government.
Anthropic also said the designation applies narrowly to the use of its Claude models in contracts directly tied to the department, and does not broadly restrict the use of its technology by other customers, including contractors working with the U.S. government.
Amodei said the wording of the department’s letter suggests that the designation has a limited scope. In his statement, he pointed to the law cited in the letter, which is intended to safeguard government supply chains and requires authorities to take the least restrictive action necessary when addressing potential risks.
Anthropic said it had been in discussions with the department in recent days about ways to continue supporting national security operations while adhering to its internal policies.
At the same time, the company reiterated its long-standing position that private companies should not be involved in operational military decision-making. Anthropic said its concerns are limited to two areas outlined in its usage policies: the development of fully autonomous weapons and applications involving mass domestic surveillance.
The issue gained further attention after an internal company message discussing the situation was leaked to the press. Anthropic said it did not leak the message and apologised for the tone of the post, describing it as written during a particularly tense period following public announcements related to the designation and broader developments in government AI contracts.