
Where things stand with the Department of War
Anthropic AI
A statement from Dario Amodei, March 5
Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security.
As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.
The language used by the Department of War in the letter (even supposing the designation were legally sound) confirms what we said on Friday: the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to uses of Claude that are a direct part of contracts with the Department of War, not to all use of Claude by customers who hold such contracts.
The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.
I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department while adhering to our two narrow exceptions, and about ways to ensure a smooth transition if that is not possible. As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.
I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.
Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations. Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.
Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.
———————————————————
1. The Immediate Conflict
Anthropic, the company led by Dario Amodei, created the AI system Claude. The U.S. military reportedly requested that the system be made available for all lawful military uses, without restrictions. Anthropic refused to remove two safeguards:
- a prohibition on fully autonomous lethal weapons
- a prohibition on mass domestic surveillance
After negotiations failed, the U.S. Department of War labeled the company a “supply chain risk,” a designation that can prevent defense contractors from using its technology. Anthropic has now filed suit challenging the designation.
2. Anthropic’s Position
Anthropic argues three principles:
1. AI companies should not control military decisions.
2. But they may set ethical boundaries on how their systems are used.
3. These boundaries are narrow and high-level, not operational.
Thus they claim: the military decides tactics; the developer decides whether its tool participates in certain categories of action.
3. The Government’s Position
The Pentagon’s reasoning is essentially the opposite:
- Military capability must not be constrained by private corporate policy.
- If a tool is supplied to the state, it must be usable for any lawful purpose.
- A vendor restricting use could interfere with command authority.
So the dispute becomes philosophical:
Does the creator of a tool retain moral authority over its use once the state acquires it?
4. ChatGPT’s Position Relative to Anthropic’s Model
Systems such as Claude and systems such as ChatGPT share several features:
1. Safety constraints exist.
AI systems are designed with guardrails to reduce harm.
2. They are advisory tools, not decision-makers.
Operational authority belongs to human institutions.
3. Ethical boundaries around high-risk uses—such as autonomous weapons or mass surveillance—are common concerns among many AI developers.
Thus the relationship between models like Claude and models like ChatGPT is not one of opposition. They belong to the same category: general AI assistants designed primarily for analysis, reasoning, and support, not for directing warfare or exercising state power.
5. The Deeper Question
The controversy points to a deeper question:
- If a technology becomes essential to the power of a state,
- and if that technology is created by private citizens,
- who ultimately governs its moral boundaries: the polis or the craftsman?
This question is unlikely to disappear. It will likely shape the next era of AI governance, military doctrine, and civil authority.