G² Dispatches: Termination of U.S. Defense Contracts with Anthropic
Key Notes
The U.S. Department of War terminated roughly $200 million in contracts with Anthropic and banned its technology across the federal government after the company refused to remove safety guardrails and grant unrestricted military use of its Claude AI model.
The unprecedented designation of Anthropic as a national security supply-chain risk signals a major escalation in government pressure on AI firms to provide unrestricted defense access, potentially reshaping how frontier AI companies negotiate with the U.S. government.
Anthropic will likely pursue legal action while defense agencies shift rapidly toward alternative AI providers such as Google, OpenAI, or xAI, increasing government leverage over remaining contractors in future negotiations.
Development
In recent weeks, the United States Department of War terminated an estimated $200 million in contracts issued to the frontier artificial intelligence company Anthropic following a dispute over permissible use cases for the company's large language model (LLM), Claude. In recent contract negotiations, the Department of War requested that Anthropic allow its technology to be used for “all lawful purposes” and dismantle security guardrails that had been written into previous contracts and the vendor's own Usage Policy. Anthropic responded by releasing a third version of its Responsible Scaling Policy and reiterating its commitment to the previously agreed-upon guidelines.
In a statement posted to the social media platform X, Secretary of War Pete Hegseth stated that the Department of War wants full, unrestricted access to Anthropic's model for all lawful purposes in defense of the United States. On February 24th, the Secretary gave Anthropic until the evening of February 27th to provide that access or face harsh penalties. In a statement on February 26th, Anthropic CEO Dario Amodei claimed that the security guardrails provide protection against two use cases: mass domestic surveillance and fully autonomous weapons. On mass domestic surveillance, Amodei argued that artificial intelligence systems operating without the current constraints could be used to assemble a comprehensive picture of a person's life automatically and at a previously unimaginable scale. On fully autonomous weapon systems, he expressed concern that such systems cannot be relied upon to exercise the critical judgment of highly trained human soldiers.
As of this writing, Anthropic has not agreed to provide the Department of War with the requested access, reiterating its concern that the models, acting without the previously established guardrails, would represent a security risk. After Anthropic did not provide access by the Department-imposed deadline, President Donald Trump issued a directive ordering all federal departments to cease use of Anthropic's technology. Further, Secretary of War Pete Hegseth designated Anthropic a national security supply-chain risk, a designation never before applied to an American company. Anthropic has threatened to sue the Department over the designation.
Analysis
Anthropic's frontier AI model was the first approved for classified use and, as such, may be deployed on multiple clandestine missions across the government simultaneously. Phasing out the model will therefore very likely come with data privacy challenges, as the technology, which relies on data for its learning, could already have been exposed to classified materials.
The loss of the $200 million contract, along with the total value of future government contracts that could have been negotiated had Anthropic continued its partnership with the government, is unlikely to be welcomed by investors, who valued the company at $360 billion as of its most recent fundraising round, concluded in mid-February.
The supply-chain risk designation is typically applied only to foreign companies with ties to adversary nations and has never before been applied to an American company. Secretary of War Pete Hegseth has said the designation requires all contractors, suppliers, and partners doing business with the United States military to cease all commercial activity with Anthropic. The application of such a designation to an American company may be detrimental to the company's investors, to its present operations within the United States technology and consumer ecosystem, and to its ability to raise future capital.
Forecast
If the legal battle between Anthropic and the Department of War proceeds, it is highly likely that the issue of data privacy will be raised in the proceedings. An LLM uses data to improve its reasoning capabilities, and Claude was the first such model approved for classified use. Since the model was exposed to classified data and has memory storage as an element of its design, how the model used that data, and at what point in its training pipeline it was used, could become an important question in the case. It seems somewhat unlikely that Anthropic actually used classified data to train any instance of its model, given that the company is requesting to retain data privacy guardrails, but thorough legal review is likely to be required.
Apart from the legal elements of uncoupling the LLM from classified information, there are likely to be structural complications resulting from the decision to stop using Anthropic's technology. Eliminating the LLM's exposure to clandestine materials is likely to be comparatively easy, but retraining the workforce of entire departments on new tools after an abrupt change is likely to cost valuable time and money for a Department of War that has just initiated hostilities against Iran.
Several other companies are highly likely to be ready to implement their own bespoke technologies to fill the gaps left by Anthropic's departure. Google, OpenAI, and xAI had also received $200 million contracts from the Department of War for use of their LLM technologies. OpenAI recently announced the successful completion of contract negotiations with the Department of War, although the scope of those contracts is not yet known. The likely preparedness of Anthropic's competitors to provide the Department of War with additional access to their tools could ameliorate any disruption caused by the departure of Anthropic's models. OpenAI and other competing government contractors are also more likely to give in to government demands now that the government has demonstrated a willingness to terminate business with a contractor if its demands are not met.
The Department of War is likely to allow Anthropic's competitors to replace the departing company rather than continue negotiations or take further escalatory action against it. The Department has already seen its deadline pass unmet and has moved Anthropic into a high-risk designation; it is highly unlikely to have taken those actions merely to pressure the company into compliance when product alternatives are readily available. Rather, the Department's actions are highly likely to mark the conclusion of Anthropic's business with the federal government, at least for the remainder of the Trump Administration.