The Pentagon and the San Francisco-based artificial intelligence firm Anthropic are locked in a dispute over the agency’s use of the company’s flagship AI model, Claude. The disagreement centers on Anthropic’s refusal to let the agency use Claude without restrictions for “all lawful purposes,” and it escalated when a Pentagon official said the agency could sever ties with the company and declare it a “supply chain risk.”
The official, speaking anonymously to Axios, said a “supply chain risk” designation would effectively bar any company that does business with the Pentagon from maintaining business ties with Anthropic.
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the official told Axios, adding that the Pentagon is “close” to cutting ties.
However, severing ties poses logistical problems: Claude is the only AI model currently deployed on the US military’s classified systems and has been widely praised for its effectiveness. Replacing it would require the Pentagon to forge new contracts with companies whose models could match Claude’s performance.
In fact, effectiveness appears to be the bigger obstacle: rival AI firms such as xAI, OpenAI, and Google have already agreed to remove the safety measures, yet their models are still not in military use.
How the Pentagon Severing Ties Could Impact Anthropic
The severance itself would affect Anthropic only marginally. Axios reports that the contract in question is worth around $200 million annually, a small fraction of the company’s $14 billion in annual revenue. However, a “supply chain risk” designation could hurt the firm by prompting other companies to cancel their own ties.
On top of that, Department of War officials are showing no signs of budging, even as Anthropic says the talks are moving in a “productive” direction.
“The Department of War’s relationship with Anthropic is being reviewed,” a Department of War spokesperson said. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
What Is the Dispute About: Explained
The dispute, which has remained unresolved despite months of meetings between Anthropic and Pentagon officials, concerns the terms under which the military can use Claude. While the agency wants unrestricted usage for “all lawful purposes,” Anthropic CEO Dario Amodei has raised concerns about surveillance and violations of privacy.
According to Axios, Anthropic wants contract terms that would prevent the agency and the military from conducting mass surveillance on Americans or developing weapons that can fire without human involvement.
Designating a company a “supply chain risk” is a major step usually reserved for foreign adversaries. At some point, one side will have to budge: officials admit that other AI models are “just behind” Claude when it comes to handling specialized military operations.