The US Department of Defense has labelled Anthropic a supply chain risk over its Claude AI model, raising concerns about the model’s use in military systems. US Defense Undersecretary Emil Michael said the AI model could negatively affect the Pentagon’s technology supply chain. His comments came after Anthropic filed a lawsuit challenging the decision.
Michael explained that the Pentagon and Anthropic have different policy approaches regarding the use of artificial intelligence. The US military wanted unrestricted access to the AI system for all lawful purposes. However, Anthropic placed limits on how its technology could be used, especially in areas such as mass surveillance and autonomous weapons.
Policy dispute over AI use leads Pentagon to label Anthropic a supply chain risk
According to Michael, these restrictions could make military systems less effective in combat. He said the Pentagon cannot rely on technology that carries policy preferences different from its own. Because of this disagreement, the US Defense Department officially classified Anthropic as a supply chain risk.
Despite the decision, the Pentagon is still using Anthropic’s Claude model in some classified networks. The company has integrated custom AI systems into certain military operations, making an immediate switch difficult. As a result, the Pentagon has planned a six-month transition period while shifting its systems to OpenAI technology.
Meanwhile, Anthropic has strongly opposed the designation and filed a lawsuit against the US government. The company claims the decision is unfair and could cost it hundreds of millions of dollars in contracts. At the same time, the controversy has drawn public attention, with many users showing support for Claude and increasing its popularity on app platforms.