Highlights
- Court blocks Pentagon ban on Anthropic over military technology dispute.
- Judge says designation would punish company for disagreeing with government.
- Anthropic refused Pentagon use for autonomous weapons and domestic surveillance.
A California judge has rejected a proposal by Pete Hegseth, the US secretary of war, to declare Anthropic a supply chain risk after the company refused to allow unrestricted use of its technology for military purposes.
The court said the move could "cripple" Anthropic, which is valued at $380bn (£285bn) and ranks among America's biggest AI companies.
"Defendants' designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious," the judge wrote. "These broad measures do not appear to be directed at the government's stated national security interests. Instead, these measures appear designed to punish Anthropic."
Military technology clash
Anthropic's Claude chatbot is already in use at the Pentagon. In recent months, its technology and data analysis were used for the capture of Nicolás Maduro and to identify strike targets in Donald Trump's war on Iran.
However, the sides have clashed over Anthropic's refusal to allow the Pentagon to use its technology for autonomous weapons and surveillance of Americans.
Trump has called Anthropic a "radical left, woke company", and Hegseth said last month he was branding it a supply chain risk, a label usually reserved for suppliers to hostile military powers.
The supply chain risk designation would have prevented US federal agencies from working with Anthropic, even for non-military use.
It would also have prevented military suppliers from using it when performing work for the Pentagon.
On a strict reading, it would have required companies such as Google and Amazon, which have both invested in Anthropic, to cut ties with the company.
Anthropic had warned the move would cost it hundreds of millions of dollars in revenue.
The judge granted an injunction on Thursday night, pending a full legal case, stating: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government."
The company thanked the court for a "swift decision" and said: "Our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
Emil Michael, the US under secretary of war, who had been leading negotiations with Anthropic, called the ruling a "disgrace", claiming "dozens of factual errors" in the judgement. The US government has a week to appeal.