Highlights
- Google becomes third AI firm to sign Pentagon deal.
- 950 Google employees sign letter opposing unrestricted military AI use.
- Pentagon labelled Anthropic a supply-chain risk after it refused terms.
Anthropic took a public stand against the Trump administration by refusing to give the Department of Defense unlimited access to its AI.
The company wanted safeguards to stop its AI being used for domestic mass surveillance and autonomous weapons. The Pentagon did not accept those conditions.
Because Anthropic refused, the Department of Defense labelled it a "supply-chain risk," a term usually applied to foreign adversaries.
The two are now in a legal dispute. Last month, a judge granted Anthropic an injunction against the designation while the case continues.
Google is now the third AI firm to sign with the Pentagon since Anthropic walked away; OpenAI and xAI had already reached agreements with the Department of Defense.
Google's agreement includes language stating it does not intend its AI to be used for domestic mass surveillance or autonomous weapons, similar to wording in OpenAI's contract.
However, it is unclear whether these provisions are legally binding.
Google went ahead with the deal even though 950 of its employees signed an open letter urging it to follow Anthropic's example and demand similar safeguards.
OpenAI's rushed deal
OpenAI chief executive Sam Altman admitted the company's deal with the Department of Defense was "definitely rushed" and that "the optics don't look good."
After talks between Anthropic and the Pentagon broke down, President Donald Trump directed federal agencies to stop using Anthropic's technology after a six-month transition period.
OpenAI quickly announced a deal to deploy its models in classified environments.
Anthropic had drawn clear limits around autonomous weapons and mass domestic surveillance, and Altman claimed OpenAI observed the same limits. That claim prompted questions about whether OpenAI was being straightforward about its own safeguards.
OpenAI published a blog post stating its models cannot be used for mass domestic surveillance, autonomous weapon systems, or high-stakes automated decisions such as social credit systems.
Critics have challenged these protections. Writer Mike Masnick of Techdirt argued the deal "absolutely does allow for domestic surveillance" because it ties data collection to Executive Order 12333, which he described as the mechanism the NSA uses to conduct domestic surveillance.
Altman responded to questions on X, saying OpenAI "really wanted to de-escalate things" and believed "the deal on offer was good."
Hundreds of employees from both OpenAI and Google have called on the Department of Defense to withdraw its designation of Anthropic, and urged Congress to challenge what they see as an inappropriate use of authority against an American technology company.