Artificial intelligence lab Anthropic filed suit on Monday challenging the Pentagon's move last month to officially designate the company a "supply chain risk" after it refused to allow unrestricted military use of its AI system, Claude. The refusal sets up a legal showdown with the Trump administration.
The US Defense Department had demanded Anthropic remove guardrails that block the use of its AI for functions such as autonomous weapons and domestic surveillance.
Before the deadline for a deal passed on February 27, Anthropic CEO Dario Amodei had warned US Defense Secretary Pete Hegseth about the risks of using untested AI in autonomous warfare and refused to remove use restrictions.
The Pentagon argued that technology companies were not in a position to dictate matters of warfare.
After Hegseth's announcement, Anthropic said they would challenge the designation as legally unsound, arguing it would set a dangerous precedent for other technology companies doing business with the government.
What does Anthropic argue?
The lawsuits, filed in California, where Anthropic is based, and in Washington, DC, both aim to undo the designation and block its enforcement.
Anthropic said the Trump administration's actions were "unprecedented and unlawful" and that the company was being penalized for "expressing the principle" that AI "maximizes positive outcomes for humanity" when it is used in "the safest and the most responsible" manner.
In the complaint filed in California and reported by the Wall Street Journal, Anthropic said the government was "seeking to destroy" the company's economic value.
Anthropic has said even the most advanced AI models are still not reliable enough for automated weapons systems, and also said the use of its AI in surveillance systems would be a violation of fundamental rights.
What does the Pentagon say?
The Pentagon has insisted that it needs full use of AI-powered functionality for "any lawful" purpose, and has argued that Anthropic's refusal to allow it amounts to a private company imposing policy restrictions on matters of defense.
However, the move by the Pentagon was seen as an extreme step, as Anthropic is the only US-based tech company to ever have been designated as a supply chain risk. Up until now, the designation has only been applied to foreign technology companies deemed a security risk, such as Chinese telecom giant Huawei.
Under US law, the supply chain risk designation applies to entities that could "sabotage" systems or "maliciously introduce" unwanted functions.
Anthropic is a leading AI lab with investors including Amazon. The ban essentially bars Anthropic from doing business with federal agencies. It could also affect how Anthropic does business with contractors and suppliers.
US President Donald Trump issued a government-wide ban on Anthropic technology, saying the company was run by "left wing nutjobs."
As Anthropic seeks to contain fallout from the designation, Amodei said last week the designation still had a "narrow scope" and businesses could still use Anthropic tools in projects unrelated to the Defense Department.
The company had been negotiating use restrictions for months with the Pentagon after it signed a $200 million contract in July 2025, which was cancelled after the falling-out last month.
At the time a press release lauded the advancement of "responsible AI in defense operations."
Despite the legal row and the risk designation, Claude is still heavily embedded in the Defense Department's operational intelligence systems. US media have reported Claude was heavily used in planning the US-Israel attack on Iran last week.
Edited by: Jenipher Camino Gonzalez
(The above story first appeared on LatestLY on Mar 10, 2026 12:50 AM IST.)