- The Washington Times - Monday, March 9, 2026

Artificial intelligence firm Anthropic sued the Trump administration Monday over its move to designate the company a “supply chain risk” to national security, setting up a high-stakes legal battle that could shape the future of AI use in the military.

The San Francisco-based company filed two lawsuits, one in federal court in California and a second in the federal appeals court in Washington, that challenge the Pentagon’s extraordinary supply chain risk designation, which was announced last week.

It is the first time an American company has received that designation, which cuts Anthropic off from federal contracts and prohibits other firms from doing business with the company if they want their own lucrative government deals.

The lawsuits are the latest developments in a long-running saga between the two sides, centering on how Anthropic’s popular Claude AI tool can be used in the military. 

Anthropic wants ironclad assurances that the tool will never be used to develop fully autonomous weapons or for mass domestic surveillance. The Pentagon said it would not use Claude for either purpose but appeared reluctant to put any promises to the company in writing, insisting that Claude must be available for “all lawful uses.”

In its court filing, Anthropic said the administration has overstepped.

“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the executive’s unlawful campaign of retaliation.”

The dispute comes even as defense officials tell The Washington Times that Claude is being used in the U.S. military campaign against Iran. 


President Trump last week expressed frustration with the company because it refused to give the military unlimited access to its technology.

“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Mr. Trump told Politico in an interview.

Anthropic says its models are currently the only ones approved for use in classified settings, but rival company OpenAI recently announced its own deal with the Pentagon. Notably, OpenAI’s statement says its deal addresses concerns about domestic surveillance and autonomous weapons.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company said.

The Pentagon said it intended to wind down Claude AI’s use for military applications over the next six months. 


But even amid the supply chain risk designation and federal lawsuits, the two sides are still reportedly in talks and seeking a path forward for Claude in the military. 

Defense and National Security Correspondent John T. Seward contributed to this article, which is based in part on wire service reports.

• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.

Copyright © 2026 The Washington Times, LLC.
