OPINION:
Instead of relishing their role in the U.S. military’s unequivocally successful capture of Venezuelan dictator Nicolas Maduro, the executives at Anthropic are openly questioning the ethics of using emerging technology to conduct military missions.
This rift has widened into an open breach, and the Pentagon is now actively considering ending its relationship with Anthropic. The dispute has also reignited a debate about the industry’s role in national defense, specifically whether a contractor working with the U.S. military should retain veto power over how its products are used or for what missions they are leveraged.
According to multiple reports, the Pentagon is now threatening to sever ties with Anthropic and label it a “supply chain risk.” Defense officials insist artificial intelligence tools acquired by the military must be available for “all lawful purposes,” including weapons development, intelligence work and battlefield operations. Anthropic’s executives, by contrast, are demanding veto power to limit the use of the company’s technology.
Last year, Anthropic specifically sought and won a lucrative contract with the Pentagon, reportedly worth around $200 million, to supply its Claude model for defense applications. Yet now, the company appears to be regretting the consequences of that decision, as the military increasingly deploys AI in operations that corporate executives oppose.
Here the worst instincts of the Silicon Valley ethos are asserting themselves: the belief that a private corporation should be the arbiter and gatekeeper of what government agencies may and may not do with its technology. This is not merely a contractual disagreement about computing resources; it is a deeper dispute about who should decide the ethics of war and peace in the 21st century.
Contrast this posture with those of other American companies that have partnered with the U.S. military without quibbling. During World War II, Ford Motor Co. transformed its automobile factories into one of the largest aircraft production operations in the world. At its Willow Run plant in Michigan, Ford produced thousands of B-24 Liberator bombers for the War Department. The company did not request a review of target lists or reserve the right to object to specific missions. It retooled its assembly lines and built what the government requested. The understanding was simple: Civilian leaders would determine strategy; American industry would supply the tools.
Fast-forward to today. Oshkosh Defense manufactures the Joint Light Tactical Vehicle, the armored vehicle used across the U.S. military. When awarded these Pentagon contracts, Oshkosh executives never publicly equivocated about how those vehicles might be deployed. Instead, they routinely emphasize their commitment to delivering mission-ready capability to the warfighter.
Private companies of all kinds already supply goods and services to the U.S. military, and no one expects them to adjudicate ethical dilemmas about how those goods are used. We don’t ask the mechanic who tunes a fighter jet to sign off on the flight path, nor do we ask the baker providing bread to the mess hall to approve the rules of engagement. Why should a software engineer in a glass-walled office in San Francisco have more say over a mission than the generals on the ground or the representatives in Congress?
Yet this is effectively what Anthropic is trying to do with its AI platform: declaring special ethical rules that the military must follow before the company will allow its technology to be used. That position is not only untenable in the defense context but also undermines the foundational principles of civilian control of the military and democratic accountability.
Anthropic acts as if it were the first company in history to contract with the Department of Defense. On the contrary, the U.S. has always relied on private industry to supply emerging technologies that strengthen its defense: radar, jet engines, GPS, cybersecurity systems, even food service contracts. None came with corporate veto power over military use.
Our system entrusts national defense to elected leaders and their civilian appointees. They bear the moral and political responsibility for the use of force. They answer to voters. Private contractors do not.
There is a real debate to be had about artificial intelligence and the future of warfare. Yet that debate belongs in Congress, in the executive branch and in the public square. It should not be dictated by corporate compliance offices.
The American defense industrial base was built on a simple pact: Industry provides the best tools in the world, and the American people, through their elected leaders, decide how to use them. If Silicon Valley is too refined to honor that pact, then the Pentagon should look to the American innovators who will.
• Nathan Leamer is the executive director of Build American AI, a 501(c)4 nonprofit working to advance U.S. policy leadership in artificial intelligence.