Anthropic and the Pentagon Are Clashing Over What Claude Can Do in War, and the Maduro Raid Lit the Fuse
A dispute over AI guardrails for military use has put Anthropic’s $200M Pentagon contract at risk, exposing a deeper tension between AI safety commitments and national security demands.
Overview
A low-profile negotiation between Anthropic and the United States Defense Department has ruptured into an open confrontation over the terms under which the AI company’s Claude model can be used in military operations. The dispute has put a contract worth up to $200 million at risk, prompted threats to brand Anthropic a national security liability, and surfaced fundamental questions about whether AI companies can maintain ethical guardrails when their technology reaches the battlefield.
At the center of the standoff is a January 2026 operation in Venezuela that resulted in the capture of Nicolás Maduro — an operation in which Claude, deployed through Palantir Technologies, was reportedly used in real time.
The Contract and the Conflict
In July 2025, Anthropic joined Google, OpenAI, and Elon Musk’s xAI as one of several leading AI companies awarded contracts by the Defense Department, each valued at up to $200 million, as reported by Axios. Anthropic became the only AI company to deploy its models on the Pentagon’s classified networks and provide customized versions to national security customers — an unusual degree of access that made the eventual dispute all the more fraught.
Negotiations over the forward-looking terms of use for that contract have stalled. The Pentagon wants to deploy Claude across “all lawful use cases” without company-imposed restrictions beyond what existing law requires. Anthropic has refused to cross two specific lines: enabling mass surveillance of American citizens, and enabling fully autonomous weapons systems that operate without human authorization.
As CNBC reported, months of difficult negotiations have yielded little movement. Pentagon Undersecretary for Research and Engineering Emil Michael summarized the department’s position bluntly: “You can’t have an AI company sell AI to the Department of War and don’t let it do Department of War things.”
The Venezuela Trigger
The negotiations were already tense when the Maduro raid crystallized the underlying dispute. According to reporting by Axios and NBC News, Claude was deployed via Anthropic’s partnership with Palantir during the active operation — going beyond pre-operation intelligence preparation to real-time operational support, including analysis of satellite imagery.
In the aftermath, a senior Anthropic executive reached out to a senior Palantir executive to ask whether Claude had been used during the raid. The Palantir executive subsequently reported the inquiry to the Pentagon. Inside the Defense Department, the outreach was read as a signal that Anthropic might disapprove of the use of its model in an active combat operation — an interpretation that sources say alarmed defense officials.
An Anthropic spokesperson contested this characterization, stating the company had not discussed the use of Claude in specific operations with the Pentagon or with Palantir outside of “routine discussions on strictly technical matters,” according to NBC News. Regardless of the precise content of the exchange, the episode accelerated an already deteriorating relationship.
Escalation: Supply Chain Risk
Defense Secretary Pete Hegseth’s response, reported by The Washington Post and Axios, was to move toward labeling Anthropic a “supply chain risk” — a designation ordinarily reserved for foreign adversaries and one that would severely restrict the company’s ability to work with the U.S. government across all agencies, not merely the Defense Department.
Emil Michael told Anthropic it should “cross the Rubicon,” urging the company to make an irrevocable commitment to the military partnership, according to The Hill. He framed the company’s guardrails as undemocratic overreach, arguing that limitations on technology use should be established through Congress and regulatory agencies — not unilaterally by private companies.
“What we’re not going to do is let any one company dictate a new set of policies,” Michael said, according to reporting reviewed by CNBC.
As of February 22, The Washington Post reported that both sides still appear to want a deal — but on terms that remain irreconcilable.
The Stakes for Anthropic
For Anthropic, the conflict represents a direct collision between its commercial ambitions and its stated mission of responsible AI development. The company has built its brand around safety-first AI, and its Acceptable Use Policy explicitly prohibits deployment in applications that could enable autonomous weapons or mass surveillance of civilians.
The supply chain risk designation would be commercially devastating: it could effectively bar Anthropic from the entire federal procurement market — a fast-growing segment that has attracted hundreds of millions in contracts across the defense, intelligence, and civilian government sectors. The designation would also carry significant reputational consequences in international markets where governments and regulators scrutinize AI vendors’ relationships with the U.S. military.
The Broader Pattern
Anthropic is not the first AI company to navigate this terrain. CNBC noted that Google withdrew from the Pentagon’s Project Maven in 2018 following internal employee protests over the use of AI in drone targeting, only to re-engage with defense customers years later under revised terms. Michael, who serves as the Pentagon’s chief technology officer in his undersecretary role, explicitly referenced that precedent when urging Anthropic to reconsider.
The dispute also arrives as multiple AI companies hold simultaneous defense contracts worth hundreds of millions of dollars — Google, OpenAI, and xAI are all party to the same July 2025 contracting vehicle as Anthropic. How those companies navigate similar terms-of-use questions will shape the norms for commercial AI in national security contexts for years.
What Remains Unclear
The precise technical details of how Claude was used during the Venezuela operation have not been publicly confirmed by either Anthropic or the Defense Department. It is unclear whether the model’s use fell within the existing terms of Anthropic’s contract, or whether it extended into territory the company would categorize as a policy violation.
Nor is it clear what specific mechanism would be required for Hegseth to formally designate Anthropic a supply chain risk, or whether that threshold has been legally met. Sources cited by The Washington Post suggest negotiations continue, with both sides still seeking a framework that preserves some version of the commercial relationship.
The outcome will likely set a precedent — either for AI companies successfully defending their right to set usage limits for military customers, or for the Defense Department establishing that the demands of national security supersede corporate AI safety policies.