A dispute stemming from questions about the use of AI firm Anthropic’s model during the U.S. operation targeting Venezuelan leader Nicolás Maduro has triggered a Pentagon review of the company’s partnership, with senior officials raising concerns that Anthropic could represent a “supply chain risk.”
Axios first reported on the growing tensions between the Pentagon and Anthropic, a tech company known for emphasizing safeguards on AI.
Anthropic won a $200 million contract with the Pentagon in July 2025.
Its AI model, Claude, was the first model brought into classified networks.
Now, “The Department of War’s relationship with Anthropic is being reviewed,” chief Pentagon spokesman Sean Parnell told Fox News Digital.
“Our nation requires that our partners be willing to help our warfighters in any fight.”

The Pentagon is reviewing Anthropic, led by Dario Amodei, above, as a “supply chain risk.” (Priyanshu Singh/Reuters)
According to a senior administration official, tensions escalated when Anthropic asked whether Claude was used for the raid to capture Maduro, “which caused real concerns across the Department of War indicating that they might not approve if it was.”
“Given Anthropic’s behavior, many senior officials in the DoW are starting to view them as a supply chain risk,” said a senior War Department official. “We may require that all our vendors and contractors certify that they don’t use any Anthropic models.”
The officials did not elaborate on when Anthropic made the inquiry or to whom. Axios reported, citing a senior administration official, that Anthropic raised the question with an executive at Palantir, its partner in Pentagon contracting.
Palantir could not immediately be reached for comment.
Anthropic disputed that characterization. A spokesperson said the company “has not discussed the use of Claude for specific operations with the Department of War” and has not discussed such matters with industry partners “outside of routine discussions on strictly technical matters.”
The spokesperson added that Anthropic’s conversations with the Pentagon “have focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance — none of which relate to current operations.”

The U.S. military reportedly used Anthropic’s AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt “CyberGuy” Knutsson)

“We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right,” the spokesperson said.
Pentagon officials, however, denied that restrictions related to mass surveillance or fully autonomous weapons are at the center of the current dispute.
The Pentagon has been pushing leading AI firms to authorize their tools for “all lawful purposes,” seeking to ensure commercial models can be deployed in sensitive operational environments without company-imposed restrictions.
A senior War Department official said other leading AI firms are “working collaboratively with the Pentagon in good faith” to ensure their models can be used for all lawful purposes.
“OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok have all agreed to this in the military’s unclassified systems, with one agreeing across all systems already, and we are optimistic the rest of the companies will get there on classified settings in the near future,” the official said.
How this conflict resolves could shape future defense AI contracting. If the Department insists on unrestricted access for lawful military uses, companies may face pressure to narrow or reconsider internal safeguards when working with national security customers.
Conversely, resistance from companies with safety-focused policies highlights growing friction at the intersection of national security and corporate AI governance — a tension increasingly visible as frontier AI systems are integrated into defense operations.
Neither Anthropic nor the Pentagon confirmed whether Claude was used in the Maduro operation. Advanced AI systems like Claude, however, are designed to do something human analysts struggle with under time pressure: digest enormous volumes of information in seconds.
In a high-risk overseas operation, that could mean rapidly sorting intercepted communications, summarizing intelligence reports, flagging inconsistencies in satellite imagery, or cross-referencing travel records and financial data to confirm a target’s location. Instead of combing through hundreds of pages of raw intelligence, planners could ask the system to surface the most relevant details and identify potential blind spots.
AI models can also help war planners run through scenarios — what happens if a convoy reroutes, if weather shifts, or if a target moves unexpectedly. By quickly synthesizing logistics data, terrain information and known adversary patterns, the system can present commanders with options and risks in near real time.
The debate over fully autonomous weapons — systems capable of selecting and engaging targets without a human decision-maker in the loop — has become one of the most contentious issues in military AI development. Supporters argue such systems could react faster than humans in high-speed combat. Critics warn that removing human judgment from lethal decisions raises profound legal and accountability concerns if a machine makes a fatal mistake.