As the U.S. military’s partnership with artificial intelligence giant Anthropic teeters on the edge of collapse, the Pentagon’s top technology official told CBS News the department has offered compromises in order to reach a deal with the company.
The Pentagon has given Anthropic until Friday at 5:01 p.m. to either let the military use the company’s AI model for “all lawful purposes” or risk losing a lucrative Pentagon contract. The AI startup has sought guardrails that explicitly bar its powerful Claude model from being used to conduct mass surveillance of Americans or carry out military operations on its own.
The Pentagon’s chief technology officer, Emil Michael, told CBS News on Thursday that the military has “made some very good concessions” — though Anthropic suggested the concessions were inadequate.
In reference to Anthropic’s concern about mass surveillance, Michael said the Defense Department would “put it in writing that we’re specifically acknowledging” federal laws that restrict the military from surveilling Americans. And as to its other concern, Michael said “we’re specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons.” He also said the military invited Anthropic to participate in its AI ethics board.
Asked why the military will not specifically put in writing that Anthropic’s model can’t be used for mass surveillance of Americans or to make final targeting decisions without human involvement, Michael said those uses of AI are already barred by the law and by Pentagon policies.
“At some level, you have to trust your military to do the right thing,” said Michael.
“But we do have to be prepared for the future. We do have to be prepared for what China is doing,” Michael said, referring to how U.S. adversaries use AI. “So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
An Anthropic spokesperson said Thursday that new contract language the company received overnight from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” the company said.
Anthropic CEO Dario Amodei said in a separate statement Thursday that the Pentagon’s threats to cut off its contracts “do not change our position: we cannot in good conscience accede to their request.” He added that “we hope they reconsider.”
If the military and Anthropic do not reach a deal by Friday’s deadline, the military plans to cut off its partnership with the company and designate it a supply chain risk, Pentagon spokesman Sean Parnell said earlier Thursday. Officials are also considering invoking the Defense Production Act to make Anthropic adhere to the military’s requests, sources told CBS News.
Michael did not confirm that the Defense Production Act could be used, but he said that “no company is going to take out any software that’s being used in this department until we have an alternative.” Michael added that he’s working on partnerships with alternative AI firms.
At risk for Anthropic is its status as the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir. Anthropic was awarded a $200 million contract with the Defense Department last summer to deploy its AI capabilities to advance national security.
The feud has highlighted a broader disagreement among policymakers and tech firms over how best to mitigate the potential risks posed by AI.
Amodei has long been vocal about the potential dangers of unconstrained AI, and has made a focus on safety and transparency a core part of his company’s identity. He’s also backed what he calls “sensible AI regulation.”
In the case of Anthropic’s Pentagon contract, Amodei said Thursday that “frontier AI systems are simply not reliable enough to power fully autonomous weapons,” and that autonomous weapons “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
He also said he’s concerned AI systems could pose a surveillance risk by piecing together “scattered, individually innocuous data into a comprehensive picture of any person’s life.”
The Trump administration, meanwhile, has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete, and has warned against what it calls “woke” AI models. In a speech last month, Defense Secretary Pete Hegseth pledged, “we will not employ AI models that won’t allow you to fight wars.”
Michael told CBS News that the disagreement is partially ideological, “and the way I describe that ideology is: they’re afraid of the power of AI.”
He said that the military is only interested in using AI lawfully, and is looking to “treat it like any other technology” — which means that if it isn’t used for lawful purposes, “that’s on us.”
“You can’t put the rules and the policies of the United States military and the government in the hands of one private company,” said Michael.