
Anthropic Defies Pentagon Pressure: The Battle Over AI Safeguards Intensifies

The standoff between the Trump administration and the artificial intelligence firm Anthropic has reached a critical juncture. Military officials have issued an ultimatum, demanding that the company relax its ethical guidelines by Friday or face potential business repercussions.

In a clear and decisive move, Anthropic’s CEO, Dario Amodei, has rejected these demands. Just a day before the deadline, he stated that the company “cannot in good conscience accede” to the Pentagon’s insistence on unrestricted use of its technology.

Anthropic, the creator of the AI chatbot known as Claude, is in a position to forgo a defense contract if necessary. However, the ultimatum from Defense Secretary Pete Hegseth carries significant implications. It comes at a time when Anthropic is experiencing an unprecedented rise, evolving from a little-known research lab in San Francisco into one of the globe’s most valuable startups.

Should Amodei remain firm in his decision, military officials have threatened not only to cancel Anthropic’s contract but also to label the company as a “supply chain risk.” This designation, often reserved for foreign adversaries, could have severe consequences, potentially jeopardizing Anthropic’s vital partnerships with other businesses.

Conversely, if Amodei concedes to the Pentagon’s demands, he risks losing credibility within the rapidly expanding AI sector. This could deter top talent who are attracted to Anthropic for its commitment to developing AI that is both powerful and ethical, without compromising on safeguards that prevent potential catastrophic outcomes.

Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

That was after Sean Parnell, the Pentagon’s top spokesman, posted on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions” and added the company has “until 5:01 p.m. ET on Friday to decide” if it would meet the demands or face consequences.

Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

That message hasn’t resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand late Thursday in an open letter.

OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter says. “They’re trying to divide each company with fear that the other will give in.”

Also raising concerns about the Pentagon’s approach were Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives.

“Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a social media post.

Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.

“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

He said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

“They’re not trying to play cute here,” he wrote.

Parnell asserted Thursday that the Pentagon wants to “use Anthropic’s model for all lawful purposes” and said opening up use of the technology would prevent the company from “jeopardizing critical military operations,” though neither he nor other officials have detailed how they want to use the technology.

The military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” Parnell wrote.

When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude’s value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”

Copyright © 2026 by The Associated Press. All Rights Reserved.