Anthropic CEO Dario Amodei has refused to comply with a Pentagon ultimatum requiring the artificial intelligence company to relax its ethical constraints on the use of its technology, risking a designation as a supply chain risk and the loss of its defense contract. The deadline for a decision is 5:01 p.m. ET on Friday, and Amodei has declared that his company ‘cannot in good conscience accede’ to the demand.

Supply Chain Risk and Ethical Safeguards

The Pentagon’s demand comes as Anthropic, known for its advanced chatbot Claude, has grown rapidly from a small research lab in San Francisco into one of the world’s most valuable AI startups. The company has sought assurances that its technology will not be used for mass surveillance or in fully autonomous weapons, but the Pentagon’s proposed contract language, according to Anthropic, was ‘framed as compromise’ yet ‘paired with legalese that would allow those safeguards to be disregarded at will.’

If Amodei does not comply, military officials have warned they will not only terminate Anthropic’s contract but also ‘deem them a supply chain risk,’ a label typically reserved for foreign adversaries. This designation could disrupt the company’s partnerships with other businesses and jeopardize its expansion plans.

Industry Reactions and Open Letter

The standoff has drawn support from top AI researchers and tech workers at competing firms such as OpenAI and Google. In an open letter, employees from these companies voiced solidarity with Amodei’s stance, stating that the Pentagon is attempting to ‘divide each company with fear that the other will give in.’

OpenAI CEO Sam Altman, who worked with Amodei at OpenAI before Amodei left to co-found Anthropic in 2021, expressed concern over the Pentagon’s ‘threatening’ approach in a CNBC interview. He said, ‘I mostly trust them as a company, and I think they really do care about safety.’ Altman also noted that OpenAI and most of the AI field share similar red lines regarding the use of their technologies.

Elon Musk, another major player in the AI space, took a different stance, supporting the Trump administration and criticizing Anthropic for allegedly opposing ‘Western Civilization.’ Musk’s comments were in response to a previous version of Anthropic’s guiding principles that emphasized ‘consideration of non-Western perspectives.’

Historical Parallels and Legal Concerns

Retired Air Force General Jack Shanahan, a former leader of the Defense Department’s AI initiatives, has voiced concerns about the Pentagon’s approach. He recalled the opposition from Google employees during the Project Maven initiative, which used AI to analyze drone footage for military operations. Shanahan, who led that project, stated that he is ‘sympathetic to Anthropic’s position’ and believes that the AI models used in chatbots like Claude are ‘not ready for prime time in national security settings.’

Sean Parnell, the Pentagon’s top spokesman, stated that the military ‘wants to use Anthropic’s model for all lawful purposes’ and that broadening access would prevent the company from ‘jeopardizing critical military operations.’ However, officials have not provided specific details on how they intend to use the technology.

During a meeting between Defense Secretary Pete Hegseth and Amodei, military officials warned that they could invoke the Defense Production Act, a Cold War-era law that would grant the military sweeping authority to use Anthropic’s products even if the company does not approve. Amodei called this a contradiction, noting that one threat labels Anthropic a security risk while the other declares Claude essential to national security.

Amodei said he hopes the Pentagon will reconsider, given the value of Claude to the military. If not, he said, Anthropic ‘will work to enable a smooth transition to another provider.’

The debate over the ethical use of AI in military applications has intensified as major tech companies weigh innovation against responsibility. With Anthropic’s position gaining support from key industry players, the standoff remains highly charged and could shape future policies on AI governance and national security.