President Donald Trump has directed all U.S. federal agencies to immediately stop using Anthropic’s AI model, Claude, and has labeled the company a ‘supply chain risk,’ a designation typically reserved for firms from adversarial nations. The move comes after Anthropic refused the Pentagon’s request to remove safeguards on the military’s use of the AI model, citing ethical concerns about its potential use in mass surveillance and autonomous weapons.

Impact on Pentagon Contracts and Military Operations

The Pentagon is set to terminate its contract with Anthropic, which was valued at up to $200 million, and will require other government contractors to certify they do not use Claude in their workflows. Anthropic will also be barred from all other government work, with a six-month wind-down period to allow for the transition to alternative AI models.

The decision is particularly notable because Claude is the only AI model currently used in the military’s classified systems. It was reportedly used in the operation to capture Venezuelan President Nicolás Maduro and could be employed in future military operations, such as those in Iran.

Defense officials have acknowledged the difficulty of replacing Claude, with one describing the process of disentangling from the model as a ‘huge pain in the ass.’ The move also complicates operations for AI software firm Palantir, which relies on Claude for its most sensitive work with the military and will likely need to secure an alternative.

Political and Ethical Tensions

Trump’s directive, published on Truth Social, accused Anthropic of attempting to ‘strong-arm’ the Department of Defense and align its policies with ‘woke’ values instead of the U.S. Constitution. He called the company’s stance a ‘disastrous mistake’ and vowed the U.S. would ‘never allow a radical left, woke company to dictate how our great military fights and wins wars.’

Anthropic CEO Dario Amodei had previously rejected the Pentagon’s ‘best and final offer,’ stating the company could not in good conscience comply with the request. In response, senior Pentagon official Emil Michael accused Amodei of having a ‘God complex’ and putting the nation’s safety at risk.

Amodei stated that if the Pentagon chooses to offboard Anthropic, the company will work to ensure a smooth transition to another provider, minimizing disruption to military operations. The financial stakes are high, however: Anthropic stands to lose hundreds of millions of dollars in potential government contracts and risks losing clients who may avoid its AI because of the blacklisting.

Broader Implications for AI Regulation

The Pentagon argues that the use of AI in military applications raises complex ethical and legal questions, and that it is impractical to litigate each case with private companies. It has demanded that all AI firms make their models available for ‘all lawful purposes,’ a stance that has drawn criticism from Anthropic and others in the industry.

Defense Secretary Pete Hegseth has repeatedly criticized ‘woke AI,’ and the Trump administration has grown increasingly hostile toward Anthropic, despite the military’s reliance on its technology. A defense official told Axios that the U.S. is still engaging with Anthropic out of necessity, acknowledging the company’s capabilities.

Elon Musk’s xAI recently signed an agreement allowing the military to use its AI model, Grok, in classified systems, though sources suggest Grok will not serve as a direct replacement for Claude. Meanwhile, Google’s Gemini and OpenAI’s ChatGPT are available in unclassified systems, and the Pentagon is accelerating talks to integrate them into classified environments.

In response to the situation, hundreds of employees from Google and OpenAI have signed a petition urging their companies to align with Anthropic’s stance on ethical AI use. OpenAI CEO Sam Altman has confirmed that the company will maintain similar red lines regarding surveillance and autonomous weapons but remains open to negotiating with the Pentagon.

Anthropic has not yet indicated whether it will challenge the government’s designation in court. The company, which has experienced rapid growth and is gaining traction in key enterprise applications, now faces a potential reckoning over its ethical position and its financial future.