President Donald Trump ordered a major air strike on Iran using Anthropic’s AI tools just hours after banning the company from federal use, exposing an immediate conflict between policy and military operations.

Anthropic AI in Military Operations

According to sources familiar with the situation, U.S. Central Command (Centcom) has been using Anthropic’s Claude AI tool for intelligence assessments, target identification, and simulating battle scenarios in the Middle East. The tool has been integral to ongoing operations against Iran, despite the administration’s recent decision to ban its use.

Centcom did not confirm whether the AI tool was used in the recent strikes, but its involvement in high-profile missions, such as the U.S. military operation that captured Venezuelan President Nicolás Maduro, highlights its deep integration into military planning.

The use of Claude in these scenarios underscores how difficult phasing out the technology will be. The administration has said the transition will take six months, a process complicated by the tool’s widespread use among partners such as data-mining firm Palantir.

Tensions Between Pentagon and Anthropic

The administration and Anthropic have been locked in a months-long dispute over the Pentagon’s use of the company’s AI models. Trump ordered federal agencies to stop working with the company, and the Defense Department has designated it a security threat and a risk to its supply chain.

This decision followed Anthropic’s refusal to allow the Pentagon to use its tools in all lawful scenarios during contract negotiations. Additionally, Anthropic’s lobbying against the administration’s AI policies and its ties to organizations that are major Democratic donors have further strained relations with the Trump administration.

According to a Pentagon official, the conflict has led to deals with competitors such as OpenAI, the maker of ChatGPT, and Elon Musk’s xAI for use in classified settings. However, AI experts say replacing Claude with these models will take months, given the complexity of the transition.

Impact on Military Operations

The use of Anthropic’s AI tools in military operations has significant implications for the speed and effectiveness of U.S. military responses. The tools have reportedly been used to process vast amounts of data and deliver real-time insights that are critical in high-stakes scenarios.

“These AI tools have become a part of the military’s operational fabric,” said a defense analyst who requested anonymity. “Removing them overnight would disrupt operations and potentially delay critical decisions on the battlefield.”

The situation also raises questions about the reliability of alternative AI models and the potential for operational delays while the transition is underway. The administration has not provided a detailed timeline for the replacement of Claude with other models.

Anthropic has not commented on the administration’s ban, but the company has previously stated that it is committed to ensuring the responsible use of its technology. It has also emphasized its focus on ethical AI development and transparency in its operations.

The recent strikes in the Middle East, which reportedly involved Anthropic’s AI tools, have reignited debate over the role of AI in military operations. The incident illustrates how difficult it can be to align policy decisions with the practical needs of the military.

With the administration moving forward with its ban on Anthropic, the Pentagon faces the daunting task of replacing a tool that has become deeply embedded in its operations. This process could take months, during which the military may rely on less sophisticated or less integrated alternatives.

As the situation unfolds, the implications for both national security and the broader AI industry remain uncertain. The use of AI in military operations is likely to remain a contentious and rapidly evolving issue in the coming months.