The U.S. military used Anthropic's Claude AI in strikes against Iran on February 28, according to multiple reports, despite a dispute over ethical guardrails and a subsequent ban by the Trump administration. This marks the first reported use of a large language model (LLM) in a real-world military operation, signaling a new era in warfare where AI plays a central role in intelligence, targeting, and simulation.
AI Integration in the Pentagon
The U.S. Department of Defense has been acquiring AI models and tools for warfare since at least 2024. In July 2025, the Pentagon awarded Anthropic a contract worth up to $200 million for AI services, including the use of its Claude models. Public records show that only a small portion of the contract had been paid out by early 2026, indicating a gradual integration rather than a one-time purchase.
Anthropic began significant engagement with U.S. defense and intelligence agencies in late 2024. In November 2024, the company partnered with Palantir and Amazon Web Services to supply Claude to U.S. defense and intelligence systems, including classified environments. In June 2025, Anthropic introduced Claude Gov, a version tailored for government and national security workflows, which was in active use at U.S. intelligence and defense agencies by late 2025.
Role of Claude in the Iran Strikes
According to reports, Anthropic's Claude AI models were used for intelligence assessment, target identification, and simulation of battle scenarios for the U.S. military. U.S. Central Command (Centcom), which oversees operations in the Middle East, used the models to process and analyze vast amounts of data, including intercepts, satellite imagery, and signals intelligence, generating summaries, threat evaluations, and situational insights.
Anthropic's AI models did not independently control weapons systems during the Iran attack. Instead, they provided insights, summaries, and simulations to human operators. According to the reports, Claude made no lethal decisions without human oversight and did not act as the "mastermind" of the strikes.
Despite a ban issued by President Donald Trump in late February, Claude was reportedly so deeply integrated into intelligence assessment, war simulation, and target identification systems that the Iran strikes proceeded with its support anyway during the transition period.
Contract Dispute and Alternatives
The dispute between the Pentagon and Anthropic centered on usage rights. The Pentagon demanded broader rights to use Claude for all lawful purposes, including potential battlefield targeting and weapons support. Anthropic refused to remove its ethical constraints, which are part of its Constitutional AI framework: under them, Claude is designed to reject use in fully autonomous lethal weapons and to decline authorizing strikes without human oversight.
OpenAI’s ChatGPT and underlying GPT series models, including frontier LLMs such as o1 or successors, have reportedly been used by the Pentagon. Following the Anthropic dispute, OpenAI reached a new agreement for classified network deployment of its models and tools, with safety guardrails in place.
Alphabet, Google’s parent company, has provided Gemini for Government products to the Pentagon, available for unclassified use, including through the GenAI.mil platform that began rolling out in late 2025. The company has been in negotiations for expansion into classified systems.
Elon Musk’s xAI, the parent of Grok, has provided a Grok for Government suite for military use, initially for unclassified tasks. The company signed an agreement in February for use in classified systems as well, positioning it as a potential rapid replacement amid the Anthropic fallout.
With Anthropic declining the expanded terms, OpenAI and xAI stepped in rapidly to meet the classified AI needs of the U.S. military. This points to a growing reliance on AI for military operations, particularly in intelligence assessment, target identification, and operational simulations.
One thing is now very clear: by early 2026, AI had become deeply integrated into U.S. military planning and execution. The Iran strikes made that evident, and it will happen again. We are already in the AI era of war.