WASHINGTON — Tensions between the Pentagon and Anthropic boiled over this past week as the Defense Department weighed canceling a contract with the AI firm.

Defense officials had grown frustrated with Anthropic’s hesitation to agree to expanded terms for its Claude chatbot. The Pentagon sought access for all lawful uses, according to a report late last month. Negotiations dragged on.

The breaking point came after news emerged that Palantir, Anthropic’s partner, deployed Claude during an operation to apprehend Maduro in Venezuela. Anthropic inquired with Palantir about the specifics of that deployment. Pentagon officials reacted swiftly.

Incensed by the questions, the department threatened to slap Anthropic with a supply chain risk label. Such designations typically target foreign entities like Huawei and require the military to sever commercial relationships with them. Among large language models, Claude is uniquely cleared to handle classified material.

The label would sever those ties just months ahead of Anthropic’s anticipated IPO. The company, valued at $18.4 billion in its last funding round, relies on government contracts amid rapid growth.

Anthropic CEO Dario Amodei has insisted the firm has no political motivations. In a statement, the company described its talks with the Pentagon as productive and framed the Palantir inquiry as routine due diligence in a standard discussion.

Details of the Claude usage in the Maduro operation remain unclear. Palantir, a major defense contractor, integrated the chatbot into its platforms. The raid targeted Maduro amid political upheaval in Venezuela last year.

Pentagon spokespeople declined immediate comment on the threat. A department source confirmed the escalation to The Wall Street Journal. Anthropic’s contract, valued in the low millions, covers Claude’s deployment for national security tasks.

The clash spotlights deeper frictions over AI governance. The Pentagon pushes for broad access to advanced tools. Startups like Anthropic prioritize safety guardrails and transparency. Amodei has publicly advocated limits on military AI applications.

Even a resolution may not settle the broader debate: who controls powerful AI, its developers, its end users or regulators? The question persists as the U.S. military races to integrate AI amid competition with China.

Anthropic launched Claude in 2023 as a rival to OpenAI’s ChatGPT. Backed by billions in investment from Amazon and Google, it emphasizes its Constitutional AI approach to aligning models with human values. The firm employs over 500 people in San Francisco.

Defense contracts form a sliver of Anthropic’s revenue, dwarfed by commercial cloud deals. Still, losing clearance could signal risks to investors ahead of the IPO, tentatively targeted for mid-2025.

Industry watchers see the episode as a test case. Similar disputes have arisen with other AI providers. OpenAI faced scrutiny over military ties before pivoting to consumer focus. xAI, Elon Musk’s venture, openly courts defense dollars.

For now, both sides signal willingness to negotiate. Anthropic has reiterated its commitment to U.S. national security, and the Pentagon needs AI expertise as classified workloads surge.

The outcome could reshape how AI firms engage with government: developers may tighten their terms, and militaries worldwide would adapt accordingly.