Anthropic, a prominent artificial intelligence company based in San Francisco, has filed two lawsuits in federal court challenging the Pentagon's designation of the firm as a "supply chain risk" over its refusal to allow unrestricted military use of its AI chatbot, Claude. The lawsuits, filed Monday in California and in the federal appeals court in Washington, D.C., argue that the Pentagon's actions are unlawful and an overreach of executive power.

Legal Challenge to Pentagon’s Designation

The Pentagon formally designated Anthropic as a supply chain risk last week, following a highly publicized dispute over the potential military applications of its AI technology. This designation effectively bars the company from participating in defense contracts and could have severe consequences for its business operations.

Anthropic’s lawsuit argues that the government’s actions are unprecedented and violate constitutional protections for free speech. The company claims that no federal statute authorizes the Pentagon’s actions and that the designation is an unlawful retaliation against its stance on AI ethics.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit states. “No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

AI Ethics and Military Use

Anthropic has been vocal about its stance on the ethical use of AI, specifically opposing the use of its technology for mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other officials publicly insisted that the company must accept “all lawful uses” of Claude and threatened penalties if Anthropic did not comply.

The designation of Anthropic as a supply chain risk is a significant departure from previous practices, as it is the first known instance of the federal government using this authority against a U.S. company. This designation was originally designed to prevent foreign adversaries from compromising national security systems, but its application to a domestic firm has raised questions about its scope and intent.

President Donald Trump also announced that he would order federal agencies to stop using Anthropic’s technology, though he granted the Pentagon six months to phase out products that are deeply embedded in classified military systems, including those used in the Iran war.

Impact on Business and Revenue

Anthropic’s lawsuits also name other federal agencies, including the Treasury and State Departments, after officials ordered employees to stop using the company’s services. The firm is asking the courts to confirm that the Trump administration’s penalty is limited to military contractors using Claude in defense-related work.

This distinction is crucial for Anthropic, as most of its projected $14 billion in revenue this year comes from businesses and government agencies using Claude for tasks such as computer coding. According to a recent investment announcement, the company is valued at $380 billion, with more than 500 customers paying at least $1 million annually for its AI services.

Anthropic stated in a Monday release that “seeking judicial review does not change our longstanding commitment to using AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”

The Pentagon did not comment on the lawsuits, citing a policy of not discussing ongoing litigation. Legal experts suggest that the outcome of these cases could set important precedents for how the government regulates AI technologies and the balance between national security and corporate rights.

As the legal battle unfolds, the implications for Anthropic and the broader AI industry remain uncertain. The lawsuits could influence future policies on the use of AI in national defense and the regulatory landscape for emerging technologies.