Microsoft and a group of retired military leaders have joined forces in a legal effort to block the Trump administration’s designation of Anthropic as a supply chain risk. The move comes after Defense Secretary Pete Hegseth labeled the artificial intelligence company’s products as a threat to national security, effectively barring it from military contracts.
Legal Challenge to Pentagon’s Decision
In a recent legal filing, Microsoft challenged the decision by Defense Secretary Pete Hegseth, who last week moved to exclude Anthropic from military work by citing its AI products as a threat to national security. The software giant argues that the designation is a misuse of government authority and could have severe economic consequences that are not in the public interest.
The filing, submitted in a federal court in San Francisco, where Anthropic is based, states that the Pentagon’s action “forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company.” Microsoft is seeking a temporary lifting of the designation to allow for more “reasoned discussion” between Anthropic and the Trump administration.
The legal battle is not being pursued by Microsoft alone. A group of 22 former high-ranking U.S. military officials, including former secretaries of the Air Force, Army and Navy, and a former head of the Coast Guard, have also filed a court document. They argue that Hegseth’s actions amount to retribution against a private company that has displeased the administration and warn that the move threatens the rule-of-law principles that have long strengthened the military.
Ethical Concerns and Contract Disputes
The dispute with the Pentagon began after Anthropic refused to allow unrestricted military use of its AI model, Claude. The refusal triggered a public clash with the Trump administration, and President Donald Trump subsequently directed all federal agencies to discontinue use of the model.
Microsoft’s filing supports the two ethical red lines Anthropic drew during the contract negotiations: the company stated its belief that American AI should not be used to conduct domestic mass surveillance or to initiate war without human control. This position, according to Microsoft, is consistent with the law and, as the government itself acknowledges, broadly supported by American society.
Microsoft’s legal brief also highlights the potential economic impact of the Pentagon’s decision. As a major government contractor, the company warns that the supply chain risk designation could lead to significant financial repercussions and disrupt the broader AI industry.
Other tech companies and organizations have also voiced support for Anthropic. A group of AI developers from Google and OpenAI, as well as organizations such as the Cato Institute and the Electronic Frontier Foundation, have filed their own legal documents in support of the company.
Military Leadership and National Security Concerns
The group of retired military leaders includes prominent figures such as former CIA director Michael Hayden, a retired Air Force general, and retired Coast Guard Adm. Thad Allen, who led the government response to Hurricane Katrina. Their filing emphasizes that the Pentagon’s actions are not in the interest of national security but instead undermine the rule of law.
U.S. District Judge Rita Lin, who was nominated to the bench by President Joe Biden in 2022, is presiding over the case in federal court in San Francisco. Anthropic has also filed a separate, narrower case in the federal appeals court in Washington, D.C. Lin has scheduled a hearing for March 24.
While neither legal filing directly mentions the war in Iran, which began shortly after Trump and Hegseth announced their punishment of Anthropic, retired military officials have warned that abruptly targeting a technology widely embedded in military platforms creates uncertainty that could disrupt planning and put soldiers at risk during ongoing operations.
The current commander of U.S. Central Command, Adm. Brad Cooper, confirmed in a recent social media post that the military is using “advanced AI tools” to “sift through vast amounts of data in seconds.” He emphasized, however, that “humans will always make final decisions on what to shoot and what not to shoot and when to shoot.”
Anthropic’s Claude was, until recently, the only model among its peers approved for use on classified military networks. Amid the dispute with the Pentagon, however, military officials have indicated they are looking to shift that work to competitors such as Google, OpenAI and Elon Musk’s xAI.