A federal judge in California has rejected the Pentagon’s attempt to block Anthropic from providing its AI tools to government agencies, according to the BBC. Judge Rita Lin ruled on Thursday that directives from former President Donald Trump and current Secretary of Defense Pete Hegseth, which called for an immediate halt to the use of Anthropic’s tools by government agencies, could not be enforced at this time.

Legal battle over AI and national security

Judge Lin’s decision came after Anthropic, an artificial intelligence company based in San Francisco, filed a lawsuit against the Pentagon and several other federal agencies. The company argued that the government’s actions violated its First Amendment rights and had already begun to harm its business.

In her ruling, Judge Lin said the government was attempting to “cripple Anthropic” and “chill public debate” over its technology. She noted that the government’s claims were based on concerns about Anthropic’s tools being used by the Department of Defense, but that the actions taken were not justified under national security laws.

“This appears to be classic First Amendment retaliation,” the judge wrote in her order. The ruling means that Anthropic’s tools, including its AI chatbot Claude, will continue to be used by government agencies and by any outside company working with the military until the lawsuit is resolved.

An Anthropic spokeswoman said the company was “pleased” with the ruling from the federal court in California, but added that its focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.

Pentagon’s argument and Anthropic’s response

The Pentagon has argued in the case that Anthropic’s refusal to accept new contract terms led it to fear what the company could do with its technology, which is widely used in government and military operations. That fear, according to the defense, created a genuine need for the supply chain risk label.

Judge Lin noted in her order that Trump and Hegseth, in their public statements, referred to Anthropic as “woke” and comprised of “left-wing nut jobs,” rather than pointing to any lack of security. “If this were merely a contracting impasse, DoW would presumably have just stopped using Claude,” Judge Lin wrote, referencing the Department of War, a secondary name for the Department of Defense. “The challenged actions, however, far exceed the scope of what could reasonably address such a national security interest.”

Anthropic had been negotiating with the Department of Defense for months prior to filing its lawsuit over new demands linked to a planned expansion of its $200m contract. The Pentagon wanted the contract to say only that it could use Anthropic’s tools for “any lawful use.” Anthropic and its CEO Dario Amodei were concerned that such language would open the door to its tools being used for mass surveillance of Americans and in fully autonomous weapons.

The dispute over the contract terms spilled into public view in February, when Hegseth issued a deadline for Anthropic to accept the new contract terms. The company declined to do so, leading to the legal battle.

Impact on AI and government operations

Anthropic’s tools are currently used in a variety of government and military operations, and the judge’s ruling means that the use of these tools will continue until the legal dispute is resolved. The case has drawn attention to the growing concerns around AI technology and its potential uses in national security and surveillance.

The ruling has also raised questions about the balance between national security interests and the protection of free speech. Judge Lin’s decision highlights the potential for government actions to be perceived as an attempt to suppress dissent or limit public debate on emerging technologies.

The case is expected to have broader implications for how AI companies interact with the government and how national security concerns are addressed in the context of technological innovation. The outcome could influence how other AI firms handle similar issues in the future.

Representatives of the White House and the Department of Defense did not respond to requests for comment on the ruling. Anthropic has said it will continue to work with the government to ensure that its AI tools are used in a way that benefits the public while maintaining appropriate safeguards.