Judge Blocks Pentagon Order Branding Anthropic a National Security Risk
Ian Duncan The Washington Post
The Anthropic logo is seen on a phone. (photo: Dado Ruvic/Reuters)
The artificial intelligence lab argued that the Trump administration was punishing it for speaking about the risks of its technology.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” District Court Judge Rita F. Lin wrote.
The immediate practical implications of the ruling are unclear, but it represents a clear victory for the AI lab, which has been locked in a bitter power struggle with the Defense Department over the military's use of its Claude system. Defense officials pushed the company to allow the technology to be used for any lawful purpose, but Anthropic wanted a bar on its use in mass domestic surveillance and in powering fully autonomous weapons.
After the dispute spilled into public view, the Pentagon terminated talks with the company and issued orders labeling it a “supply-chain risk.” The extraordinary move grouped a leading American AI firm alongside tech companies with links to hostile foreign governments. Defense Secretary Pete Hegseth said he was ordering all military contractors to stop using Claude — a declaration with far-reaching consequences for the tightly interwoven tech industry.
Anthropic said it was grateful for the judge’s ruling.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said in a statement.
The Defense Department did not immediately respond to a request for comment.
Anthropic argued in court that the government was overstepping its legal authority and punishing the company for exercising its rights to speak about its technology’s risks. The company said in legal filings that the administration’s actions had made customers wary, even those with no ties to the federal government, and could cost it billions in future revenue.
“I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said during a hearing this week.
Lin wrote that her order does not prevent the Pentagon from choosing to stop doing business with Anthropic, but she barred the Trump administration from taking broader steps against the company. Her ruling is not the final say because a separate case related to a different law is playing out in another federal court in Washington. A panel of judges handling that case has yet to issue a ruling.
Claude is deeply embedded in the military’s systems, and while the Trump administration said it would transition away from the technology, it has continued to use it in support of its bombing campaign in Iran.