The growing crisis between Anthropic, one of the leading artificial intelligence companies in the US, and the Pentagon has deepened with a new ruling: the federal appeals court in Washington, D.C., rejected Anthropic's request for a preliminary injunction against its blacklisting by the US Department of Defense.
Court Emphasizes "Public Interest"
The court's decision stated plainly that the balance of interests weighed heavily in the government's favor. The judges emphasized that the harm facing the company was largely financial, whereas judicially restricting the Pentagon's access to critical AI technologies amid an active military conflict could carry far greater consequences.
The decision acknowledged that Anthropic might suffer "irreparable harm" without an injunction, but reiterated that those harms were chiefly economic. On the company's free speech claims, the court found that insufficient evidence had been presented to show the right was meaningfully obstructed during the ongoing litigation.
The appeals court's decision conflicts with a ruling issued by a federal court in San Francisco at the end of March, which had temporarily blocked the Trump administration from enforcing a usage ban on Anthropic's Claude model.
Taken together, the two rulings mean Anthropic has not been completely sidelined. The company can continue working with other federal agencies but is excluded from direct contracts with the Department of Defense. Defense contractors, moreover, will not be able to use the Claude model in projects they conduct with the Pentagon.
Background
In early March, the Pentagon classified Anthropic as a "supply chain risk." This classification was based on the claim that the company's technology could threaten US national security. Anthropic, however, strongly objected to this decision. The company argued that the classification was unconstitutional, arbitrary, and a retaliatory measure inconsistent with legal procedures.
The tension between the parties ultimately rests on a technical and ethical disagreement. The Pentagon demanded unrestricted access to Anthropic's AI models for any lawful purpose. In response, the company sought assurances that its technology would not be used in fully autonomous weapon systems or in large-scale domestic surveillance.
When the disagreement could not be resolved, the dispute moved to the courts.
Anthropic's technology had been among the first AI solutions deployed on the Pentagon's classified networks and had stood out for its integration with defense contractors such as Palantir.