Appeals Court Sides With Pentagon, Upholds Blacklisting of AI Firm Anthropic
By Jack Queen
NEW YORK, April 8 (Reuters) — In a pivotal development for the intersection of artificial intelligence and national security, a Washington, D.C. federal appeals court has declined to halt the Pentagon's blacklisting of AI company Anthropic. The decision delivers a temporary win for the Defense Department, which designated the maker of the Claude AI assistant as a supply-chain risk—a move that bars the company from defense contracts and could precipitate a government-wide ban.
The ruling, issued on Wednesday by a panel of the U.S. Court of Appeals for the District of Columbia Circuit, is not a final judgment on the merits but allows the blacklisting to stand while the legal challenge proceeds. The outcome directly contrasts with a ruling last month by a California federal judge, who blocked a related designation after finding the Pentagon may have unlawfully retaliated against Anthropic for its stance on AI safety.
The conflict stems from Anthropic's refusal, citing ethical and safety concerns, to permit the U.S. military to use its Claude chatbot for surveillance activities or in autonomous weapons systems. Defense Secretary Pete Hegseth subsequently invoked two distinct procurement statutes to label the company a national security risk. Anthropic alleges the designation—the first of its kind publicly issued under these obscure laws—exceeded the department's authority, was imposed without due process, and has already cost the company billions of dollars in potential revenue while damaging its reputation.
In court filings, Anthropic argues the government violated its First Amendment rights by punishing it for its AI safety views and its Fifth Amendment right to due process by denying it any chance to contest the decision. The Justice Department counters that the action reflects a contractual dispute, not retaliation, and that Anthropic's restrictions create operational uncertainty and risk for military systems.
The D.C. case involves a statute with broader implications, potentially triggering an interagency review that could expand the blacklist across the civilian federal government. The parallel case in California concerns a narrower law affecting only Pentagon contracts related to military information systems.
Analysis & Impact: The split between the courts underscores the growing tension between rapid AI innovation and established national security protocols. The outcome could set a precedent for how the U.S. government engages with—or excludes—private tech firms that impose ethical limits on the use of their technology. A final ruling against the Pentagon could weaken its leverage in future contract negotiations with AI developers, while a victory would solidify its authority to mandate compliance for companies seeking lucrative government work.
Reactions & Commentary
"This isn't just about one contract; it's a chilling signal to the entire tech industry," said Dr. Lena Chen, a technology ethics professor at Georgetown University. "If companies can be penalized for implementing their own safety guardrails, it forces a dangerous choice between principle and participation."
Marcus Thorne, a former Pentagon procurement officer, offered a different view: "The Department of Defense has a non-negotiable mandate to secure its supply chain. Anthropic's refusal created a tangible risk. The court correctly recognized that national security cannot be held hostage to a private company's unilateral ethical policy."
The sharpest criticism came from Riley Carson, an activist with the Tech Accountability Project: "This is a blatant, bullying retaliation. The administration is punishing Anthropic for having a conscience. It's an attempt to strong-arm the AI sector into building weaponized systems without question, and it sets a terrifying precedent for silencing ethical dissent."
General (Ret.) David P. Miller added context: "The legal discrepancy between the D.C. and California courts highlights the lack of clear regulatory frameworks for AI in defense. Until Congress acts, we'll see more of these chaotic, case-by-case battles that leave both industry and government in limbo."
(Reporting by Jack Queen in New York; Editing by Noeleen Walder and Cynthia Osterman)