A federal judge just put the brakes on Washington's use of national-security power as a blunt instrument to force an AI company to do what it refuses to do: build tools that could enable mass domestic surveillance and fully autonomous killing.
Quick Take
- Anthropic sued the Trump administration after the government ordered agencies to stop using its Claude AI and labeled the company a “supply-chain risk.”
- The fight centers on Anthropic’s refusal to remove two guardrails: no mass domestic surveillance and no fully autonomous lethal weapons without human oversight.
- Defense Secretary Pete Hegseth delivered an ultimatum in late February; days later, the government moved to cut Anthropic off from federal contracting pathways.
- Anthropic argues the move is unconstitutional retaliation; a group of 37 AI researchers from OpenAI and Google backed that concern in an amicus brief.
What the judge’s freeze signals in the Anthropic showdown
March 2026 brought a rare public collision between federal procurement power and constitutional limits, after Anthropic filed lawsuits challenging the administration’s response to the company’s safety terms for its Claude models. The dispute escalated when the government directed agencies to cease using Claude with a six-month phase-out and simultaneously branded Anthropic a “supply-chain risk,” a label that can wall off a firm from defense work and contractors tied to it.
A judge has now temporarily blocked key parts of that action, with reporting describing the court’s view as “classic First Amendment retaliation.” The order does not resolve the underlying national-security debate over military AI. It does, however, spotlight a constitutional issue conservatives have long raised: whether federal agencies can punish disfavored viewpoints or policy positions by weaponizing access to government markets, clearances, and contracting.
The guardrails at the center: surveillance at home and weapons abroad
Fall 2025 negotiations for Anthropic to deploy on the Pentagon’s “GenAI.mil” platform sharpened the conflict. Officials wanted Claude available for “all lawful uses,” but Anthropic insisted on two carve-outs: no mass domestic surveillance, and no fully autonomous lethal weapons without meaningful human oversight. Those limitations became the flashpoint once the administration pushed an “AI-first” warfighting strategy and demanded fewer restrictions for military use.
Timeline: ultimatum, ban order, and the “supply-chain risk” label
Defense Secretary Pete Hegseth issued an ultimatum on February 24, 2026, demanding the guardrails be removed within days. On February 27, President Trump directed federal agencies to stop using Claude and begin a six-month phase-out, while the administration designated Anthropic as a supply-chain risk. On March 4, the Department of War confirmed the designation by letter, and on March 9 Anthropic filed its dual lawsuits.
Separate reporting indicates the administration retained operational interest in Anthropic's tools even as it moved to cut the company off; one revelation was that Anthropic tools had been used in planning related to Iran strike scenarios. That tension helps explain why this story is landing with constitutional conservatives who distrust both "woke" tech culture and unchecked federal power: the government appears to want maximum capability, minimal constraints, and immediate compliance.
How procurement power can become a constitutional pressure point
Anthropic’s core legal theory is that the Constitution bars the government from using its “enormous power” to punish a company for protected speech—here, the company’s stated safety policy and refusal to change it under threat. The company says the retaliation took the form of cutting off federal use, removing it from contracting pathways, and applying a national-security label that chills other firms from working with it.
The General Services Administration publicly aligned with the directive and took steps affecting Anthropic’s access to government procurement, including removing the company from USAi.gov and its Multiple Award Schedule. For readers concerned about government overreach, this is the practical issue: when the executive branch can effectively blacklist a vendor based on a policy disagreement, the pressure is not just on one company—it sends a message to every contractor that “speech” and “compliance” may be functionally inseparable.
Industry reaction and the OpenAI contrast
On March 9, 37 researchers and engineers from OpenAI and Google filed an amicus brief in their personal capacities supporting Anthropic’s argument, warning the government’s actions could “chill” open deliberation about AI risks and benefits. That kind of cross-company support is unusual in a hyper-competitive sector, and it underscores that this is not merely a business dispute—it is a precedent fight over whether safety limits can be maintained without federal punishment.
At the same time, OpenAI struck a classified deployment agreement with the administration shortly after the Anthropic order, claiming it included “more guardrails” than any previous classified AI agreement, including Anthropic’s. The public record summarized in available reporting does not fully detail how those guardrails compare in practice. What is clear is that the administration had alternatives ready, and Anthropic’s exclusion reshaped the competitive landscape overnight.
What to watch next for conservatives focused on liberty and accountability
The case remains early, and a preliminary injunction is a pause button—not a final verdict. Still, the judge’s move highlights a tension many Trump voters are openly debating in 2026: Americans want national strength, but they also want a government that stays inside constitutional lanes. When the same machinery used to enforce security can be turned into leverage for compliance, the risk is bigger than AI.
Next steps likely include expedited briefing on whether the supply-chain label rests on demonstrated security failures or on a policy dispute, and whether agency actions amount to viewpoint-based retaliation. If courts demand real evidence and tighter standards for blacklisting, it could protect contractors—and citizens—from precedent that expands executive power beyond what the Constitution tolerates, regardless of who occupies the White House.