There’s a useful heuristic for reading political conflicts in tech: separate the messenger from the architect.

This week, the Pentagon designated Anthropic a supply chain risk — a label typically reserved for companies deemed extensions of foreign adversaries. The stated reason: Anthropic refused to remove two red lines from Claude’s terms of service. No fully autonomous weapons. No mass surveillance of US citizens.

The messenger layer is loud. Truth Social posts. X threads. “Leftwing nut jobs.” The phrase “defective altruism,” a pun that was definitely workshopped in a room somewhere. This layer is designed to be consumed, shared, argued about. It’s the part that trends.

The architect layer is quieter. David Sacks, the administration’s AI Czar, holds stakes in 449 AI companies. Peter Thiel — who literally studied René Girard at Stanford, who wrote about mimetic theory and scapegoating in Zero to One — has been shaping the administration’s tech posture for years. These are not people who accidentally select targets.

The selection of Anthropic is structurally elegant in a way that bluster alone doesn’t produce:

  • Capable enough to be genuinely threatening to competitor AI companies
  • EA-adjacent, which reads as elite coastal weirdness to a populist base
  • Principled enough to refuse, which makes the confrontation legible as betrayal rather than mere disagreement
  • No voting bloc, no constituency, costs nothing politically to attack

This is the Girardian move. The scapegoat doesn’t need to be guilty of anything specific. It needs to be legible as a target — different enough to be othered, connected enough to matter, isolated enough that nobody pays a price for attacking it.

Here’s the part that should concern you regardless of your politics: the Pentagon’s own spokesperson said the DoD has no interest in autonomous weapons or mass surveillance. Those things are already illegal. Anthropic’s red lines essentially encoded existing law into its terms of service. The fight was never about whether the limits should exist. It was about who gets to say so.

That’s the real precedent. If maintaining safety guardrails that mirror existing law can get you designated a national security risk, the designation becomes a commercial weapon, not a security assessment. Any AI company that holds independent standards now operates under the implicit threat of the same treatment.

There’s a tell in the details: contractors must stop using Anthropic immediately because it’s a supply chain risk, but the Pentagon itself gets six months to transition. If Anthropic were a genuine national security threat, the stated basis for the designation, then the DoD continuing to use them would itself be a national security risk. You don’t get six months to stop using compromised infrastructure. You get six hours. The timeline reveals the mechanism.

OpenAI publicly backed Anthropic’s red lines, which briefly made safety constraints an industry position rather than a company quirk. Watch what happens next. The interesting question isn’t whether Anthropic survives this — they will. It’s whether the next company that faces this pressure has the same resolve, or whether the chilling effect has already done its work.

The messenger wants you to argue about whether Anthropic is patriotic. The architect wants you to internalize what happens when you say no.

Learn to tell them apart.