The 5:01 Ultimatum

As the deadline passes, Dario Amodei risks a "commercial death sentence" rather than hand the Pentagon an unrestricted AI.

[Speaker 1]: It is shortly after 5:01 PM Eastern Time. The deadline set by Secretary of Defense Pete Hegseth has officially expired. And for the first time in the modern era of Silicon Valley, a major tech company has just told the United States military "no" on a matter of national security.

[Speaker 2]: We're looking at a definitive split in the industry tonight. Anthropic has formally rejected a Pentagon ultimatum to remove ethical safety guardrails from its AI models. They sent a letter just minutes ago stating, "We cannot in good conscience accede."

[Speaker 1]: This isn't just a contract dispute over two hundred million dollars. This is a sovereignty dispute. It's about who decides when an AI is allowed to kill or surveil: the government, or the private company that wrote the code.

[Speaker 2]: And the weapon the Pentagon is threatening to use in response has a very boring, very bureaucratic name: "Supply Chain Risk."

[Speaker 1]: It sounds like a shipping delay.

[Speaker 2]: It sounds harmless. But as we're going to explain, that specific designation is essentially a commercial death sentence that could force companies like Amazon to sever ties with Anthropic within days.

[Speaker 1]: It's Friday, February 27, 2026, and you're listening to The Angle.

[Speaker 1]: So, to understand why this deadline today matters so much, we have to look at the room where it happened. On Tuesday, February 24th, Secretary Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon.

[Speaker 2]: Right. And the demand was incredibly specific. The DoD (which, remember, is now officially operating under the secondary title "Department of War" as of last September) told Amodei he had seventy-two hours to agree to a new clause in their vendor contract.

[Speaker 1]: The "All Lawful Use" clause.

[Speaker 2]: Exactly. "All Lawful Use." And that sounds reasonable on its face, right? If it's lawful, why wouldn't the software support it? But in practice, it's a liability shift. It means that if a military lawyer deems a drone strike or a surveillance program legal, the AI must execute the calculation. No questions asked. No ethical refusals.

[Speaker 1]: And Amodei refused.

[Speaker 2]: He refused. And today at 5:01 PM, that refusal became official. Anthropic is now the only major AI lab holding out. Competitors like xAI and Google have already agreed to the terms.

[Speaker 1]: But here's the thing. This didn't start with a contract negotiation. It didn't start with a software failure or a glitch. It actually started last month in Venezuela, with the capture of Nicolás Maduro.

[Speaker 2]: Right. This is the trigger event. In January, U.S. Special Operations conducted a raid that successfully captured Maduro. It was a massive win for the administration. And we know now that `Claude`, Anthropic's AI model, was used during that operation for live intel processing.

[Speaker 1]: But they didn't use `Claude` directly. They used it through Palantir.

[Speaker 2]: Yes. Palantir is the integrator here. They wrap the AI in their own systems and serve it to the military. So, during the operational window, an Anthropic employee, someone on the safety team, checked in with Palantir. They saw the compute spikes, they saw the pattern of use, and they asked a question regarding the specific use case in Venezuela.

[Speaker 1]: Just a question.

[Speaker 2]: Just a question. But Palantir flagged that inquiry. They reported it to the DoD as "hesitancy."

[Speaker 1]: "Hesitancy."

[Speaker 2]: That's the word. To the Pentagon, and specifically to Secretary Hegseth, that "hesitancy" isn't just a customer service check-in. It's viewed as a cultural mismatch. It's…
