The Red Line

An internal memo reveals how a collapsed $200 million defense deal turned an American AI darling into an official supply chain risk.

[Speaker 1]: "Straight up lies."

[Speaker 2]: That is the direct quote. It appeared in a confidential internal memo from Anthropic CEO Dario Amodei on Thursday, March 5th. And he wasn't talking about the government. He was talking about OpenAI.

[Speaker 1]: Usually, when these big AI labs fight, it's about benchmarks. It's about who has the better chatbot or the faster reasoning model. But this week, the fight stopped being about product and started being about power.

[Speaker 2]: Specifically, military power. We are looking at the moment when the US AI industry officially cracked in half. On one side, you have the defense contractors who are willing to integrate fully with the state. And on the other, you have the civilian researchers who refused to cross the line.

[Speaker 1]: And the catalyst for all of this is a phrase you've probably seen floating around the headlines this week: the Department of War.

[Speaker 2]: Which, to be clear, is not the official name of the Pentagon.

[Speaker 1]: Not legally, no. But as we're going to discuss, if you want a contract right now, you'd better call it that.

[Speaker 2]: It's Saturday, March 7, 2026, and you're listening to The Angle.

[Speaker 1]: So, looking back at this week, it feels like everything happened all at once. But the fuse on this was actually lit back in late February.

[Speaker 2]: Right. So let's zoom out to February 26th. For the last year or so, Anthropic has been the Pentagon's darling. Their model, Claude, was the engine running inside Palantir's classified systems. They were the first frontier lab to get that level of clearance.

[Speaker 1]: Which was always a bit of a paradox, right? Because Anthropic's whole brand is "Safety." They are the "Constitutional AI" company. And yet, they were the ones powering the military's sensor fusion.

[Speaker 2]: Exactly. And that tension finally snapped on the 26th. Anthropic was in negotiations to renew a $200 million contract with the DoD. And at the eleventh hour, the deal collapsed.

[Speaker 1]: Why?

[Speaker 2]: Because of a clause regarding "bulk acquired data." Basically, mass surveillance. Anthropic had a red line in their contract saying their models could not be used for domestic surveillance or non-consensual data analysis. The Pentagon, and specifically Defense Secretary Pete Hegseth's office, said that clause had to go.

[Speaker 1]: And Anthropic walked away.

[Speaker 2]: They walked away. And the retaliation was instant. Within 24 hours, Secretary Hegseth designated Anthropic a "supply chain risk."

[Speaker 1]: That is a massive escalation. That's a label usually reserved for companies like Huawei or Kaspersky. It's effectively treating an American AI lab like a foreign adversary.

[Speaker 2]: It is. The order mandates a full purge of Anthropic's technology from classified networks within six months. But here is where the story pivots. Because nature abhors a vacuum, and so does the defense budget.

[Speaker 1]: Enter OpenAI.

[Speaker 2]: Enter OpenAI. On February 28th, just two days after the Anthropic deal died, Sam Altman and OpenAI announced a new strategic partnership with the Pentagon. And the key detail here isn't the money. It's the terms. OpenAI agreed to the "All Lawful Use" standard.

[Speaker 1]: Okay, we need to stop on that phrase. "All Lawful Use." It sounds innocuous. It sounds like "we follow the law." But in this context, it's actually a trapdoor.

[Speaker 2]: Right. This is the mechanism that split the industry. "All Lawful Use" means that the AI provider agrees to allow any application that the US government deems…