All Lawful Purposes

A 295% spike in uninstalls hits ChatGPT after OpenAI accepts a polarizing military mandate that rival Anthropic risked everything to reject.

[Speaker 1]: On February 28th, something unprecedented happened in the app store. We saw a single-day spike in uninstalls for the most popular application in history: ChatGPT. The number wasn't a blip. It was a 295% increase in people deleting the app from their phones in twenty-four hours.

[Speaker 2]: And usually, when you see a metric like that, it's because an app crashed, or maybe they changed the interface and everyone hated it. But this was different. This wasn't a technical failure. It was a reaction to three words that appeared in a government contract.

[Speaker 1]: "All Lawful Purposes."

[Speaker 2]: Right. "All lawful purposes." It sounds incredibly boring. It sounds like standard legal boilerplate that you'd scroll past to click "I Accept."

[Speaker 1]: But today, we're going to talk about why those three words just split the internet in half. We're looking at why the US government effectively declared war on a private AI lab for refusing to sign them, and why millions of users are suddenly convinced that their personal AI assistant has been drafted into the military.

[Speaker 2]: It's Saturday, March 7, 2026, and you're listening to The Angle.

[Speaker 1]: So, to understand why people are rage-deleting ChatGPT, we have to look at the ultimatum that landed in Silicon Valley last week. And I use the word "ultimatum" literally here.

[Speaker 2]: Yeah. This wasn't a negotiation. The Department of War (and we should pause to remember that rebrand is still very fresh) issued a directive to the major frontier AI labs. The message was simple: if you want to be part of the US national security infrastructure, you need to sign a new integration agreement. And the core of that agreement is this "all lawful purposes" clause.

[Speaker 1]: And the result was immediate. OpenAI signed it. Sam Altman stood next to Secretary Hegseth and announced a "historic partnership." But Anthropic? The makers of Claude? They refused. They publicly rejected the terms.
[Speaker 2]: And that refusal triggered a chain reaction we are still watching play out. Because the government didn't just say, "Okay, fine, we won't hire you." The Department of War is now moving to designate Anthropic as a "supply chain risk."

[Speaker 1]: Which is a label usually reserved for foreign adversaries like Huawei. It's the government saying this company is dangerous.

[Speaker 2]: Exactly. So now you have this massive fracture. On one side, you have OpenAI fully integrated with the military. On the other, you have Anthropic potentially being blacklisted from the cloud. And in the middle, you have the "QuitGPT" movement: users who feel like the tool they use to write emails and summarize meetings has suddenly become a listening device for the Pentagon.

[Speaker 1]: It feels like the industry changed overnight. Coming up, we're going to look at why refusing this contract is being called "attempted corporate murder," and why the "lawful purposes" clause isn't as innocent as it sounds.

[Speaker 2]: But first, we need to understand the pressure cooker these companies are in. Because from the outside, it looks like OpenAI just sold out. But if you look at the geopolitical map right now, the argument for signing this deal is actually pretty stark.

[Speaker 1]: Right. We have to look at this through the lens of "Sovereign AI." This has been the buzzword for the last six months. Nations realized that if they don't own the AI stack (the chips, the data centers, the models) they don't really have national security.

[Speaker 2]: We saw India operationalize "BharatGPT" for their military just a few weeks ago.…