The Circuit Breaker
Discover how a single red heart emoji signaled the end of an era and tripped the circuit breaker on a half-trillion-dollar industry.

[Speaker 1]: November 18, 2023. The tech world is refreshing its feeds every three seconds. We're expecting a lawsuit, maybe a press release, or at least a leaked memo. But instead, X, formerly Twitter, starts flooding with a single character.

[Speaker 2]: A red heart emoji. ❤️.

[Speaker 1]: Just that. No text. Hundreds of them.

[Speaker 2]: It started with a few researchers, then the engineers, and then, shockingly, the interim CEO, Mira Murati. It looked like a spontaneous, heartwarming digital vigil.

[Speaker 1]: And at the time, that's how we all read it: a show of loyalty to Sam Altman, who had just been fired. But we're recording this in January 2026, looking back with two years of hindsight. And when you look at those red hearts now, they look less like a valentine and more like a vote.

[Speaker 2]: A vote that ended up reshaping an industry worth half a trillion dollars.

[Speaker 1]: So we started pulling on this thread: how a nonprofit board fired a CEO to save humanity, and how that decision ultimately destroyed the nonprofit itself.

[Speaker 2]: Today we're looking at the OpenAI crisis of 2023, the exodus that followed, and the final restructuring in late 2025 that changed the rules of the game.

[Speaker 1]: Because there's an angle to this story that gets lost in the drama. We usually talk about it as a clash of personalities, but it was actually a clash of structures.

[Speaker 2]: Let's walk through how the circuit breaker tripped, and why it can never happen again.

[Speaker 1]: Okay, to understand why Sam Altman was fired, we have to go back to the way OpenAI was built. Because honestly, looking at it now, it seems insane.

[Speaker 2]: It really does. It was a structural paradox. OpenAI was founded in 2015 as a nonprofit. The mission wasn't to make money; it was to build AGI, Artificial General Intelligence, that was safe for humanity.

[Speaker 1]: But here's the problem: building AGI is expensive. You need billions of dollars for compute, and nonprofits can't raise that kind of cash.

[Speaker 2]: Right. So in 2019, Altman creates this "capped-profit" subsidiary.

[Speaker 1]: Sidebar for a second, because this is the mechanism that caused the explosion. How exactly did that work?

[Speaker 2]: Think of it like a nesting doll. The big doll on the outside is the nonprofit board. They have no equity; they don't own shares. Their only job is the mission. Inside that doll is the for-profit company, the one Microsoft invested in, the one worth billions.

[Speaker 1]: So the people with zero financial stake had absolute firing power over the guy running the billion-dollar business.

[Speaker 2]: Exactly. Investors called it a "governance bug." The board called it a "circuit breaker." If they felt the profit motive was endangering humanity, they could pull the lever and kill the company.

[Speaker 1]: And in late 2023, the circuit breaker tripped.

[Speaker 2]: It wasn't just one thing; it was a slow burn. The board was split. On one side, you had the "Accelerators," Altman and Greg Brockman, who wanted to ship products in order to learn. On the other, the "Safetyists," led by Chief Scientist Ilya Sutskever and board members Helen Toner and Tasha McCauley.

[Speaker 1]: There was that paper, right? Helen Toner published something that really set Altman off.

[Speaker 2]: Yeah, just weeks before the firing. She published a research paper that basically compared OpenAI's safety approach unfavorably to that of their rival, Anthropic.

[Speaker 1]: Which is... awkward. A board member publicly criticizing her own company's safety record while they're…
