AGI is coming. When it arrives, it will be able to do everything a human can do — but faster, cheaper, and at scale. The companies building it promise utopia: a world of abundance where nobody needs to work. What they do not mention is that they will own the machines, and you will not.
The essay The Coming AI-Capitalist Apocalypse argues that this trajectory leads somewhere specific. Not because anyone plans it, but because every actor — every corporation, every government, every investor — follows the same short-term incentives, and those incentives all point in the same direction. Mass unemployment. Economic irrelevance. The collapse of democratic accountability. A self-sustaining economy that no longer needs consumers. And eventually, billions of people cut off from the means of survival — not through conspiracy, but through the quiet, rational indifference of a system that has moved on without them.
Every step along this road has a natural off-ramp — a corrective mechanism that, in isolation, should prevent catastrophe. Voters should elect leaders who redistribute. New jobs should emerge as old ones disappear. Competing elites should create openings for the rest of us. International cooperation should constrain the worst outcomes. Revolt should be the last resort.
The argument of this essay is that every one of these off-ramps is blocked. Not by a single obstacle, but by other steps in the same system: each corrective mechanism is neutralised by another part of the very dynamic it is supposed to correct. The system is self-sealing. This diagram maps the trap.
Tap any step to see its off-ramp — and what blocks it. Tap the button on the bottom left of the screen to open the table of contents for the full essay.