Multiplayer 10 hours ago
The math here is mixing categories. The token calculation for a single 1-GW datacenter is fine, but then it gets compared to the entire industry's projected $8T capex, which makes the conclusion meaningless. It's like taking the annual revenue of one factory and using it to argue that an entire global build-out can't be profitable.

On top of that, the revenue estimate uses retail GPT-5.1 pricing, which is the highest-priced model on the market, not what a hyperscaler actually charges for bulk workloads. And IBM's number refers to many datacenters built over many years, each with different models, utilization patterns, and economics.

So this particular comparison doesn't show that AI can't be profitable; it's just comparing one plant's token output to everyone's debt at once. The real challenges (throughput per watt, falling token prices, capital efficiency) are valid, but this napkin math isn't proving what it claims to prove.
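To make the unit mismatch concrete, here is a minimal sketch of the two comparisons. Only the $8T industry capex comes from the thread; every per-datacenter number below is a made-up placeholder:

    # Hypothetical napkin math: compare like with like.
    # Only the $8T industry capex is from the article; every other
    # number here is a placeholder for illustration.
    industry_capex = 8e12                      # $8T projected build-out
    industry_interest = 0.10 * industry_capex  # ~$800B/yr at a ~10% rate

    one_dc_revenue = 10e9  # hypothetical: $10B/yr of tokens from one 1-GW plant
    one_dc_capex = 50e9    # hypothetical: $50B to build that one plant

    # The comparison the original napkin math makes (units don't match):
    print(one_dc_revenue / industry_interest)  # one plant vs. everyone's debt

    # A like-for-like comparison: that plant's revenue vs. the interest
    # on that plant's own capex.
    one_dc_interest = 0.10 * one_dc_capex      # ~$5B/yr
    print(one_dc_revenue / one_dc_interest)    # per-plant coverage ratio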
qnleigh 6 hours ago
> but then it gets compared to the entire industry's projected $8T capex, which makes the conclusion meaningless.

Aren't they comparing annual revenue to the annual interest you might have to pay on $8T, which the original article estimates at $800B? That seems consistent.
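As a quick sanity check on that figure, both inputs are the thread's own numbers and the rate is just their ratio:

    # Back-of-envelope: what interest rate turns $8T of capex
    # into $800B of annual interest?
    capex = 8e12             # $8T projected industry capex
    annual_interest = 800e9  # $800B/yr, the original article's estimate
    implied_rate = annual_interest / capex
    print(f"{implied_rate:.0%}")  # -> 10%, the implied cost of capital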