vessenes 2 hours ago
This is a good analyst report - lots of data. The conclusion: firms are spending ahead of sustained revenues right now, and a lot of the money is going offshore to TSMC, basically. I'm not certain of the conclusion - I think a lot depends on amortization schedules. If data centers are fully booked right now, then at the reported 60+% margin on inference we don't need very long amortization schedules to see this capex fully paid off (rough math at the end of this comment).

My prior is that we are seeing something like 1/10,000th of the world's reasonable inference demand being fulfilled. There's a note in the analysis that might back this: it says we are seeing one of the only times ever that hardware prices are rising over time. Combined with spot prices at Lambda Labs (still quite high, I'd say), it doesn't look like we're seeing a drop in inference demand. Under those circumstances, the first phases of this bet, cross-industry, look like they will pay off.

If that's true, as an investment strategy I'd just buy the basket - OpenAI, Anthropic, GOOG, META, SpaceX, MSFT, probably even Oracle - and wait. We'll either get the rotating state-of-the-art frontier capacity we've gotten in the last 18 months, or one of them will have liftoff.

Of those, I think MSFT is the value play - they're down something like 20% in the last six months? Satya's strategy seems very sensible to me: slow hyperscale buildouts in the US (lots of competition) and build everywhere else in the world (still not much competition). For countries that can't build their own frontier models, the next best thing is running frontier models in local datacenters, and MSFT has long-standing operational bases everywhere in the world - arguably one of its differentiators compared to GOOG/META.
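To make the amortization point concrete, here's a rough back-of-envelope. The capex and revenue figures are hypothetical placeholders I made up to show the shape of the math; only the ~60% inference margin comes from the report as quoted above.

    # Back-of-envelope payback period for a data-center buildout.
    # capex and annual_revenue are made-up illustrative numbers;
    # gross_margin is the reported 60+% margin on inference.
    capex = 10e9            # $10B buildout cost (hypothetical)
    annual_revenue = 6e9    # $6B/yr inference revenue if fully booked (hypothetical)
    gross_margin = 0.60     # ~60% margin on inference, per the report

    payback_years = capex / (annual_revenue * gross_margin)
    print(f"payback: {payback_years:.1f} years")  # ~2.8 years on these inputs

On anything like those ratios the payback is short; the bet only looks scary if utilization or margins fall off a cliff.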
scrollop 2 hours ago (reply to vessenes)
If a different architecture from LLMs is invented - one that could actually "think" and potentially reach AGI - it might well be more efficient than LLMs. Perhaps LLMs can make themselves more efficient. Right now they can't even remember "properly", and hallucinations cripple them for serious, professional use: if they hallucinate even 5% of the time and you are asking mission-critical queries, that's a problem (see the sketch at the end of this comment).

Perhaps all of these data centers won't be needed - at least not by the current AI companies that can't keep up. If that happens to OpenAI, it would be quite a shock to the financial system (and GDP).

Microsoft's changes to Windows have alienated some of their user base. Copilot is poor compared to its rivals. There's a reason they are down 20%. Linux adoption is accelerating (still too low!).

And don't forget AI on device. When it becomes "good enough" for most tasks, data center use will drop.

With the talk of Nvidia backtracking and saying they won't invest $100 billion in OpenAI, and Oracle in a poor financial position with the loans for its upcoming data centers becoming more expensive and dubious (they could fail to repay them), the picture isn't as positive as you make it out to be. Which makes me think that you have an ulterior motive.
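On the hallucination point, the compounding is what bites. A quick sketch, assuming each query independently hallucinates with probability p = 0.05 (the 5% rate above; the independence assumption and the query counts are mine):

    # Chance of at least one hallucination across n independent queries:
    # P(at least one) = 1 - (1 - p)**n
    p = 0.05  # per-query hallucination rate, as in the comment above
    for n in (1, 10, 50):
        print(f"{n:>3} queries -> {1 - (1 - p) ** n:.1%} chance of >=1 error")
    # prints:  1 queries ->  5.0% ... 10 queries -> 40.1% ... 50 queries -> 92.3%

Even a "small" per-query rate gets close to certainty over a working day of mission-critical queries.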