keeda | 7 days ago
I posted a comment yesterday regarding this, with links to a couple of relevant studies: https://news.ycombinator.com/item?id=44793392 -- briefly:

* Even with all this infra buildout, all the hyperscalers are constantly capacity constrained, especially for GPUs.

* Surveys show that most people use AI for only a fraction of their time at work, yet still report significant productivity benefits, even with current models.

The AGI/ASI hype is a distraction, potentially relevant only to the frontier model labs. Even if all model development froze today, there would be tremendous untapped demand to meet.

The Metaverse/VR/AR boom was never really a boom: only two big companies (Meta, Apple) plowed any "real" money into it. The same goes for crypto, another thing AI is unjustifiably compared to. I think those hypes existed because people were trying to make them happen. With the AI boom, however, the largest companies, major governments, and VCs are all investing feverishly because it is already happening and they want in on it.
Animats | 7 days ago | parent
> Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.

Are they constrained on resources for training, or on resources for serving users with pre-trained LLMs? The first use case is R&D; the second is revenue. The ratio of hardware costs between those two areas would be good to know.