pvab3 7 hours ago
Inference requires a fraction of the power that training does. According to the Villalobos paper, the median estimate for when we exhaust human-generated training data is 2028. At some point we won't be training bigger and bigger models every month. We will run out of additional material to train on, models will continue to commodify, and then the amount of training happening will significantly decrease unless new avenues open for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.
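For scale, here is a rough sketch of the training-vs-inference compute gap. The 6ND and 2N FLOP figures are standard rules of thumb (not from the comment), and the 70B-parameter / 1.4T-token model below is a hypothetical example:

```python
# Standard approximations: training takes ~6*N*D FLOPs total,
# inference takes ~2*N FLOPs per generated token,
# where N = parameter count, D = training tokens.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate forward-pass compute per generated token."""
    return 2 * n_params

# Hypothetical 70B-parameter model trained on 1.4T tokens:
N, D = 70e9, 1.4e12
train = training_flops(N, D)
per_token = inference_flops_per_token(N)

# How many inference tokens equal one training run:
breakeven_tokens = train / per_token
print(f"{breakeven_tokens:.1e}")  # 3 * D = 4.2e12 tokens
```

By this estimate a single training run costs as much compute as serving trillions of tokens, which is why a slowdown in training would free up a lot of power even if inference demand keeps growing.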
SequoiaHope 18 minutes ago
Run out of training data? They're going to put these things in humanoids (they are weirdly cheap now) and record high-resolution video and other sensor data of real-world tasks, and train huge multimodal Vision Language Action models, etc. The world is more than just text. We can never run out of pixels if we point cameras at the real world and move them around. I work in robotics and I don't think people talking about this stuff appreciate that text and internet pictures are just the beginning. Robotics is poised to generate and consume TONS of data from the real world, not just the internet.
zozbot234 6 hours ago
> We will run out of additional material to train on

This sounds a bit silly. More training will generally result in better modeling, even for a fixed amount of genuine original data. At current model sizes, it's essentially impossible to overfit to the training data, so there's no reason why we should just "stop".
yourapostasy 6 hours ago
Inference leans heavily on GPU RAM capacity and RAM bandwidth during the decode phase, where an increasingly large share of time is spent as people find better ways to leverage inference. So NVIDIA's customers will arguably demand a different product mix as the market shifts away from the current training-friendly products. I suspect there will be more than enough demand for inference that whatever power we free up from a relative slackening of training demand will be more than made up, and then some, by the power needed to drive a large inference market. It isn't the panacea some make it out to be, but there is obvious utility here to sell. The real argument is shifting toward pricing.
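The bandwidth point can be made concrete with a back-of-envelope bound: in single-stream decode, every parameter must be streamed from GPU RAM once per generated token, so memory bandwidth caps throughput. The model size and bandwidth numbers below are illustrative assumptions, not vendor specs:

```python
# Decode is typically memory-bandwidth bound, not compute bound:
# tokens/sec <= (RAM bandwidth) / (bytes read per forward pass).

def max_decode_tokens_per_sec(model_bytes: float,
                              bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode throughput, assuming
    all weights are read from GPU RAM once per generated token."""
    return bandwidth_bytes_per_sec / model_bytes

# Hypothetical: 70B params at 2 bytes each (fp16) = 140 GB of weights,
# on an accelerator with ~3 TB/s of HBM bandwidth.
model_bytes = 70e9 * 2
bandwidth = 3e12
print(max_decode_tokens_per_sec(model_bytes, bandwidth))  # ~21.4 tokens/sec
```

Batching amortizes the weight reads across many concurrent requests, which is why inference-oriented deployments chase both more RAM (to fit the model plus KV caches) and more bandwidth, a different balance than training hardware optimizes for.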