kouteiheika 4 days ago |
I still don't understand what exactly you are disagreeing with. Meta is paying the big bucks because to train a big LLM in a reasonable time you need *scale*. But the process itself is the same as full fine-tuning, just scaled up across many GPUs. If I were patient enough to wait a few years/decades for my single GPU to chug through 15 trillion tokens, then I too could train a Llama from scratch (assuming I fed it the same training data).
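
To illustrate what I mean by "the same process", here's a minimal sketch (my own toy illustration, not anyone's actual training recipe): the next-token-prediction loss and optimizer step are identical whether you start from random weights (pretraining) or load a checkpoint (full fine-tuning). The tiny config, the gpt2 tokenizer stand-in, and the single-sentence batch are all assumptions made just to keep it runnable.

```python
# Minimal sketch: "pretraining from scratch" and "full fine-tuning" share the
# same training loop (next-token prediction + AdamW); what differs is the
# weight initialization and how many tokens/GPUs you throw at it.
import torch
from transformers import AutoTokenizer, LlamaConfig, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # ungated stand-in tokenizer

# "From scratch": random init of a small Llama-style architecture (illustrative sizes).
config = LlamaConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=512,
    intermediate_size=1376,
    num_hidden_layers=8,
    num_attention_heads=8,
)
model = LlamaForCausalLM(config)
# "Full fine-tuning" would be the exact same loop, just starting from a checkpoint:
# model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()

def train_step(text: str) -> float:
    batch = tokenizer(text, return_tensors="pt")
    # Causal-LM objective: the model shifts the labels internally.
    out = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Same step either way; "scale" just means running it over ~15T tokens
# across thousands of GPUs instead of one toy sentence on one GPU.
print(train_step("The quick brown fox jumps over the lazy dog."))
```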
fooker 3 days ago | parent |
> you need *scale*.

No, training state-of-the-art LLMs is still a bit of alchemy. We don't understand what works and what doesn't. Meta is paying $100M each to hire AI researchers not because they know how to scale (they aren't bringing GPUs, lol), but mainly because they remember what worked and what didn't when training GPT-4.

> If I were patient enough..

No, you'd spend the time and resources training and end up with something worse than even GPT-3. This is what kept DeepSeek in the headlines for two months straight. Plenty of other companies have 100x more resources and are actively trying to build their own LLMs, including big names like Apple and Oracle. They haven't managed to.