int_19h 3 days ago
I think the point here is that if you have to pretrain it for every specific task, it's not artificial general intelligence, by definition.
og_kalu 3 days ago | parent
There isn't any general intelligence that isn't receiving pre-training. People spend 14 to 18+ years in school to have any sort of career. You don't have to pretrain it for every little thing, but it should come as no surprise that a complex, non-trivial game would require it. Even if you explained all the rules of chess clearly to someone brand new to the game, it would take a while and a lot of practice before they internalized them.

And like I said, LLM pre-training is less like a machine reading text and more like evolution. If you gave it a corpus of chess rules, you'd only be training a model that knows how to converse about chess rules.

Do humans require less 'pre-training'? Sure, but then again, that's on the back of millions of years of evolution. Modern NNs initialize random weights and have relatively very little inductive bias.