▲ ACCount37 3 days ago
A lot of the touted "fundamental limitations of LLMs" are less "fundamental" and more "you're training them wrong". So there are improvements from version to version, coming both from increases in raw model capability and from better training methods.
▲ ijk 3 days ago | parent
I'm frustrated by the number of times I encounter people assuming that the current model behavior is inevitable. Hundreds of billions of dollars have been spent on training LLMs to do specific things. What exactly they've been trained on matters; they could have been trained to do something else. Interacting with a base model versus an instruction-tuned model will quickly show you the difference between the innate language faculties and the post-trained behavior.
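
If you want to see it for yourself, here's a minimal sketch of that comparison using Hugging Face transformers. The Qwen2.5 checkpoint names are just illustrative; any base/instruct pair you have access to works the same way:

    # Compare a base checkpoint with its instruction-tuned sibling.
    # Model names below are placeholders for whatever pair you can run.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    prompt = "What is the capital of France?"

    for name in ("Qwen/Qwen2.5-0.5B", "Qwen/Qwen2.5-0.5B-Instruct"):
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)

        # Same raw prompt to both: the base model tends to just continue
        # the text (more quiz questions, trivia lists, etc.), while the
        # instruct model has been post-trained to answer the question.
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=60, do_sample=False)
        print(f"--- {name} ---")
        print(tok.decode(out[0][inputs["input_ids"].shape[-1]:],
                         skip_special_tokens=True))

(For a fairer instruct-side prompt you'd wrap it with tokenizer.apply_chat_template, but the contrast is usually visible even with the raw string.)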