londons_explore 11 hours ago
> These models somehow just generalize dramatically worse than people. It's a very fundamental thing

My guess is that we'll discover biological intelligence is 'learning' not just from your own experience, but from that of thousands of ancestors. There are a few weak pointers in that direction: for example, a father who experiences a specific fear can pass that fear to his grandchildren through sperm alone [1]. I believe this is at least part of the reason humans appear to perform so well with so little training data compared to machines.
HarHarVeryFunny 7 hours ago
From both an architectural and a learning-algorithm perspective, there is zero reason to expect an LLM to perform remotely like a brain, nor to generalize beyond what was necessary to minimize training error. There is nothing in the loss function of an LLM that incentivizes it to generalize.

For humans and animals, however, the evolutionary/survival benefit of intelligence, of learning from experience, is to correctly predict the outcomes of actions and the unfolding of external events in a never-same-twice world. Generalization is key, as is sample efficiency: you may not get more than one or two chances to learn a life-saving lesson. So what evolution has given us is a learning architecture, and learning algorithms, that generalize well from extremely few samples.
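To make the loss-function point concrete, here is a minimal sketch of the standard next-token cross-entropy objective (PyTorch; the bigram-style toy model and all sizes are my own illustration, not any real LLM). The entire training signal is prediction error on the training tokens; no term measures behavior on unseen inputs.

```python
import torch
import torch.nn.functional as F

# Toy bigram-style "language model": embedding + linear head.
# Hypothetical sizes, chosen purely for illustration.
vocab_size, d_model, seq_len = 100, 32, 16

embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

# A random "training batch" standing in for real text.
tokens = torch.randint(0, vocab_size, (4, seq_len))

hidden = embed(tokens[:, :-1])   # represent token t ...
logits = head(hidden)            # ... to predict token t+1

# Standard cross-entropy: the whole objective is "assign high
# probability to the next token that actually appeared in the data".
loss = F.cross_entropy(
    logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)

# Nothing here rewards out-of-distribution behavior; whatever
# generalization emerges is a side effect of architecture, data,
# and implicit regularization, not of the objective itself.
```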