HarHarVeryFunny 7 hours ago

From both an architectural and learning-algorithm perspective, there is zero reason to expect an LLM to perform remotely like a brain, or to generalize beyond what was necessary to minimize its training error. Nothing in an LLM's loss function incentivizes it to generalize.
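To make that concrete, here is a minimal sketch (a toy setup, not any real LLM's training code) of the standard next-token cross-entropy objective. The quantity being minimized is computed entirely on the training data; a model that simply memorizes each training pair drives it to near zero, and nothing in the objective itself rewards behavior on unseen inputs.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores to a probability distribution over the vocabulary."""
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def next_token_loss(logits, target_id):
    """Cross-entropy at one position: -log p(target token | context)."""
    probs = softmax(logits)
    return -np.log(probs[target_id])

# A model that puts nearly all probability mass on the memorized training
# token achieves ~zero loss, whether or not it generalizes.
confident_logits = np.array([10.0, -5.0, -5.0])  # ~all mass on token 0
print(round(next_token_loss(confident_logits, 0), 4))  # ~0.0 when target is 0
```

The point of the sketch is only that the objective measures fit to the training distribution; any generalization that emerges is a side effect of architecture and data, not something the loss term asks for.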

However, for humans and animals the evolutionary/survival benefit of intelligence (learning from experience) is to correctly predict future action outcomes and the unfolding of external events, in a never-same-twice world. Generalization is key, as is sample efficiency: you may not get more than one or two chances to learn a life-saving lesson.

So, what evolution has given us is a learning architecture and learning algorithms that generalize well from extremely few samples.

jebarker 5 hours ago | parent [-]

> what evolution has given us is a learning architecture and learning algorithms that generalize well from extremely few samples.

This sounds magical though. My bet is one of two things. Either the samples aren't as few as they appear, because humans operate in a constrained world where the same patterns repeat very many times under the right similarity measure. Or the learning the brain does during a human lifetime is really just fine-tuning on top of accumulated evolutionary learning encoded in the structure of the brain.