HarHarVeryFunny 16 hours ago

I'd say that whether to expect more brain-like capabilities out of Transformers is more an objective matter of architecture - what's missing - and learning algorithms, not "collective arguments". If a Transformer simply can't do something - has no mechanism to support it (e.g. learning at runtime) - then it can't do it, regardless of whether Sam Altman tells you it can, or tries to spin it as unimportant!

A Transformer is just a fixed-size stack of transformer layers, with one-way data flow through that stack. It has no internal looping, no internal memory, no way to incrementally learn at runtime, and no autonomy/curiosity/etc to make it explore and actively expose itself to learning situations (assuming it could learn, which it can't anyway).
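To make the shape of that concrete, here's a minimal numpy sketch of what a forward pass through such a stack amounts to - attention_block is just a stand-in for a real attention+MLP layer, and the sizes are arbitrary. The point it illustrates: each layer is applied exactly once, data flows one way, nothing loops back, no state outlives the call, and the weights are never updated while the model runs.

    import numpy as np

    def attention_block(x, W):          # stand-in for self-attention + MLP
        return x + np.tanh(x @ W)       # residual connection, as in real layers

    def transformer_forward(token_embeddings, layer_weights):
        x = token_embeddings
        for W in layer_weights:         # fixed-length stack, one-way data flow
            x = attention_block(x, W)   # no recurrence, no writeable memory
        return x                        # weights untouched: nothing was learned

    # 12 layers, d_model = 64 - sizes chosen only for the example
    rng = np.random.default_rng(0)
    weights = [rng.normal(scale=0.02, size=(64, 64)) for _ in range(12)]
    out = transformer_forward(rng.normal(size=(10, 64)), weights)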

These are just some of the most obvious major gaps between the Transformer architecture and even the most stripped-down cognitive architecture (as opposed to language model) one might design, let alone an actual human brain, which has far more moving parts and complexity.

The whole Transformer journey has been fascinating to watch, and highly informative as to how far language and auto-regressive prediction can take you. But without things like incremental learning and the drive to learn, all you have is a huge but fixed repository of "knowledge" (language statistics) - in effect a giant expert system. It may be highly capable and sufficient for some tasks, but it is not AGI: it's not something that could replace an intern and learn on the job, or make independent discoveries beyond what is already deducible from the training data.

One of the really major gaps between an LLM and something capable of learning about the world isn't even the architecture, with all its limitations, but simply the way they are trained. A human (and any other intelligent animal) also learns by prediction, but the feedback loop when the prediction is wrong is essential - that is how you learn, and WHAT you can learn from incorrect predictions is limited by the feedback you receive. For a human or animal the feedback comes from the real world, so what you are able to learn critically includes how your own actions affect the world - you learn how to be able to DO things.
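As a toy illustration of that loop (not a claim about how any particular system is built): world_response below is a made-up stand-in for reality, and the learner improves its model of "what will my action cause?" incrementally, from its own prediction errors, as it acts.

    import numpy as np

    def world_response(action):
        # Hypothetical stand-in for reality: the true effect of an action.
        return 3.0 * action + 1.0

    # The learner's model of "what happens if I do this", learned online.
    w, b, lr = 0.0, 0.0, 0.1
    rng = np.random.default_rng(0)

    for step in range(200):
        action = rng.uniform(-1, 1)        # try something
        predicted = w * action + b         # predict the consequence of MY action
        actual = world_response(action)    # the world answers
        error = actual - predicted         # feedback: prediction error
        w += lr * error * action           # incremental update, "on the job"
        b += lr * error

    # After the loop, (w, b) ends up close to (3.0, 1.0):
    # the learner has learned what its own actions DO.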

An LLM also learns by prediction, but what it is predicting isn't real-world responses to its own actions, just input continuations. It is being trained to be a passive observer of other people's "actions" (limited to the word sequences they generate) - to predict what they will say next - rather than an active entity that learns to predict not someone else's actions but its own actions and the real world's responses to them: how to DO things itself (learn on the job, etc.).
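A toy sketch of that training signal, under the usual next-token objective (predict_next_token_probs here is a hypothetical stand-in for whatever model you plug in): the only "feedback" is the next token of someone else's recorded text, and nothing in the loop involves the model acting and observing a consequence.

    import numpy as np

    def cross_entropy(probs, target_id):
        return -np.log(probs[target_id] + 1e-12)

    def training_loss(token_ids, predict_next_token_probs):
        loss = 0.0
        for t in range(len(token_ids) - 1):
            probs = predict_next_token_probs(token_ids[:t + 1])  # passive observer of the text so far
            loss += cross_entropy(probs, token_ids[t + 1])       # target = whatever the text said next
        return loss / (len(token_ids) - 1)

    # Example with a dummy uniform predictor over a 100-token vocabulary:
    vocab_size = 100
    uniform = lambda context: np.full(vocab_size, 1.0 / vocab_size)
    print(training_loss([5, 17, 42, 7], uniform))   # roughly log(100) per token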