sixo 4 days ago

Roughly, actual intelligence needs to maintain a world model in its internal representation, not merely an embedding of language, which is a very different data structure and will probably be learned in a very different way. This includes things like:

- a map of the world, or of concept space, or of a codebase, etc.
- causality
- "factoring", which breaks down systems or interactions into predictable parts

Language alone is too blurry to do any of these precisely.
coldtea 4 days ago | parent

> Roughly, actual intelligence needs to maintain a world model in its internal representation

And how is that unlike stored information (memories) with weighted links between them and/or between groups of them?
astrange 4 days ago | parent

> Roughly, actual intelligence needs to maintain a world model in its internal representation

This is GOFAI metaphor-based development, which never once produced anything useful. They sat around saying things like "people have world models," then decided that if they programmed something and called it a "world model" they'd get intelligence. It didn't work out, but they still went around claiming people have "world models," as if they hadn't just made the term up.

An alternative thesis, "people do things that worked the last time they did them," explains both language and action planning better; e.g., you don't form a model of the contents of your garbage in order to take it to the dumpster.

https://www.cambridge.org/core/books/abs/computation-and-hum...
SweetSoftPillow 4 days ago | parent

Please check example #2 here: https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...

It is not "language alone" anymore. LLMs are multimodal nowadays, and this is still just the beginning. And keep in mind that these results are produced by a cheap, small, fast model.