▲ | simonh a day ago
I’d say sophistication. Observing the landscape enables us to spot useful resources and terrain features, or spot dangers and predators. We are afraid of dark enclosed spaces because they could hide dangers. Our ancestors with appropriate responses were more likely to survive.

A huge limitation of LLMs is that they have no ability to dynamically engage with the world. We’re not just passive observers, we’re participants in our environment and we learn from testing that environment through action. I know there are experiments with AIs doing this, and in a sense game-playing AIs are learning about model worlds through action in them.
▲ | FloorEgg a day ago | parent | next [-]
The idea I keep coming back to is that, as far as we know, it took roughly 100k-1M years for anatomically modern humans to evolve language, abstract thinking, information systems, etc. (the equivalent of LLMs), but it took 100M-1B years to evolve from the first multi-celled organisms to anatomically modern humans. In other words, human-level embodiment (internal modelling of the real world and the ability to navigate it) is likely at least 1000x harder than modelling human language and abstract knowledge.

And to build further on what you are saying, the way LLMs are trained and then used makes them seem a bit more like DNA than like the human brain in terms of how the "learning" is done. An instance of an LLM is like a copy of DNA trained on a replay of many generations of experience.

So it seems there are at least four things not yet worked out for AI to reach human-level "AGI":

1) The number of weights (synapses) and parameters (neurons) needs to grow by orders of magnitude

2) We need new analogs that mimic the brain's diversity of cell types and communication modes

3) We need to solve the embodiment problem, which is far from trivial and not fully understood

4) We need efficient ways for the system to continuously learn (an analog for neuroplasticity)

It may be that these are mutually reinforcing, in that solving #1 and #2 makes a lot of progress towards #3 and #4. I also suspect that #4 is partly an economic problem, in that if training a GPT-5-level model were 1,000,000x cheaper, then maybe everyone could have one that's continuously learning (and diverging), rather than everyone sharing the same training run that's static once complete.

All of this to say I still consider LLMs "intelligent", just a different kind and less complex intelligence than humans.
▲ | pbhjpbhj a day ago | parent | prev | next [-]
> A huge limitation of LLMs is that they have no ability to dynamically engage with the world.

They can ask for input, they can choose URLs to access, and they can interpret the results in both situations. Whilst very limited, that is engagement.

Think about someone with physical impairments like those the late theoretical physicist Stephen Hawking had. You could have similar impairments from birth and still, I conjecture, be analytically one of the greatest minds of a generation.

If you were locked in a room {a non-Chinese room!} with your physical needs met, but could speak with anyone around the world, and of course use the internet, you'd have limits to your enjoyment of life, but I don't think you'd be limited in the capabilities of your mind. You'd have a limited understanding of the social aspects of life (and the physical aspects: touch, pain), but perhaps no more than some of us already do.
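To make that kind of engagement concrete, here is a minimal, hypothetical sketch of an agent loop in which the model chooses a URL, the harness fetches it, and the page contents are fed back into the context. The call_llm helper and the FETCH/ANSWER prompt convention are assumptions for illustration only, not any particular vendor's API.

    import urllib.request

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever model API is being used."""
        raise NotImplementedError

    def fetch(url: str, limit: int = 4000) -> str:
        # Fetch the page the model asked for, truncated to fit the context window.
        with urllib.request.urlopen(url) as resp:
            return resp.read(limit).decode("utf-8", errors="replace")

    def answer_with_browsing(question: str) -> str:
        # Ask the model either for a URL to consult or for a final answer.
        prompt = (
            "Reply with 'FETCH: <url>' if you need a web page, "
            "or 'ANSWER: <text>' if you can answer.\n\nQuestion: " + question
        )
        for _ in range(5):  # cap the number of round trips
            reply = call_llm(prompt)
            if reply.startswith("FETCH:"):
                page = fetch(reply.split("FETCH:", 1)[1].strip())
                prompt += "\n\nPage contents:\n" + page  # feed the result back in
            else:
                return reply.removeprefix("ANSWER:").strip()
        return "No answer within the round-trip budget."

The point of the sketch is only that the model's choices (which URL to fetch) shape what it sees next, which is the limited form of engagement described above.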
▲ | skissane a day ago | parent | prev [-]
> A huge limitation of LLMs is that they have no ability to dynamically engage with the world.

A pure LLM is static and can’t learn, but give an agent a read-write data store and suddenly it can actually learn things: give it a markdown file of “learnings”, prompt it to consider updating the file at the end of each interaction, then load it into the context at the start of the next… (and that’s a really basic implementation of the idea; there are much more complex versions of the same thing).
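A minimal sketch of that basic "learnings file" loop, assuming a hypothetical call_llm helper for the model call; the file name, prompt wording, and update policy are illustrative choices, not a specific framework's API.

    from pathlib import Path

    LEARNINGS = Path("learnings.md")

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for the underlying model API."""
        raise NotImplementedError

    def handle_interaction(user_message: str) -> str:
        # Load accumulated learnings into the context at the start of the turn.
        learnings = LEARNINGS.read_text() if LEARNINGS.exists() else ""
        reply = call_llm(
            "Notes from previous sessions:\n" + learnings +
            "\n\nUser: " + user_message
        )

        # At the end of the interaction, ask the model whether the notes need updating.
        updated = call_llm(
            "Here are your current notes:\n" + learnings +
            "\n\nHere is the interaction that just happened:\n"
            "User: " + user_message + "\nAssistant: " + reply +
            "\n\nReturn the full, revised notes file."
        )
        LEARNINGS.write_text(updated)
        return reply

The "learning" here lives entirely in the file, not in the weights: each session reads what previous sessions chose to record, which is the basic idea the more complex versions build on.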