FloorEgg a day ago
The idea I keep coming back to is that, as far as we know, it took roughly 100k-1M years for anatomically modern humans to evolve language, abstract thinking, information systems, etc. (equivalent to LLMs), but it took 100M-1B years to evolve from the first multi-celled organisms to anatomically modern humans. In other words, human-level embodiment (internal modelling of the real world and the ability to navigate it) is likely at least 1000x harder than modelling human language and abstract knowledge (rough arithmetic sketched below).

To build further on what you are saying: the way LLMs are trained and then used, they seem a bit more like DNA than the human brain in terms of how the "learning" is being done. An instance of an LLM is like a copy of DNA trained on a replay of many generations of experience.

So it seems there are at least four things not yet worked out re AI reaching human-level "AGI":

1) The number of weights (synapses) and units (neurons) needs to grow by orders of magnitude

2) We need new analogs that mimic the brain's diversity of cell types and communication modes

3) We need to solve the embodiment problem, which is far from trivial and not fully understood

4) We need efficient ways for the system to continuously learn (an analog for neuroplasticity)

It may be that these are mutually reinforcing, in that solving #1 and #2 makes a lot of progress towards #3 and #4. I also suspect that #4 is an economic problem: if the cost to train a GPT-5-level model were 1,000,000x cheaper, then maybe everyone could have one that's continuously learning (and diverging), rather than everyone sharing the same training run that's static once complete.

All of this is to say I still consider LLMs "intelligent", just a different kind and less complex intelligence than humans.
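To make the "at least 1000x harder" and "orders of magnitude" claims concrete, here is a minimal back-of-envelope sketch in Python. The year ranges are the ones given in the comment above; the synapse and LLM parameter counts are rough assumed figures added for illustration, not measured or published numbers.

    # Back-of-envelope sketch of the ratios discussed above.
    # Assumed rough figures (not measurements):
    #   ~1e5 to 1e6 years: anatomically modern humans -> language / abstract thought
    #   ~1e8 to 1e9 years: first multicellular life -> anatomically modern humans
    #   ~1e14 to 1e15 synapses in a human brain (commonly cited range)
    #   ~1e12 parameters for a large frontier LLM (assumption for illustration)

    language_years = (1e5, 1e6)
    embodiment_years = (1e8, 1e9)

    # Ratio of evolutionary time spent on embodiment vs. language/abstraction.
    ratio_low = embodiment_years[0] / language_years[1]   # 100x
    ratio_high = embodiment_years[1] / language_years[0]  # 10,000x
    print(f"embodiment vs language: {ratio_low:,.0f}x to {ratio_high:,.0f}x "
          f"(middle of the range is on the order of 1000x)")

    # Rough synapse-count vs parameter-count gap for point #1.
    synapses = (1e14, 1e15)
    llm_params = 1e12
    print(f"synapse/parameter gap: {synapses[0]/llm_params:,.0f}x "
          f"to {synapses[1]/llm_params:,.0f}x")

Under these assumptions the evolutionary-time ratio lands between 100x and 10,000x (hence the ~1000x framing), and the synapse-to-parameter gap in point #1 comes out to roughly two to three orders of magnitude.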
kla-s a day ago | parent
I'd also add that 5) We need some sense of truth. I'm not quite sure whether the current paradigm of LLMs is robust enough, given the recent Anthropic paper about the effect of data quality, or rather the lack thereof: a small number of bad samples can poison the well, and this doesn't get better with more data. Especially in conjunction with 4), some sense of truth becomes crucial in my eyes. (The question, to me, is how this would work. Something verifiable and understandable like Lean would be great, but how does that work with fuzzier topics…)