danans 16 hours ago
> It's incredibly difficult to compress information without having at least some internal model of that information. Whether that model is a "world model" that fits the definition of folks like Sutton and LeCun is a semantic question. Sutton emphasizes his point by saying that LLMs trying to reach AGI is futile because their world models are less capable than a squirrel's, in part because the squirrel has direct experiences and its own goals, and is capable of continual learning based on those in real time, whereas an LLM has none of those. Finally, he says that if you could recreate the intelligence of a squirrel you'd be most of the way toward AGI, but you can't do that with an LLM.
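(To make the compression/modeling link concrete: an entropy-optimal code spends -log2(p) bits per symbol, so the better your model predicts the data, the fewer bits you need. A minimal Python sketch, with a toy text and toy models:

    import math
    from collections import Counter

    def code_length_bits(text, model):
        """Shannon code length: a symbol with probability p costs -log2(p) bits."""
        return sum(-math.log2(model[ch]) for ch in text)

    text = "the squirrel buried the acorn under the tree"
    symbols = set(text)

    # Model 1: knows nothing about the data -- uniform over the alphabet.
    uniform = {ch: 1 / len(symbols) for ch in symbols}

    # Model 2: a crude "learned" model -- empirical symbol frequencies.
    counts = Counter(text)
    empirical = {ch: counts[ch] / len(text) for ch in symbols}

    print(f"uniform model:   {code_length_bits(text, uniform):.1f} bits")
    print(f"empirical model: {code_length_bits(text, empirical):.1f} bits")
    # The empirical model compresses better because it models the data better;
    # compression quality is bounded by model quality.

Same idea scales up: an LLM's cross-entropy on text is exactly its compressed size under arithmetic coding.)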
LarsDu88 14 hours ago | parent | next
This is actually a pretty good point, but quite honestly isn't this just an implementation detail? We can wire up a squirrel robot, give it a Wi-Fi connection to a Cerebras inference engine with a big context window, then let it run about during the day collecting a video feed while directing it to do "squirrel stuff". Then at night we make it go to sleep and use the data collected during the day to continue finetuning the actual model weights in some data center somewhere (a rough sketch of that loop below). After two years, this model would have a ton of "direct experiences" of the world.
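Something like this wake/sleep loop, where every class and function is a hypothetical stub for illustration, not a real robotics or training API:

    # Hypothetical day/night loop -- everything here is a placeholder sketch.
    import random

    class StubRobot:
        def sense(self):            # stand-in for a camera frame
            return random.random()
        def act(self, action):      # stand-in for motor commands
            pass

    class StubModel:
        def infer(self, obs):       # stand-in for a remote inference call
            return obs > 0.5

    def finetune(model, episodes):
        # Stand-in for nightly weight updates on the day's collected data.
        print(f"finetuning on {len(episodes)} experiences")
        return model

    def run_squirrel_robot(model, robot, days, steps_per_day=100):
        for day in range(days):
            episodes = []
            for _ in range(steps_per_day):      # "daytime": act and record
                obs = robot.sense()
                action = model.infer(obs)
                robot.act(action)
                episodes.append((obs, action))
            model = finetune(model, episodes)   # "nighttime": consolidate
        return model

    run_squirrel_robot(StubModel(), StubRobot(), days=3)

The frozen-inference-by-day, finetune-by-night split is the point: it fakes continual learning with a batch update cycle.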
ninetyninenine 15 hours ago | parent | prev
Except Sutton has no idea, or even a clue, about the internal model of a squirrel. He just uses it as a symbol for something utterly stupid yet still smarter than an LLM. It's semantic manipulation in an attempt to prove his point, but he proves nothing. We have no idea how much of the world a squirrel understands. We understand LLMs better than we understand squirrels. Arguably we don't know if LLMs are more intelligent than squirrels.

> Finally he says if you could recreate the intelligence of a squirrel you'd be most of the way toward AGI, but you can't do that with an LLM.

Again, he doesn't have a quantitative baseline for what intelligence means for a squirrel, or for how intelligent a squirrel is compared to an LLM. We literally have no idea whether LLMs are more intelligent or less, and no direct means of comparing the two; it's apples and oranges.