ninetyninenine 13 hours ago
Except Sutton has no real insight into the internal model of a squirrel. He just uses it as a symbol for something utterly stupid yet still smarter than an LLM. It's semantic manipulation in an attempt to prove his point, but he proves nothing. We have no idea how much of the world a squirrel understands. We understand LLMs better than we understand squirrels. Arguably we don't even know if LLMs are more intelligent than squirrels.

> Finally he says if you could recreate the intelligence of a squirrel you'd be most of the way toward AGI, but you can't do that with an LLM.

Again, he doesn't even have a quantitative baseline for what intelligence means for a squirrel, or for how intelligent a squirrel is compared to an LLM. We literally have no idea whether LLMs are more or less intelligent, and no direct means of comparing the two; it's apples and oranges.
danans 13 hours ago | parent
> We have no idea how much of the world a squirrel understands. We understand LLMs more than squirrels.

Based on our understanding of biology and evolution, we know that a squirrel's brain works far more like ours than an LLM does. To the extent we understand LLMs, it's because they are strictly less complex than both our brains and squirrels' brains, not because they are a better model of our intelligence. They are a thin simulation of human language generation, mediated via text.

We also see that a squirrel, like us, is capable of continuous learning driven by its own goals, all on an energy budget many orders of magnitude lower than an LLM's.

That last part is a strong empirical indication that LLMs are a dead end for AGI, given that the real world imposes harsh energy constraints on biological intelligences.

Also remember that Sutton is still an AI maximalist. He isn't saying that AGI isn't possible, just that LLMs can't get us there.
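To put a rough number on "many orders of magnitude", here's a back-of-envelope sketch in Python. Both figures are ballpark assumptions, not measurements: a small-mammal brain running on the order of a watt, and roughly 1.3 GWh for one GPT-3-scale training run (a commonly cited estimate).

    # Back-of-envelope energy comparison. Every figure below is an
    # assumed ballpark, not a measurement.
    SQUIRREL_BRAIN_W = 1.0     # assumption: small-mammal brain, ~1 watt
    LLM_TRAINING_WH = 1.3e9    # assumption: ~1.3 GWh for one large training run

    # A squirrel brain running continuously for a full year:
    squirrel_wh_per_year = SQUIRREL_BRAIN_W * 24 * 365   # ~8,760 Wh

    ratio = LLM_TRAINING_WH / squirrel_wh_per_year
    print(f"Squirrel brain, one year: {squirrel_wh_per_year:>13,.0f} Wh")
    print(f"One LLM training run:     {LLM_TRAINING_WH:>13,.0f} Wh")
    print(f"Ratio: ~{ratio:,.0f}x (~5 orders of magnitude)")

Even if either assumption is off by 10x in the LLM's favor, the gap stays at several orders of magnitude, and that's before counting inference.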