ACCount37 4 days ago
"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so".

I, for one, don't think that "intelligence" can be a binary distinction. Most AIs are incredibly narrow though - entirely constrained to specific tasks in narrow domains.

LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and capable of tackling a wide range of tasks they weren't specifically designed to tackle. They're not superhuman across the board though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others.

And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.
card_zero 4 days ago | parent
You must excuse me, it's well past my bedtime and I only entered into this to-and-fro by accident. But LLMs are very bad in some domains compared to humans, you say? Naturally I wonder which domains you have in mind.

Three things humans have that look to me like they matter to the question of what intelligence is, without wanting to chance my arm on formulating an actual definition, are ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will" (rather 1930s, that one) or curiosity. But those might all be one thing. This basic drive, the notion of what to do next, makes you create ideas - maybe. Here I'm inclined to repeat "fuck knows".

If you won't be drawn on a binary distinction, that seems to mean that everything is slightly intelligent, and the difference in quality of the intelligence of humans is a detail. But details interest me, you see.