card_zero 4 days ago
Well, fuck knows. However, that doesn't automatically make this a "no true Scotsman" argument. Sometimes we just don't know an answer. Here's a question for you, actually: what's the criterion for being non-intelligent?
ACCount37 4 days ago | parent
"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so."

I, for one, don't think that "intelligence" can be a binary distinction. Most AIs are incredibly narrow, though - entirely constrained to specific tasks in narrow domains. LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and capable of tackling a wide range of tasks they weren't specifically designed to tackle.

They're not superhuman across the board, though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others. And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.