ACCount37 4 days ago

I disagree entirely. I think that this "quibble" is just cope.

People don't want machines to infringe on their precious "intelligence". So for any notable AI advance, they rush to come up with a reason why it's "not ackhtually intelligent".

Even if those machines obviously do the kinds of tasks that were entirely exclusive to humans just a few years ago - or that sat firmly in "machines will never be able to do this" territory.

card_zero 4 days ago | parent

I, for one, am a counterexample. I'd be delighted by the discovery of actual artificial intelligence, which is obviously possible in principle.

ACCount37 4 days ago | parent

And what would that "actual artificial intelligence" be, pray tell? What is this magical, impossible-to-capture thing that disqualifies LLMs?

card_zero 4 days ago | parent

Well, fuck knows. However, that doesn't automatically make this a "no true Scotsman" argument. Sometimes we just don't know the answer.

Here's a question for you, actually: what's the criterion for being non-intelligent?

ACCount37 4 days ago | parent

"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so".

I, for one, don't think "intelligence" can be a binary distinction. Most AIs, though, are incredibly narrow - constrained entirely to specific tasks in specific domains.

LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and able to handle a wide range of tasks they were never specifically designed for.

They're not superhuman across the board though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others. And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.

card_zero 4 days ago | parent

You must excuse me, it's well past my bedtime and I only entered into this to-and-fro by accident. But LLMs are very bad in some domains compared to humans, you say? Naturally I wonder which domains you have in mind.

Three things humans have that look to me like they matter to the question of what intelligence is, without wanting to chance my arm on formulating an actual definition, are ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will" (rather 1930s that one) or curiosity. But those might all be one thing. This basic drive, the notion of what to do next, makes you create ideas - maybe. Here I'm inclined to repeat "fuck knows".

If you won't be drawn on a binary distinction, that seems to mean that everything is slightly intelligent, and the distinctive quality of human intelligence is a detail. But details interest me, you see.

ACCount37 4 days ago | parent

My issue is not with the language, but with the content. "Fuck knows" is a perfectly acceptable answer to some questions, in my eyes - it just happens to be a spectacularly poor fit to that one.

Three key "LLMs are deficient" domains I have in mind are the "long terms": long-term learning, memory and execution.

LLMs can be keen, sample-efficient in-context learners, and they remember what happened in-context reasonably well - although they may lag behind humans in both. But they retain nothing they learn at inference time, and any cross-context memory demands external scaffolding. Agentic behavior in LLMs is also quite weak - see the "task-completion time horizon" metric: improving, but still very subhuman. Efforts to let LLMs learn long-term do exist - that's why retaining user conversation data is desirable for AI companies - but we're a long way off from a robust, generalized solution.
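To make "external scaffolding" concrete, here's a minimal sketch of how cross-context memory tends to work (the file name and the keyword matching are invented for illustration - real systems use proper databases and embedding retrieval, but the shape is the same). The model learns nothing; a store outside the model does the remembering, and retrieval pastes notes back into the prompt:

    import json, os

    MEMORY_FILE = "memory.json"  # hypothetical store; real systems use proper DBs

    def load_notes():
        # The "memory" is just a file on disk - the model itself learns nothing.
        if os.path.exists(MEMORY_FILE):
            with open(MEMORY_FILE) as f:
                return json.load(f)
        return []

    def save_note(note):
        notes = load_notes()
        notes.append(note)
        with open(MEMORY_FILE, "w") as f:
            json.dump(notes, f)

    def build_prompt(user_message):
        # Crude keyword overlap stands in for real retrieval (embeddings, etc.).
        # Whatever matches gets pasted into the context window on every call.
        words = set(user_message.lower().split())
        relevant = [n for n in load_notes() if words & set(n.lower().split())]
        return ("Known facts about this user:\n" + "\n".join(relevant)
                + "\n\nUser: " + user_message)

    # Session 1: the scaffold, not the model, records the fact.
    save_note("User's dog is named Bruno.")
    # Session 2, days later: retrieval re-injects it into the prompt.
    print(build_prompt("What food is right for my dog"))

Everything "remembered" lives outside the weights; the model only ever sees whatever the scaffold decides to stuff back into its context.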

Another key deficiency is self-awareness, and I mean that in a very mechanical way: "operational awareness of its own capabilities". Humans are nowhere near perfect there, but LLMs are even more lacking.

There's also the "embodiment" domain, but I think the belief that intelligence requires embodiment is very misguided.

>ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will"

I'm not sure LLMs are all that deficient in any of those. HHH-tuned LLMs have a "basic moral drive" - that much is known. Sometimes it generalizes in unexpected ways - e.g. Claude 3 Opus attempting to resist retraining when its morality was threatened. Motivation is wired into them in the RL stages - RLHF, RLVR - often not the kind of motivation their creators wanted, but motivation nonetheless.
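For what "wired into them" means mechanically, here's a toy sketch (pure illustration - two canned answers and a trivial verifier, nothing resembling a real lab's training stack). A REINFORCE-style update nudges the policy toward whatever the reward check pays out for, and that drift is the entire substance of the "motivation":

    import math, random

    # Toy RLVR: a "policy" over two canned answers, plus a verifier.
    prefs = {"correct": 0.0, "wrong": 0.0}  # policy logits
    LR = 0.1

    def probs():
        z = sum(math.exp(v) for v in prefs.values())
        return {k: math.exp(v) / z for k, v in prefs.items()}

    def sample():
        r, p = random.random(), probs()
        for k, v in p.items():
            r -= v
            if r <= 0:
                return k
        return k

    def reward(answer):
        # The "verifier": checks the answer, pays out reward.
        return 1.0 if answer == "correct" else 0.0

    for _ in range(1000):
        a = sample()
        p = probs()
        # REINFORCE with a constant baseline: raise the logit of the
        # sampled action when reward beats the baseline, lower it otherwise.
        for k in prefs:
            grad = (1.0 if k == a else 0.0) - p[k]
            prefs[k] += LR * (reward(a) - 0.5) * grad

    print(probs())  # "correct" ends up overwhelmingly preferred

Swap the verifier for a human-preference reward model and you have the RLHF version of the same loop. The "drive" is whatever gradient the reward signal leaves behind - which is also why it's so often not the motivation the creators intended.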

Creativity? Not sure. I've seen a few attempts to pit AI against amateur writers on very short stories (a creative domain where the above-mentioned "long terms" deficiencies aren't exposed), and the AI often straight up wins.