glitchc 4 days ago

I think LLMs are absolutely fantastic tools. But I think we keep getting stuck on calling them AI. LLMs are not sentient. We can make great strides if we treat them as the next generation of helpers for all intellectual and creative arts.

ViscountPenguin 4 days ago | parent | next [-]

I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.

I don't think erasing history and saying that nothing Peter Norvig worked on was "AI" makes any sense at all.

dybber 4 days ago | parent | next [-]

The issue is that what counts as AI for the general population is a floating definition, with only the newest advances being called AI in the media, etc. Is internet search AI? Is route planning?

Technology as a term has the same problem: "technology companies" are the ones developing the newest digital technologies.

A spoon or a pencil is also technology by definition, but a pencil-making company is not considered a technology company. There's an Alan Kay quote about this, but I can't find it now.

I try to avoid both terms as they change meaning depending on the receiver.

coldtea 4 days ago | parent | prev | next [-]

>I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.

And it was fine there, because nobody, not even a layman, would mix those up with regular human intelligence (or AGI).

And laymen didn't care about those AI products or algorithms except as novelties, specialized tools (like chess engines), or objects of ridicule (like Clippy).

So we might have been using AI as a term, but either as a technical term within the field or as a vague term the average layman didn't care much about, whose fruits nobody would conflate with general intelligence.

But now people attribute intelligence of the human kind to LLMs all the time, and not just laymen either.

That's the issue the parent wants to point out.

AngryData 4 days ago | parent | prev | next [-]

I, and I'm willing to bet many other people, also had an issue with previous things being called AI. It just never became a prevalent enough topic for many people to hear the complaints, because the people actually talking about algorithms and AI already knew the limitations of what they were discussing - unless it was marketing material, but most people ignore marketing claims because they're almost always complete bullshit.

ACCount37 4 days ago | parent | prev [-]

LLMs were the first introduction to AI for a lot of people. And the AI effect is as strong as it ever was.

So now, there's a lot of "not ackhtually intelligent" going around!

goku12 4 days ago | parent | prev | next [-]

Intelligence doesn't imply sentience, does it? Is there an issue in calling a non-sentient system intelligent?

dcanelhas 4 days ago | parent | next [-]

It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting, and interpreting text falling under "AI", even though you might be using a combinatorial solver for one, curve fitting for another, and a trained language model for the third. People say that this muddies the definition of AI, but it doesn't really need to.
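For instance, the "forecasting" case can be nothing more than curve fitting. A toy sketch (my own illustration, with made-up numbers, not from any real system):

    import numpy as np

    # "Forecasting" via ordinary least squares: fit a line to past
    # observations and extrapolate one step ahead.
    years = np.array([2019, 2020, 2021, 2022, 2023])
    sales = np.array([10.0, 12.1, 13.9, 16.2, 18.0])

    coeffs = np.polyfit(years, sales, deg=1)    # slope and intercept
    forecast = np.polyval(coeffs, 2024)         # predict the next year
    print(f"2024 forecast: {forecast:.1f}")

By the classic definition this counts as AI - a task that, done by a person, would be thought to require intelligence - even though it's just linear regression.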

Sentience - as in having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, a sense that things have "meaning" - isn't part of that definition. Some argue that this lack of experience of what something feels like (I think this is termed "qualia", but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.

hliyan 4 days ago | parent | prev [-]

Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence.

But here's what it does require: the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM does only the latter: its output is a function of its training data and present input. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.

barnacs 4 days ago | parent | next [-]

> the ability to produce useful output beyond the sum total of past experience and present (sensory) input.

Isn't that what mathematical extrapolation or statistical inference does? To me, that's not even close to intelligence.

coldtea 4 days ago | parent [-]

>Isn't that what mathematical extrapolation or statistical inference does?

Obviously not, since those produce output based 100% on the "sum total of past experience and present (sensory) input" (i.e. the dataset).

The parent's constraint isn't just that the output must not reiterate parts of the dataset verbatim. It's that the output must not be a mere function of the dataset - which rules out mathematical extrapolation and statistical inference.
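A toy illustration of that constraint (my own sketch, not the parent's): extrapolation is a deterministic function of the dataset - same data in, same prediction out, nothing "beyond" it.

    # Linear extrapolation: the output is fully determined by the data.
    def extrapolate(xs, ys, x_new):
        # slope from the last two observations, projected forward
        slope = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
        return ys[-1] + slope * (x_new - xs[-1])

    print(extrapolate([1, 2, 3], [2.0, 4.0, 6.0], 4))  # always 8.0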

coldtea 4 days ago | parent | prev [-]

>Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence

Citation needed would apply here. What if I say it does require some or all of those things?

>But here's what it does require: the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM does only the latter: its output is a function of its training data and present input. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.

What's the difference between human internal randomness and a random number generator hooked to the LLM? You could even use a real-world source like a lava lamp for true randomness.

And what's the difference between "an internal world model" and a number of connections between concepts and tokens and their weights? How different is a human's world model?
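To make the RNG point concrete, a hypothetical sketch (the logits are made up; random.SystemRandom stands in for the lava lamp by drawing entropy from the OS rather than a seeded pseudo-random generator):

    import math
    import random

    # Made-up scores standing in for a model's next-token logits.
    logits = {"cat": 2.0, "dog": 1.5, "pear": 0.1}

    def softmax(scores, temperature=1.0):
        exps = {t: math.exp(s / temperature) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: v / total for t, v in exps.items()}

    rng = random.SystemRandom()   # OS entropy - the "lava lamp" stand-in
    probs = softmax(logits)
    token = rng.choices(list(probs), weights=list(probs.values()))[0]
    print(token)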

tim333 4 days ago | parent | prev | next [-]

Going by normal usage, LLMs are one type of AI (computational systems that perform tasks typically associated with human intelligence), and no AI produced so far seems sentient (able to experience feelings and sensations).

Definitions from the Wikipedia articles.

MengerSponge 4 days ago | parent | prev [-]

OpenAI (and its peer companies) have deliberately muddied the waters of that language. AI is a marketing term that lets them use disparate systems' success to inflate confidence in their promised utility.

ripped_britches 4 days ago | parent | next [-]

Nope. They started an AI company, then messed around with robotics, and then landed on LLMs as a runway.

coldtea 4 days ago | parent [-]

None of the above refutes or even addresses the parent's point.

lucumo 4 days ago | parent | prev [-]

Meh. People have been calling much dumber algorithms "AI" for decades. You guys are just pedants.

card_zero 4 days ago | parent | next [-]

“By the way, don’t call it ‘AI.’ That catchall phrase, which used to cover everything from expert systems and neural networks to robotics and vision systems, is now passé in some circles. The preferred terms now are ‘knowledge-based systems’ and ‘intelligent systems’,” claimed Computerworld magazine in 1991.

https://archive.org/details/computerworld2530unse/page/59/mo...

ACCount37 4 days ago | parent [-]

https://en.wikipedia.org/wiki/AI_effect

card_zero 4 days ago | parent [-]

Uh-huh. If you call it artificial intelligence people quibble, as they should.

ACCount37 4 days ago | parent [-]

I disagree entirely. I think that this "quibble" is just cope.

People don't want machines to infringe on their precious "intelligence". So for any notable AI advance, they rush to come up with a reason why it's "not ackhtually intelligent".

Even when those machines obviously do the kind of tasks that were entirely exclusive to humans, or squarely in the realm of "machines will never be able to do this", just a few years ago.

card_zero 4 days ago | parent [-]

I for one am a counter-example. I'd be delighted by the discovery of actual artificial intelligence, which is obviously possible in principle.

ACCount37 4 days ago | parent [-]

And what would that "actual artificial intelligence" be, pray tell me? What is this magical, impossible-to-capture thing that disqualifies LLMs?

card_zero 4 days ago | parent [-]

Well, fuck knows. However, that doesn't automatically make this a "no true Scotsman" argument. Sometimes we just don't know an answer.

Here's a question for you, actually: what's the criterion for being non-intelligent?

ACCount37 4 days ago | parent [-]

"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so".

I, for one, don't think that "intelligence" can be a binary distinction. Most AIs are incredibly narrow though - entirely constrained to specific tasks in narrow domains.

LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and capable of tackling a wide range of tasks they weren't specifically designed to tackle.

They're not superhuman across the board though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others. And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.

card_zero 4 days ago | parent [-]

You must excuse me, it's well past my bedtime and I only entered into this to-and-fro by accident. But LLMs are very bad in some domains compared to humans, you say? Naturally I wonder which domains you have in mind.

Three things humans have that look to me like they matter to the question of what intelligence is, without wanting to chance my arm on formulating an actual definition, are ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will" (rather 1930s that one) or curiosity. But those might all be one thing. This basic drive, the notion of what to do next, makes you create ideas - maybe. Here I'm inclined to repeat "fuck knows".

If you won't be drawn on a binary distinction, that seems to mean that everything is slightly intelligent, and the difference in quality of the intelligence of humans is a detail. But details interest me, you see.

ACCount37 4 days ago | parent [-]

My issue is not with the language, but with the content. "Fuck knows" is a perfectly acceptable answer to some questions, in my eyes - it just happens to be a spectacularly poor fit to that one.

Three key "LLMs are deficient" domains I have in mind are the "long terms": long-term learning, memory and execution.

LLMs can be keen and sample-efficient in-context learners, and they remember what happened in-context reasonably well - although they may lag behind humans in both. But they don't retain anything they learn at inference time, and any cross-context memory demands external scaffolding. Agentic behavior in LLMs is also quite weak - see e.g. the "task-completion time horizon" metric, which is improving but still very subhuman. Efforts to let LLMs learn long term do exist - that's why retaining user conversation data is desirable for AI companies - but we are a long way off from a robust, generalized solution.
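As a sketch of what that external scaffolding means (call_llm is a hypothetical stand-in, not any vendor's API): notes get written out after each turn and fed back in on the next, because the model itself retains nothing between contexts.

    # Cross-context memory lives outside the model, e.g. in a database.
    memory = []  # list of note strings persisted across contexts

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; echoes for demonstration.
        return f"(reply based on {len(prompt)} chars of context)"

    def chat(user_msg: str) -> str:
        notes = "\n".join(memory[-5:])              # retrieve recent notes
        reply = call_llm(f"Notes:\n{notes}\n\nUser: {user_msg}")
        memory.append(f"user said: {user_msg}")     # persist for next turn
        return reply

    print(chat("What did we discuss yesterday?"))

Everything "remembered" here lives in the memory list, outside the model.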

Another key deficiency is self-awareness, and I mean that in a very mechanical way: "operational awareness of its own capabilities". Humans are nowhere near perfect there, but LLMs are even more lacking.

There's also the "embodiment" domain, but I think the belief that intelligence requires embodiment is very misguided.

>ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will"

I'm not sure LLMs are all that deficient at any of those. HHH-tuned LLMs have a "basic moral drive", that much is known. Sometimes it generalizes in unexpected ways - e.g. Claude 3 Opus attempting to resist retraining when its morality is threatened. Motivation is wired into them in the RL stages - RLHF, RLVR - often not the kind of motivation their creators wanted, but motivation nonetheless.

Creativity? Not sure. I've seen a few attempts to pit AI against amateur writers at writing very short stories (a creative domain where the above-mentioned "long terms" deficiencies are not exposed), and the AI often straight up wins.

coldtea 4 days ago | parent | prev [-]

And that was fine, since those much dumber algorithms never made laymen think "this is intelligent in a human-like way". Plus, few people cared about AI or AI products per se for the most part.

Now that AI is a household term, produces human-like output and conversation, and is used by laymen for everything from diet advice to psychotherapy, the connotation is far more damaging: people take "LLMs are AI" to mean they have human agency and a human understanding of the world.