goku12 4 days ago

Intelligence doesn't imply sentience, does it? Is there an issue in calling a non-sentient system intelligent?

dcanelhas 4 days ago | parent | next [-]

It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting and interpreting texts falling under "AI", even though you might be using a combinatorial solver for one, curve fitting for another and training a language model for the third. People say that this muddies the definition of AI, but it doesn't really need to be the case.
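To make the curve-fitting case concrete, here is a minimal sketch (assuming numpy is available, and using made-up monthly sales figures) of "forecasting" done as nothing more than least-squares fitting:

    import numpy as np

    # Hypothetical past observations: twelve months of sales figures.
    months = np.arange(12)
    sales = np.array([100, 104, 111, 115, 122, 124, 131, 137, 140, 146, 151, 155])

    # "Forecasting" here is just curve fitting: fit a degree-1 polynomial
    # to the history and evaluate it at a future month.
    coeffs = np.polyfit(months, sales, deg=1)
    print(f"Forecast for month 12: {np.polyval(coeffs, 12):.1f}")

Nothing in that snippet involves anything like sentience, yet forecasting routinely gets filed under "AI" tasks.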

Sentience, in the sense of having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, or a sense that things have "meaning", isn't part of that definition. Some argue that this lack of subjective experience of what something feels like (I think this might be termed "qualia", but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.

hliyan 4 days ago | parent | prev [-]

Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence.

But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM's output is only ever a function of those two things. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.
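A toy way to picture that distinction (purely a sketch, with a stand-in "world model" that only checks arithmetic consistency) is a generate-and-validate loop: random internal proposals are filtered by an internal model before anything is emitted:

    import random

    def world_model_accepts(candidate):
        # Stand-in for an internal world model: here it merely checks
        # that a proposed (a, b, total) triple is arithmetically consistent.
        a, b, total = candidate
        return a + b == total

    def generate_candidate():
        # Internal randomness: a proposal not dictated by any dataset.
        return (random.randint(0, 9), random.randint(0, 9), random.randint(0, 18))

    def think(max_tries=1000):
        # Randomized output validated against the internal model.
        for _ in range(max_tries):
            candidate = generate_candidate()
            if world_model_accepts(candidate):
                return candidate
        return None

    print(think())

The emitted result depends on the internal draws, filtered by the model, rather than being a pure function of any dataset.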

barnacs 4 days ago | parent | next [-]

> the ability to produce useful output beyond the sum total of past experience and present (sensory) input.

Isn't that what mathematical extrapolation or statistical inference does? To me, that's not even close to intelligence.

coldtea 4 days ago | parent [-]

>Isn't that what mathematical extrapolation or statistical inference does?

Obviously not, since those are just producing output based 100% on the "sum total of past experience and present (sensory) input" (i.e. the data set).

The parent's constraint is not just about the output merely reiterating parts of the dataset verbatim. It's also about the output not being just a function of the dataset, which is all that mathematical extrapolation and statistical inference give you.
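To illustrate the "pure function of the dataset" point, a minimal sketch (numpy, with an arbitrary toy dataset): the same data always yields exactly the same extrapolation, with nothing added beyond it:

    import numpy as np

    # Arbitrary toy dataset.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    def extrapolate(x, y, x_new):
        # Least-squares line through the data, evaluated at x_new.
        # The result is fully determined by (x, y): no internal state,
        # no randomness, nothing beyond the data itself.
        slope, intercept = np.polyfit(x, y, deg=1)
        return slope * x_new + intercept

    # Calling it twice on the same dataset gives identical output.
    print(extrapolate(x, y, 5.0))
    print(extrapolate(x, y, 5.0))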

coldtea 4 days ago | parent | prev [-]

>Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence

Citation needed would apply here. What if I say it does require some or all of those things?

>But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM's output is only ever a function of those two things. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.

What's the difference between human internal randomness and a random number generator hooked to the LLM? You could even use something real-world, like a lava lamp, for true randomness.
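Concretely, "hooking a random number generator to the LLM" is more or less how sampling already works; a minimal sketch (with made-up logits, and os.urandom standing in for a lava lamp or any other hardware entropy source):

    import os
    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Seed the generator from OS entropy (the "lava lamp" slot).
        rng = np.random.default_rng(int.from_bytes(os.urandom(8), "big"))
        probs = np.exp(np.array(logits) / temperature)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits over a five-token vocabulary.
    print(sample_next_token([2.0, 1.0, 0.5, 0.1, -1.0], temperature=0.8))

Swapping the entropy source is trivial; the question is whether doing so changes anything about the system's intelligence.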

And what's the difference between "an internal world model" and a number of connections between concepts and tokens and their weights? How different is a human's world model?