hliyan 4 days ago

Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence.

But here's what it does require: the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM never gets beyond that sum: its output is entirely a function of its training data and the prompt. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.
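To make that concrete, here's a rough Python sketch of the loop I have in mind (everything here is invented for illustration, not a model of any real system):

    import random

    def world_model_score(candidate: float) -> float:
        # Stand-in for an internal world model: "plausible" here just
        # means close to an expectation the model already holds.
        expected = 42.0
        return -abs(candidate - expected)

    def generate(seed_input: float, n: int = 10) -> float:
        # Internal randomness: jitter to explore beyond the raw input.
        candidates = [seed_input + random.gauss(0, 5) for _ in range(n)]
        # Validate the randomized candidates against the world model.
        return max(candidates, key=world_model_score)

    print(generate(40.0))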

barnacs 4 days ago | parent | next [-]

> the ability to produce useful output beyond the sum total of past experience and present (sensory) input.

Isn't that what mathematical extrapolation or statistical inference does? To me, that's not even close to intelligence.

coldtea 4 days ago | parent [-]

>Isn't that what mathematical extrapolation or statistical inference does?

Obviously not, since those produce output based 100% on the "sum total of past experience and present (sensory) input" (i.e., the data set).

The parent's constraint is not just about the output reiterating parts of the dataset verbatim. It's also about the output not being merely a function of the dataset (a condition that rules out mathematical extrapolation and statistical inference).
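A toy illustration of "just a function of the dataset" (numbers invented): a least-squares fit happily extrapolates, but the same data always yields the same answer, so nothing in the output ever goes beyond what the data determined.

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])

    # Fit y = slope * x + intercept by least squares.
    slope, intercept = np.polyfit(x, y, 1)

    # "Extrapolate" to x = 10: prints 21.0, every single run.
    print(slope * 10.0 + intercept)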

coldtea 4 days ago | parent | prev [-]

>Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence

"Citation needed" would apply here. What if I say it does require some or all of those things?

>But here's what it does require: the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM never gets beyond that sum: its output is entirely a function of its training data and the prompt. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.

What's the difference between human internal randomness and a random number generator hooked up to the LLM? You could even use something from the real world, like a lava lamp, for true randomness.
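As a sketch of what "an RNG hooked to the LLM" already looks like, here's ordinary temperature sampling over next-token logits (a standard technique; the logits and the entropy source are placeholders):

    import math
    import random

    def sample_token(logits: list[float], temperature: float = 1.0) -> int:
        # Softmax with temperature turns logits into probabilities.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # The only randomness in the pipeline enters on this line;
        # swap in a lava-lamp/hardware entropy source and nothing
        # downstream changes.
        r = random.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    print(sample_token([2.0, 1.0, 0.1], temperature=0.8))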

And what's the difference between "an internal world model" and a network of weighted connections between concepts and tokens? How different is a human's world model, really?
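For instance, a deliberately toy "world model" that is nothing but weighted connections (all weights invented):

    # Edge weights encode how strongly two concepts go together.
    world_model = {
        ("fire", "hot"): 0.95,
        ("fire", "wet"): 0.02,
        ("water", "wet"): 0.97,
    }

    def association(a: str, b: str) -> float:
        # Look up the connection in either direction; unknown pairs get 0.
        return world_model.get((a, b), world_model.get((b, a), 0.0))

    # "Fire is hot" validates strongly; "fire is wet" doesn't.
    print(association("fire", "hot"), association("fire", "wet"))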