coldtea 4 days ago
> Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence

"Citation needed" would apply here. What if I say it does require some or all of those things?

> But what it does require: the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM does only this. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output could get validated.

What's the difference between human internal randomness and a random number generator hooked to the LLM? You could even use something real-world, like a lava lamp, for true randomness (see the sketch below).

And what's the difference between "an internal world model" and a set of connections between concepts and tokens, with their weights? How different is a human's world model?
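For illustration, a minimal sketch of what "hooking true randomness to the LLM" could look like. This is Python with a toy logits list standing in for a model's output scores; sample_token is a hypothetical helper, not any particular library's API:

    # Minimal sketch: sampling a "next token" with OS entropy instead of
    # a seeded PRNG. `logits` is a toy stand-in for a model's output scores.
    import math
    import random

    def sample_token(logits, temperature=1.0):
        # Softmax over temperature-scaled logits.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # SystemRandom draws from the OS entropy pool (os.urandom), which
        # can itself be fed by hardware noise -- the lava-lamp idea, in effect.
        r = random.SystemRandom().random()
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if r < cumulative:
                return i
        return len(probs) - 1

    print(sample_token([2.0, 1.0, 0.1]))

The point being: from the sampler's side, the entropy source is interchangeable. Whether the draw comes from a PRNG, os.urandom, or a wall of lava lamps changes nothing downstream.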