goku12 | 4 days ago
Intelligence doesn't imply sentience, does it? Is there an issue with calling a non-sentient system intelligent?
dcanelhas | 4 days ago
It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting, and interpreting texts falling under "AI" even though you might be using a combinatorial solver for one, curve fitting for another, and training a language model for the third. People say this muddies the definition of AI, but it doesn't really need to. Sentience, in the sense of having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, or a sense that things have "meaning", isn't part of that definition. Some argue that this lack of subjective experience of what something feels like (I think this is termed "qualia", but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.
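To make that concrete, here is a toy sketch (made-up numbers, plain NumPy, not taken from anywhere in the thread) of "forecasting" done with nothing more than a least-squares line fit:

    # Toy sketch: a "forecast" that is just curve fitting (illustrative data only).
    import numpy as np

    months = np.arange(12)                               # past time steps
    sales = 100 + 5 * months + np.random.randn(12) * 3   # noisy observed trend

    slope, intercept = np.polyfit(months, sales, deg=1)  # least-squares line fit
    forecast_next = slope * 12 + intercept               # extrapolate to month 12
    print(f"Forecast for next month: {forecast_next:.1f}")

Whether you call that "AI" or "statistics" is exactly the definitional question above.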
hliyan | 4 days ago
Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence, or any of the other things traditionally associated with a high level of biological intelligence. What it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM does only the latter: its output is a function of its training data and the current input. A human-like intelligence, by contrast, has some form of internal randomness, plus an internal world model against which such randomized output can be validated.
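A toy sketch of that generate-and-validate loop (the names and the trivial "world model" here are illustrative assumptions, not a claim about how brains or LLMs actually work):

    import random

    def world_model_predicts_success(candidate: float) -> bool:
        # Stand-in internal world model: only candidates in a narrow range
        # are predicted to lead to a useful outcome.
        return 0.4 <= candidate <= 0.6

    def act():
        # Internal randomness: propose candidates not determined by any input...
        for _ in range(100):
            candidate = random.random()
            # ...and validate each one against the internal world model.
            if world_model_predicts_success(candidate):
                return candidate
        return None

    print(act())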