semiquaver a day ago
Don’t worry, goalpost shifting will ensure that no matter how useful LLMs get, there will always be a large contingent of people who insist that anything non-human is not thinking, just sparkling cognition.
ozlikethewizard a day ago | parent
LLMs are not, and never will be, thinking, though, no matter how good they get. You could potentially argue that there is some level of cognition during the training phases (as long as that isn't being outsourced to humans, anyway), but generation of output is stochastic selection of the most common (or most highly ranked, if tuned) following patterns. They cannot learn things outside of training, nor do they actually "know" things. To use the parrot example from above: a parrot doesn't "know" what the words it's been taught to mimic mean, and likewise an LLM doesn't "know" what the concept of love is; it's just been trained to regurgitate the words humans use to describe such a thing. This isn't a criticism of LLMs (that's what they're supposed to do), but it's certainly not cognition.
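For what it's worth, the "stochastic selection of highly ranked following patterns" being described is roughly temperature-scaled softmax sampling over next-token scores. A minimal sketch (the candidate words and logit values here are made up for illustration; real models score tens of thousands of subword tokens, not whole words):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from a softmax over (hypothetical) logits.

    Lower temperature sharpens the distribution toward the top-ranked
    token; temperature near zero is effectively greedy selection.
    """
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    tokens = list(logits.keys())
    return random.choices(tokens, weights=probs, k=1)[0]

# Toy scores for candidate continuations of "I love ..."
logits = {"you": 4.0, "pizza": 2.5, "entropy": 0.1}
next_word = sample_next_token(logits, temperature=0.7)
```

Nothing in this loop "knows" what love is; it just weights strings by how strongly the (here, invented) scores rank them, which is the point the comment is making.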