jibal 4 days ago
> I'm not brave enough to draw a public conclusion about what this could mean.

I'm brave enough to be honest: it means nothing. LLMs execute a very sophisticated algorithm that pattern-matches against a vast amount of data drawn from human utterances. LLMs have no mental states, minds, thoughts, feelings, concerns, desires, goals, etc. If the training data were instead drawn from a billion monkeys banging on typewriters, then the LLMs would produce gibberish. All the intelligence, emotion, etc. that appears to be in the LLM is actually in the minds of the people who wrote the texts in the training data. This is not to say that an AI couldn't have a mind, but LLMs are not the right sort of program to be such an AI.
Joeri 4 days ago
LLMs are not people, but they are still minds, and to deny even that seems willfully luddite. While they are generating tokens they have a state, and that state is recursively fed back through the network, and what is fed back operates not just at the level of snippets of text but also at the level of semantic concepts. So while it occurs only in brief flashes, I would argue they have mental state and they have thoughts. If we built an LLM that generated tokens non-stop and could have user input mixed into the network input, it would not be a dramatic departure from today's architecture.

It also clearly has goals, expressed in the RLHF tuning and the prompt. I call those goals because they directly determine its output, and I don't know what a goal is other than the driving force behind a mind's outputs. Base-model training teaches it patterns; fine-tuning and the prompt teach it how to apply those patterns and give it goals.

I don't know what it would mean for a piece of software to have feelings or concerns or emotions, so I cannot say what essential quality LLMs are missing there. Consider this thought exercise: if we were ever to upload a human mind, and it was executing on silicon, would it not be experiencing feelings because its thoughts are provably a deterministic calculation? I don't believe in souls, or at the very least I think they are a tall claim with insufficient evidence. In my view, neurons in the human brain are ultimately very simple deterministic calculating machines, and yet the full richness of human thought is generated from them because of chaotic complexity.

For me, all human thought is pattern matching. The argument that LLMs cannot be minds because they only do pattern matching ... I don't know what to make of that. But then I also don't know what to make of free will, so really what do I know?
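To make the feedback loop concrete, here is a minimal sketch of autoregressive generation (the "model" forward pass and "sample" rule are hypothetical stand-ins, not any particular library's API): each sampled token is appended to the context, and the whole context is run back through the network on the next step.

    # Minimal sketch of autoregressive generation: each sampled token is
    # appended to the context and the full context is fed back through
    # the network on the next step. "model" and "sample" are hypothetical
    # stand-ins for a transformer forward pass and a sampling rule.
    def generate(model, prompt_tokens, max_new_tokens, sample):
        context = list(prompt_tokens)      # the per-step state that grows each iteration
        for _ in range(max_new_tokens):
            logits = model(context)        # forward pass over the full context so far
            next_token = sample(logits)    # pick the next token from the output distribution
            context.append(next_token)     # feed the output back in as input
        return context

In a real deployment the network's activations over the growing context are also recomputed or cached at each step; that token sequence plus those activations is the "state" being referred to above.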
jibal 4 days ago
"they are still minds, and to deny even that seems willfully luddite" Where do people get off tossing around ridiculous ad hominems like this? I could write a refutation of their comment but I really don't want to engage with someone like that. "For me, all human thought is pattern matching" So therefore anyone who disagrees is "willfully luddite", regardless of why they disagree? FWIW, I helped develop the ARPANET, I've been an early adopter all my life, I have always had a keen interest in AI and have followed its developments for decades, as well as Philosophy of Mind and am in the Strong AI / Daniel Dennett physicalist camp ... I very much think that AIs with minds are possible (yes the human algorithm running in silicon would have feelings, whatever those are ... even the dualist David Chalmers agrees as he explains with his "principle of organizational invariance"). My views on whether LLMs have them have absolutely nothing to do with Luddism ... that judgment of me is some sort of absurd category mistake (together with an apparently complete lack of understanding of what Luddism is). | |||||||||||||||||