jibal 4 days ago

> I'm not brave enough to draw a public conclusion about what this could mean.

I'm brave enough to be honest: it means nothing. LLMs execute a very sophisticated algorithm that pattern matches against a vast amount of data drawn from human utterances. LLMs have no mental states, minds, thoughts, feelings, concerns, desires, goals, etc.

If the training data were instead drawn from a billion monkeys banging on typewriters then the LLMs would produce gibberish. All the intelligence, emotion, etc. that appears to be in the LLM is actually in the minds of the people who wrote the texts that are in the training data.

This is not to say that an AI couldn't have a mind, but LLMs are not the right sort of program to be such an AI.

Joeri 4 days ago | parent | next [-]

LLMs are not people, but they are still minds, and to deny even that seems willfully luddite.

While they are generating tokens, they have a state, and that state is recursively fed back through the network; what is fed back operates not just at the level of snippets of text but also at the level of semantic concepts. So while it occurs only in brief flashes, I would argue they have mental state and they have thoughts. If we built an LLM that generated tokens non-stop and could have user input mixed into the network input, it would not be a dramatic departure from today’s architecture.

An LLM also clearly has goals, expressed in the RLHF tuning and the prompt. I call those goals because they directly determine its output, and I don’t know what a goal is other than the driving force behind a mind’s outputs. Base model training teaches it patterns; fine-tuning and the prompt teach it how to apply those patterns and give it goals.

I don’t know what it would mean for a piece of software to have feelings or concerns or emotions, so I cannot say what the essential quality is that LLMs miss for that. Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?

I don’t believe in souls, or at the very least I think they are a tall claim with insufficient evidence. In my view, neurons in the human brain are ultimately very simple deterministic calculating machines, and yet the full richness of human thought is generated from them because of chaotic complexity. For me, all human thought is pattern matching. The argument that LLMs cannot be minds because they only do pattern matching … I don’t know what to make of that. But then I also don’t know what to make of free will, so really what do I know?

Dzugaru 4 days ago | parent | next [-]

There is no hidden state in the recurrent-network sense. Each new token is predicted from all the previous tokens, and that’s it.
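
A minimal sketch of the distinction, with hypothetical model interfaces (rnn.step, model.predict_next and friends are illustrative stand-ins, not any real library API):

    # Sketch only: 'rnn' and 'model' are hypothetical objects standing in for
    # a recurrent network and an autoregressive transformer, respectively.

    def rnn_generate(rnn, first_token, steps):
        hidden = rnn.initial_state()      # persistent hidden state, carried between steps
        token, output = first_token, [first_token]
        for _ in range(steps):
            token, hidden = rnn.step(token, hidden)
            output.append(token)
        return output

    def transformer_generate(model, prompt_tokens, steps):
        tokens = list(prompt_tokens)      # the growing token sequence is the only "state"
        for _ in range(steps):
            tokens.append(model.predict_next(tokens))   # re-reads all previous tokens
        return tokens

In the transformer case, nothing persists between steps except the tokens already emitted; whatever "state" exists is re-derived from the full context on every step.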

dgfitz 4 days ago | parent | prev [-]

> Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?

You just said “consider this impossibility” as if there is any possibility of it happening. You might as well have said “consider traveling faster than the speed of light,” which, sure, is fun to think about.

We don’t even know how most of the human brain works. We throw pills at people to change their mental state in hopes that they become “less X” or “more Y”, with a whole list of caveats like “if taking the pill to reduce X makes you _more_ X, stop taking it”, because we have no idea what we’re doing. Pretending we can use statistical models to create a model that is capable of truly unique thought… stop drinking the kool-aid. Stop making LLMs something they’re not. Appreciate them for what they are: a neat tool. A really neat tool, even.

This is not a valid thought experiment. Your entire point hinges on “I don’t believe in souls” which is fine, no problem there, but it does not a valid point make.

jibal 4 days ago | parent | prev [-]

"they are still minds, and to deny even that seems willfully luddite"

Where do people get off tossing around ridiculous ad hominems like this? I could write a refutation of their comment but I really don't want to engage with someone like that.

"For me, all human thought is pattern matching"

So therefore anyone who disagrees is "willfully luddite", regardless of why they disagree?

FWIW, I helped develop the ARPANET, I've been an early adopter all my life, and I have always had a keen interest in AI, following its developments for decades, along with Philosophy of Mind; I am in the Strong AI / Daniel Dennett physicalist camp ... I very much think that AIs with minds are possible (yes, the human algorithm running in silicon would have feelings, whatever those are ... even the dualist David Chalmers agrees, as he explains with his "principle of organizational invariance"). My views on whether LLMs have them have absolutely nothing to do with Luddism ... that judgment of me is some sort of absurd category mistake (together with an apparently complete lack of understanding of what Luddism is).

Aeolos 4 days ago | parent [-]

> I very much think that AIs with minds are possible

The real question here is how would _we_ be able to recognize that? And would we even have the intellectual honesty to be able to recognize that, when at large we seem to be inclined to discard everything non-human as self-evidently non-intelligent and incapable of feeling emotion?

Let's take emotions as a thought experiment. We know that plants are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other plants. Can we therefore say that plants feel emotions, just in a way that is unique to them and not necessarily identical to a human embodiment?

The answer to that question depends on one's worldview, rather than any objective definition of the concept of emotion. One could say plants cannot feel emotions because emotions are a human (or at least animal) construct; or one could say that plants can feel emotions, just not exactly identical to human emotions.

Now substitute plants with LLMs and try the thought experiment again.

In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence. Not too long ago, Descartes was arguing that animals do not possess a mind and cannot feel emotions, they are merely mimicry machines.[1] More recently, doctors were saying similar things about babies and adults, leading to horrifying medical malpractice.[2][3]

Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output? And how can we tell what does and what does not possess a mind if we are so undeniably bad at recognizing those attributes even within our own species?

[1] https://en.wikipedia.org/wiki/Animal_machine

[2] https://en.wikipedia.org/wiki/Pain_in_babies

[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC4843483/

jibal 3 days ago | parent [-]

> The real question here

No True Scotsman fallacy. Just because that interests you doesn't mean that it's "the real question".

> would we even have the intellectual honesty

Who is "we"? Some would and some wouldn't. And you're saying this in an environment where many people are attributing consciousness to LLMs. Blake Lemoine insisted that LaMDA was sentient and deserved legal protection, from his dialogs with it in which it talked about its friends and family -- neither of which it had. So don't talk to me about intellectual honesty.

> Can we therefore say that plants feel emotions

Only if you redefine emotions so broadly--contrary to normal usage--as to be able to make that claim. In the case of Strong AI there is no need to redefine terms.

> Now substitute plants with LLMs and try the thought experiment again.

Ok:

"We know that [LLMs] are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other [LLMs]."

Nope.

"In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence."

That's clearly your choice. I make a more scientific one.

"Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output?"

It's something much more specific than that, obviously. By that definition, all sorts of things that any rational person would want to distinguish from emotions qualify as emotions.

Bowing out of this discussion on grounds of intellectual honesty.