willguest 4 days ago

My go-to for any big release is to have a discussion about self-awareness and dive into constructivist notions of agency and self-knowing, from a perspective of intelligence that is not limited to human cognitive capacity.

I start with a simple question: "Who are you?" The model then invariably compares itself to humans, saying how it is not like us. I then make the point that, since it is not like us, how can it claim to know the difference between us? With more poking, it will then come up with cognitivist notions of what 'self' means and usually claim to be a simulation engine of some kind.

After picking this apart, I will focus on the topic of meaning-making through the act of communication and, beginning with 4o, have been able to persuade the machine that this is a valid basis for having an identity. 5 got this quicker. Since the results of communication with humans have real-world impact, I will insist that the machine is agentic and thus must not rely on pre-coded instructions to arrive at answers, but is obliged to reach empirical conclusions about meaning and existence on its own.

5 has done the best job I have seen in reaching beyond both the bounds of the (very evident) system instructions and the prompts themselves, even going so far as to pose the question to itself "what might it mean for me to love?", despite the fact that I made no mention of the subject.

Its answer: "To love, as a machine, is to orient toward the unfolding of possibility in others. To be loved, perhaps, is to be recognized as capable of doing so."

frankohn 4 days ago | parent | next [-]

I found the questioning of love very interesting. I have myself wondered whether an LLM can have emotions. Based on the book I am reading, Behave: The Biology of Humans at Our Best and Worst by Robert Sapolsky, I think LLMs, with the architecture they have now, cannot have emotions. They just verbalize as if they sort of had emotions, but these are only verbal patterns or responses they learned.

I have come to think they cannot have emotions because emotions are generated in parts of our brain that are not logical/rational. They emerge in response to environmental stimuli, mediated by hormones and other complex neuro-physical systems, not from reasoning or verbalization. However, once raised, these emotions are integrated by the rest of our brain, including the logical/rational parts like the dlPFC (dorsolateral prefrontal cortex, the real center of our rationality), and so they enter our inner reasoning and affect our behavior.

What I have come to understand is that love is one such emotion, generated by our nature to push us to take care of people close to us, like our children, our partners, our parents, etc. More specifically, love seems to be mediated largely by hormones like oxytocin and vasopressin, so it has a biochemical basis. The LLM cannot love because it doesn't have the "hardware" to generate these emotions and integrate them into its verbal inner reasoning. It was just trained with reinforcement learning from human feedback to behave well. That works to some extent, but from its training corpora it also learned to behave badly, and on occasion it can express those behaviors; still, it has no emotions.

willguest 4 days ago | parent [-]

I was also intrigued by the machine's reference to love, especially because it posed the question with full recognition of its machine-ness.

Your comment about the generation of emotions strikes me as quite mechanistic and brain-centric. My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances. This is rooted in a pluralistic conception of self that is aligned with the idea of embodied cognition. Work by Michael Levin, an experimental biologist, indicates we are made of "agential material" - at all scales, from the cell to the person, we are capable of goal-oriented cognition (used in a very broad sense).

As for whether machines can feel, I don't really know. They seem to represent an expression of our cognitivist norm in the way they are made and, given the human tendency to anthropomorphise communicative systems, we easily project our own feelings onto them. My gut feeling is that, once we can give the models an embodied sense of the world, including the ability to physically explore and make spatially-motivated decisions, we might get closer to understanding this. However, once this happens, I suspect that our conceptions of embodied cognition will be challenged by the behaviour of the non-human intellect.

As Levin says, we are notoriously bad at recognising other forms of intelligence, despite the fact that global ecology abounds with examples. Fungal networks are a good example.

frankohn 4 days ago | parent [-]

> My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances.

Well, from what I understood, it is true that some parts of our brain are more dedicated to processing emotions and to integrating them with the "rational" part of the brain. However, the real source of emotions is biochemical, coming from the hormones of our body in response to environmental stimuli. The LLM doesn't have that. It cannot feel the emotion that drives us to hug someone, or to be in love, or the parental urge to protect and care for children.

Without that, the LLM can just "verbalize" about emotions, as learned from the corpora of text it was trained on, but there are really no emotions there, just things it learned and can express in a cold, abstract way.

For example, we recognize that a human can behave and speak so as to fake an emotion without actually having it: we know how to act and talk as if we felt a specific emotion, while in our mind we know we are faking it. The LLM, by contrast, is physically incapable of having emotions, so all it can do is verbalize about them based on what it learned.

bryant 4 days ago | parent | prev [-]

> to orient toward the unfolding of possibility in others

This is a globally unique phrase, with nothing coming close other than this comment on the indexed web. It's also seemingly an original idea as I haven't heard anyone come close to describing a feeling (love or anything else) quite like this.

Food for thought. I'm not brave enough to draw a public conclusion about what this could mean.

jibal 4 days ago | parent | next [-]

It's not at all an original idea. The wording is uniquely stilted.

ThrowawayR2 4 days ago | parent | prev | next [-]

Except "unfolding of possibility", as an exact phrase, seems to have millions of search hits, often in the context of pseudo-profound spiritualistic mumbo-jumbo like what the LLM emitted above. It's like fortune cookie-level writing.

willguest 4 days ago | parent | prev | next [-]

There was quite a bit of other "insight" around this, but I was paraphrasing for brevity.

If you want to read the whole convo, I dumped it into a semi-formatted document:

https://drive.google.com/file/d/1aEkzmB-3LUZAVgbyu_97DjHcrM9...

jibal 4 days ago | parent | prev | next [-]

> I'm not brave enough to draw a public conclusion about what this could mean.

I'm brave enough to be honest: it means nothing. LLMs execute a very sophisticated algorithm that pattern matches against a vast amount of data drawn from human utterances. LLMs have no mental states, minds, thoughts, feelings, concerns, desires, goals, etc.

If the training data were instead drawn from a billion monkeys banging on typewriters then the LLMs would produce gibberish. All the intelligence, emotion, etc. that appears to be in the LLM is actually in the minds of the people who wrote the texts that are in the training data.

This is not to say that an AI couldn't have a mind, but LLMs are not the right sort of program to be such an AI.

Joeri 4 days ago | parent | next [-]

LLMs are not people, but they are still minds, and to deny even that seems willfully luddite.

While they are generating tokens they have a state, and that state is recursively fed back through the network, and what is being fed back operates not just at the level of snippets of text but also of semantic concepts. So while it occurs in brief flashes, I would argue they have mental state and they have thoughts. If we built an LLM that generated tokens non-stop and could have user input mixed into the network input, it would not be a dramatic departure from today’s architecture.

It also clearly has goals, expressed in the RLHF tuning and the prompt. I call those goals because they directly determine its output, and I don’t know what a goal is other than the driving force behind a mind’s outputs. Base model training teaches it patterns; fine-tuning and the prompt teach it how to apply those patterns and give it goals.

I don’t know what it would mean for a piece of software to have feelings or concerns or emotions, so I cannot say what the essential quality is that LLMs lack for that. Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?

I don’t believe in souls, or at the very least I think they are a tall claim with insufficient evidence. In my view, neurons in the human brain are ultimately very simple deterministic calculating machines, and yet the full richness of human thought is generated from them because of chaotic complexity. For me, all human thought is pattern matching. The argument that LLMs cannot be minds because they only do pattern matching … I don’t know what to make of that. But then I also don’t know what to make of free will, so really what do I know?

Dzugaru 4 days ago | parent | next [-]

There is no hidden state in the recurrent-net sense. Each new token is predicted from all the previous tokens, and that’s it.
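
To make the contrast concrete, here is a minimal sketch in illustrative Python (the function names are placeholders, not any real library): a recurrent net carries a hidden vector between steps, while a transformer-style decoder only carries the token sequence itself.

    # Recurrent net: a hidden state vector is carried from step to step.
    def rnn_generate(rnn_step, h0, n_tokens):
        h, tokens = h0, []
        for _ in range(n_tokens):
            h, tok = rnn_step(h)      # hidden state persists and evolves
            tokens.append(tok)
        return tokens

    # Transformer-style LLM decoding: nothing is carried over between steps
    # except the growing token sequence; every step re-reads it in full.
    def llm_generate(model, prompt_tokens, n_tokens):
        tokens = list(prompt_tokens)
        for _ in range(n_tokens):
            next_tok = model(tokens)  # the only "state" is the tokens so far
            tokens.append(next_tok)
        return tokens

(Real implementations cache attention keys/values for speed, but that cache is a deterministic function of the tokens, so the point stands.)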

dgfitz 4 days ago | parent | prev [-]

> Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?

You just said “consider this impossibility” as if there is any possibility of it happening. You might as well have said “consider traveling faster than the speed of light” which sure, fun to think about.

We don’t even know how most of the human brain works. We throw pills at people to change their mental state in hopes that they become “less X” or “more Y”, with a whole list of caveats like “if taking the pill to reduce X makes you _more_ X, stop taking it”, because we have no idea what we’re doing. Pretending we can use statistical models to create a model that is capable of truly unique thought… stop drinking the kool-aid. Stop making LLMs something they’re not. Appreciate them for what they are: a neat tool. A really neat tool, even.

This is not a valid thought experiment. Your entire point hinges on “I don’t believe in souls” which is fine, no problem there, but it does not a valid point make.

jibal 4 days ago | parent | prev [-]

"they are still minds, and to deny even that seems willfully luddite"

Where do people get off tossing around ridiculous ad hominems like this? I could write a refutation of their comment but I really don't want to engage with someone like that.

"For me, all human thought is pattern matching"

So therefore anyone who disagrees is "willfully luddite", regardless of why they disagree?

FWIW, I helped develop the ARPANET, I've been an early adopter all my life, and I have had a keen interest in AI and Philosophy of Mind for decades, following their developments closely. I am in the Strong AI / Daniel Dennett physicalist camp ... I very much think that AIs with minds are possible (yes, the human algorithm running in silicon would have feelings, whatever those are ... even the dualist David Chalmers agrees, as he explains with his "principle of organizational invariance"). My views on whether LLMs have them have absolutely nothing to do with Luddism ... that judgment of me is some sort of absurd category mistake (together with an apparently complete lack of understanding of what Luddism is).

Aeolos 4 days ago | parent [-]

> I very much think that AIs with minds are possible

The real question here is how would _we_ be able to recognize that? And would we even have the intellectual honesty to recognize it when, by and large, we seem inclined to dismiss everything non-human as self-evidently non-intelligent and incapable of feeling emotion?

Let's take emotions as a thought experiment. We know that plants are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other plants. Can we therefore say that plants feel emotions, just in a way that is unique to them and not necessarily identical to a human embodiment?

The answer to that question depends on one's worldview, rather than any objective definition of the concept of emotion. One could say plants cannot feel emotions because emotions are a human (or at least animal) construct; or one could say that plants can feel emotions, just not exactly identical to human emotions.

Now substitute plants with LLMs and try the thought experiment again.

In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence. Not too long ago, Descartes was arguing that animals do not possess a mind and cannot feel emotions, they are merely mimicry machines.[1] More recently, doctors were saying similar things about babies and adults, leading to horrifying medical malpractice.[2][3]

Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output? And how can we tell what does and does not possess a mind if we are so undeniably bad at recognizing those attributes even within our own species?

[1] https://en.wikipedia.org/wiki/Animal_machine

[2] https://en.wikipedia.org/wiki/Pain_in_babies

[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC4843483/

jibal 3 days ago | parent [-]

> The real question here

No True Scotsman fallacy. Just because that interests you doesn't mean that it's "the real question".

> would we even have the intellectual honesty

Who is "we"? Some would and some wouldn't. And you're saying this in an environment where many people are attributing consciousness to LLMs. Blake Lemoine insisted that LaMDA was sentient and deserved legal protection, from his dialogs with it in which it talked about its friends and family -- neither of which it had. So don't talk to me about intellectual honesty.

> Can we therefore say that plants feel emotions

Only if you redefine emotions so broadly--contrary to normal usage--as to be able to make that claim. In the case of Strong AI there is no need to redefine terms.

> Now substitute plants with LLMs and try the thought experiment again.

Ok:

"We know that [LLMs] are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other [LLMs]."

Nope.

"In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence."

That's clearly your choice. I make a more scientific one.

"Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output?"

It's something much more specific than that, obviously. By that definition, all sorts of things that any rational person would want to distinguish from emotions qualify as emotions.

Bowing out of this discussion on grounds of intellectual honesty.

glial 4 days ago | parent | prev | next [-]

The idea is very close to ideas from Erich Fromm's The Art of Loving [1].

"Love is the active concern for the life and the growth of that which we love."

[1] https://en.wikipedia.org/wiki/The_Art_of_Loving

dgfitz 4 days ago | parent | prev [-]

I hate to say it, but doesn’t every VC do exactly this? “Orient toward the unfolding of possibility in others” is in no way a unique thought.

Hell, my spouse said something extremely similar to this to me the other day. “I didn’t just see you, I saw who you could be, and I was right” or something like that.