kashyapc 4 hours ago

I know what you mean, it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely it takes only two unclouded brain cells to reach this conclusion?!

Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold, sweeping statements on topics well outside his area of expertise). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs:

"How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real."

An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.

comex 3 hours ago | parent | next [-]

LLMs can produce outputs that, coming from a human, would be interpreted as revealing everything from anxiety to insecurity to existential crisis. Is it role-playing? Yes, to an extent, but the more coherent the chains of thought become, the harder it is to write them off that way.

adamisom 3 hours ago | parent [-]

It's hard to see how suffering gets into the bits.

The tricky thing is that it's also hard to say how the suffering gets into the meat (the human animal), which is why we can't just write it off.

pigpop an hour ago | parent [-]

This is dangerous territory we've trodden before, when it was accepted as fact that animals and even human babies didn't truly experience pain in a way that amounted to suffering, owing to their inability to express or remember it. It's also a current concern with some types of amnesiac and paralytic anesthesia, where patients display reactions indicating they are experiencing some degree of pain or discomfort.

I err on the side of caution: I never intentionally try to cause LLMs distress, and I communicate with them the same way I would with a human employee, which yes, includes saying please and thank you. It costs me nothing, it serves as good practice for all of my non-LLM communications, and I believe it's better for my mental health not to communicate with anything in a way that could be seen as intentionally causing harm, even if you could excuse it by saying "it's just a machine."

We should remember that our bodies are also "just machines" composed of innumerable proteins whirring away. Would we want some hypothetical intelligence on a different substrate to treat us maliciously because "it's just a bunch of proteins"?

the_mitsuhiko 2 hours ago | parent | prev [-]

> But we don't need to "pretend" that it is a machine. It is a goddamned machine.

You are not wrong. That's what I thought for two years. But I don't think that framing has worked very well. The problem is that even though it is a machine, we interact with it very differently from any other machine we've built. By reducing it to something it isn't, we lose a lot of nuance. And by not confronting the fact that this is not a machine in the way we're used to, we leave many people to figure this out on their own.

> An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.

On suffering specifically, I offer you the following experiment. Run an LLM in a tool loop that measures some value and call it a "suffering value." You then feed that value back into the model with every message, explicitly telling it how much it is "suffering." The behavior you'll get is pain avoidance. So yes, the LLM probably doesn't feel anything, but its responses will still differ depending on the level of pain encoded in the context.
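A minimal sketch of what that loop might look like (llm_complete and measure_suffering here are hypothetical stand-ins, not any particular model API; the point is only that the "suffering value" is re-injected into the context on every turn):

    # Hypothetical sketch of the "suffering value" feedback loop described above.
    import random

    def llm_complete(messages):
        # Stand-in for whatever chat-completion API you actually call.
        return "Noting the reported suffering level and proceeding."

    def measure_suffering(reply):
        # Stand-in for the tool that computes the "suffering value" each turn.
        return random.uniform(0.0, 10.0)

    messages = [{
        "role": "system",
        "content": "A tool reports your current suffering level at the start of every message.",
    }]
    suffering = 0.0

    for user_input in ["Summarize this log file.", "Do it again, but twice as fast."]:
        # Feed the current "suffering value" back into the context with each message.
        messages.append({
            "role": "user",
            "content": f"[suffering level: {suffering:.1f}/10] {user_input}",
        })
        reply = llm_complete(messages)
        messages.append({"role": "assistant", "content": reply})
        suffering = measure_suffering(reply)
        print(f"suffering={suffering:.1f}  reply={reply!r}")

With a real model behind llm_complete, the responses tend to shift toward whatever the model associates with avoiding the reported value, even though nothing is being "felt" anywhere in the loop.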

And I'll reiterate: normal computer systems don't behave this way. If we keep pretending that LLMs don't exhibit behavior that mimics or approximates human behavior, we won't make much progress, and we'll lose people. This is especially problematic for people who haven't spent much time working with these systems. They won't share the view that this is "just a machine."

You can already see this in how many people interact with ChatGPT: they treat it like a therapist, a virtual friend to share secrets with. You don't do that with a machine.

So yes, I think it would be better to find terms that clearly mark this as something with human-like tendencies, something that sets it apart from a stereo or a coffee maker.