the_mitsuhiko 4 hours ago
> Maybe the balance of spending time with machines vs. fellow primates is out of whack.

It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, the interaction becomes human-like. From my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it, and I can see that triggering unhealthy behaviors in humans. Our inability to name these things properly doesn't help. I think pretending it's a machine, on the same level as a coffee maker, does help set the right boundaries.
kashyapc 4 hours ago
I know what you mean; it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely it takes only two unclouded brain cells to reach this conclusion?!

Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold and sweeping statements on topics well outside his area of expertise). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs: "How can you tell if something is real? Simple: if it suffers, it is real. If it can't suffer, it is not real."

An LLM can't suffer. So there's no need to get one's knickers in a twist with mental gymnastics.
mekoka 3 hours ago
> I think pretending it's a machine, on the same level as a coffee maker, does help set the right boundaries.

Why would you say pretending? I would say remembering.