jdw64 3 hours ago

hmm.... That also seems like a reasonable framing. But the original article is arguing, first of all, that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you. To be honest, I have not thought about this topic very deeply; if we debated it further, I would probably just echo other people's opinions. As you know, when something complex is compressed into a mental model, some information is always lost. In this case, the compression may be too lossy to be very useful. I have not spent enough time thinking about this issue on my own, and I also have not really tried on different positions, compared them, and tested them against each other. So my current thoughts on this topic are probably not very high-resolution. In that sense, I may agree with you, but it would not really be an answer in a form that I would recognize as my own. It would mostly be an echo of other people's opinions.

altruios an hour ago | parent [-]

Anthropomorphizing is giving something 'human' qualities. Intelligence and consciousness are not solely human qualities, and treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they 'think' at all), and treating them as if they think exactly like us is probably going to lead to bad places. I treat them like an alien mind: probably thinking, but in an alien way that is hard to recognize as 'thinking' (as these discussions prove), and, if experiencing at all, experiencing through something like a metaphorical optophone.