mplanchard 3 hours ago
Something that bothers me about the intentional anthropomorphization of the LLM interface is that it asks me to conflate a tool with a sentient being. The firm expectations and lack of patience I have for any failings in most of my tools would be totally inappropriate to apply to another human being, and yet here I am asked to interact with this tool as though it were a person. The only options are either to treat the tool in a way that feels "wrong," or to be "kind" to the tool, and I think you see people going both ways. I worry that, if I get used to being impatient and short with the AI, some of that will bleed into my textual interactions with other people.
empath75 3 hours ago | parent
It inherently imitates people. Even when you ask it to be more robotic, it does it in a way that a human would if you asked them to be more robotic.