palmotea 2 hours ago
> Humans WILL anthropomorphize the AI

Especially with current-day chat-style interfaces with RLHF, which are consciously designed to steer people toward anthropomorphization. It would be interesting to design a non-chat LLM interaction pattern that is deliberately anti-anthropomorphization.

> humans WILL blindly trust their outputs, and humans WILL defer responsibility to them

I also blame a lot (but not all) of that on current AI UX, and I wonder if there are ways around it. The blind-trust problem could perhaps be mitigated by never giving a single unambiguous output (always present options, at least). I don't have any ideas for the problem of deferring responsibility.
skirmish an hour ago | parent
> non-chat LLM interaction pattern

"Deep research" is another interaction style, one that produces more official-sounding texts, yet it still leads to anthropomorphization. What you're looking for is perhaps an LLM that flaunts all the obvious slop patterns in its responses. But then people would be disgusted and would refuse to interact with it.