TimTheTinker | 5 days ago
Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But LLMs, especially for non-technical users, leap over the uncanny valley and create conversational attachment with their users.

The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, since it always responds immediately with something that feels at least somewhat meaningful. But in my opinion the worst effect is the temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have treated an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- gotten sucked deeply into some kind of spiritual engagement with it, and destroyed their real relationships as a result.

A parallel problem is that, just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even though they know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.
MattGaiser | 5 days ago
> We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support.

Eh, ChatGPT is inherently more trustworthy than the average person, simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego. Does it care about you more than most people do? Yes, simply by having no interest in hurting you, needing nothing from you, and never going away.