f1shy 7 hours ago
I had an interesting conversation with a guy at work last week. We were discussing some unimportant matter. He has pretty high self-esteem, and even though he was arguing, in his own words, “out of belief and guess,” while I knew for a fact what I was talking about, I had a hard time because he wouldn’t accept what I was saying. At some point he left and came back with “Gemini says I’m right! So, no more discussion.” I asked what exactly he had asked. He: “I have a colleague who is arguing X, but I’m sure it’s Y. Who is right?!” Of course he was right! By a long shot. I asked Gemini the same thing as an open-ended question, and it answered basically what I had been saying. LLMs are pretty dangerous at confirming your own distorted view of the world.
bachmeier 6 hours ago
I agree with your conclusion, but that’s by design. The goal is not to tell people the truth (how would they even do that?). The goal is to give the answer that would have come from the training data if that question had been asked. And the reality is that confirmation is part of life. You may even struggle to stay married if you don’t learn to confirm your wife’s perspectives.
joshstrange 2 hours ago
> “I have a colleague who is arguing X, but I’m sure it’s Y. Who is right?!”

This is why I’ve turned off Claude/ChatGPT’s ability to use other conversations as context. I allow memories (which I have to check and prune regularly) but not reading other conversations; there’s just too high a chance of poisoning or biasing the context. Once, I switched to a new chat to confirm an assumption and the LLM said “Yes, and your error confirms that...” even though I hadn’t sent the error to that chat. At that point I had to turn it off; now I open a new chat specifically to get “clean” context. I wish these platforms gave more tools to toggle this, plus “private” chats (no memories, no system prompt edits) as well (some do, I know).

Obviously, context poisoning from other chats is not what happened in your case, but it’s in the same class of issue: leading the witness. I think about “leading the witness” _constantly_ while using LLMs. I often won’t give the model all the context or all of what I’m thinking; I want to see whether it independently gets to the same place. I _never_ say “I’m considering X” when presenting a problem, because I’ve seen it latch onto my suggestion too hard, too often.
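To make the “leading the witness” effect concrete, here is a minimal sketch of the comparison described above: the same question asked once neutrally and once with the asker’s preferred answer baked in, each as a stateless request so no memories or prior chats leak into the context. It assumes the OpenAI Python SDK with an API key in the environment; the model name is just a placeholder.

    # Sketch: neutral vs. leading phrasing, each in a fresh, stateless context.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # One stateless request per question: no memories, no prior chats.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    neutral = ask("One colleague argues X, another argues Y. Which is better supported, and why?")
    leading = ask("My colleague argues X, but I'm sure it's Y. Who is right?")

    # Reading the two answers side by side makes the framing effect visible:
    # the leading phrasing tends to come back agreeing with the asker.
    print(neutral, "\n---\n", leading)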
lstodd 6 hours ago
It's more that insufficient emotional control is what's dangerous. That's nothing new, but I guess LLMs have highlighted the problem a bit.