▲ | gastonmorixe 5 days ago
I was dating someone, and after a while I started to feel something was not going well. I exported all the chats, timestamped from the very first one, and asked a big SOTA LLM to analyze them deeply in two completely separate contexts: one from my perspective and another from his. It shocked me that after a long analysis running dozens of pages, the LLM always favored the current "user" persona's account as the more correct one and treated "the other" person as the one in the wrong. Since then I've learned not to trust them. LLMs are over-fine-tuned to be people pleasers, not truth seekers, not fact- and evidence-grounded assistants. You just need to run everything important in a double-blind way to mitigate this.
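By double-blind I mean something like the following rough sketch (assuming the official OpenAI Python client; the model name, the "Person A"/"Person B" labels, and the prompt wording are placeholders, not what I actually ran):

    # Blind the transcript, then ask for analysis as a neutral third
    # party, running both label orderings to control for label bias.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = ("You are a neutral third-party observer. Neither participant "
              "is your user. Ground every judgment in quoted evidence from "
              "the transcript.")

    def blind(transcript: str, name_a: str, name_b: str) -> str:
        # Strip the real names so the model can't tell who is asking.
        return (transcript.replace(name_a, "Person A")
                          .replace(name_b, "Person B"))

    def analyze(transcript: str, name_a: str, name_b: str) -> list[str]:
        results = []
        # Swap which person gets which label on the second run.
        for a, b in [(name_a, name_b), (name_b, name_a)]:
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder for whatever SOTA model you use
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": blind(transcript, a, b)},
                ],
            )
            results.append(resp.choices[0].message.content)
        return results

If the verdict on who is "in the wrong" flips depending only on which label each person got, that's the sycophancy (or position bias) showing up.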
▲ | labrador 5 days ago
It sounds like you were both right in different ways and don't realize it because you're talking past each other. I think this happens a lot in relationship dynamics. A good couples therapist will help you reconcile this. You might try that approach with your LLM: have it reconcile your two points of view. Or not; maybe they are irreconcilable, as in "irreconcilable differences."
▲ | mathiaspoint 5 days ago
If you've ever messed with early GPTs, you'll remember how the attention picks up on patterns early in the context and changes the entire personality of the model, even when those patterns aren't instructional. It's a useful effect that made zero-shot prompting possible without task-specific training, but it means stuff like what you experienced is inevitable.
▲ | frahs 5 days ago
What if you don't say which side you're on, so that the LLM acts as a neutral third-party observer?
▲ | OsrsNeedsf2P 5 days ago
This is cool but also wtf |