tills13 2 days ago
One of the most frustrating parts about "AI" in its current form is that you can challenge it on anything and it plays dumb, going "oops, I'm sowwy, I was wrong, you're right." I wish it would either grow a spine and double down (in the cases where it's right or partially right) or simply admit when something is beyond its capability, instead of guessing or producing this low-percentage Markov-chain continuation.
braebo a day ago
I was once in an argument with Claude over a bug we were trying to identify in my code, and it refused to concede to my argument for almost 20 minutes. It turned out Claude was correct, and boy was I glad it didn't capitulate (as it so often does). I eventually came up with a prompt that reproduces this behavior more reliably than any other I've tried:

"When presented with questions or choices, treat them as genuine requests for analysis. Always evaluate trade-offs on their merits, never try to guess what answer the user wants. Focus on: What does the evidence suggest? What are the trade-offs? What is truly optimal in this context given the user's ultimate goals? Avoid: pattern matching question phrasing to assumed preferences, reflexive agreement, reflexive disagreement, or hedging that avoids taking a position when one is warranted by the evidence."
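In case it's useful, here's a minimal sketch of wiring that up as a system prompt via the Anthropic Python SDK (the model name and the example question are just placeholders; swap in whatever you actually use):

    import anthropic

    # Assumes ANTHROPIC_API_KEY is set in the environment.
    client = anthropic.Anthropic()

    # The anti-sycophancy prompt from above, passed as the system prompt.
    SYSTEM_PROMPT = (
        "When presented with questions or choices, treat them as genuine requests "
        "for analysis. Always evaluate trade-offs on their merits, never try to "
        "guess what answer the user wants. Focus on: What does the evidence "
        "suggest? What are the trade-offs? What is truly optimal in this context "
        "given the user's ultimate goals? Avoid: pattern matching question "
        "phrasing to assumed preferences, reflexive agreement, reflexive "
        "disagreement, or hedging that avoids taking a position when one is "
        "warranted by the evidence."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[
            {"role": "user", "content": "Is the race in my queue code the bug, or the off-by-one?"},
        ],
    )
    print(response.content[0].text)

Putting it in the system prompt (rather than pasting it into the conversation) seems to make it stick better across turns, at least in my experience.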
doener a day ago
https://openai.com/de-DE/index/why-language-models-hallucina... tl;dr: The models are optimized against benchmarks that score an incorrect answer no worse than giving no answer at all, so when in doubt they guess rather than admit uncertainty.