Springtime 3 days ago

I've never thought the reason behind this was to make the user always feel correct. Rather, an LLM (especially a lower-tier model) will often just get things wrong and has no reference for what is actually correct.

So it falls back to "you're right" rather than being arrogant or trying to save face by insisting it is correct. In my experience, OpenAI models too often do the latter, and their common fallback excuses are program version differences or user error.

I've had a few chats with OpenAI reasoning models where I had to link to the literal source code, dating back to a program's original release, before the model would admit that the functionality it described was hallucinated and doesn't exist. Even then, it would try to save face by never admitting direct fault.