▲ | seperman 3 days ago |
Very interesting. Why does Claude find more problems if we mention the code is written by another developer?
▲ | mcintyre1994 3 days ago | parent | next [-] |
Total guess, but maybe it breaks it out of the sycophancy that most models seem to exhibit? I wonder if they’d also be better at things like telling you an idea is dumb if you tell it it’s from someone else and you’re just assessing it.
▲ | bgilly 3 days ago | parent | prev | next [-] |
In my experience, Claude will criticize others more than it will criticize itself. Seems similar to how LLMs in general tend to say yes to things or call anything a good idea by default. I find it to be an entertaining reflection of the cultural nuances embedded in the training data and reinforcement learning processes.
▲ | daveydave 3 days ago | parent | prev | next [-] |
I would guess the training data (conversational, as opposed to coding-specific solutions) is weighted towards people finding errors in others' work, more than people discussing errors in their own. If you knew there was an error in your thinking, you probably wouldn't think that way.
▲ | gdudeman 3 days ago | parent | prev [-] |
Claude is very agreeable and an eager helper. It gives you the benefit of the doubt if you're coding. It also gives you the benefit of the doubt if you're looking for feedback on your developer's work. If you give it a hint of distrust ("my developer says they completed this, can you check and make sure, and give them feedback?"), Claude will look out for you.