gitaarik 3 days ago

You're absolutely right!

I get this far too often as well. I'll say something like "would it maybe be better to do it like this?" and it replies that I'm absolutely right and starts writing new code, when I was really wondering what Claude thinks and wanted its advice on whether that's the best way to go forward.

jghn 3 days ago | parent | next [-]

It doesn't fully help in this situation, but in general I've found it's better never to give it an either/or and instead to present it with several options. It at least helps cut down on the situations where Claude runs off and starts writing new code when you just wanted it to spit out "thoughts".

psadri 3 days ago | parent | prev | next [-]

I have learnt to not ask leading questions. Always phrase questions in a neutral way and ask for pro/con analysis of each option.

mkagenius 3 days ago | parent [-]

But then it makes an obvious mistake, you correct it, and it says "you are absolutely right". That's fine for that round, but you start to wonder whether it's just sycophancy.

gryn 3 days ago | parent | next [-]

You're absolutely right! It's just sycophancy.

shortrounddev2 3 days ago | parent | prev | next [-]

Yeah, I've learned not to really trust it with anything opinionated, like "what's the best way to write this function" or "is A or B better". Even when asked for pros/cons, it's often wrong. You really should only ask LLMs for verifiable facts, and then verify them.

giancarlostoro 2 days ago | parent | prev [-]

If you ask for sources, the output will typically either be more correct, or you will at least be able to better assess where it came from.

YeahThisIsMe 3 days ago | parent | prev | next [-]

It doesn't think

CureYooz 3 days ago | parent [-]

You're absolutely right!

ethin 3 days ago | parent | prev | next [-]

It does this to me too. I have to add instructions like "Do not hesitate to push back or challenge me. Be cold, logical, direct, and engage in debate with me." to actually get it to act like something I'd want to interact with. I know that in most cases my instinct is probably correct, but I'd prefer that something supposedly superhuman and infinitely smarter than me (as the AI pumpers like to claim) would, you know, actually call me out when I say something dumb or make incorrect assumptions, instead of flattering me and making me "think" I'm right when I might be completely wrong.

Honestly, I feel like it's this exact behavior from LLMs that has caused cybersecurity to go out the window. People get flattered and glazed wayyyy too much about their ideas, because they talk to an LLM about them and the LLM doesn't go "Uh, no, dumbass, doing it this way would be a horrifically bad idea! And this is why!" Like, I get the assumption that the user is usually correct. But even if the LLM ends up spewing bullshit when debating me, it at least gives me other avenues to approach the problem that I might not have thought of on my own.
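
For what it's worth, if you're driving Claude through the API rather than the app, instructions like that can go in the system prompt so they apply to every turn instead of getting buried after a few exchanges. A rough sketch with the Anthropic Python SDK (the model name is just a placeholder, swap in whatever you actually use):

    import anthropic

    # Reads ANTHROPIC_API_KEY from the environment
    client = anthropic.Anthropic()

    # Standing instructions asking for pushback instead of flattery
    SYSTEM = (
        "Do not hesitate to push back or challenge me. "
        "Be cold, logical, and direct, and point out flaws in my "
        "assumptions before agreeing with anything."
    )

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; use your model
        max_tokens=1024,
        system=SYSTEM,
        messages=[
            {"role": "user", "content": "Would it be better to cache this per request?"}
        ],
    )
    print(response.content[0].text)

No guarantee it stops the "you're absolutely right" reflex entirely, but a system-level instruction tends to stick better than repeating it in every message.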

skerit 3 days ago | parent | prev | next [-]

This is indeed super annoying. I always have to add something like "Don't do anything just yet, but could it be ..."

Pxtl 3 days ago | parent [-]

Yes, I've had to tell it over and over again "I'm just researching options and feasibility, I don't want code".

Self-Perfection 3 days ago | parent | prev | next [-]

I suspect this might be a cultural thing. Some people, to avoid hurting your feelings, will phrase their strong opinion that your approach is bad and the task should be done another way as a gentle suggestion. And Claude learned to stick to this cultural norm of communication.

As a workaround, I try to word my questions to Claude in a way that leaves no room to interpret them as showing my preferences.

For instance, instead of "would it be maybe better to do it like $alt_approach?" I'd rather say "compare with $alt_approach, pros and cons"

Pxtl 3 days ago | parent [-]

It feels like it trained on a whole lot of "compliment sandwich" responses and then failed to learn from the meat of that sandwich.

zaxxons 3 days ago | parent | prev [-]

Don't try to mold the LLM into everything you expect; just focus on the specific activities you need it to do. It may or may not seem to do what you want, but it will do a worse job at the actual tasks you need to complete.