dualvariable 3 days ago

I really wish they'd stop trying to suck up to me--all the "that's a really insightful question!" stuff.

I'm one of those aspy people who immediately don't trust other humans who try to fluff up my ego. Don't like it from a chatbot either.

But the fact that all the chatbots do it means that most people really crave that ego reinforcement.

awakeasleep 3 days ago | parent | next [-]

You can already fix this in ChatGPT.

Settings > Personalization:

1. Base Style & Tone: Efficient

2. Warmth: Less

3. Enthusiastic: Less

I am amazed that people can use it at all without these changes.
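For API users there is no Personalization panel, but a similar effect can be approximated with a system prompt. This is a hedged sketch: the prompt wording below is illustrative, not an official mapping of the "Efficient / Warmth: Less / Enthusiastic: Less" settings.

```python
# Sketch: approximating "Efficient / less warm / less enthusiastic"
# preferences via a system message for a chat-style LLM API.
# The exact phrasing is an assumption, not documented behavior.

def build_messages(user_text: str) -> list[dict]:
    system = (
        "Be concise and efficient. Do not compliment the user or their "
        "questions. Avoid warm or enthusiastic filler; answer directly."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Explain tail-call optimization.")
```

The returned list can be passed as the `messages` argument to any chat-completions-style endpoint; how strongly the model honors it varies by model and, as noted below, may drift over a long conversation.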

dgellow 2 days ago | parent [-]

Does that work in your experience? From what I see, after a few rounds it goes back to being incredibly annoying.

I've dealt with frustrating software my whole life, but LLMs are the only kind that makes me want to scream at it in actual anger.

awakeasleep 21 hours ago | parent [-]

Well, it works perfectly for text based interactions, but if you try to do the thing where you can have a voice conversation with the robot, it doesn't seem to do much.

As a result, I only try voice mode once per new model release.

idle_zealot 3 days ago | parent | prev | next [-]

I do have to wonder what the mix is between "our data show this is how most people want to be talked to" and "these tokens lead to better responses on objective measures of correctness." That is, in the training data insightful questions are tangled with insightful answers, so if the bot basically always treats the user like a genius it gets on the track that leads to better answers.

Or yeah, it's just people being weak to flattery.

astrange 3 days ago | parent | prev [-]

LLMs are only capable of thinking out loud, so in some sense this part of the answer is helping to convince it that it's answering a good question.

Same reason for the "That's not X, it's Y" construct. It actually needs to say that.

(Some exceptions for reasoning models.)