pipnonsense 18 hours ago

So that’s why I am getting clickbaity last sentences in every ChatGPT response now.

Things like ”If you want, I can also show a very fast Photoshop-style trick in Krita that lets you drag-copy an area in one step (without copy/paste). It’s hidden but extremely useful.”

Every single chat now has it. Not just the conversational prompt (“I can continue talking about this”), but very clickbaity teasers like: “almost nobody knows about this”, “you will be surprised”, “all VIPs are now using this car, do you want to know which one?”, etc.

KellyCriterion 18 hours ago | parent | next [-]

I find, again, that Claude (web) is outstanding and very comfortable here:

In most of my discussions throughout the day, it doesn’t ask any "follow up" questions at the end. Very often it says things like: "you have two options: A - ..... and B - ... while the one includes X and the other Y..."

But this is what the OP underlined: Claude is popular among businesses; most "non-tech" people don’t even know that it exists.

buzzy_hacker 18 hours ago | parent | prev | next [-]

Same here. “Do you want the one useful tip related to this topic that most people miss? It’s quite surprising.”

If it were so useful, just tell me in the first place! If you say “Yes,” it’s usually just a regurgitation of your prior conversation, not actually new information.

This immediately smelled of engagement bait as soon as the pattern started appearing recently. It’s omnipresent and annoying.

dostick 18 hours ago | parent | next [-]

Yes, ChatGPT just recently started adding these engagement-phrased follow-ups: “If you want, I can also show you one very common sign people miss that tells you…”

Esophagus4 18 hours ago | parent | prev [-]

You can tell it not to do this in your personalized context.

The model doesn’t always obey it, but it has worked for me about 80% of the time.
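For API users, the rough equivalent of the web app’s personalized context is a system message prepended to every request. A minimal sketch, assuming the OpenAI Python SDK’s chat-completions message format (the instruction wording and the commented-out model name are illustrative, not a guaranteed fix — as noted above, the model doesn’t always obey):

```python
# Anti-follow-up-bait instruction, analogous to ChatGPT's "custom instructions".
SYSTEM_INSTRUCTIONS = (
    "Answer the question directly and completely. Do not end responses with "
    "teasers, offers of extra tips, or follow-up questions such as "
    "'If you want, I can also show you...'."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the instruction as a system message to every request."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

# Usage with the SDK (requires an API key, so not executed here):
# client = OpenAI()
# client.chat.completions.create(model="gpt-4o", messages=build_messages("..."))
```

The same idea applies to most chat APIs: a system-role message carries standing instructions that persist across every turn of the conversation.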

dkrich 18 hours ago | parent | prev | next [-]

This, and it also constantly says stupid things like “yes, that is a great observation, and that’s how the pros do it for this very reason!” in response to a specific question that doesn’t apply to anything anyone else is doing.

jjallen 18 hours ago | parent | prev | next [-]

This is not just OpenAI, though. I don’t think this is new for these AI chat apps in general. Claude, at the very least, asks a question as the last part of its response nearly every time, I believe.

Bengalilol 18 hours ago | parent | prev [-]

Those "Prompt-YES-baity" last sentences are somehow counterproductive.