aurareturn 19 hours ago

I don't have a problem with the suggestions. Google search does the same at the end of searches.

It does very often suggest things I want to know more about.

sonink 18 hours ago | parent | next [-]

Suggestions are absolutely fine. But this is baiting. ChatGPT could easily have given me that information without the bait, and I would have happily consumed it. And maybe if it did it once, that would be fine - but it kept on doing it - bait after bait after bait.

The objective was clearly to increase engagement "metrics". It seems to me as if the leadership will take all the 'shortcuts' required for growth.

JimDabell 5 hours ago | parent | next [-]

It’s worse than baiting. What happens a lot to me is:

Me: [Explains situation, followed by a request.]

AI: [7–8 paragraphs and bullet point lists explaining the situation back to me]. Would you like me to [request]?

Me: That’s literally what I just asked you to do.

llm_nerd 18 hours ago | parent | prev [-]

This seems overly cynical.

Firstly, tl;dr is a very real thing. If the user asks a question and the LLM both answers it and then writes an essay about every probable subsequent question, that would overwhelm most people, and few would think it's a good idea. That isn't how a conversation works, either.

Worse still, if you're on a usage quota or paying by the token and a simple question returns volumes of unasked-for information, most people would be very cynical about that, suspecting the service is trying to saturate usage unprompted.

Gemini often ends a response with "Would you like to know more about {XYZ}", and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on, my original question satisfied, without digging deeper. I don't see the big issue here. Every now and then it piques my interest, though, and then I actually find it beneficial.

Prompts for possible/probable follow-up lines of inquiry are a non-issue; I see no problem with them at all. They are nothing compared to the user-glazing that these LLMs do.

markers 18 hours ago | parent | next [-]

Have you used ChatGPT lately?

What you describe is not quite what they are doing: they are adding nudges at the end of the follow-up question suggestions. For instance, I was researching some IKEA furniture and it gave follow-up suggestions with nudges in parentheses: "IKEA furniture many people use for this (very cool solution)", and at the end of another suggestion: "(very simple, but surprisingly effective)". They are subtle cliffhangers trying to influence you to go on, not pure suggestions. I'm just waiting for "(You wouldn't believe what this did!)". It has soured me on the service; Claude has a much better personality imo.

sk5t 17 hours ago | parent | next [-]

Yes, it very closely parallels the “one weird trick” bait from a decade ago.

what 3 hours ago | parent [-]

I’ve seen it use “one weird trick” multiple times in its end of response baiting. Literally those words.

llm_nerd 16 hours ago | parent | prev [-]

No, I don't use OpenAI products. Sam Altman is a weird creep and the company is headed into the abyss, so it isn't my cup of tea.

However, the original complaint was about continuation suggestions, which are a good feature that I suspect most users appreciate. If ChatGPT uses bait or leading teases, then sure, that's bad.

fhub 7 hours ago | parent [-]

The current A/B test I seem to be in is that bad. But it will likely drive the metrics they are trying to drive.

fhub 18 hours ago | parent | prev [-]

Then just write the extra paragraph rather than bait?

IMTDb 18 hours ago | parent [-]

Bait what, exactly? Getting the user to type "yes"? Great accomplishment.

Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.

Why should the LLM "just write the extra paragraph" (consuming electricity in the process) for a potential follow-up question a user might, or might not, have? If I write a simple question, I hope to get a simple answer, not a whole essay answering things I did not explicitly ask for. And if I want to go deeper, typing three letters is not exactly a huge cost.

knollimar 8 hours ago | parent [-]

You send all the tokens at least one extra time.
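To illustrate the point: chat APIs are generally stateless, so each new request resends the full conversation history, and even a three-letter "yes" to a follow-up suggestion re-bills every earlier token. A minimal sketch with hypothetical turn sizes:

```python
# Hypothetical accounting sketch: each request's prompt includes the entire
# prior conversation, so resent-history tokens grow with every turn.

def resent_tokens(turn_sizes):
    """Total history tokens resent across a conversation.

    turn_sizes: tokens added per turn (user message + assistant reply).
    Each request resends everything accumulated before it.
    """
    total = 0
    history = 0
    for size in turn_sizes:
        total += history   # the full prior history rides along with this request
        history += size    # then this turn joins the history
    return total

# Five 500-token turns: 0 + 500 + 1000 + 1500 + 2000 = 5000 tokens resent,
# on top of the 2500 tokens of actual new content.
print(resent_tokens([500] * 5))  # 5000
```

So accepting each dangled follow-up costs far more than the "yes" itself suggests, which is why the token math matters for flat-rate users.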

alex43578 7 hours ago | parent [-]

I’m not privy to their data on what this does to engagement, but intuitively the extra inference/token cost it incurs doesn’t seem to align with their current business model.

If they were doing it to API customers, sure, but getting the free or flat-rate customers to use more tokens seems counterproductive.

zdragnar 7 hours ago | parent [-]

It juices their "engagement" metrics, which is the drug of choice for investors, right up there with net promoter scores.