| ▲ | dheera 8 hours ago |
> Can I tell you one more thing from your X,Y,Z results which most doctors miss?

I absolutely hate this influencer-ish behavior. If there's something most people miss, just state it. That's why I'm using the assistant. This form of dialogue is a big part of why I use GPT less now.
| ▲ | debugnik 8 hours ago | parent |
> If there's something most people miss just state it.

But the LLM suggesting a question doesn't mean it has a good answer to converge to. If you actually ask, the model's probabilities will be pressured to come up with something, anything, to follow up on the offer, which will be nonsense if there actually wasn't anything else to add. I've seen this pattern fail a lot in roleplay (e.g. AI Dungeon), so I really dislike it when LLMs end with a question. A "sufficiently smart LLM" would have enough foresight to know it's writing itself into a dead end.
| ||||||||