BurningFrog a day ago

The LLMs copy human written text, so maybe they'll implement Motivated Reasoning just like humans do?

Or maybe it's telling people what they want to hear, just like humans do

ben_w a day ago | parent

They definitely tell people what they want to hear. Even when we'd rather they be correct, they get upvoted or downvoted by users, so this isn't avoidable (but is it fawning or sycophancy?)

I wonder how deep or shallow the mimicry of human output is — enough to be interesting, but definitely not quite like us.