allemagne 13 hours ago

It's not the initial mistake that tends to read as inhuman to me, it's the follow-up responses, where the model seems unable to understand or articulate the mistake it has made.

A human, or an LLM accurately predicting a human conversation, would probably say something like "ah I see, I didn't read the riddle closely enough. This is an altered version of the common riddle..." etc. Instead it really seems to flail around, confuse concepts, and insist that it has correctly made some broader point unrelated to the actual text it's responding to.