golol 6 hours ago

I wonder if there is a bug here. For me, it also always repeats the initial question.

jszymborski 4 hours ago | parent [-]

The original GPT models did this a lot iirc.

daveguy 37 minutes ago | parent [-]

Maybe the role reversal breaks most of the RLHF training. That training was definitely not done in a role-reversed context, so the prompt could be out of distribution. If so, this is a glimpse of the intelligence of the LLM core without the RL/RAG/etc. tape-and-glue layers.
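
To make "role reversal" concrete, here is a minimal sketch of what such a prompt looks like in a chat-style API: the assistant turns carry the questions and the user turns carry the answers, which is the opposite of how chat fine-tuning data is usually structured. This assumes the OpenAI Python client; the model name and system prompt are illustrative, not from the thread.

    # Sketch of a role-reversed chat prompt (assumes the OpenAI Python
    # client; model name is an illustrative assumption).
    from openai import OpenAI

    client = OpenAI()

    # Normally the user asks and the assistant answers. Here the system
    # prompt tells the model to play the interviewer, so the assistant
    # turns hold questions and the user turns hold answers.
    messages = [
        {"role": "system", "content": "You ask the questions; the user answers them."},
        {"role": "assistant", "content": "What is your favorite programming language?"},
        {"role": "user", "content": "Probably OCaml."},
    ]

    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    # Per the observation upthread, the reply often just re-asks the first question.
    print(resp.choices[0].message.content)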