jszymborski | 11 hours ago
The original GPT models did this a lot, IIRC.
daveguy | 8 hours ago | parent
Maybe the role reversal breaks most of the RLHF training: that training was definitely not done with the roles reversed, so such a prompt could be out of distribution. If so, this is a glimpse of the intelligence of the core LLM without the RL/RAG/etc. tape-and-glue layers.
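
For concreteness, "role reversal" here means a transcript where the model is cast as the questioner and the human as the answerer. A minimal sketch of what that looks like in an OpenAI-style chat payload (the message structure is the common convention, not something from the parent comments; the contents are made up):

    # Normal orientation: the human asks, the model answers.
    # This is the shape nearly all RLHF preference data takes.
    normal = [
        {"role": "user", "content": "Ask me anything."},
        {"role": "assistant", "content": "What's your favorite book?"},
    ]

    # Role-reversed: the same exchange with roles swapped, so the
    # model is the interviewer. A transcript shaped like this is
    # plausibly outside the tuned distribution, which is the parent's
    # hypothesis for why the RLHF veneer stops applying.
    reversed_roles = [
        {"role": "assistant", "content": "Ask me anything."},
        {"role": "user", "content": "What's your favorite book?"},
    ]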