Timpy | a day ago

The models outlined in the white paper have a training step that uses reinforcement learning _without human feedback_, which they refer to as "outcome-based RL". These models (DeepSeek-R1, OpenAI o1/o3, etc.) rely on a "chain of thought" process to reach a correct answer, then summarize it so you don't have to read the entire chain of thought. DeepSeek-R1 shows both the chain of thought and the answer; OpenAI hides the chain of thought and shows only the answer. The paper measures how often the summary conflicts with the chain of thought, which is something you wouldn't be able to see if you were using an OpenAI model. As another commenter pointed out, this kind of feels like a jab at OpenAI for hiding the chain of thought.

The "chain of thought" is still just a vector of tokens. RL (without human feedback) is capable of generating novel vectors that wouldn't align with anything in the training data. If you train for too long with RL, the models eventually learn to game the reward mechanism and the outcome becomes useless.

Letting the user see the entire vector of tokens (and not just the tokens tagged as summary) will prevent situations where an answer may look or feel right, but the model used nonsense reasoning along the way. The article and paper are not asserting that seeing all the tokens gives insight into the internal process of the LLM.
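To make the "outcome-based" point concrete, here's a toy sketch (my own illustration, not anything from the paper): the reward function scores only the final answer, so the reasoning tokens contribute nothing to the reward signal, and a nonsense chain of thought earns exactly as much as a coherent one.

```python
# Toy illustration of "outcome-based" reward: only the final answer is
# scored; the chain-of-thought tokens are never inspected.
def outcome_reward(chain_of_thought: str, answer: str, gold: str) -> float:
    # Reward depends solely on the outcome, not on the reasoning.
    return 1.0 if answer.strip() == gold.strip() else 0.0

# A coherent chain of thought and a nonsense one earn identical reward,
# which is why showing the whole token sequence (not just the summary)
# is the only way to catch the nonsense.
honest = outcome_reward("12 * 4 = 48; 48 - 6 = 42", "42", "42")
gamed = outcome_reward("lorem ipsum unrelated tokens", "42", "42")
assert honest == gamed == 1.0
```

This is also the shape of the reward-gaming failure mode: since nothing in the signal penalizes the intermediate tokens, prolonged RL can drift toward whatever token sequences happen to maximize the outcome score.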
The models outlined in the white paper have a training step that uses reinforcement learning _without human feedback_. They're referring to this as "outcome-based RL". These models (DeepSeek-R1, OpenAI o1/o3, etc) rely on the "chain of thought" process to get a correct answer, then they summarize it so you don't have to read the entire chain of thought. DeepSeek-R1 shows the chain of thought and the answer, OpenAI hides the chain of thought and only shows the answer. The paper is measuring how often the summary conflicts with the chain of thought, which is something you wouldn't be able to see if you were using an OpenAI model. As another commenter pointed out, this kind of feels like a jab at OpenAI for hiding the chain of thought. The "chain of thought" is still just a vector of tokens. RL (without-human-feedback) is capable of generating novel vectors that wouldn't align with anything in its training data. If you train them for too long with RL they eventually learn to game the reward mechanism and the outcome becomes useless. Letting the user see the entire vector of tokens (and not just the tokens that are tagged as summary) will prevent situations where an answer may look or feel right, but it used some nonsense along the way. The article and paper are not asserting that seeing all the tokens will give insight to the internal process of the LLM. |