ACCount37 12 hours ago

I don't see much similarity? Unless you're looking at self-distillation in general and not just this use of it.
oliver236 10 hours ago
How not? I think the analogy is actually pretty specific to this paper, not just to self-distillation in general. During sleep your brain replays experiences, but in noisy, distorted form. The replays are often incoherent as narratives (dreams are weird), yet consolidation still works, because the value isn't in the narrative coherence but in the activation patterns at each moment: important pathways get strengthened, weak ones get pruned.

Section 4.4 of this paper is what makes the connection click. They cranked the training temperature to 2.0 with no truncation, and 62% of the sampled outputs had no extractable code: coherent Python that devolves into multilingual gibberish halfway through. The model still improved (+5.7pp pass@1). That makes no sense if you think the model is learning from good code examples. But it makes a lot of sense if you think of it as the model replaying its own knowledge back to itself in a noisy, distorted form, with the replay strengthening what matters (sharp distributions at "lock" positions where one token is correct, broad distributions at "fork" positions where multiple approaches work) while pruning what doesn't (distractor tails). The model doesn't learn anything new; it just wakes up performing better because what it already knew got cleaned up.
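The lock/fork intuition is easy to see numerically. Below is a minimal sketch (not from the paper; the logit values and position names are hypothetical) showing why temperature 2.0 wrecks surface coherence without destroying the signal: at a "lock" position the distribution stays peaked on the correct token even after flattening, while a "fork" position spreads probability across several plausible continuations.

```python
import math

def softmax_with_temperature(logits, T):
    # Divide logits by T before the softmax; T > 1 flattens the
    # distribution, T < 1 sharpens it. Subtract the max for stability.
    scaled = [x / T for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical next-token logits at two kinds of positions:
# "lock": one token is strongly preferred (e.g. a required closing paren).
# "fork": several continuations are plausible (e.g. loop vs comprehension).
lock_logits = [8.0, 1.0, 0.5, 0.2]
fork_logits = [3.0, 2.8, 2.6, 0.5]

for name, logits in [("lock", lock_logits), ("fork", fork_logits)]:
    top_t1 = max(softmax_with_temperature(logits, 1.0))
    top_t2 = max(softmax_with_temperature(logits, 2.0))
    print(f"{name}: top prob T=1.0 -> {top_t1:.2f}, T=2.0 -> {top_t2:.2f}")
```

With these numbers the lock position keeps over 90% of its mass on the correct token at T=2.0, while the fork position's top choice drops to roughly a coin flip among three options. Sampling at T=2.0 therefore garbles fork-heavy stretches (hence the gibberish) while the per-position targets still carry the sharp-where-it-matters structure the student distills from.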