Der_Einzige · 3 hours ago
I did not claim that post-training doesn't happen on these models, and you are being extremely patronizing (I publish quite a bit of research on LLMs at top conferences). My claim was that OpenAI overindexed on squeezing gains out of aggressive post-training applied to old pre-training checkpoints. Gemini and Anthropic correctly realized that fresh pre-training checkpoints are needed to get the best gains in their latest model releases (which get post-trained too).
ianbutler · 2 minutes ago
If you read that as patronizing, that says more about you than about me. I have no idea who you are, so your taking offense at what was a rather unloaded explanation perplexes me.