Der_Einzige 3 hours ago
Anthropic and Gemini still release new pre-training checkpoints regularly. It's just OpenAI who got stupid on that. RIP GPT-4.5
ianbutler 3 hours ago | parent
All models released by those providers go through stages of post-training too; none of the models you interact with go straight from pre-training to release. Tool calling is one example: to my understanding it's generally taught in post-training, not pre-training. I can't speak to the exact split between pre-training and post-training at the various labs, but I'm exceedingly confident all labs post-train for effectiveness in specific domains.
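To make the tool-calling point concrete: post-trained models are taught to emit structured function invocations instead of free text. A minimal sketch of what such a message looks like, using the OpenAI-style schema (the `get_weather` function and its arguments are invented for illustration):

```python
import json

# Hypothetical tool-call message of the kind a post-trained model emits.
# The schema mirrors OpenAI's chat-completions format; the weather
# function is made up for this example.
tool_call_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "get_weather",
                # Arguments arrive as a JSON-encoded string that the
                # client must parse before dispatching to real code.
                "arguments": json.dumps({"city": "Berlin", "unit": "celsius"}),
            },
        }
    ],
}

# Client side: parse the arguments and route to the named function.
call = tool_call_message["tool_calls"][0]["function"]
args = json.loads(call["arguments"])
print(call["name"], args["city"])
```

A base (pre-trained-only) model has no reliable notion of this format; learning to produce it consistently is exactly the kind of behavior post-training instills.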