ianbutler 4 hours ago
I'd argue we jumped that shark with the shift in focus to post-training. Labs focus on getting good at specific formats and tasks. The generalization argument was ceded (not in the long term, but in the short term) to the need to produce immediate value. Now if a format dominates, it will be post-trained for, and then it is in fact better.
Der_Einzige 3 hours ago
Anthropic and Gemini still release new pre-training checkpoints regularly. It's just OpenAI who got stupid on that. RIP GPT-4.5