▲ BruceEel 7 hours ago
Very, very cool. Besides the top-performing models, it's interesting (if I'm reading this correctly) that gpt-5.2 did ~2x better than gpt-5.2-codex... why?
▲ NitpickLawyer 7 hours ago | parent
> gpt-5.2 did ~2x better than gpt-5.2-codex.. why?

Optimising a model for a certain task via fine-tuning (aka post-training) can lead to a loss of performance on other tasks. People want codex to "generate code", "drive agents", and so on, so OpenAI fine-tuned it for that.
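One way to see this kind of regression concretely is to compare a base checkpoint against its fine-tuned variant on off-domain text. The sketch below is illustrative only, not how this benchmark measured anything; the model names are hypothetical placeholders, and it uses Hugging Face transformers to compute perplexity as a crude proxy for off-domain capability.

```python
# Minimal sketch (assumed setup, not the benchmark's method): check whether a
# code-tuned checkpoint regresses on general, non-code text relative to its base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, text: str) -> float:
    """Perplexity of `text` under `model_name`; lower is better."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids gives mean next-token cross-entropy over the text
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Off-domain sample: general knowledge rather than code.
sample = "The Peace of Westphalia in 1648 ended the Thirty Years' War."

# Hypothetical checkpoint names, stand-ins for a base model and its code-tuned variant.
for name in ("org/base-model", "org/base-model-code-tuned"):
    print(name, perplexity(name, sample))

# If the code-tuned checkpoint scores notably worse here, that's the kind of
# fine-tuning regression the parent comment is describing.
```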