pietz 5 hours ago
Do we really know that gpt-5-codex is a fine-tune of gpt-5(-thinking)? The article doesn't clearly say that, right? I suspect it's smaller than gpt-5, or at least a quantized version, similar to what I suspect Opus 4.1 is. That would also explain why it's faster.
simonw 3 hours ago | parent
OpenAI say: "Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex." So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.