muzani 9 hours ago

I'm fine with just Copilot.

Opus 4.5 has excellent tool use, meaning it can jump in and out of a broad, undocumented codebase better. It can evaluate what the code is trying to do. It's perfect for PRs - it caught things like people submitting code that looks right but ends up calling a poorly documented, incomplete method.

GPT Codex just messes up a lot for me. Whatever I'm doing with it, it's not working. Plain GPT-5.2 is good overall, but it confidently makes mistakes and tells you it's done.

If you have an excellent codebase, GPT-5.2 might actually work better. If you're not sure what you're doing, or you're using AI to find out how things work, then Opus 4.5 is great.

The Claude models are also very much behind in terms of UI and visuals.

Take note that a lot of the benchmarks are on Python. What I'm finding is that all the major models make mistakes, but they make mistakes differently. OpenAI and Anthropic tend to mimic one another for some reason, while Grok and Gemini tend to give very different answers.