karmasimida 3 hours ago
> With Codex (5.3), the framing is an interactive collaborator: you steer it mid-execution, stay in the loop, course-correct as it works.

> With Opus 4.6, the emphasis is the opposite: a more autonomous, agentic, thoughtful system that plans deeply, runs longer, and asks less of the human.

Isn't the UX the exact opposite? Codex thinks much longer before it gives you back an answer.
cwyers 6 minutes ago
Codex now lets you tell the LLM things in the middle of its thinking without interrupting it, so you can read the thinking traces and tell it to change course if it's going off track.
WilcoKruijer 2 hours ago
Yes, you're right for 4.5 and 5.2. Hence each is now focusing on improving the opposite thing, and they're actually converging.
xd1936 3 hours ago
I've also had the exact opposite experience with tone. Claude Code wants to build with me, and Codex wants to go off on its own for a while before returning with opinions.
bt1a an hour ago
This is most likely an inference-serving problem in terms of capacity and latency, given that in the API, Opus X has always responded quickly and the latest GPT models slowly.