| ▲ | davely 6 hours ago |
| I’ve been on the Claude Code train for a while but decided to try Codex last week after they announced the $100 USD Pro plan. I’ve been pretty happy with it! One thing I immediately like more than Claude is that Codex seems much more transparent about what it’s thinking and what it wants to do next. I find it much easier to interrupt or jump in the middle if things are going in the wrong direction. Claude Code has been slowly turning into this mysterious black box, wiping out terminal context any time it compacts a conversation (which I think is their hacky way of dealing with terminal flickering issues — which is still happening, 14 months later), going out of its way to hide thought output, and then of course the whole performance issues thing. Excited to try 4.7 out, but man, Codex (as a harness at least) is a stark contrast to Claude Code. |
|
| ▲ | pxc 6 hours ago | parent | next [-] |
| > One thing I immediately like more than Claude is that Codex seems much more transparent about what it’s thinking and what it wants to do next. I find it much easier to interrupt or jump in the middle if things are going in the wrong direction. I've finally started experimenting recently with Claude's --dangerously-skip-permissions and Codex's --dangerously-bypass-approvals-and-sandbox through external sandboxing tools. (For now just nono¹, which I really like so far, and soon via containerization or virtual machines.) When I am using Claude or Codex without external sandboxing tools and just using the TUI, I spend a lot of time approving individual commands. When I was working that way, I found Codex's tendency to stop and ask me whether/how it should proceed extremely annoying. I found myself shouting at my monitor, "Yes, duh, go do the thing!". But when I run these tools without having them ask me for permission for individual commands or edits, I sometimes find Claude has run away from me a little and made the wrong changes or tried to debug something in a bone-headed way that I would have redirected with an interruption if it had stopped to ask me for permissions. I think Codex's tendency to stop and check in may be more valuable if you're relying on sandboxing (external or built-in) so that you can avoid individual permissions prompts. -- 1: https://nono.sh/ |
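The "external sandbox plus bypass flag" setup described above can be sketched with a throwaway container. The `--dangerously-bypass-approvals-and-sandbox` flag and `codex exec` are real Codex CLI features; the image name and the API-key auth method here are my own assumptions, not a recommended configuration:

```shell
#!/bin/sh
# Sketch: let Codex run fully autonomous, but only inside a disposable
# container so the bypass flag can't touch the host filesystem.
# "my-agent-image" is a hypothetical image with the codex CLI installed.
SANDBOX_IMAGE="my-agent-image"

run_sandboxed() {
  # Mount only the current project directory; pass the API key through
  # the environment instead of mounting host credentials.
  docker run --rm -it \
    -e OPENAI_API_KEY \
    -v "$PWD":/work -w /work \
    "$SANDBOX_IMAGE" \
    codex exec --dangerously-bypass-approvals-and-sandbox "$1"
}

# Usage: run_sandboxed "fix the failing tests"
```

The same shape works with a VM or any other wrapper (like nono) in place of docker: the point is that the blast radius is the sandbox, not your home directory.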
|
| ▲ | arcanemachiner 6 hours ago | parent | prev | next [-] |
| There is a new flag for terminal flickering issues: > Claude Code v2.1.89: "Added CLAUDE_CODE_NO_FLICKER=1 environment variable to opt into flicker-free alt-screen rendering with virtualized scrollback" |
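Per the release note quoted above, opting in is just an environment variable set before launching the CLI:

```shell
# Opt into the flicker-free alt-screen renderer for this shell session;
# Claude Code reads the variable at startup (per the v2.1.89 notes).
export CLAUDE_CODE_NO_FLICKER=1
# then launch as usual:
# claude
```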
| |
| ▲ | gck1 2 hours ago | parent [-] | | Such an interesting choice for a flag name. NO_BUG_PLEASE=1 |
|
|
| ▲ | ipkstef 5 hours ago | parent | prev | next [-] |
There is an official Codex plugin for Claude. I just have them do adversarial reviews/implementations, etc., with each other. Adds a bit of time to the workflow, but once you have the permissions sorted it'll just engage Codex when necessary
|
| ▲ | cmrdporcupine 6 hours ago | parent | prev [-] |
Do this -- take your coworker's PRs that they've clearly written in Claude Code, and have Codex/GPT 5.4 review them. Or have Codex review your own Claude Code work. It then becomes clear just how "sloppy" CC is. I wouldn't mind having Opus around in my back pocket to yeet out whole net new greenfield features. But I can't trust it to produce well-engineered things to my standards. Not that anybody should trust an LLM to that level, but there are matters of degree here. |
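That kind of cross-review can be wired up non-interactively. `codex exec` is Codex's real non-interactive mode; the prompt wording, the `main` base branch, and the temp-file path are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: have Codex review a Claude-authored diff.
BASE="${1:-main}"

review_prompt() {
  # Build a reviewer prompt around the diff stored in file $1.
  printf 'Review this diff for bugs, missed edge cases, and sloppy design:\n\n'
  cat "$1"
}

# Usage (inert in this sketch):
#   git diff "$BASE" > /tmp/wip.diff
#   codex exec "$(review_prompt /tmp/wip.diff)"
```

Swapping the two tools' roles (Claude reviewing Codex's diff via `claude -p`) is symmetric.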
| |
▲ | kevinsync 5 hours ago | parent | next [-] | | I've been using Claude and Codex in tandem ($100 CC, $20 Codex), and have made heavy use of claude-co-commands [0] to make them talk. Outside of the last 1-2 weeks (during which we got confirmation YET AGAIN that Claude shits the fucking bed in the run-up to a new model release), I usually will put Claude on max + /plan to gin up a fever dream to implement. When the plan is presented, I tell it to /co-validate with Codex, which tends to fill in many implementation gaps. Claude then codes the amended plan and commits, then I have a Codex skill that reviews the commit for gaps, missed edge cases, incorrect implementation, missed optimizations, etc., and fixes them. This had been working quite well up until the beginning of the month, when Claude more or less got CTE; after a week of that I swapped to $100 Codex, $20 CC plans. Now I'm using co-validation a lot less and just driving primarily via Codex. When Claude works, it provides some good collaborative insights and counter-points, but Codex at the very least is consistently predictable (for text-oriented, data-oriented stuff -- I don't use either for designing or implementing frontend / UI / etc). As always, YMMV! [0] https://github.com/SnakeO/claude-co-commands | | |
| ▲ | hulk-konen an hour ago | parent | next [-] | | Some variation of this is the way. You should not get dependent on one black box. Companies will exploit that dependency. My version of this is having CC Pro, Cursor Pro, and OpenCode (with $10 to Codex/GLM 5.1) --> total $50. My work doesn't stop if one of these is having overloaded servers, etc. And it's definitely useful to have them cross-checking each other's plans and work. | |
▲ | cmrdporcupine 5 hours ago | parent | prev [-] | | This more or less mimics a flow that I had fairly good results from -- but I'm unwilling to pay for both right now unless a client or employer is willing to foot the bill. Claude Code as "author" and a $20 Codex as reviewer/planner/tester has worked for me to squeeze better value out of the CC plan. But with the new $100 codex plan, and with the way Anthropic seemed to nerf their own $100 plan, I'm not doing this anymore. |
| |
| ▲ | afavour 6 hours ago | parent | prev | next [-] | | > It then becomes clear just how "sloppy" CC is. Have you done the reverse? In my experience models will always find something to criticize in another model's work. | | |
| ▲ | cmrdporcupine 6 hours ago | parent [-] | | I have, and in fact models will find things to criticize in their own work, too, so it's good to iterate. But I've had the best results with GPT 5.4 |
| |
▲ | woadwarrior01 6 hours ago | parent | prev [-] | | It cuts both ways. What I usually do these days is to let codex write code, then use claude code /simplify, have both codex and claude code review the PR, then finally review and fix things up myself. It's still ~2x faster than doing everything by myself. | | |
| ▲ | cmrdporcupine 6 hours ago | parent [-] | | I often work this way too, but I'll say this: This flow is exhausting. A day of working this way leaves me much more drained than traditional old school coding. | | |
▲ | woadwarrior01 6 hours ago | parent [-] | | 100%. On days when I'm sleep-deprived (once or twice a week), I fall back to this flow. On regular days, I tend to write more code the old school way and use these things for review. |
|
|
|