patates · 2 hours ago
Claude makes more detailed plans that seem better if you just skim them, but when analyzed they usually have a lot of errors. It compensates for most of them during implementation if you make it use TDD via superpowers et al., or just tell it to do so. GPT 5.4 makes simpler plans (compared to superpowers - a plugin from the official Claude plugin marketplace, not plan mode), but it can fill in the details better while implementing. Plan mode in Claude Code has gotten much better in the last few months, but the missing details cannot be compensated for by the model during implementation.

So my workflow has been: have Claude plan with superpowers:brainstorm, review the spec, make updates, give the spec to GPT, usually to witness grave errors found by GPT, the spec gets updated, another manual review, (many iterations later) the final spec is written, write the plan, GPT finds mind-boggling errors, (many iterations later) a Claude agent swarm implements, GPT finds even more errors, I find errors, fix fix fix, manual code review and red tests from me, tests get fixed, (many iterations later) finally something usable with stylistic issues at most (human opinion)!

This happens with the most complex features that would be a nightmare to implement even for the most experienced programmers, of course. For basic things, most SOTA models can one-shot anyway.
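The iterate-until-the-reviewer-finds-nothing loop described above can be sketched generically. This is just a hypothetical skeleton of the control flow, assuming you wrap each model behind a callable (`draft`, `review`, and `revise` are stand-ins for shelling out to Claude Code and a GPT CLI, not real APIs):

```python
from typing import Callable, List

def cross_review_loop(
    draft: Callable[[str], str],              # e.g. Claude plan via superpowers:brainstorm
    review: Callable[[str], List[str]],       # e.g. GPT reviewing the spec; returns issues found
    revise: Callable[[str, List[str]], str],  # fold the reviewer's feedback back into the spec
    task: str,
    max_rounds: int = 5,
) -> str:
    """Draft a spec, then alternate review/revise until the reviewer finds no issues."""
    spec = draft(task)
    for _ in range(max_rounds):
        issues = review(spec)
        if not issues:
            break  # reviewer signed off
        spec = revise(spec, issues)
    return spec
```

The `max_rounds` cap matters in practice: two models can disagree forever on stylistic points, so at some point a human review has to break the tie.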
giwook · 39 minutes ago
Interesting. Have you ever had Claude re-review its plan after drafting the original, or do you hand it to GPT for review right away? Just curious, as I'm trying to branch out from using Claude for everything. I've been following a somewhat similar workflow to yours, except having Claude review and re-review its own plan (sometimes using different roles, e.g. system architect vs. SWE vs. QA engineer), and it will similarly identify issues it missed originally. But now I'm curious to try weaving in more GPT.