| ▲ | kilroy123 5 hours ago |
| Personally, I have Claude do the coding, then have 5.2-high do the reviewing. |
|
| ▲ | mmaunder 25 minutes ago | parent | next [-] |
| I might flip that, given how hard it's been for Claude to deal with longer-context tasks (like a coding session with iterations) versus a single top-down diff review. |
|
| ▲ | seunosewa 5 hours ago | parent | prev | next [-] |
| Then I pass the review back to Claude Opus to implement it. |
| ▲ | VladVladikoff 4 hours ago | parent | next [-] |
| Just curious: is this a manual process, or have you automated these steps? |
| ▲ | ricketycricket 4 hours ago | parent | next [-] |
| I have a `codex-review` skill with a shell script that uses the Codex CLI with a prompt. It tells Claude to use Codex as a review partner and to push back if it disagrees. They will sometimes go through 3 or 4 back-and-forth iterations before they find consensus. It's not perfect, but it does help, because Claude will point out the things Codex found and give it credit. |
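A `codex-review`-style helper might look something like the sketch below. This is a hypothetical reconstruction, not the commenter's actual script: it diffs the working branch against a base branch and pipes the diff to the Codex CLI in non-interactive mode (`codex exec`), with the exact prompt wording invented here.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a "codex-review" helper (the commenter's real
# script isn't shown). Diffs the current branch against a base branch
# and pipes that diff to the Codex CLI with a review prompt.

review_branch() {
  base="${1:-main}"  # base branch to diff against; defaults to main
  # The prompt text is an assumption, illustrating the "review partner,
  # push back if you disagree" instruction described in the comment.
  git diff "$base" | codex exec "You are a code review partner. Review this diff critically, push back on anything you disagree with, and list concrete issues with file and line references."
}
```

The skill would then feed the review text back to Claude, which responds or fixes the code, and the loop repeats for the 3-4 iterations the commenter mentions.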
| ▲ | _zoltan_ 3 hours ago | parent | prev [-] |
| zen-mcp (now called pal-mcp, I think), and then Claude Code can actually just pass things to Gemini (or any other model). |
| ▲ | kilroy123 2 hours ago | parent | prev [-] |
| Sometimes; it depends on how big the task is. I just find 5.2 so slow. |
|
|
| ▲ | _zoltan_ 3 hours ago | parent | prev [-] |
| I have Opus 4.5 do everything, then review it with Gemini 3. |