esperent 7 hours ago
I switched over to codex with pi last week. Even though I strongly dislike OpenAI and I hope this is a temporary solution, they're the only one of the frontier model providers that lets me use my own harness, and after the recent CC shenanigans I'm done with proprietary harnesses.

The first thing I've noticed: I get way more out of the codex $100 plan than I was getting out of the Anthropic $200 plan. Probably 2x at least.

The other thing I've noticed: with strict guardrails, TDD, reviews, etc., I can't detect any quality difference. Not just between Opus and Codex, but even between the most recent models - GPT 5.3 code, GPT 5.4, and now GPT 5.5. That said, 5.5 uses a huge amount of my session limits, 5.3 is very light, and 5.4 is somewhere in between. So now I use 5.4 for the main session/debugging/planning and then execute with 5.3.

Regarding usage, of course, it's hard to say how much is the model and how much was coming from Claude Code and all that ridiculous malware scanning. But it's nice to use a lightweight harness like pi and see that even with all my personal instructions, a good bunch of skills, custom tools, etc., if I start a session and say "hi" I'm starting out with about 15k of context used. I think a closely equivalent setup in CC would start at 30-40k context.
gwerbin 6 hours ago | parent
What's your pi setup?