mccoyb a day ago

If anyone from OpenAI is reading this -- a plea to not screw with the reasoning capabilities!

Codex is so so good at finding bugs and little inconsistencies, it's astounding to me. Where Claude Code is good at "raw coding", Codex/GPT5.x are unbeatable in terms of careful, methodical finding of "problems" (be it in code, or in math).

Yes, it takes longer (quality, not speed please!) -- but the things that it finds consistently astound me.

sinatra 15 hours ago | parent | next [-]

Piggybacking on this post. Codex is not only finding much higher quality issues, it’s also writing code that usually doesn’t leave quality issues behind. Claude is much faster but it definitely leaves serious quality issues behind.

So much so that now I rely completely on Codex for code reviews and actual coding. I will pick higher quality over speed every day. Please don’t change it, OpenAI team!

F7F7F7 13 hours ago | parent | next [-]

Every plan Opus creates in Planning mode gets run through ChatGPT 5.2. It catches at least 3 or 4 serious issues that Claude didn't think of. It typically takes 2 or 3 back-and-forths for Claude to ultimately get it right.

I’m in Claude Code so often (x20 Max) and I’m so comfortable with my environment setup with hooks (for guardrails and context) that I haven’t given Codex a serious shot yet.

SkyPuncher 12 hours ago | parent [-]

The same thing can be said about Opus running through Opus.

It's often not that a different model is better (well, it still has to be a good model). It's that the different chat has a different objective - and will identify different things.

pietz 5 hours ago | parent | next [-]

That's a fair point and yet I deeply believe Codex is better here. After finishing a big task, I used two fresh instances of Claude and Codex to review it. Codex finds more issues in ~9 out of 10 cases.

While I prefer the way Claude speaks and writes code, there is no doubt that whatever Codex does is more thorough.

sinatra 11 hours ago | parent | prev | next [-]

My (admittedly one person's anecdotal) experience has been that when I ask Codex and Claude to make a plan/fix and then ask them both to review it, they both agree that Codex's version is better quality. This is on a 140K LOC codebase with an unreasonable amount of time spent on rules (lint, format, commit, etc.), on specifying coding patterns, on documenting per-workspace README.md files, etc.

shinycode 7 hours ago | parent | prev [-]

Every time Claude Code finishes a task, I have it run a full review of its own work against a very detailed plan, and it catches many things it didn't see before. It works well, and it's part of the refinement process. We all know big chunks of generated code are almost never a 100% hit on the first try.

a24j 5 hours ago | parent [-]

How exactly do you plan/initiate a review from the terminal? Do you open up a new shell/instance of claude and initiate the review with fresh context?

fragmede 5 hours ago | parent | prev [-]

Yeah. It dumps context into various .md files, like TODO.md.
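For the mechanics, something like this works (the prompt wording is just an example; this assumes the claude CLI's non-interactive -p/print mode):

    # second terminal, same repo -- a fresh instance starts with clean context
    claude -p "Read TODO.md for background, then review the uncommitted changes (git diff) for bugs, inconsistencies, and missed edge cases."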

AmazingTurtle 7 hours ago | parent | prev [-]

Have you tried telling Claude not to leave serious quality issues behind?

ifwinterco 21 hours ago | parent | prev | next [-]

I think the issue is that, for them, "quality, not speed" means "expensive, not cheap", and they can't pass that extra cost on to customers.

tejohnso 16 hours ago | parent | next [-]

> they can't pass that extra cost on to customers

I don't understand why not. People pay for quality all the time, and often they're begging to pay for quality; it's just not an option. Of course, it depends on how much more quality is being offered, but it sounds like a significant amount here.

mccoyb 21 hours ago | parent | prev | next [-]

I'm happy to pay the same right now for less (on the max plan, or whatever) -- because I'm never running into limits, and I'm running these models nearly all day, every day (as a single user working on my own personal projects).

I consistently run into limits with CC (Opus 4.5) -- but even though Codex seems to be spending significantly more tokens, it just seems like the quota limit is much higher?

Computer0 21 hours ago | parent | next [-]

I am on the $20 plan for both CC and Codex. I feel like a full session of usage on CC equals only ~20% of the Codex usage allowed per 5-hour window, in terms of time spent inferencing. It has always seemed way more generous than I would expect.

Aurornis 19 hours ago | parent [-]

Agreed. The $20 plans can go very far when you're using the coding agent as an additional tool in your development flow, not just trying to hammer it with prompts until you get output that works.

Managing context goes a long way, too. I clear context for every new task and keep the local context files up to date with key info to get the LLM on target quickly.

girvo 18 hours ago | parent | next [-]

> I clear context for every new task and keep the local context files up to date with key info to get the LLM on target quickly

Aggressively recreating your context is still the best way to get the best results from these tools too, so it has a secondary benefit.

heliumtera 16 hours ago | parent [-]

It is ironic that in the GPT-4 era, when we couldn't see much value in these tools, all we could hear was "skill issues" and "prompt engineering skills". Now they are actually quite capable for SOME tasks, especially for things we don't really care about learning, and they can, to a certain extent, generalize. They perform much better than in the GPT-4 era, objectively, across all domains. They perform much better with the absolute minimum input, objectively, across all domains. Someone who skipped the whole "prompt engineering" phase and learned nothing during that time is now better equipped to perform well. Now I wonder how much I am leaving behind by ignoring this whole "skills, tools, MCP this and that, yada yada".

conradev 13 hours ago | parent | next [-]

Prompt engineering (communicating with models?) is a foundational skill. Skills, tools, MCPs, etc. are all built on prompts.

My take is that the overlap is strongest with engineering management. If you can learn how to manage a team of human engineers well, that translates to managing a team of agents well.

miek 10 hours ago | parent | prev | next [-]

Minimal prompting yielding better results? I haven't found this to be the case at all.

neom 15 hours ago | parent | prev | next [-]

Any thoughts on your wondering? I too am wondering whether I'm making the same mistake.

fragmede 4 hours ago | parent | prev [-]

My answer is that the code they generate is still crap, so the new skill is in being able to spot the ways and places it wrote crap code, and how to quickly tell it to refactor to fix specific issues, and still come out ahead on productivity. Nothing like an ultra wide screen monitor (LG 40+) and having parallel codex or claude sessions going, working on a bunch of things at once in parallel. Get good at git worktree. Use them to make tools that make your own life easier that you previously wouldn't even have bothered to make. (chrome extensions and MCPs!)
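If you haven't touched worktrees before, the core of it is just this (paths and branch names are made-up examples):

    # one checkout per agent session, all backed by the same repository
    git worktree add ../myapp-auth feature/auth
    git worktree add ../myapp-perf feature/perf
    # run a separate codex/claude session in each directory,
    # then drop the checkout once the branch is merged
    git worktree remove ../myapp-auth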

The other skill is in knowing exactly when to roll up your sleeves and do it the old fashioned way. Which things they're good/useful for, and which things they aren't.

theonething 17 hours ago | parent | prev [-]

do you mean running /compact often?

Aurornis 2 hours ago | parent | next [-]

If I want to continue the same task, I run /compact

If I want to start a new task, I /clear and then tell it to re-read the CLAUDE.md document where I put all of the quick context: Description of the project, key goals, where to find key code, reminders for tools to use, and so on. I aggressively update this file as I notice things that it’s always forgetting or looking up. I know some people have the LLM update their context file but I just do it myself with seemingly better results.

Using /compact burns through a lot of your usage quota and retains a lot of things you may not need. Giving it directions like “starting a new task doing ____, only keep necessary context for that” can help, but hitting /clear and having it re-read a short context primer is faster and uses less quota.
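For a sense of scale, the context primer only needs to be a screenful; a made-up example:

    # CLAUDE.md
    ## Project
    Bookmark-sync CLI in Rust; single binary, no daemon.
    ## Key goals
    Keep startup under 50 ms. Ask before adding dependencies.
    ## Where to look
    Sync logic: src/sync.rs. Config parsing: src/config.rs.
    ## Reminders
    Run `cargo test` before calling a task done.
    Never hand-edit generated files under src/proto/.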

dionian 10 hours ago | parent | prev [-]

I'm not who you asked, but I do the same thing: I keep important state in doc files and recreate sessions from that state. This allows me to clear context and reconstruct my status on that item. I have a skill that manages this.

joquarky 10 hours ago | parent [-]

Using documents for state helps so much with adding guardrails.

I do wish that ChatGPT had a toggle next to each project file instead of having to delete and reupload to toggle or create separate projects for various combinations of files.

hadlock 16 hours ago | parent | prev | next [-]

I noticed I am not hitting limits either. My guess is OpenAI sees CC as a real competitor/serious threat. Had OAI not given me virtually unlimited use I probably would have jumped ship to CC by now. Burning tons of cash at this stage is likely Very Worth It to maintain "market leader" status if only in the eyes of the media/investors. It's going to be real hard to claw back current usage limits though.

andai 19 hours ago | parent | prev [-]

If you look at benchmarks, the Claude models score significantly higher intelligence per token. I'm not sure how that works exactly, but they are offset from the entire rest of the chart on that metric; it seems they need fewer tokens to get the same result. (I can't speak for how that affects performance on very difficult tasks, though, since most of mine are pretty straightforward.)

So if you look at the total cost of running the benchmark, it's surprisingly similar to other models -- the higher price per token is offset by the significantly smaller number of tokens required to complete a task.
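Toy numbers to make the point (made up, not real prices): a model charging $15 per million output tokens that finishes a task in 1M tokens costs $15; a $5-per-million model that needs 3M tokens for the same task also costs $15. The per-token premium washes out.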

See "Cost to Run Artificial Analysis Index" and "Intelligence vs Output Tokens" here

https://artificialanalysis.ai/

...With the obligatory caveat that benchmarks are largely irrelevant for actual real world tasks and you need to test the thing on your actual task to see how well it does!

golly_ned 18 hours ago | parent | prev | next [-]

I wonder how much their revenue really ends up contributing towards covering their costs.

In my mind, they're hardly making any money compared to how much they're spending, and are relying on future modeling and efficiency gains to reduce their costs while pursuing user growth and engagement almost fully -- the more queries they get, the more data they get, and the bigger a data moat they can build.

erik 16 hours ago | parent | next [-]

Inference is almost certainly very profitable.

All the money they keep raising goes to R&D for the next model. But I don't see how they ever get off that treadmill.

mbesto 2 hours ago | parent | next [-]

> Inference is almost certainly very profitable.

It almost certainly is not. Until we know what the useful life of NVIDIA GPUs is, it's impossible to determine whether inference is profitable or not.

panarky 25 minutes ago | parent [-]

The depreciation schedule isn't as big a factor as you'd think.

The marginal cost of an API call is small relative to what users pay, and utilization rates at scale are pretty high. You don't need perfect certainty about GPU lifespan to see that the spread between cost-per-token and revenue-per-token leaves a lot of room.

And datacenter GPUs have been running inference workloads for years now, so companies have a good idea of rates of failure and obsolescence. They're not throwing away two-year-old chips.
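A back-of-envelope with invented numbers: a GPU costing $2/hour all-in, serving on the order of 1,000 output tokens/second with batching, produces ~3.6M tokens/hour -- roughly $0.55 of cost per million tokens, against API prices several times that. The exact figures are guesses; the point is that the spread survives a lot of uncertainty in the depreciation assumption.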

ithkuil 8 hours ago | parent | prev [-]

Is there a possible future where inference usage increases because there will be many, many more customers, while R&D spending grows more slowly than inference?

Or is it already saturated?

nimchimpsky 18 hours ago | parent | prev [-]

"In my mind, they're hardly making any money compared to how much they're spending"

Everyone seems to assume this, but it's not like it's a company run by dummies, or one with dummy investors.

They are obviously making an awful lot of revenue.

alwillis 11 hours ago | parent | next [-]

>> "In my mind, they're hardly making any money compared to how much they're spending"

> Everyone seems to assume this, but it's not like it's a company run by dummies, or one with dummy investors.

It has nothing to do with their management or investors being "dummies" but the numbers are the numbers.

OpenAI has data center rental costs approaching $620 billion, which is expected to rise to $1.4 trillion by 2033.

Annualized revenue is expected to be "only" $20 billion this year.

$1.4 trillion is 70x current revenue.

So unless they execute their strategy perfectly, hit all of their projections, and get lucky enough that neither the stock market nor the economy collapses, making a profit in the foreseeable future is highly unlikely.

[1]: "OpenAI's AI money pit looks much deeper than we thought. Here's my opinion on why this matters" - https://diginomica.com/openais-ai-money-pit-much-deeper-we-t...

Daneel_ 17 hours ago | parent | prev | next [-]

To me it seems that they're banking on it becoming indispensable. Right now I could go back to pre-AI and be a little disappointed but otherwise fine. I figure all of these AI companies are in a race to make themselves part of everyone's core workflow in life, like clothing or a smart phone, such that we don't have much of a choice as to whether we use it or not - it just IS.

That's what the investors are chasing, in my opinion.

zozbot234 17 hours ago | parent [-]

It'll never be literally indispensable, because open models exist - either served by third-party providers, or even run locally in a homelab setup. A nice thing that's arguably unique about the latter is that you can trade scale for latency: you get to run much larger models on the same hardware if they can chug on the answer overnight (with offload to a fast SSD for bulk storage of parameters and activations) instead of answering on the spot. Large providers don't want to do this, because keeping your query's activations around is just too expensive when scaled to many users.
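As a rough illustration of the homelab end (model file and flags here are just an example, not a recommendation), llama.cpp will happily mmap a big quantized model from SSD and grind through it slowly:

    # offload what fits to the GPU; the rest streams from fast SSD via mmap
    ./llama-cli -m qwen2.5-72b-instruct-q4_k_m.gguf -ngl 20 -c 8192 -f prompt.txt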

mbesto 2 hours ago | parent | prev | next [-]

> They are obviously making awful lot of revenue.

It's not hard to sell $10 worth of product if you spend $20. Profit is more important than revenue.

troupo 17 hours ago | parent | prev [-]

Revenue != profit.

They are drowning in debt and go into more and more ridiculous schemes to raise/get more money.

--- start quote ---

OpenAI has made $1.4 trillion in commitments to procure the energy and computing power it needs to fuel its operations in the future. But it has previously disclosed that it expects to make only $20 billion in revenues this year. And a recent analysis by HSBC concluded that even if the company is making more than $200 billion by 2030, it will still need to find a further $207 billion in funding to stay in business.

https://finance.yahoo.com/news/openai-partners-carrying-96-b...

--- end quote ---

zozbot234 17 hours ago | parent | prev [-]

The "quality" model can cost $200/month. They'll be fine.

stared 6 hours ago | parent | prev | next [-]

If you want to combine Claude Code coding with reasoning, it is easy to do with a plugin - https://github.com/stared/gemini-claude-skills. I wrote it for myself, but I'm sharing it in case anyone wants it. A bit more context here: https://quesma.com/blog/claude-skills-not-antigravity/.

energy123 10 hours ago | parent | prev | next [-]

Second this but for the chat subscription. Whatever they did with 5.2 compared to 5.0 in ChatGPT increased the test-time compute and the quality shows. If only they would allow more tokens to be submitted in one prompt (it's currently capped at 46k for Plus). I don't touch Gemini 3.0 Pro now (am also subbed there) unless I need the context length.

baseonmars 18 hours ago | parent | prev | next [-]

Absolutely second this. I'm mainly a claude code user, but I have codex running in another tab for code reviews, and it's absolutely killer at analyzing flows and finding subtle bugs.

mkagenius 13 hours ago | parent [-]

Have you tried Claude Code in the second tab instead? That would be a fair comparison.

smoe 16 hours ago | parent | prev | next [-]

Do you think that for someone who only needs careful, methodical identification of “problems” occasionally, like a couple of times per day, the $20/month plan gets you anywhere, or do you need the $200 plan just to get access to this?

hatefulmoron 15 hours ago | parent | next [-]

I've had the $20/month plan for a few months alongside a max subscription to Claude; the cheap codex plan goes a really long way. I use it a few times a day for debugging, finding bugs, and reviewing my work. I've run out of usage a couple of times, but only when I lean on it way more than I should.

I only ever use it on the high reasoning mode, for what it's worth. I'm sure it's even less of a problem if you turn it down.

Foobar8568 10 hours ago | parent [-]

$200 on claude for vibe coding, $20 on codex for code review and "brainstorming". I use other LLMs for a 2nd/3rd/4th opinion.

nl 15 hours ago | parent | prev | next [-]

The $20 does this fine.

The OpenAI token limits seem more generous than the Anthropic ones too.

rbancroft 15 hours ago | parent [-]

Listening to Dario at the NYT DealBook summit, and reading between the lines a bit, it seems like he is basically saying Anthropic is trying to be a responsible, sustainable business and charging customers accordingly, and insinuating that OpenAI is being much more reckless, financially.

nl 14 hours ago | parent [-]

I think it's difficult to estimate how profitable both are - depends too much on usage and that varies so much.

I think it is widely accepted that Anthropic is doing very well in enterprise adoption of Claude Code.

In most of those cases that is paid via API key not by subscription so the business model works differently - it doesn't rely on low usage users subsidizing high usage users.

OTOH OpenAI is way ahead on consumer usage - which also includes Codex even if most consumers don't use it.

I don't think it matters - just make use of the best model at the best price. At the moment Codex 5.2 seems best at the mid-price range, while Opus seems slightly stronger than Codex Max (but too expensive to use for many things).

jvermillard 8 hours ago | parent | prev [-]

I use it everyday and the $20 plan is fine

apitman 21 hours ago | parent | prev | next [-]

It's annoying though because it keeps (accurately) pointing out critical memory bugs that I clearly need to fix rather than pretending they aren't there. It's slowing me down.

gnatolf 18 hours ago | parent [-]

Love it when it circles around a minor issue that I clearly described as a temporary hack instead of recognizing the tremendously large gaping hole in my implementation right next to it.

rane 8 hours ago | parent | prev | next [-]

Exactly. This is why the workflow of consulting Gemini/Codex for the architecture and overall plan, and then having Claude implement the changes, is so powerful.

jvermillard 9 hours ago | parent | prev | next [-]

I use it mainly for embedded programming and I find codex way better than claude. I don't mind the delay anyway; I'm slower at writing carefully crafted C.

tgtweak 21 hours ago | parent | prev | next [-]

Anecdotally, I've found it very good in the exact same role in multi-agent workflows - as the "reviewer".

kilroy123 21 hours ago | parent | prev | next [-]

Interesting. What I've seen is that it spins and thinks forever, then just breaks. Which is beyond frustrating.

mccoyb 21 hours ago | parent | next [-]

If by "just breaks" means "refuses to write code / gives up or reverts what it does" -- yes, I've experienced that.

Experiencing that repeatedly motivated me to use it as a reviewer (which another commenter noted), a role which it is (from my experience) very good at.

I basically use it to drive Claude Code, which will nuke the codebase with abandon.

kilroy123 20 hours ago | parent | next [-]

I've seen it think for a long time and then just time out or something? It just stops and nothing happens.

JamesSwift 19 hours ago | parent [-]

I've had the same, but I only use it through Zed, so I wasn't sure if it was a codex issue or a Zed issue.

fragmede 4 hours ago | parent | prev [-]

I've had codex rm -rf the git repo it's working in while running in yolo mode. Twice, even! (Play with fire, you're gonna get burnt.)

I had it whip this up to try and avoid this, while still running it in yolo mode (which is still not recommended).

https://gist.github.com/fragmede/96f35225c29cf8790f10b1668b8...

baq 21 hours ago | parent | prev [-]

we're all senior "continue" engineers nowadays, it seems

johnnyfived 16 hours ago | parent | prev | next [-]

Agreed, I'm surprised how much care the "extra high" reasoning allows. It easily catches bugs in code that other LLMs won't; using it to review Opus 4.5 is highly effective.

garbagecoder 17 hours ago | parent | prev | next [-]

Agree. Codex just read my source code for a toy lisp I wrote in ARM64 assembly, learned how to code in that lisp, and wrote a few demo programs for me. That was impressive enough. Then it spent some time and effort to really hunt down some problems -- there was a single bit-mask error in my garbage collector that wasn't showing up until then. I was blown away. It's the kind of thing I would have spent forever trying to figure out before.

josephg 16 hours ago | parent | next [-]

I've been writing a little port of the seL4 OS kernel to Rust, mostly as a learning exercise. I ran into a weird bug yesterday where some of my code wasn't running - qemu was just exiting. And I couldn't figure out why.

I asked codex to take a look. It took a couple minutes, but it managed to track the issue down using a bunch of tricks I've never seen before. I was blown away. In particular, it reran qemu with different flags to get more information about a CPU fault I couldn't see. Then got a hex code of the instruction pointer at the time of the fault, and used some tools I didn't know about to map that pointer to the lines of code which were causing the problem. Then took a read of that part of the code and guessed (correctly) what the issue was. I guess I haven't worked with operating systems much, so I haven't seen any of those tricks before. But, holy cow!

It's tempting to just accept the help and move on, but today I want to go through what it did in detail, including all the tools it used, so I can learn to do the same thing myself next time.
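From memory, the shape of it was roughly this (the binary path and address are placeholders, not what codex literally ran):

    # ask qemu to log CPU faults and guest errors instead of silently exiting
    # (... stands for the existing machine/kernel flags)
    qemu-system-aarch64 ... -d int,guest_errors -D qemu.log
    # map the faulting instruction pointer back to a source line
    addr2line -e target/aarch64-unknown-none/debug/kernel 0xffff000000081234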

heliumtera 16 hours ago | parent | prev [-]

Maybe you're a garbage programmer and that error was too obvious. Interesting observation, though.

edit: username joke, don't get me banned

echelon 20 hours ago | parent | prev [-]

> If anyone from OpenAI is reading this

(unrelated, but piggybacking on requests to reach the teams)

If anyone from OpenAI or Google is reading this, please continue to make your image editing models work with the "previz-to-render" workflow.

Image edits should strongly infer pose and blocking as an internal ControlNet, but should be able to upscale low-fidelity mannequins, cutouts, and plates/billboards.

OpenAI kicks ass at this (but could do better with style controls - if I give a Midjourney style ref, use it):

https://imgur.com/gallery/previz-to-image-gpt-image-1-x8t1ij...

https://imgur.com/a/previz-to-image-gpt-image-1-5-3fq042U

Google fails the tests currently, but can probably easily catch up:

https://imgur.com/a/previz-to-image-nano-banana-pro-Q2B8psd