Tiberium 3 hours ago

Thanks for confirming my impressions; it's been like 4 months now since I arrived at the same conclusions. GPT models are just better at any kind of low-level work: reverse engineering (including understanding what decompiled code/assembly does), renaming that decompiled code (functions/types), any kind of C/C++, and much more reliable security research (Opus will find way more, but most of it turns out to be false positives). I've had GPT create non-trivial custom decompilers for binaries built with specific compilers (a much simpler task than what IDA Pro/Ghidra do, but still complex), and modify existing Java decompilers.

Regarding speed, I don't use xhigh that often, and surprisingly, for me GPT 5.4 high is faster than Claude 4.6 Opus high (unless you enable fast mode for Opus).

Of course I still use Opus for frontend, for some small scripts, and for criticizing GPT's code style, especially in Python (getattr).
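For readers unfamiliar with the getattr complaint: this is my own illustration of the pattern typically flagged in review, not code from either model. The defensive `getattr`-with-default hides typos and obscures intent when the attribute is guaranteed to exist:

```python
# Illustrative sketch of the style criticism, assuming the common
# "defensive getattr" pattern is what gets flagged.
class Config:
    def __init__(self) -> None:
        self.timeout = 30

cfg = Config()

# getattr-with-default, even though the attribute always exists;
# a typo in the string would silently return the fallback:
timeout = getattr(cfg, "timeout", 30)

# Plain attribute access is clearer and fails loudly on typos:
timeout = cfg.timeout
```

The second form raises `AttributeError` on a misspelled name instead of silently masking the bug with the default value.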

antirez 2 hours ago | parent [-]

In the SCSI controller work I mentioned, a very big part of the job was indeed reasoning about assembly code and how IRQs and DMA completions worked and so forth. Opus, even though TOOLS.md documented the disassembler and it was asked to use it many times, didn't even bother much. GPT 5.4 instead did very good reverse-engineering work, and it was also a lot more responsive to my high-level suggestions, like: work in that way to make more isolated progress, and so forth.

amluto 2 hours ago | parent [-]

GPT 5.4 is remarkably good at figuring out machine code using just binutils. Amusingly, I watched it start downloading Ghidra, observe that the download was taking a while, and then mostly succeed at its assignment with objdump :)