|
Cthulhu_ (17 minutes ago):
I'd argue that, as long as it produced working code, it's better than nothing in this case.
|
sigseg1v (39 minutes ago):
If you ask an LLM to do a statically verifiable task without writing a simple verifier for it, and it hallucinates, that mistake is on you: it's a very quick step to guarantee that something like this succeeds.
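To make the "simple verifier" concrete: for a variable-renaming task, one cheap static check is that the renamed source differs from the original only in identifier names, never in structure. A minimal sketch for Python sources (the thread is about decompiled C, where you'd compare compiler ASTs instead; this is just the idea):

```python
import ast

def _anonymize(source: str) -> str:
    """Parse source and replace every identifier with "_", so two
    programs dump equal iff they differ only in names."""
    class Anon(ast.NodeTransformer):
        def visit_Name(self, node):
            return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)
        def visit_arg(self, node):
            node.arg = "_"
            return node
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            node.name = "_"
            return node
    return ast.dump(Anon().visit(ast.parse(source)))

def rename_is_safe(original: str, renamed: str) -> bool:
    """True iff the LLM's output is the original modulo identifier names."""
    return _anonymize(original) == _anonymize(renamed)
```

A pure rename passes; any change to operators, constants, or control flow fails, which catches the hallucination case the comment describes.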
|
orbital-decay (2 hours ago):
It's a labeling task with benign failure modes, much better suited to an LLM than generation is.
|
empiricus (4 hours ago):
Well, I mean just choosing better names, without touching the actual code. And you can also add a basic human-filtering step if you want. You cannot possibly say that "v12" is better than "header.size". I would argue that even hallucinated names are good: you should be able to think "but this position variable is not quite correctly updated, maybe this is not the position", which seems better than "this v12 variable is updated in some complicated way which I will ignore because it has no meaning".
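The readability gap the comment describes is easy to see side by side. A hypothetical example (the function, names, and framing layout are invented for illustration, not from the thread), written as Python for brevity rather than decompiled C:

```python
# Raw decompiler-style output: opaque names hide the intent.
def sub_401200(v3):
    v12 = v3[0] | (v3[1] << 8)   # what is v12?
    return v3[2:2 + v12]

# Same logic after renaming: even an imperfect guess invites scrutiny.
def read_frame(packet):
    header_size = packet[0] | (packet[1] << 8)  # little-endian u16 length
    return packet[2:2 + header_size]
```

If `header_size` turns out to be wrong, the reader notices the mismatch and corrects it, exactly the "maybe this is not the position" reasoning above; `v12` never triggers that check.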
|
jitl (4 hours ago):
I think for Obj-C specifically (I can't speak to other languages) I've had a great experience. It does make little mistakes, but the AI-oriented approach makes it faster and easier to find areas of interest to analyze or experiment with. Obj-C's use of objc_msgSend makes it more similar to understanding minified JS than to decompiling static C, because it literally calls many methods by string name.