| ▲ | kneel25 10 hours ago |
| > After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns. I feel like you just know it’s doomed. What this is really saying is “I didn’t want to, and cannot, review the code it generated.” Asking models to find mistakes never works for me: it’ll find obvious patterns and a tendency toward security mistakes, but not deep logical errors. |
|
| ▲ | zamadatix 6 hours ago | parent | next [-] |
Somehow they did use this as part of their approach to get to 0 regressions across 65k tests, no performance regressions, and identical output for the AST and bytecode. How much manual review was part of the hundreds of rounds of prompt steering is not stated, but I don't think you can claim it found no deep logical errors along the way and still achieved those results. The part that concerns me is whether this part will actually come in time or not: > The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time. Of course, it wouldn't be the first time Andreas delivered more than I expected :).
| |
▲ | kneel25 3 hours ago | parent [-] | | That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth, then finding complicated issues isn’t really being tested, imo. | | |
▲ | zamadatix an hour ago | parent [-] | | If completing the above successfully doesn't necessarily test those abilities, then where does the concern about those abilities come into play? |
|
|
|
| ▲ | herrkanin 10 hours ago | parent | prev | next [-] |
Your argument is just as applicable to human code reviewers. Obviously, having others review the code will catch issues you would never have thought of. This includes agents as well.
| |
▲ | kneel25 10 hours ago | parent | next [-] | | They’re not equal. Humans are capable of actually understanding and looking ahead at the consequences of decisions, whereas an LLM can’t. One is a review; the other is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning.) | | |
▲ | iamleppert 6 hours ago | parent [-] | | I keep hearing people say "but as humans we actually understand". What evidence do you have of the material difference between the understanding an LLM has and the understanding a human has? What processes do we fundamentally perform that an LLM does not or cannot? And what is the definition of "understanding" here that humans presumably meet but an LLM currently does not? | | |
| ▲ | mcpar-land 5 hours ago | parent | next [-] | | https://ml-site.cdn-apple.com/papers/the-illusion-of-thinkin... | |
▲ | kneel25 3 hours ago | parent | prev [-] | | Well, one material difference is that we don’t input/output in tokens, I guess. We have a concept of gaps and limits in our knowledge, and factors like ego, self-preservation, and ambition go into our thoughts, where an LLM just has raw data. Understanding the implications of a code change means having an idea of a desired structure, some idea of where you want to head and how it all meshes together. An LLM has none of that. Just because it can copy the output that results from those factors doesn’t mean they operate the same way.
|
| |
| ▲ | Fervicus 10 hours ago | parent | prev | next [-] | | With humans though, I wouldn't have to review 20k lines of code at once. | | |
| ▲ | glhaynes 9 hours ago | parent [-] | | So ask the AI to just translate one little chunk at a time, right? | | |
| |
▲ | DetroitThrow 10 hours ago | parent | prev [-] | | >Your argument is just as applicable to human code reviewers. The test many of us use for how capable a model or harness is usually comes down to whether it can spot logical errors readily visible to humans. Hence: https://news.ycombinator.com/item?id=47031580
|
|
| ▲ | u_sama 10 hours ago | parent | prev | next [-] |
| That is what the testing suite is there to check, no? |
| |
| ▲ | layer8 10 hours ago | parent | next [-] | | No. Testing generally can only falsify, not verify. It’s complementary to code review, not a substitute for it. | |
| ▲ | kneel25 10 hours ago | parent | prev [-] | | You mean the testing suite generated by AI? | | |
|
|
| ▲ | cardanome 10 hours ago | parent | prev [-] |
Yeah, I’ve lost all interest in the Ladybird project now that it’s AI slop. No one wants to work with this generated, ugly, unidiomatic ball of Rust, other than other people using AI. So your dependency on AI grows and grows. It’s a vicious trap.