coldtea 3 hours ago

>That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case.

So? We don't prove that human code "isn't going to fail due to some obscure or unforeseen corner case" either (aside from the tiny niche of formal verification).

So from that aspect it's quite similar.

>so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology.

You seem to imply they do some sort of random iteration until the tests pass, which is not the case. Usually they can see the test failing, and describe the issue exactly in the way a human programmer would, then fix it.

zozbot234 3 hours ago | parent

> describe the issue exactly in the way a human programmer would

Human programmers don't usually hallucinate things out of thin air; AIs like to do that a whole lot. So no, they aren't working the exact same way.

coldtea 3 hours ago | parent

>Human programmers don't usually hallucinate things out of thin air

Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested X when they didn't, misremembering things, confidently declaring that X is the bottleneck and spending weeks refactoring without measuring (only for it to turn out not to be), the list goes on.

>So no, they aren't working the exact same way.

However they work internally, most of the time current agents (from roughly the last year onward) do "describe the issue exactly in the way a human programmer would".

qsera 3 hours ago | parent

That is not hallucinating...

LLM hallucination is not an edge case. It is how they generate output 100% of the time. Mainstream media only calls it "hallucination" when the output is wrong, but from the point of view of an LLM, it is working exactly as it is supposed to....

coldtea 2 hours ago | parent

>LLM hallucination is not an edge case. It is how they generate output 100% of the time

If enough of the time it matches reality -- which it does -- it doesn't matter. Especially in a coding setup, where you can verify the results, have tests you wrote yourself, and the end goal is well defined.
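
A minimal sketch of what I mean (the function, its behavior, and the test are made up for illustration): a test you wrote yourself passes or fails the same way no matter who or what produced the implementation.

    import re

    # Hypothetical example: suppose an agent produced this implementation
    # (the function and its behavior are made up for illustration).
    def slugify(text: str) -> str:
        text = text.strip().lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)
        return text.strip("-")

    # A test the human wrote up front; it passes or fails the same way
    # regardless of whether a person or an LLM wrote slugify().
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  already-slugged  ") == "already-slugged"

Run it with pytest: a "hallucinated" but wrong implementation simply fails the check and never makes it past the gate.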

And conversely, if a human is a bullshitter, or ignorant, or a liar, or stupid, it doesn't matter if they end up with useless stuff "in a different way" than an LLM hallucinating. The end result regarding the low utility of their output is the same.

Besides, one theory of cognition (pre-LLM, even) is that the human brain is a prediction machine. In which case, it's not that different from an LLM in principle, even if the scope and design are better.