coldtea 3 hours ago
> That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case.

So? We didn't prove that human-written code "isn't going to fail due to some obscure or unforeseen corner case" either (aside from the tiny niche of formally verified software). In that respect the two are quite similar.

> so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology.

You seem to imply they do some sort of random iteration until the tests pass, which is not the case. Usually they can see the failing test, describe the issue exactly the way a human programmer would, and then fix it.
zozbot234 3 hours ago | parent
> describe the issue exactly in the way a human programmer would

Human programmers don't usually hallucinate things out of thin air; AIs like to do that a whole lot. So no, they aren't working the exact same way.