▲ | Transfinity 7 days ago
> LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over.

I feel personally described by this statement. At least on a bad day, or if I'm phoning it in. Not sure if that says anything about AI - maybe just that the whole "mental models" part is quite hard.
▲ | apples_oranges 7 days ago | parent | next
It means something is not understood. It could be the product, the code in question, or computers in general. 90% of coders seem to lack foundational knowledge, imho. Not trying to hate on anyone, but when you have the basics down, you can usually see quickly where the problem is, or at least where it must be.
▲ | bagacrap 6 days ago | parent | prev
So LLMs are always phoning it in, having a bad day, etc. Great. I recently tried to get AI to refactor some tests, which it proceeded to break. Then it iterated a bit until it had gotten the pass rate back up to 75%, at which point it declared victory. So yes, it really does seem like a human who doesn't want to be there.