cogman10 6 hours ago

So long as you view AI as a sometimes competent liar, then it can be useful.

I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc pretty fast with AI.

However, when I've asked AI "Identify performance problems or bugs in this code" I find it'll just make up nonsense. Particularly if there aren't problems with the code.

And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

fluoridation 2 hours ago | parent [-]

>AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why a change fixes a bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?

* A developer who understood the problem and its solution, but made a typo or suffered a similar miscommunication between brain and fingers.

* A developer who understood the problem but failed to implement the algorithm that solves it.

* An algorithm was used that doesn't solve the problem.

* The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.

* Everything used to be correct, but an environment change made it so the correct solution stopped being correct.
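As a hypothetical sketch of why the first two categories are indistinguishable from the delta alone, consider a one-character fix (the function name and code are invented for illustration). Nothing in the diff says whether the author mistyped the loop bound or misunderstood the algorithm; that context lives outside the source:

```python
# Hypothetical bug-fix delta. The buggy version iterated over
# range(len(xs) - 1) and silently dropped the last element. The fix
# changes one character, but the diff alone cannot reveal whether the
# original "- 1" was a typo (brain-to-fingers slip) or a genuine
# misunderstanding of how to traverse the list.

def total(xs):
    # Fixed version: visit every element, including the last one.
    s = 0
    for i in range(len(xs)):  # was: range(len(xs) - 1)
        s += xs[i]
    return s

print(total([1, 2, 3]))  # the buggy version would have printed 3
```

Either explanation is consistent with the same delta, which is exactly the information a model trained only on repository histories never sees.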

In an ideal world, all of this information could somehow be encoded in the history. In reality, it's a huge amount of information that would take a lot of effort to condense. It would have value even for real humans; it's just that such a deluge of information would be incomprehensible.