latchup a day ago

> So what about all these huge codebases you are expected to understand but you have not written?

You do not need to fully understand large codebases to use them; this is what APIs are for. If you are adventurous, you might hunt a bug in some part of a large codebase, which usually leads you from the manifestation to the source of the bug on a fairly narrow path. None of this requires "understanding all these huge codebases". Your statement implies a significant lack of experience on your part, which makes your use of LLMs for code generation a bit alarming, to be honest.

The only people expected to truly understand huge codebases are those who maintain them. And that is exactly why AI PRs are so insulting: you are asking a maintainer to vet code you did not properly vet yourself. Because no, you do not understand the generated code as well as if you wrote it yourself. By PRing code you have a subpar understanding of, you come across as entitled and disrespectful, even with the best of intentions.

> That is just an opinion.

As opposed to yours? If you don't want to engage meaningfully with a comment, then there is no need to reply.

> I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.

See, I could hit you with "That is just an opinion" here, especially since your statement is entirely anecdotal. But I won't, because that would be lame and cowardly.

When you say "because I wanted it to be that way", what exactly does that mean? You told an extremely complex, probabilistic, and uninterpretable automaton what you wanted written, and it wrote it not approximately, but exactly as you wanted it? Sampling token by token from a probability distribution means every single token carries some chance of diverging from your intent, so I don't think this is possible from a mathematical point of view.
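A back-of-the-envelope sketch of why (the per-token agreement rate p here is a made-up, deliberately generous number for the sake of the argument, not a measured property of any model): if each sampled token matches your intent independently with probability p, an n-token program matches exactly with probability p^n, which collapses fast as n grows.

    # Illustrative arithmetic only; p = 0.99 is an assumed per-token
    # agreement rate, not a real measurement.
    p = 0.99
    for n in (100, 1_000, 5_000):
        print(n, p ** n)   # ~0.37, ~4.3e-05, ~1.5e-22

Even at 99% per-token agreement, a 5000-token file has essentially zero chance of coming out exactly as intended without rounds of correction.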

You further insist that you "understand ALL parts" of the output. That is possible in principle, but the time it would take makes the claim implausible. It is very hard to exhaustively analyze all the possible failure modes of code, whether you wrote it yourself or not. There is a reason why certifying safety-critical embedded code is hell, and why investigating isolated autopilot malfunctions in aircraft takes experts years. And that is before we consider that those systems are carefully designed to be highly predictable, unlike an LLM.