latchup 9 days ago

> Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.

And that's a problem. By typing out the code, your brain has time to process its implications and reflect on important implementation details, something you lose out on almost entirely when letting an LLM generate it.

Obviously, your high-level intentions and architectural planning are not tied to typing. However, I find that an entire class of nasty implementation bugs (memory and lifetime management, initialization, off-by-one errors, overflows, null handling, etc.) is easiest to spot and avoid right as you type the code out. As a human capable of nonlinear cognition, I can catch many of these mid-typing and fix them immediately, saving a significant amount of time compared to hunting them down later. It doesn't help that LLMs are highly prone to generating these exact bugs, and no amount of agentic duct tape makes debugging them worthwhile.
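To make that concrete, here is a hypothetical C sketch (the function and its name, join_words, are made up for illustration, not taken from any real codebase) annotated with the kinds of slips I mean:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example: the slips that are easiest to catch mid-typing. */
    char *join_words(const char **words, size_t n) {
        if (words == NULL || n == 0)        /* null handling: easy to forget */
            return NULL;

        size_t len = 0;
        for (size_t i = 0; i < n; i++)      /* writing `<= n` here is the classic off-by-one */
            len += strlen(words[i]) + 1;    /* +1 covers a space or the terminator */

        char *out = malloc(len);            /* unchecked malloc is an initialization bug waiting to happen */
        if (out == NULL)
            return NULL;

        out[0] = '\0';                      /* forgetting this leaves strcat reading uninitialized memory */
        for (size_t i = 0; i < n; i++) {
            strcat(out, words[i]);
            if (i + 1 < n)
                strcat(out, " ");
        }
        return out;                         /* caller must free: lifetime management */
    }

    int main(void) {
        const char *words[] = { "hello", "world" };
        char *s = join_words(words, 2);
        if (s != NULL) {
            printf("%s\n", s);
            free(s);
        }
        return 0;
    }

Each of those comments marks a spot where, typing by hand, you tend to notice the problem the moment you write the line; skimming generated output, you often do not.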

The only two ways I see LLM code generation bringing any value to you are:

* Much of what you write is straight-up boilerplate. In this case, unless you are forced by your project or language to do this, you should stop. You are actively making the world a worse place.

* You simply want to complete your task and do not care who else has to review, debug, or extend your code, or about the massive costs in capital and quality of life your shitty code will incur downstream of you. In this case, you should also stop, as you are actively making the world a worse place.

johnisgood 8 days ago

So what about all these huge codebases you are expected to understand but you have not written? You can definitely understand code without writing it yourself.

> The only two ways I see LLM code generation bring any value to you is if

That is just an opinion.

I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.

latchup a day ago

> So what about all these huge codebases you are expected to understand but you have not written?

You do not need to fully understand large codebases to use them; this is what APIs are for. If you are adventurous, you might hunt a bug in some part of a large codebase, which usually leads you from the manifestation to the source of the bug on a fairly narrow path. None of this requires "understanding all these huge codebases". Your statement implies a significant lack of experience on your part, which makes your use of LLMs for code generation a bit alarming, to be honest.

The only people expected to truly understand huge codebases are those who maintain them. And that is exactly why AI PRs are so insulting: you are asking a maintainer to vet code you did not properly vet yourself. Because no, you do not understand the generated code as well as if you wrote it yourself. By PRing code you have a subpar understanding of, you come across as entitled and disrespectful, even with the best of intentions.

> That is just an opinion.

As opposed to yours? If you don't want to engage meaningfully with a comment, then there is no need to reply.

> I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.

See, I could hit you with "That is just an opinion" here, especially as your statement is entirely anecdotal. But I won't, because that would be lame and cowardly.

When you say "because I wanted it to be that way", what exactly does that mean? You told an extremely complex, probabilistic, and uninterpretable automaton what you wanted to write, and it wrote it not approximately, but exactly as you wanted? I don't think this is possible from a mathematical point of view.

You further insist that you "understand ALL parts" of the output. This actually is possible, but seems way too time-inefficient to be plausible. It is very hard to exhaustively analyze all possible failure modes of code, whether you wrote it yourself or not. There is a reason why certifying safety-critical embedded code is hell, and why investigating isolated autopilot malfunctions in aircraft takes experts years. That is before we consider that those systems are carefully designed to be highly predictable, unlike an LLM.

godelski 9 days ago

The best time to debug is when writing code.

The best time to review is when writing code.

The best time to iterate on design is when writing code.

Writing code is a lot more than typing. It's the whole chimichanga.