| ▲ | godelski 10 days ago |
| > But once that’s done, why not?
Do you have the same understanding of the code? Be honest here. I don't think you do. Just like none of us have the same understanding of code somebody else wrote. It's just a fact that you understand the code you wrote better than code you didn't. I'm not saying you don't understand the code, that's different. But there's a deeper understanding to code you wrote, right? You might write something one way because you had an idea to try something in the future, based on an idea you had while finding some bug. Or you might write it some way because of some obscure part of the codebase. Or maybe because you have intuition about the customer.

But when AI writes the code, who has responsibility over it? Where can I go to ask why some choice was made? That's important context I need to write code with you as a team. That's important context a (good) engineering manager needs to ensure you're headed in the right direction. If you respond "well, that's what the AI did", how is that any different from the intern saying "that's how I did it at the last place"? It's a non-answer, and infuriating. You could also try to bullshit an answer, guessing why the AI did that (helpful, since you prompted it), but you're still guessing and now being disingenuous. It's a bit more helpful, but still not very helpful. It's incredibly rude to your coworkers to just bullshit. Personally I'd rather someone say "I don't know", and truthfully I respect them more for that. (I really do respect people who can admit they don't know something, especially in our field, where egos are quite high. It can be a mark of trust that's *very* valuable.)

Sure, the AI can read the whole codebase, but you have hundreds or thousands of hours in that codebase. Don't sell yourself short. Honestly, I don't mind the AI acting as a reviewer, as a check before you submit a PR, but it just doesn't have the context to write good code.

AI tries to write code like a junior, fixing the obvious problem that's right in front of you. But it doesn't fix the subtle problems that come with foresight. No, I want you to stumble through that code, because while you write code you're also debugging and designing. Your brain works in parallel, right? I bet it does even if you don't know it. I want you stumbling through, because that struggling is helping you learn more about the code and the context that isn't explicitly written. I want you to develop ideas and gain insights. But AI writing code? That's like measuring how good a developer is by the number of lines of code they write. I'll take quality over quantity any day of the week. Quality makes the business run better and wastes fewer dollars debugging the spaghetti and duct tape called "tech debt". |
|
| ▲ | D13Fd 10 days ago | parent | next [-] |
| If you wrote the code, then you'll understand it and know why it is written that way. If the AI writes the code, you can still understand it, but you will never know why it is written that way. The AI itself doesn't know, beyond the fact that that's how it is in the training data (and that's true even if it could generate a plausible answer when asked why). |
| |
| ▲ | jmcodes 10 days ago | parent | next [-] | | I don't agree entirely with this. I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.

If people are letting the LLM decide how the code will be written then I think they're using them wrong, and yes, 100% they won't understand the code as well as if they had written it by hand. LLMs are just good pattern matchers that can spit out text faster than humans, so that's mostly what I use them for. Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to. | | |
| ▲ | latchup 9 days ago | parent | next [-] | | > Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.

And that's a problem. By typing out the code, your brain has time to process its implications and reflect on important implementation details, something you lose out on almost entirely when letting an LLM generate it. Obviously, your high-level intentions and architectural planning are not tied to typing. However, I find that an entire class of nasty implementation bugs (memory and lifetime management, initialization, off-by-one errors, overflows, null handling, etc.) is easiest to spot and avoid right as you type them out. As a human capable of nonlinear cognition, I can catch many of these mid-typing and fix them immediately, saving a significant amount of time compared to if I did not. It doesn't help that LLMs are highly prone to generating these exact bugs, and no amount of agentic duct tape will make debugging these issues worthwhile.

The only two ways I see LLM code generation bring any value to you is if:

* Much of what you write is straight-up boilerplate. In this case, unless you are forced by your project or language to do this, you should stop. You are actively making the world a worse place.

* You simply want to complete your task and do not care about who else has to review, debug, or extend your code, or about the massive costs in capital and human quality of life your shitty code will incur downstream of you. In this case, you should also stop, as you are actively making the world a worse place. | | |
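A toy instance of the off-by-one class mentioned above, as a minimal sketch (Python is assumed, and `moving_sum` is invented purely for illustration, not taken from the thread):

```python
# Hypothetical toy example of the off-by-one / fencepost class of bug:
# exactly the kind of slip that is easiest to notice as you type it.
def moving_sum(xs, window):
    # The natural first attempt, range(len(xs) - window), silently
    # drops the final window; the correct upper bound needs the "+ 1".
    return [sum(xs[i:i + window]) for i in range(len(xs) - window + 1)]
```

For example, `moving_sum([1, 2, 3, 4], 2)` yields `[3, 5, 7]`; with the buggy bound, the last window `[3, 4]` would be missing.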
| ▲ | johnisgood 8 days ago | parent | next [-] | | So what about all these huge codebases you are expected to understand but you have not written? You can definitely understand code without writing it yourself.

> The only two ways I see LLM code generation bring any value to you is if

That is just an opinion. I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way. | | |
| ▲ | latchup a day ago | parent [-] | | > So what about all these huge codebases you are expected to understand but you have not written?

You do not need to fully understand large codebases to use them; this is what APIs are for. If you are adventurous, you might hunt a bug in some part of a large codebase, which usually leads you from the manifestation to the source of the bug along a fairly narrow path. None of this requires "understanding all these huge codebases". Your statement implies a significant lack of experience on your part, which makes your use of LLMs for code generation a bit alarming, to be honest.

The only people expected to truly understand huge codebases are those who maintain them. And that is exactly why AI PRs are so insulting: you are asking a maintainer to vet code you did not properly vet yourself. Because no, you do not understand the generated code as well as if you had written it yourself. By PRing code you have a subpar understanding of, you come across as entitled and disrespectful, even with the best of intentions.

> That is just an opinion.

As opposed to yours? If you don't want to engage meaningfully with a comment, then there is no need to reply.

> I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.

See, I could hit you with "That is just an opinion" here, especially as your statement is entirely anecdotal. But I won't, because that would be lame and cowardly. When you say "because I wanted it to be that way", what exactly does that mean? You told an extremely complex, probabilistic, and uninterpretable automaton what you want to write, and it wrote it not approximately, but exactly as you wanted? I don't think this is possible from a mathematical point of view. You further insist that you "understand ALL parts" of the output. This actually is possible, but it seems too time-inefficient to be plausible.

It is very hard to exhaustively analyze all possible failure modes of code, whether you wrote it yourself or not. There is a reason why certifying safety-critical embedded code is hell, and why investigating isolated autopilot malfunctions in aircraft takes experts years. That is before we consider that those systems are carefully designed to be highly predictable, unlike an LLM. |
| |
| ▲ | godelski 9 days ago | parent | prev [-] | | The best time to debug is when writing code. The best time to review is when writing code. The best time to iterate on design is when writing code. Writing code is a lot more than typing. It's the whole chimichanga |
| |
| ▲ | godelski 10 days ago | parent | prev [-] | | > I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.

That's a different "why".

> If people are letting the LLM decide how the code will be written then I think they're using them wrong

I'm unconvinced you can have an LLM produce the code while you do all the decision making. These are fundamentally at odds. I am convinced that it will tend to follow your general direction, but when you write the code you're not just writing, either. I don't actually ever feel like LLMs help me generate code faster, because when writing I am also designing. It doesn't take much brain power to make my fingers move; they are a lot slower than my brain. Hell, I can talk and type at the same time, and it isn't like this is an uncommon feat. But I can't talk and type if I'm working on the hard part of the code, because then I'm not just writing.

People often tell me they use LLMs to do boilerplate. I can understand this, but at the same time it raises the question "why are you writing boilerplate?" or "why are you writing so much boilerplate?" If it is boilerplate, why not generate it through scripts or libraries? Those have a lot of additional benefits: they save you time, save your coworkers time, and can make the code a lot cleaner, because you're now explicitly saying "this is a routine". I mean... that's what functions are for, right? I find this has more value and saves more time in the long run than getting LLMs to keep churning out boilerplate. It also makes things easier to debug, because you have far fewer things to look at. |
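The "fold boilerplate into a function" point could be sketched like this (a hypothetical Python example invented for illustration; the retry scenario and all names here are assumptions, not from the thread):

```python
# Minimal sketch: instead of pasting the same try/except boilerplate
# everywhere (or regenerating it with an LLM), define the routine once.
import functools
import time


def with_retries(attempts=3, delay=0.0):
    """Decorator that retries a flaky call up to `attempts` times,
    re-raising the last error if every attempt fails."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_err = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as err:  # narrow the exception type in real code
                    last_err = err
                    time.sleep(delay)
            raise last_err
        return wrapper
    return decorator


@with_retries(attempts=3)
def fetch_config():
    """Hypothetical flaky I/O call; stands in for whatever code
    the boilerplate used to wrap."""
    ...
```

Besides saving typing, the explicit `with_retries` name makes the routine searchable and debuggable in one place, which is the "far fewer things to look at" benefit.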
| |
| ▲ | godelski 10 days ago | parent | prev [-] | | Exactly! Thanks for summing it up. There needs to be some responsible entity that can discuss the decisions behind the code. Those decisions have tremendous business value. [0]

[0] I stress this because it's not just about "good coding". Maybe in a startup it only matters that "things work". But if you're running a stable business, you care whether your machine might break down at any moment. You don't want the MVP. The MVP is a program that doesn't want to be alive, but you've forced it into existence and it is barely hanging on. |
|
|
| ▲ | tincholio 3 days ago | parent | prev | next [-] |
| My take on this is that when outsourcing the code writing, you miss out on building a mental model of how it works that you do develop when doing it yourself. The degree to which that is a problem is probably variable, I suppose. |
|
| ▲ | mikepurvis 10 days ago | parent | prev [-] |
| So the most recent thing that I did a bunch of vibe coding on was typescript actions for GHA. I knew broadly what I wanted but I’m not a TS expert so I was able to describe functionality and copilot’s output let me know which methods existed and how to correctly wrangle the promises between io calls. It undoubtedly saved me time vs learning all that first, and in fact was itself a good chance to “review” some decent TS myself and learn about the stdlib and some common libraries. I don’t think that effort missed many critical idioms and I would say I have decent enough taste as an engineer that I can tell when something is janky and there must be a better way. |
| |
| ▲ | godelski 10 days ago | parent [-] | | I think this is a different use case. The context we're talking about is building software. A GitHub Action is really a script. Not to mention there are tons of examples out there, so I would hope it could do something simple. Vibe coding scripts isn't what people are typically concerned about.

> but I'm not a TS expert

Although this is ultimately related. How can you verify that it is working as intended? You admit to not having those skills. To clarify, I'm sure "it's working", but can you verify the "as intended" part? This is the hard part of any coding. Getting things working isn't trivial, but getting things working right takes a lot more time.

> So the most recent thing that I did

I'll share a recent thing I tried too... I was working on a setup.py file, and I knew I had done something small and dumb but was being blind to it. So I pulled up Claude Code and had it run parallel to my hunt. I asked it to run the build command and search for the error. It got caught up in some cmake flags I was passing, erroneously calling them errors. I got a number of prompts in and they were all wrong. I fixed the code btw; it was a variable naming error (classic!).

I've also had success with Claude, but it is super hit or miss. I've never gotten it to work well for anything remotely complicated if the code isn't also in a popular repo I could just copy-paste. But it is pretty hit or miss even for scripts, and I write a lot of bash. People keep telling me it is great for bash, and honestly, guys, just read the man pages... (and use some god damn functions!) |
|