| ▲ | hombre_fatal 3 hours ago |
| It's a tough pill for some HNers to swallow, but with a good process, you can vibe-code really good software, and software far more tested, edge-cased, and thoughtful than you would have come up with, especially for software that isn't that one hobby passion project that you love thinking about. |
|
| ▲ | ehutch79 3 hours ago | parent | next [-] |
| Vibe coding implies a complete lack of process. The definition is basically YOLO: https://x.com/karpathy/status/1886192184808149383 |
| |
| ▲ | hombre_fatal 2 hours ago | parent [-] |
| My process is just getting Claude Code to generate a plan file and then rinsing it through Codex until it has no more advice left. I'd consider it vibe-coding if you never read the code/plan. For example, you could package this up in a bash alias `vibecode "my prompt"` instead of `claude -p "my prompt"`, and it would surely still be vibe-coding so long as you remain at arm's length from the plan/code itself. |
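A minimal sketch of the wrapper the comment describes, assuming the `claude` CLI is installed and on PATH; a shell function is used rather than an alias so it also works in scripts (the `vibecode` name is just the one from the comment):

```shell
# Hypothetical ~/.bashrc snippet: forwards a prompt to Claude Code's
# one-shot, non-interactive mode (`claude -p`), as described above.
vibecode() {
  claude -p "$@"
}

# Usage: vibecode "add retry logic to the upload endpoint"
```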
|
|
| ▲ | andoando 2 hours ago | parent | prev | next [-] |
| I mean, to be fair, if you're using agents, more than likely you're not thinking about aspects of the code as deeply as you would have before.
If you write things yourself, you spend far more time thinking about every little decision you're making. Even for tests, I always thought the really valuable part was that they forced you to think through all the different cases, and that a bunch of green checkboxes was, if anything, luring developers into a false sense of security.
| |
| ▲ | hombre_fatal 2 hours ago | parent [-] |
| There's definitely a trade-off, but it's a lopsided one that favors AI.

Before AI, you were often encumbered by the superficial aspects of a plan or implementation. So much so that we would often start implementing first and feel it out as we went, saving advanced considerations and edge cases for later, since we weren't even sure what the implementation would be. That's useful for getting a visceral read on how a solution might feel in its fetal stage. But it takes a lot of time/energy/commitment to look into the future, think about edge cases, tests, potential requirement churn, and alternative options, and plan today around them.

With AI, agents are really good at running preformed ideas to their conclusion and then fortifying them with edge cases, tests, and trade-offs. Now your expertise is better spent deciding among trade-offs and deciding what the surface area looks like.

Something else that just came to mind: before AI, you would get married to a solution/abstraction because it would be too expensive to rewrite the code/tests. Now refactoring and updating tests is trivial, so you aren't committed to a bad solution anymore. Or, your tests are kinda lame and brittle because they're vibe-coded (as opposed to not existing at all)? Ok, AI will change them for you.

I also think we accidentally put a foot on the scale in these comparisons. The pre-AI developer we imagine as a unicorn who always spends time getting into the weeds to suss out the ideal solution to every ticket, with infinite time, energy, and enthusiasm. The post-AI developer we imagine as someone incompetent. And we pit them against each other to say "See? There's a regression". | | |
| ▲ | andoando 2 hours ago | parent | next [-] |
| I think I agree. Fast iteration in many cases > long-thought-out ideas going in the wrong direction. The issue is purely one of mentality: AI makes it really easy to push features fast without spending as much time thinking them through.

That said, iteration is much more difficult on established codebases, especially with production workflows where you need to be extra careful that your migration is backwards compatible and doesn't mess up features x, y, z, d across 5 different projects relying on some field or logical property. | |
| ▲ | mattmanser 2 hours ago | parent | prev [-] |
| Unless you go through the code with a fine-tooth comb, you're not even aware of what trade-offs the AI has made for you.

We've all just seen the Claude Code source code. 4k class files. Weird try/catches. Weird trade-offs. Basic bugs people have been begging to have fixed left untouched.

Yes, there's a revolution happening. Yes, it makes you more productive. But stop huffing the kool-aid and be realistic. If you think you're still deciding about the trade-offs, I can tell you with sincerity that you should go try to refactor some of the code you're producing and see what trade-offs the AI is ACTUALLY making. Until you actually work with the code again, it's ridiculously easy to miss the trade-offs the AI makes while it's churning out its code.

I know this because we've got some heavy AI users on our team who often just throw the AI code straight into the repo without properly checking it. Worse, in code review it looks right, but then when something goes wrong, you ask "why did they make that decision?" And then you notice a very AI-looking comment next to the code, and it clicks: they didn't make that decision, they didn't choose between the trade-offs, the AI did.

I've seen weird timezone decisions, weird sorting, insane error-catching theatre, changes to parts of the code it shouldn't even have looked at, let alone changed. In the FE sphere it's got no clue how to use useEffect or useMemo, it litters every div with tons of unnecessary CSS, and it can't split up code for shit. In the backend world it's insanely bad at following prior art on things like what the primary key field is, what the usual sorting priority is, how it's supposed to use existing user contexts, etc. And the number of times it uses archaic code, from versions of the language 5-10 years ago, is really frustrating. At least with TypeScript + C#.

With C#, if you see anything that doesn't use the simpler file-scoped namespaces or primary constructors, it's a dead giveaway that it was written with AI. | | |
| ▲ | bombcar an hour ago | parent [-] | | I feel this is the key: three years ago everyone on HN could define "technical debt", explain why it was bad, and say they hated it but had to live with it. We've now built a machine that produces something that can't even be called "technical debt" anymore, perhaps "technical usury", and we're all supposed to love it. Most coders know that support and maintenance of code will far outlast and outweigh the effort required to build it. |
|
|
|
|
| ▲ | thejazzman 3 hours ago | parent | prev | next [-] |
| Shhhhh stop telling them! We don’t need more competition :) |
|
| ▲ | hatmanstack 3 hours ago | parent | prev | next [-] |
| This, but I think everybody that's awake knows it already. I'm still not a fan of this project regardless; it's polishing a turd. |
| |
|
| ▲ | mbreese 2 hours ago | parent | prev | next [-] |
| I've said it before here, but my mind was swayed after talking with a product manager about AI coding. He offhandedly commented that he'd "been vibe coding for years, just with people". He wasn't thinking much about it at the time, but it resonated with me. To some, agents are tools. To others, they are employees. |
| |
| ▲ | IrishTechie an hour ago | parent [-] |
| I had a similar realisation in IT support: I regularly discover that the answers I get from junior- to mid-level engineers need to be verified, are based on false assumptions, or are wildly wrong, so why am I being so critical of LLM responses? Hopefully some day they'll make it to senior-engineer levels of reasoning, but in the meantime they're just as good as many on the teams I work with, and so they have their place. |
|
|
| ▲ | sph 2 hours ago | parent | prev [-] |
| Produce this "far more tested, edge-cased, and thoughtful" vibe-coded software for us to judge, please. All I hear are empty promises of better software, and in the same breath the declaration that quality is overrated and that time-to-ship is why vibe-coding will eventually win. It's either one or the other. |