j45 a day ago

Just because we can code something faster or cheaper doesn't mean it's any more likely to be right.

falcor84 a day ago | parent

Arguably it does, because being able to experience something gives you much more insight into whether it's right or not. So being able to iterate quickly many times, continuously updating your spec and definition of done, should help you get to the right solution. To be clear, there is still effort involved, but the effort shifts toward critical evaluation rather than the how.

packetlost a day ago | parent | next

But that's not the only problem.

To illustrate, I'll share what I'm working on now. My company's ops guy vibe coded a bunch of scripts to manage deployments. On the surface, they appear to do the correct thing. Except they don't. The tag for the Docker image is hardcoded in a YAML file and doesn't get updated anywhere unless you do it manually. The docs don't even mention half of the necessary scripts/commands or the implicit setup required for any of it to work in the first place, much less the tags or how any of it actually works. There are two completely different deployment strategies (direct-to-VM with Docker + GCP, and a GKE-based K8s deploy). Neither fully works, and only one has any documentation at all (and that documentation is completely vibed, so it has very low information density).

The only reason I'm able to use this pile of garbage at all is that I already know how all of the independent pieces function and can piece it together, and that's after wasting several hours of "why the fuck aren't my changes having an effect." There are very, very few lines of code that don't matter in well-architected systems, but many that don't in vibed systems. We already have huge problems with overcomplicated crap made exclusively by humans; that's been hard enough to manage.
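To make the tag problem concrete: the manifest pins one image tag, nothing in the scripts ever rewrites it, so pushing a new image changes nothing until someone edits the file by hand. A guard along these lines in CI would have surfaced it immediately (file names, keys, and tags here are all made up for illustration):

    # check_tag.py -- hypothetical CI guard for the hardcoded-tag drift
    # described above. Fails the pipeline if deploy.yaml still pins an
    # image other than the one this build just pushed.
    import sys
    import yaml  # PyYAML

    with open("deploy.yaml") as f:
        pinned = yaml.safe_load(f)["image"]  # e.g. "gcr.io/acme/app:v1.4.2"

    just_pushed = sys.argv[1]  # tag the pipeline just built and pushed
    if pinned != just_pushed:
        sys.exit(f"deploy.yaml pins {pinned}, but CI pushed {just_pushed}")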

Vibe coding consistently gives the illusion of progress by fixing an immediate problem at the expense of piling on crap that obscures what's actually going on and often breaks existing functionality. It's frankly not sustainable.

That being said, I've gotten some utility out of vibe coding tools, but mostly they just save me the mental effort of writing boring shit that isn't interesting, innovative, or enjoyable, which is like 20% of my mental effort and 5% of my actual work. I'm not even going to get started on the context-switching costs. It makes my ADHD brain happy, but I'm confident I'm less productive because of the secondary effects.

dchuk 19 hours ago | parent | next

If you’re able to articulate the issues this clearly, it would take like an hour to “vibe code” away all of these issues. That’s the actual superpower we all have now. If you know what good software looks like, you can rough something out so fast, then iterate and clean it up equally fast, and produce something great an order of magnitude faster than just a few months ago.

A few times a week I’m finding open source projects that either have a bunch of old issues and pull requests, or unfinished todos/roadmaps, and just blasting through all of that and leaving a PR for the maintainer while I use the fork. All tested, all clean, best-practice code.

Don’t complain about the outputs of these tools, use the tools to produce good outputs.

reval a day ago | parent | prev

The post you’re replying to gets this right: lead time is everything. The faster you can iterate, the more likely it is that what you’re doing is correct.

I’ve had a similar experience to what you’re describing. We are slower with AI… for now. Lean into it. Exploit the fact that you can now iterate much faster. Solve smaller problems. Solve them completely. Move on.

lelanthran 3 hours ago | parent | prev

Iteration only matters when the feedback is used to improve.

Your model doesn't improve. It can't.

baq 3 hours ago | parent | next

The magic of test-time inference is that the harness can improve even if the model is static. Every task outcome informs the harness.
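A minimal sketch of that loop (entirely hypothetical: call_model stands in for any frozen model endpoint, run_tests for whatever verification the task allows):

    # Static model, improving harness: each failed attempt feeds its test
    # output back into the next prompt. The model's weights never change;
    # only the harness's accumulated context does.

    def call_model(prompt: str) -> str:
        """Stub for a frozen model; swap in a real inference call."""
        raise NotImplementedError

    def run_tests(candidate: str) -> tuple[bool, str]:
        """Stub: apply the candidate, run the suite, return (passed, log)."""
        raise NotImplementedError

    def solve(task: str, max_attempts: int = 5) -> str | None:
        context = task
        for _ in range(max_attempts):
            candidate = call_model(context)
            passed, log = run_tests(candidate)
            if passed:
                return candidate
            # The improvement lives here, in harness state, not in the
            # model: the next attempt sees what the last one got wrong.
            context += f"\n\nPrevious attempt failed with:\n{log}"
        return None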

thenaturalist 2 hours ago | parent

> The magic

Hilarious that you open with that, given what TAO entails:

- Continuous adaptation makes it challenging to track performance changes and troubleshoot issues effectively.

- Advanced monitoring tools and sophisticated logging systems become essential to identify and address issues promptly.

- Adaptive models could inadvertently reinforce biases present in their initial training data or in ongoing feedback.

- Ethical oversight and regular audits are crucial to ensure fairness, transparency, and accountability.

Not much magic in there if it requires good old human oversight every step of the way, is there?

mountainriver 2 hours ago | parent | prev

Your model can absolutely improve

thenaturalist 2 hours ago | parent

How would that work, barring complete retraining or human-in-the-loop evals?