dgunay 3 days ago

Letting go of the particulars of the generated code is proving difficult for me. I hand edit most of the code my agents produce for taste even if it is correct, but I feel that in the long term that's not the optimal use of my time in agent-driven programming. Maybe the models will just get so good that they know how I would write it myself.

bilekas 3 days ago | parent [-]

I would argue this approach will help you in the long term with code maintainability, which I feel will be one of the biggest issues down the line with AI-generated codebases as they get larger.

monkpit 3 days ago | parent [-]

The solution is to codify these sorts of things in prompts and tool use and gateways like linters etc. You have to let go…

dgunay 5 hours ago | parent | next [-]

I have been doing this, and it does sort of work, but the problem is that for things that can't easily be turned into deterministic lints, prompting isn't 100% reliable. The further you go against the LLM's training data, the more likely it is to forget to do it.

bilekas 3 days ago | parent | prev [-]

What do you mean, "you have to let go"?

I use some AI tools and sometimes they're fine, but I won't hand everything over to an AI in my lifetime, and not out of some fear or anything; even purely as a hobby, I like creating things from scratch and working out problems. Why would I need to let that go?

jaggederest 3 days ago | parent [-]

Well, the point is, if it's not a hobby, you have to encode your preferences in linters and formatters, rather than manually messing with the output.

It's really freeing to say "Well, if the linter and the formatter don't catch it, it doesn't matter". I always update lint settings (writing new rules if needed) based on nit PR feedback, so the codebase becomes easier to review over time.
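To make that concrete, here's a toy sketch of turning a recurring nit into a deterministic check (not anyone's actual setup in this thread; the "use the logger, not bare print()" preference is just an assumed example, and in practice you'd write this as a rule for your real linter rather than a standalone script):

```python
import ast

def find_print_calls(source: str) -> list[int]:
    """Return the line numbers of bare print() calls in Python source.

    Suppose a nit that keeps coming up in PR review is "use the
    logger, not print()". Encoding it as an AST check like this makes
    the preference deterministic, so the agent (or a human) gets the
    same feedback every time instead of relying on prompt adherence.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # A "bare" print is a Call whose function is the plain name
        # `print` (not e.g. an attribute like console.print).
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            hits.append(node.lineno)
    return hits

sample = "import logging\nprint('debug')\nlogging.info('ok')\n"
print(find_print_calls(sample))  # line numbers where print() appears
```

The same idea scales to a real custom rule in ESLint, Ruff, clippy, etc.; the point is that once the preference lives in a tool, the review loop enforces it instead of your memory.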

It's the same principle as any other kind of development - let the machine do what the machine does well.