rdrd 15 hours ago

First you have to be very specific about what you mean by idiomatic code: what's idiomatic for you is not idiomatic for an LLM. Personally I would approach it like this:

1) Thoroughly define, step by step, the code conventions/style you want it to adhere to and how you (it) should approach the task. Do not reference entire files like "produce it like this file"; that's too broad. The document should include small, simple examples of "Good" and "Bad" idiomatic code as you define it (see the sketch after this list). The smaller the initial step-by-step guide and code conventions, the better: context is king with LLMs, and you need to give it just enough to work with but not so much that it causes confusion.

2) Feed it to Opus 4.5 in planning mode, ask it to follow up with any questions or gaps, and have it produce a final implementation plan.md. Review this, tweak it, remove any fluff, and get it down to the bare bones.

3) Run the plan.md through a fresh agentic session and see what the output is like. Where it's not quite correct, add those clarifications and guardrails to the original plan.md and repeat this step.
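
For illustration, a single entry in that conventions document might look like this (a hypothetical TypeScript sketch; the specific rule about returning results instead of throwing is invented purely to show the "Good"/"Bad" format):

    // Convention: return typed results instead of throwing for expected
    // failures. (Hypothetical rule, purely to illustrate the format.)

    // Bad: throws for a failure the caller is expected to handle.
    function parsePortBad(raw: string): number {
      const port = Number(raw);
      if (Number.isNaN(port)) throw new Error(`invalid port: ${raw}`);
      return port;
    }

    // Good: an explicit result type makes the failure part of the contract.
    type Result<T> = { ok: true; value: T } | { ok: false; error: string };

    function parsePort(raw: string): Result<number> {
      const port = Number(raw);
      if (!Number.isInteger(port) || port < 1 || port > 65535) {
        return { ok: false, error: `invalid port: ${raw}` };
      }
      return { ok: true, value: port };
    }

Keeping each entry this small lets the model pattern-match the convention without burning context on whole files.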

What I absolutely would NOT do is ask for fixes or changes if it does not one-shot it. I would revise plan.md until it gets you 99% of the way there on the first go, and do the final cleanup by hand. You will bang your head against the wall attempting to guide it like you would a junior developer (at least for something like this).

XenophileJKO 11 hours ago | parent [-]

With the current generation of models, it really isn't necessary to restart every time you don't like something. Certainly this depends on the model; most of my recent experience is with Claude Sonnet/Opus and GPT-5.x.

Very often, when reviewing code, I think of better abstractions or enhancements and just continue asking for refactors inline. Very rarely does the model fall off the rails.

I suppose if your unit of work were very large you might have more issues, but large units of work generally have other problems as well.

rdrd 11 hours ago | parent [-]

Yes, I too have found newer models (mostly Opus) to be much better at iterative development. That said, when I have a very strong architectural steer on what the output should be [mostly for production code, where I thoroughly review absolutely everything], it's better to have a documented spec with everything covered than to try to clean things up via an agent conversation. In my team we keep all the plan.mds for a feature; before AI tooling we created and revised these plans in Confluence, so to some degree reworking the plan is an artefact of the previous process rather than necessarily a best practice.

XenophileJKO 9 hours ago | parent [-]

Understandable. Certainly my style is not applicable to everyone. I tend to "grow" my software more organically, usually because the optimal structure isn't evident until you are actually looking at how all the contracts fit together or what dependencies are needed. Adding a lot of plans/documentation up front just slows me down.

I tend to create a very high level plan, then code systems, then document the resulting structure if I need documentation.

This works well for very iterative development, where I'm changing contracts as I realize the weak points of the current setup.

For example, I was using inheritance for specialized payloads in a pipeline, then realized that if I wanted to attach policies/behaviours to them as they flow through the pipeline, I was better off changing the whole thing to a single payload type with a bag of attached aspects.
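
Roughly this kind of refactor (a hypothetical TypeScript sketch; every name here is invented for illustration, not the actual code):

    // Before: specialization via inheritance. Attaching a new behaviour
    // (say, retries) to AuditedPayload would force yet another subclass.
    abstract class Payload { constructor(public id: string) {} }
    class AuditedPayload extends Payload { auditLog: string[] = []; }

    // After: one payload type carrying a bag of aspects; pipeline stages
    // can attach policies/behaviours as the payload flows through.
    interface Aspect { readonly kind: string; }

    class RetryPolicy implements Aspect {
      readonly kind = "retry";
      constructor(public maxAttempts: number) {}
    }

    class PipelinePayload {
      private aspects = new Map<string, Aspect>();
      constructor(public id: string) {}
      attach(a: Aspect): this { this.aspects.set(a.kind, a); return this; }
      get<T extends Aspect>(kind: string): T | undefined {
        return this.aspects.get(kind) as T | undefined;
      }
    }

    // A stage attaches a policy without the payload's class changing:
    const p = new PipelinePayload("job-42").attach(new RetryPolicy(3));
    const retry = p.get<RetryPolicy>("retry"); // maxAttempts === 3

The payoff is that new behaviours become data attached at runtime rather than new branches in a class hierarchy.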

Often those designs are not obvious when making the initial architectural plan. So I approach development using AI in much the same way: Generate code, review, think, request revision, repeat.

This really only applies when establishing the architecture, though, which is generally the hardest part. Once you have an example, you can mostly one-shot new instances or minor enhancements.