brap 5 days ago

My process is basically

1. Give it requirements

2. Tell it to ask me clarifying questions

3. When no more questions, ask it to explain the requirements back to me in a formal PRD

4. I criticize it

5. Tell it to come up with 2 alternative high level designs

6. I pick one and criticize it

7. Tell it to come up with 2 alternative detailed TODO lists

8. I pick one and criticize it

9. Tell it to come up with 2 alternative implementations of one of the TODOs

10. I pick one and criticize it

11. Back to 9

I usually “snapshot” outputs along the way and return to them to reduce useless context.
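
Roughly, in script form, the whole loop (snapshots included) looks something like the sketch below. This is only a minimal illustration assuming an OpenAI-style chat completions client; the model name, the requirements.md file, the "DONE" convention, the placeholder criticisms, and the naive TODO parsing are all stand-ins for whatever you actually use.

    # Sketch of the staged workflow: requirements -> questions -> PRD ->
    # designs -> TODOs -> implementations, picking and criticizing at each level.
    from openai import OpenAI

    client = OpenAI()

    def ask(history, prompt):
        """Send the next user turn, append the reply to the history, return it."""
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    history = [{"role": "system", "content": "You are a careful software designer."}]

    # Steps 1-3: requirements -> clarifying questions -> formal PRD
    ask(history, "Requirements:\n" + open("requirements.md").read())
    print(ask(history, "Ask me clarifying questions. Say DONE when you have none left."))
    # ...answer its questions until it says DONE...
    prd = ask(history, "Now restate the requirements back to me as a formal PRD.")

    # "Snapshot" the context here so later detours can be rolled back cheaply.
    after_prd = list(history)
    # history = list(after_prd)  # roll back to the PRD snapshot when a detour goes nowhere

    # Steps 4-8: alternatives at each level; I pick one and criticize it.
    ask(history, "My criticism of the PRD: ...")                  # placeholder critique
    ask(history, "Propose 2 alternative high-level designs.")
    ask(history, "Go with design A. My criticism: ...")
    todos = ask(history, "Propose 2 alternative detailed TODO lists for that design.")
    ask(history, "Use list 1. My criticism: ...")

    # Steps 9-11: loop over TODO items, two candidate implementations each.
    for item in todos.splitlines():
        if not item.strip().startswith("-"):      # naive: assumes a dashed list
            continue
        ask(history, f"Show 2 alternative implementations of this TODO: {item}")
        ask(history, "Take option 1, with these changes: ...")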

This is what produces the most decent results for me; they aren’t spectacular, but at the very least they can serve as a baseline for my own implementation.

It’s very time-consuming, and 80% of the time I end up wondering if it would’ve been quicker to just do it all by myself right from the start.

codingdave 5 days ago | parent | next [-]

Definitely sounds slower than doing it yourself.

I am falling into a pattern of treating AI coding like a drunk mid-level dev: "I saw those few paragraphs of notes you wrote up on a napkin, stayed up late drinking on Saturday night, and spat out this implementation. You like?"

So I can say to myself, "No, do not like. But the overall gist at least started in the right direction, so I can revise it from here and still be faster than had I done it myself on Monday morning."

jvanderbot 5 days ago | parent [-]

The most useful thing I've found is "I need to do X, show me 3 different popular libraries that do it". I've really limited my AI use to "Lady's Illustrated Primer" mode, especially after some bad experiences with AI code from devs who should know better.

z3c0 4 days ago | parent [-]

I don't even frame my requests conversationally. They usually read like brief demands, sometimes just comma-delimited technologies followed by a goal. Works fine for me, but I also never prompt for anything I don't already understand how to do myself. Keeps the cart behind the horse.

scuff3d 4 days ago | parent [-]

I've started putting in my system prompt "keep answers brief and don't talk in the first/second person". Gets rid of all the annoying sycophancy and stops it from going on for ten paragraphs. I can ask for more detail when I need it.
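
With an OpenAI-style chat API, that instruction just lives in the system message. A rough sketch only; the exact wording, model name, and example question are placeholders for whatever you actually run:

    # Sketch: the brevity rules go in the system message, every later turn inherits
    # them, and you can still ask for more detail in a follow-up when you want it.
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "system", "content": (
            "Keep answers brief. Do not speak in the first or second person. "
            "No preamble, no recap of what you are about to do."
        )},
        {"role": "user", "content": "Fastest way to deduplicate a large CSV by one column?"},
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)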

rco8786 5 days ago | parent | prev | next [-]

> It’s very time-consuming, and 80% of the time I end up wondering if it would’ve been quicker to just do it all by myself right from the start.

Yes, this. Every time I read these sorts of step-by-step guides to getting the best results with coding agents, it all just sounds like boatloads of work that erase the efficiency margins AI is supposed to bring in the first place. And anecdotally, I've found that to be true in practice as well.

Not to say that AI isn't useful. But I think knowing when and where AI will be useful is a skill in and of itself.

Leynos 5 days ago | parent [-]

At least for me, I can have five of these processes running at once. I can also use Deepresearch for generating the designs with a survey of the literature. I can use NotebookLM to analyse the designs. And I use Sourcery, CodeRabbit, Codex and Codescene together to do code review.

It took me a long time to get there with custom CLI tools and browser userscripts. The out-of-the-box tooling is very limited unless you are willing to pay big £££s for Devin or Blitzy.

htrp 4 days ago | parent [-]

Paid big bucks for Devin... it was still limited and not very good.

jwrallie 5 days ago | parent | prev | next [-]

I think I’m working at lower levels, but usually my flow is:

- I start to build or refactor the code structure myself, creating the basic interfaces, or skip to the next step when they already exist. I'll use LLMs as autocomplete here.

- I write down the requirements and tell it which files are the entry points for the changes.

- I do not tell the agent my final objective, only the next step that gets me closer to it, one at a time.

- I watch carefully and interrupt the agent as soon as I see something going wrong. At that point I either start over, if my assumptions about the requirements were wrong, or simply correct the agent's course of action if the agent was the one at fault.

Most of the issues I've had in the past came from writing down a broad objective that required too many steps at the outset. Agents cannot judge correctly when they have finished something.

stavros 5 days ago | parent | prev | next [-]

I have a similar, though not as detailed, process. I do the same as you up to the PRD, then give it the PRD, tell it the high-level architecture, and ask it to implement the components the way I want them.

It's still time-consuming, and it probably would be faster for me to do it myself, but I can't be bothered manually writing lines of code any more. Maybe I should switch to writing code with the LLM function by function, though.

bluefirebrand 4 days ago | parent | next [-]

> but I can't be bothered manually writing lines of code any more. Maybe I should switch to writing code with the LLM function by function, though.

Maybe you should consider a change of career :/

stavros 4 days ago | parent [-]

Why?

scuff3d 4 days ago | parent | prev [-]

That's like a chef saying they can't be bothered to cook...

jononor 4 days ago | parent | next [-]

Doesn't a head chef in a restaurant delegate a lot of the cooking to other people? And of course they use plenty of tools, and pre-prepared components too, often from external suppliers.

stavros 4 days ago | parent | prev [-]

If the final dish is excellent, does it matter if the chef made it themselves, or if they instructed the sous-chef how to make it?

scuff3d 4 days ago | parent | prev [-]

Yeah, sounds like it would have been far quicker to use the AI to give you a general overview of approaches/libraries/language features/etc., and then do the work yourself.