tabs_or_spaces 5 hours ago

My workflow is a bit different.

* I ask the LLM for its understanding of a topic or an existing feature in the code. It's not really planning; it's more like probing the model's understanding first

* Then, based on its understanding, I can decide how large or small to scope the task for the LLM

* An LLM showing good understanding can handle a big task fairly well.

* An LLM showing bad understanding still needs careful prompting before it gets things right

* What helps a lot is reference implementations. Either I have existing code that serves as the reference, or I ask for a reference and review it. (A rough sketch of the first step is below.)
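
To make that first bullet concrete, here's a minimal sketch of the "understanding probe" step. It assumes the OpenAI Python SDK purely for illustration; the helper name, prompt wording, and model choice are my own hypothetical picks, not something OP prescribed, and any chat-completion API would do.

```python
# Minimal sketch of "ask for understanding before scoping the task".
# Assumes the OpenAI Python SDK; probe_understanding, the prompt
# wording, and the model name are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()

def probe_understanding(topic: str, code: str) -> str:
    """Ask the model to explain its understanding before any work is scoped."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model; substitute whatever you use
        messages=[{
            "role": "user",
            "content": (
                f"Before we change anything: explain your understanding of "
                f"{topic} in this code. Do not propose changes yet.\n\n{code}"
            ),
        }],
    )
    return resp.choices[0].message.content

# You read the answer yourself, then decide scope by hand:
# a solid explanation -> hand over a big task in one go;
# a shaky one -> smaller, tightly prompted steps, ideally
# with a reference implementation attached.
```
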

A few folks at my work do it OP's way, but here are my arguments against it:

* Nobody measures the amount of slop in the plan; we only judge the implementation at the end

* It's still non-deterministic. Folks will have different experiences with OP's method, and if Anthropic updates Claude's model, OP's suggestions go stale by getting either better or worse. We never evaluate when things improve; we only focus on what went wrong.

* It's very token-heavy. LLM providers insist you spend many tokens to get the task done, and it's in their interest that you do. For me, an LLM should be powerful enough to understand context with minimal tokens, given the investment that went into model training.

Both ways get the task done; it just comes down to my preference for now.

For me, an LLM is: model training + post-processing + input tokens = output tokens. I don't think that's the best way to do non-deterministic software development. We're still trying to shoehorn "old" deterministic programming into a non-deterministic LLM.