▲ | raphinou 9 hours ago |
I usually ask it to build a feature based on a specification I wrote. If the result isn't exactly right, it's often faster to edit it myself than to iterate with the AI, which has sometimes put me in an endless loop of correction requests. Have you encountered this too?
▲ | prox 7 hours ago | parent | next [-]
I only use it as a second opinion: I have a pretty good idea of what I want and how to do it, and I ask for input on what I've written. That has given me the best results so far.
▲ | notarobot123 8 hours ago | parent | prev | next [-]
Have you tried a more granular strategy - smaller chunks and more iterative cycles?
▲ | pdimitar 7 hours ago | parent | prev [-]
Yes, I have encountered it. This only happens if you want it to one-shot things, or if you fall for the false belief that "it's so close, we just need to correct these three things!". Narrowing the focus, adding constraints, and guiding it more closely made the LLM agent much better at producing what I need. It boils down to me not really writing the code. Using LLMs has actually sharpened my architectural and software design skills; it made me think harder and deeper at an earlier stage.