logicchains | 6 hours ago
> Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Yep, but this is much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation, all that busy-work. LLMs make mistakes, but with Gemini 2.5 Pro at least, most of these are due to under-specification, and you get better at specification over time. It's like the LLM is a C compiler developer and you're writing the C spec: if you don't specify something clearly, it's undefined behaviour, and there's no guarantee the LLM will implement it sensibly. I'd go so far as to say that if you're not seeing a significant increase in your productivity, you're using LLMs wrong.
surgical_fire | an hour ago
> I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

It's always the easy cop-out for whoever wants to hype AI. You can preface it with "I'd go so far as to say", but that's just a thin cover for the actual meaning.

Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity also takes time.

Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.
dwaltrip | 5 hours ago
Do you have any example prompts at the level of specificity and task difficulty you usually work at? I oscillate between finding LLMs useful and finding it annoying to get output that is actually good enough.

How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?