logicchains 12 hours ago
> I have been using it for coding for some time, but I don't think I'm getting much value out of it.

I find this perspective so hard to relate to. LLMs have completely changed my workflow: the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic codebases.
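For what it's worth, here is a minimal sketch of what "multiple such LLM tasks run simultaneously" can look like. It assumes a hypothetical non-interactive agent CLI named `llm-agent` with `--workdir` and `--task` flags, and per-task checkout directories; the command, its flags, and the directory layout are all invented for illustration, and any real agent with a batch mode could be substituted.

```python
# Sketch: dispatching several independent LLM coding tasks in parallel.
# `llm-agent` is a hypothetical CLI (invented for this example) that
# takes a task description and edits the repo checked out in --workdir.
import subprocess
from concurrent.futures import ThreadPoolExecutor

tasks = [
    "Add input validation to the /users endpoint and cover it with tests",
    "Replace the ad-hoc retry loop in fetch.py with exponential backoff",
    "Write unit tests for the date-parsing helpers in utils/dates.py",
]

def run_task(task: str, workdir: str) -> str:
    # Each task gets its own checkout so the agents can't clobber
    # each other's edits; human review happens per-checkout afterwards.
    result = subprocess.run(
        ["llm-agent", "--workdir", workdir, "--task", task],
        capture_output=True, text=True,
    )
    return result.stdout

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    workdirs = [f"worktree-{i}" for i in range(len(tasks))]
    for output in pool.map(run_task, tasks, workdirs):
        print(output)  # the "reviewing the code it wrote" step starts here
```

The parallelism only pays off when the tasks are independent of each other; the review step stays serial either way.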
surgical_fire 10 hours ago
> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.

I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need to save.

I get more out of an LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I also started asking it to review the code I write, suggest improvements, try to spot bugs, and so on, which I find useful as well.

Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step given the nature of LLMs. Running tasks simultaneously doesn't help much unless your instructions are so general that each task takes a long time to execute, and then the bottleneck is your ability to review all the output anyway. I also find that the broader the scope of what I ask for, the less precise the result tends to be; I have the most success when I keep my requests granular.

My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated. Of course, you can also "vibe code" (what awful terminology) and never inspect the output. I find that unacceptable in a professional setting, where you are expected to release code of some minimum quality.
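The "ask it to review my code" pattern mentioned above is easy to wire up. Here is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and `review` helper are illustrative choices on my part, not anything the comment prescribes.

```python
# Sketch of the LLM-as-reviewer pattern: feed a source file to a model
# and ask for bugs and improvements. Model choice and prompt are
# placeholders; swap in whatever provider/model you actually use.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review(path: str) -> str:
    with open(path) as f:
        source = f.read()
    prompt = (
        "Review the following code. Point out likely bugs first, then "
        "suggest concrete improvements. Be specific.\n\n"
        f"```\n{source}\n```"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review(sys.argv[1]))
```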
rurp 5 hours ago
If you ever end up working on large, complicated codebases you'll likely have an easier time relating to the sentiment. LLMs are vastly better at small greenfield coding than at working on large projects. Essentially everyone I've heard rave about AI coding is using it on small, isolated projects; among people who work on large projects, sentiment seems to range from mildly useful to providing negative value.