logicchains 12 hours ago

>I have been using it for coding for some time, but I don't think I'm getting much value out of it.

I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.
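For illustration (a made-up example, not one of my real prompts - the file, function, and numbers are all hypothetical), a task description in this style might look like:

    In src/billing/invoice.py, add a late-fee calculation to Invoice.total():
    - 1.5% of the outstanding balance per 30 days overdue, capped at 10%.
    - No fee for invoices less than 30 days overdue.
    - Round to 2 decimal places.
    - Add unit tests covering the cap, the under-30-days case, and rounding.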

surgical_fire 10 hours ago

> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.

I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need to conserve.

I extract more value from an LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write and suggest improvements, try to spot bugs and so on (which I also find useful).
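To make "glue code" concrete, here's an invented example - the endpoint and names are hypothetical, not from any real API:

    # Hypothetical glue code wrapping a JSON-over-HTTP API (endpoint invented).
    import json
    import urllib.request

    def fetch_user(user_id: int) -> dict:
        """Fetch a user record from a hypothetical REST endpoint."""
        url = f"https://api.example.com/users/{user_id}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

Tedious to write by hand, trivial to review - exactly the kind of thing worth handing off.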

Reviewing the code it writes, fixing the inevitable mistakes, and making adjustments takes time too, and that will always be a required step given the nature of LLMs.

Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute - and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I ask it to do, the less precise it tends to be. I get the best results by being more granular in what I ask of it.

My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.

Of course, you can also "vibe code" (what awful terminology) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code of some minimum quality.

logicchains 6 hours ago

>Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Yep, but this takes much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation - all that busy-work. LLMs make mistakes, but with Gemini 2.5 Pro at least, most of these are due to under-specification, and you get better at specifying over time. It's as if the LLM were a C compiler developer and you were writing the C spec: anything you don't specify clearly is undefined behaviour, and there's no guarantee the LLM will implement it sensibly.
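A toy sketch of what I mean by under-specification (the instruction and both implementations are hypothetical): suppose the spec just says "remove duplicates from a list of user IDs" without saying whether order matters. Both of these satisfy it, yet behave differently:

    # Spec: "remove duplicates from a list of user IDs."
    # Under-specified: it never says whether input order must be preserved.

    def dedupe_sorted(ids):
        # One valid reading: return the unique IDs in sorted order.
        return sorted(set(ids))

    def dedupe_stable(ids):
        # Another valid reading: keep the first occurrence of each ID.
        seen = set()
        return [i for i in ids if not (i in seen or seen.add(i))]

    print(dedupe_sorted([3, 1, 3, 2]))  # [1, 2, 3]
    print(dedupe_stable([3, 1, 3, 2]))  # [3, 1, 2]

Tighten the spec and the ambiguity disappears.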

I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

surgical_fire an hour ago

> I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

That's always the easy cop-out for whoever wants to hype AI. You can preface it with "I'd go so far as to say", but that is just a thin cover for the actual meaning.

Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity also takes time.

Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.

dwaltrip 5 hours ago

Do you have any example prompts showing the level of specificity and the difficulty of the tasks you usually do? I oscillate between finding these tools useful and finding it annoying to get output that is actually good enough.

How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?

rurp 5 hours ago

If you ever end up working on large, complicated code bases, you'll likely have an easier time relating to the sentiment. LLMs are vastly better at small greenfield coding than at working on large projects. I think 100% of the people I've heard rave about AI coding are using it for small, isolated projects. Among people who work on large projects, sentiment seems to range from "mildly useful" to "provides negative value".