cmrdporcupine | 7 days ago
Honestly it forces you -- rightfully -- to step back and be the one doing the planning. You can let it do the grunt coding, and a lot of the low-level analysis and testing, but you absolutely need to be the one in charge of the design. Frankly, it gives me more time to think about the bigger picture within the time I have for a task, and I like that side of things. There's definitely room for a massive amount of improvement in how the tool presents changes and suggestions to the user, though. It needs to be far more interactive.
mock-possum | 7 days ago
That’s my experience as well. I’m the one with the mental model; my responsibility is using text to communicate that model to the LLM, in language it will recognize from its training data, so it can generate code to follow suit. Prompting LLMs for codegen really isn’t much different from querying search engines: you have to understand how to ‘speak the language’ of the corpus being searched in order to find the results you’re looking for.
micromacrofoot | 7 days ago
Yes, this is exactly it: you need to talk to Claude about code at a design/architecture level... just telling it what you want the code to output will get you stuck in failure loops. I keep saying it and no one really listens: AI really is advanced autocomplete. It's not reasoning or thinking. You will use the tool better if you understand what it can't do. It can write individual functions pretty well; stringing a bunch of them together? Not so much. It's a good tool when you use it within its limitations.