Rochus 5 days ago
I'm pretty good at prompting, and I successfully use Perplexity (mostly with Claude Sonnet 4) to develop concepts, sometimes in the same session extended over several days. I think its user interface is far superior to Claude.ai's. My hope was that the newer Claude Opus 4.1 would be much better at solving complicated coding tasks, which doesn't seem to be the case. To try it I had to subscribe to claude.ai. In practice I didn't see much difference in performance, but a much worse UI and availability experience. When it comes to developing a complex topic in a factual dialogue, Claude Sonnet Thinking seems to me even more suitable than Claude Opus.
epolanski 5 days ago | parent
I'll be more detailed in my second reply.

1) Your original post asks a lot, if not too much, of the LLM. Your expectations are so high that getting anywhere near decent results would require a very detailed prompt (if not several spec documents), and your conclusion stands: it might be faster to just do it manually. That's the state of LLMs today. Your post neither hints at such detailed, laborious prompting nor recognizes that you asked too much of the tool, which suggests you're not yet comfortable with its limitations. You're still exploring what it can and can't do. But that also implies you're not yet an expert.

2) The second sign that you're not as comfortable with the tools as you think is context management. 2-3k lines of code is far too much output to expect good results from (this also ties into the quality of the prompt, the guidelines and code practices provided, etc.).

3) Neither 1 nor 2 is a criticism of your conclusions or opinions; if anything, they confirm your point that LLMs are not there yet. What I disagree with is the rush to conclude from your experience that AI coding provides zero net benefit. Instead of settling on what it could do (help with planning, writing a spec file, writing unit tests, producing the more boilerplate-y parts of the code) and using the LLM to reduce friction (and thus provide a net benefit), you essentially asked it to replace you and found out the obvious: LLMs cannot handle non-trivial business logic yet, and even when they can, the results are nowhere near satisfactory. But that doesn't mean AI-assisted coding is useless, or that the net benefit is zero or negative; it only becomes so when the expectations placed on the tool are too big and the information provided is either too sparse to produce consistent results or so large that context itself becomes the problem.