▲ | preommr 2 days ago |
> A repeated trend is that Claude Code only gets 70-80% of the way, which is fine and something I wish was emphasized more by people pushing agents.

I have been pretty successful at using LLMs for code generation. I have a simple rule: something is either 90%>ai or none at all (excluding inline completions and very obvious text editing). The model has an inherent understanding of some problems due to its training data (e.g. setting up a web server with little to no deps in Golang), which it can do with almost 100% certainty. It's really easy to blaze through those in a few minutes, and then I can set up the architecture for some very flat code flows. This can genuinely improve my output by 30%-50%.
▲ | MPSimmons 2 days ago | parent |
Agree with your experiences. I've also found that if I build a lightweight skeleton of the program's structure first, it does a much better job. Ensuring that it does a full-fledged planning (non-executing) step before it starts to change things also leads to good results. I have been using Cline in VSCode, and I've been enjoying it a lot.
▲ | randmeerkat 2 days ago | parent |
> I have a simple rule that something is either 90%>ai or none at all…

10% is the time it works 100% of the time.