▲ | epolanski 5 days ago |
Imho your post summarizes 90% of the posts I see about AI coding on HN: not understanding the tools, not understanding their strengths and weaknesses, and not being good at prompting or context management, yet forming strong(ish) opinions. If you don't know what they are good at and how to use them, of course you may end up with mixed results, and yes, you may waste time.

That's a criticism I also have of AI super-enthusiasts (especially vibe coders, though you won't find many here): they often confuse the fact that LLMs can one-shot 80% of a solution with the idea that LLMs are 80% of the way there, whereas the Pareto principle applies well to software development, where it's the hardest 20% that's going to prove difficult.
▲ | Sebalf a day ago | parent | next [-]
Vibe coder who reads Hacker News chiming in here. I think the true usefulness of LLMs for coding is often lost on the usual audience of this website, because most people here have extremely high standards for what they expect LLMs to accomplish.

But take people like me: I am an MD who was always into computers but ended up going down a separate series of life decisions and could never find the time or energy to actually learn to code. When GPT-4 arrived, I started using it for a medically related coding hobby project, which eventually escalated into an ongoing PhD.

Now, the fact is that this whole thing would just never have happened without LLMs. I would never even have thought of starting such a project, and if I had, I wouldn't have had the time, and would never have made any progress even if I did. Vibe coding enabled me to do something entirely outside the scope of my previous capabilities. And the reality is that if I hadn't been able to do everything myself (down to installing hardware and managing the servers I am using), the project as a whole just wouldn't have happened.

The code I produce isn't going into production anywhere, it is only used for my particular purposes, and it is not exposed to the web in any way, so typical LLM issues like security are a non-issue. And while my understanding of what my code is actually doing is pretty rudimentary (basic syntax conventions, for instance, are something I never bothered to learn), this doesn't really matter in practice. If it works, it works.
▲ | Rochus 5 days ago | parent | prev | next [-]
I'm pretty good at prompting, and I successfully use Perplexity (mostly with Claude Sonnet 4) to develop concepts, sometimes with the same session extended over several days. I think its user interface is far superior to Claude.ai's. My hope was that the newer Claude Opus 4.1 would be much better at solving complicated coding tasks, which doesn't seem to be the case. To try it I had to subscribe to claude.ai, and I didn't see much difference in performance, just a much worse UI and availability experience. When it comes to developing a complex topic in a factual dialogue, Claude Sonnet Thinking seems to me even more suitable than Claude Opus.
| |||||||||||||||||||||||
▲ | mihaaly 5 days ago | parent | prev | next [-]
How do you know your humble opinion is right about who knows which tool, and how deeply? Even if you really do know better than users themselves how much they know, isn't the tool simply not ready for power use if it is so easy to misuse? When there is too much tweaking and adapting of users to the needs of the tool (rather than the other way around), there is little point in using it, which is a bit of the sickness of modern-day computing: 'with computers you can solve problems lightning fast that you wouldn't have without them'.
| |||||||||||||||||||||||
▲ | cztomsik 4 days ago | parent | prev [-]
The situation has improved a little over the last few months, but LLMs are still only barely usable in languages like C/C++/Zig, and it's not about prompting. I would say LLMs are usable for JS/Python: while the code is not always what I'd write myself, it can be used and improved later (unless you are working on a perf-sensitive JS app, in which case it's useless again). It might also have something to do with GC, because I suppose the big labs are doing some GRPO over synthetically generated/altered source code (I would!), but obviously doing that in C++ is much more challenging, and I'd expect Rust to be straight-up impossible.
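For concreteness, here is a minimal sketch of what that speculated pipeline could look like: mutate working source code, keep only mutants that a test suite verifiably rejects, and use the (broken, fixed) pairs as a reward signal. Everything here (the names, the single mutation type) is hypothetical and illustrative, not any lab's actual pipeline:

    # Hypothetical synthetic-data sketch: inject a verifiable bug into
    # working Python code so (buggy, fixed) pairs can feed RL training.
    import ast
    import copy
    import random

    class SwapComparison(ast.NodeTransformer):
        """Flip comparison operators (< to <=, etc.) to plant subtle bugs."""
        SWAPS = {ast.Lt: ast.LtE, ast.LtE: ast.Lt, ast.Gt: ast.GtE, ast.GtE: ast.Gt}

        def visit_Compare(self, node):
            for i, op in enumerate(node.ops):
                swap = self.SWAPS.get(type(op))
                if swap and random.random() < 0.5:
                    node.ops[i] = swap()
            return node

    def make_training_pair(source, tests_pass):
        """Return (buggy, original) if the mutant verifiably fails the tests.

        tests_pass(code) -> bool runs the project's test suite. In Python this
        check is cheap; in C++ or Rust every mutant must also get past the
        compiler or borrow checker first, which is one plausible reason the
        same trick is harder there.
        """
        tree = ast.parse(source)
        baseline = ast.unparse(tree)
        mutant = ast.unparse(SwapComparison().visit(copy.deepcopy(tree)))
        if mutant != baseline and not tests_pass(mutant):
            return mutant, baseline  # verifiably broken -> usable reward signal
        return None

The test oracle is what makes the pair trustworthy: without an automatic pass/fail check, a mutation is just noise, which is exactly the verification step that gets expensive in ahead-of-time-compiled languages.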