wordofx 2 days ago
> Most users will just give a vague task like "write a clone of Steam" or "create a rocket" and then blame Claude Code.

This seems like half of HN, given how much HN hates AI. Those who hate it or say it's not useful to them seem to be fighting against it rather than wanting to learn how to use it. I still haven't seen good examples of it not working, even with obscure languages or proprietary stuff.
drzaiusx11 2 days ago
Anyone who has mentored as part of a junior engineer internship program AND has attempted to use current-gen AI tooling will notice the parallels immediately. There are key differences worth highlighting, though. The main one is that with the current batch of genAI tools, the AI's context resets after use, whereas a (good) intern truly learns from prior work. Additionally, as you point out, the language and frameworks need to be part of the training set, since the AI isn't really "learning"; it's just prepopulating a context window for its pre-existing knowledge (token prediction). So YMMV, depending on hidden variables from the training data and weights, which are secret to you, the consumer.

I use Ruby primarily these days, which is solidly in the "boring tech" camp, and most AIs fail to produce useful output that isn't Rails boilerplate. If I did all my IC contributions via directed intern commits, I'd leave the industry out of frustration. Using only AI outputs to produce code changes would be akin to torture (personally).

Edit: To clarify, I'm not against AI use; I'm just saying that with the current generation of tools it's a pretty lackluster experience when it comes to net-new code generation. It excels at one-off throwaway scripts and at making large, tedious refactors less of a drudge. I wouldn't pivot to it as my primary method of code generation until some of the more blatant productivity losses are addressed.
hn_acc1 2 days ago
When its best suggestion (for inline typing) is to bring back a one-off experiment from a different git worktree from 3 months ago that I only needed that one time... it does make me wonder.

Now, it's not always useless. It's GREAT at adding debugging output and knowing which variables I just added and thus want in the debugging output. That does save me time. And it does surprise me sometimes with how well it picks up on my thinking and makes a good suggestion. But I can honestly only accept maybe 15-20% of the suggestions it makes; the rest are often totally different from what I'm working on or trying to do. And it's C++. But we have a very custom library to do user-space context switching, and everything is built on that.
halfcat 2 days ago
> not wanting to learn how to use it

I kind of feel this. I'll code for days and forget to eat or shower. I love it. Using Claude Code is oddly unsatisfying to me. Probably a different skillset, one that doesn't hit my obsessive tendencies, for whatever reason.

I could see being obsessed with some future flavor of it, and I think it would involve a change to the interface, something more visual (gamified?). Not low-code per se, but some kind of mashup of current functionality with graph database visualization (not just node force graphs; something more functional yet more ergonomic). I haven't seen anything that does this well, yet.
LtWorf 2 days ago
If you have to iterate 10 times, that is "not working", since it has already wasted far more time than doing it manually would have to begin with.