| ▲ | gradus_ad 3 days ago |
| But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*; that's the nature of AI. Just start using it and you'll see why it has attracted so much hype. *I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense. |
|
| ▲ | we_have_options 3 days ago | parent | next [-] |
| I've been playing with it on weekends for the last few months. 9 out of 10 projects, it's failed. Projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other". Fails. I've been coding for over 20 years. If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that was true, then all I need to do is climb the learning curve to fix that, the curve that you say doesn't exist. |
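[Editor's note: the tmux/vim setup described above amounts to a few commands. This is a hedged sketch, essentially a config fragment, assuming tmux and a `claude` CLI are on PATH; the session name and prompt filename are illustrative, not official.]

```shell
# Two-pane setup: prompts in vim on the left, Claude Code on the right.
tmux new-session -d -s pair 'vim prompt.md'   # detached session, vim in first pane
tmux split-window -h -t pair 'claude'         # horizontal split running the claude CLI
tmux attach -t pair                           # attach to the session
```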
| |
| ▲ | 6DM 3 days ago | parent | next [-] | | It doesn't work if you're treating it like a peer engineer. It only works if you treat it like you're a customer with no concern about how it works behind the scenes. That's what's being asked of me in my last two jobs. Vibe code it; if it's bad, just throw it away and regenerate it because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship it to market. Out of frustration I asked upper management (in my current job): if you want me to use AI like that, then I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I will AI-generate everything starting today, but if I have to take on the risk I won't be able to do this. Their response was that AI generates the code and I'm responsible for reviewing it and making sure it's risk-free. I can see that they're already looking for contractors (with no skin in the game) who are more than willing to run the AI agents and ship vibe code, so I'm at a loss on what to do. | |
| ▲ | hombre_fatal 3 days ago | parent | prev | next [-] | | I've used Claude Code to do everything from vibe-code personal apps including a terminal on top of libghostty to building my perfect desktop environment on NixOS (I'd never used Nix until then). I'm not sure why it isn't working for you. Maybe your expectation is a perfect one-shot or else it has zero value, and nothing in between? But my advice is to switch gears and see the "plan file" as the deliverable that you're polishing over implementation. It's planning and research and specification that tends to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10min of planning. So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file. From there you can read/review/tweak the plan file. Or have it implement it. Or you implement it. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities. I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results. I learned my plan file workflow just from Claude Code having "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists nor what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, not skipping past it to an underspecified implementation. Because once you're done with the plan, then the impl is trivial and repeatable from that plan, even if you wanted to do the impl yourself. I'm way past the point of arguing anything here, just trying to help. | | |
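[Editor's note: for readers who haven't seen one, a "plan file" of the kind described above is just a markdown document the model drafts and you edit. A minimal, entirely made-up sketch, with hypothetical file and feature names, might look like:]

```markdown
# Plan: fix login redirect loop

## Context
- `auth/session.ts` refreshes the token on every page load (hypothetical file).

## Research notes
- The auth library's docs suggest debouncing refreshes (verify before relying on this).

## Steps
1. Add a guard so a refresh runs at most once per minute.
2. Add a regression test that reproduces the redirect loop.

## Out of scope
- Changing the token storage format.
```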
| ▲ | mat_b 3 days ago | parent | next [-] | | > So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.
> From there you can read/review/tweak the plan file. Or have it implement it. Or you implement it. This is exactly the workflow that works very well for me in Cursor (although I don't use their Plan Mode - I do my version of it). If you know the codebase well, this can increase your speed/productivity quite a bit. Not trying to convince naysayers of this; their minds are already made up. Just wanted to chime in that this workflow does actually work very well (been using it for over 6 months). | |
| ▲ | aquariusDue 3 days ago | parent | prev [-] | | The first time I saw something like this in action was in a video about agentic blabla features in VS Code on the official VS Code YouTube channel. Pretty much write a complete and detailed specification, fire away and hope for the best. The workflow kinda clicked for me then, but I still have a hard time adjusting to this potential new reality where it slowly won't make sense to write code "by hand" and we only intervene to make pinpoint changes after reviewing a lot of code. I've been reading a book about the history of math, and at some points in the beginning the author pointed out how some fields undergo a radical change due to some discovery (e.g. quantum theory in physics) and the practitioners in that field inevitably go through this transformation where the generations before and after can't really relate to each other anymore. I'm paraphrasing quite a bit though, so I'll just recommend people check out the book if they're interested: The History of Mathematics by Jacqueline Stedall. And the aforementioned VS Code video, if I remember correctly: https://youtu.be/dutyOc_cAEU?si=ulK3MaYN7_CPO76k | | |
| ▲ | hombre_fatal 2 days ago | parent | next [-] | | I haven't written code by hand since December when Claude Opus 4.5 came out. It was clear that the inflection point arrived where it's at least as good as I am at implementing a plan. But not only that: it had good ideas like making impossible states impossible with a smart union type without being told and without me deeply modeling the domain in my head to derive a system invariant I could encode like that. It was depressing watching all of this unfold over the last few years, but now I'm taking on more projects and delivering more features/value than ever before. That was the reason I got into software anyway: to make good software that people like to use. > the generations before and after can't really relate to each other anymore Yeah, good point. In some ways it's already crazy to me that we used to write code by hand. Especially all the chore work, like migrating/refactoring, that's trivial for even a dumb LLM to do. It kinda feels like a liability now when I'm writing code, kinda like how it feels when the syntax highlighting or type-checker breaks in the editor and isn't giving you live feedback, so you're surprised when it compiles and runs on the first try. I remember having a hard time imagining what it was like for my dad to stub out his software program on paper until his scheduled appointment with the university punch card machine. And then being glad that I could just click a Run button in my editor to run my program. | |
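[Editor's note: the "impossible states impossible" remark above refers to a standard pattern. This is a hedged TypeScript sketch, with invented type and function names, of a discriminated union that makes invalid combinations, like "success with no data", unrepresentable:]

```typescript
// A request can't be both "loading" and carry data: each variant
// only holds the fields that make sense for it.
type RequestState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; message: string };

function describe(state: RequestState): string {
  // The compiler narrows `state` per branch and checks exhaustiveness.
  switch (state.status) {
    case "idle":
      return "not started";
    case "loading":
      return "in flight";
    case "success":
      return `got ${state.data}`;
    case "error":
      return `failed: ${state.message}`;
  }
}
```

A state like `{ status: "loading", data: "x" }` simply fails to type-check, so the invariant is enforced at compile time rather than with runtime checks.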
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | gradus_ad 3 days ago | parent | prev | next [-] | | Did it not work after the first try and you gave up? Did it not produce any usable code that you could hand tweak or build off of? I want to understand your definition of "failed" here. | | |
| ▲ | laserlight 3 days ago | parent [-] | | What's your definition of "working"? Do you consider it working, when you have to put more effort into prompting back-and-forth than writing it the old way? | | |
| ▲ | whateveracct 2 days ago | parent [-] | | I honestly think the people who love Claude were not super proficient coders. That's the only thing I can think of to explain why writing gobs of English and then code reviewing in a loop could be easier than just coding yourself. |
|
| |
| ▲ | bigstrat2003 3 days ago | parent | prev | next [-] | | > If there is no learning curve, why doesn't it work for me? Because LLMs are not actually good at programming, despite the hype. | | |
| ▲ | whateveracct 2 days ago | parent [-] | | I think they are better than a lot of people though, which is where their fans come from. |
| |
| ▲ | skybrian 2 days ago | parent | prev | next [-] | | There definitely is a learning curve. Not sure what you're doing. Are you trying to one-shot it? I think a decent place to start is: given a small web app, give it a bug report and ask it what causes the bug. | |
| ▲ | Kiro 3 days ago | parent | prev [-] | | Failing 9 out of 10 times for such simple tasks is indeed puzzling. I have no idea what you're doing to achieve that but I'm impressed. |
|
|
| ▲ | JohnFen 3 days ago | parent | prev | next [-] |
| > There is no learning curve*, that's the nature of AI. There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right? |
| |
| ▲ | laserlight 3 days ago | parent | next [-] | | “You're holding it wrong.” is the most common response I get, when I talk about problems I had with LLM-assisted coding. | | |
| ▲ | leptons 3 days ago | parent [-] | | You aren't holding it wrong; the truth is AI is a mixed bag, leaning towards a liability. If people really counted all the time they spend coddling the AI, trying again, then trying again and again and again to get a useful output, then having to clean up that output, they would see that the supposed efficiency gains are near zero if not negative. The only people it really helps are people who were not good at coding to begin with, and they will be the ones producing the absolute worst slop because they don't know the difference between good and bad code. AI is constantly trying to introduce bugs into my codebase, and I see it happening in real-time with AI code completion. So, no, you aren't "holding it wrong"; the other people are no different than the crypto-bros who were pushing blockchain into everything and hoping it would stick. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | Imagine you are a JS dev and GitHub comes out with a new search feature that's really good. It lets you use natural language to find open source projects really easily. So whenever you have a new project you check to see if something similar exists. And instead of starting from scratch you start from that and tweak it to fit what you want to do. If you were the type of person who makes tiny toy apps, or you worked on lots of small, already-been-done stuff, you'd love doing this. It would speed you up so much. But if you worked on a big application with millions of users that had evolved into its own snowflake through time and use, you'd get very little from it. I think I probably could benefit from looking at existing open source solutions and modifying them a lot of the time, and I kinda started out doing that at first. But eventually you realize that even though starting with something can save you time, it can also cost you a ton of time, so it's frequently a wash or a net negative. | | |
| ▲ | leptons 2 days ago | parent [-] | | Nothing you described in this comment is only achievable with "AI". I've been able to search for and find open source projects since forever, and fork them and extend them, long before an LLM was a glimmer in Sam Altman's beady eye. | | |
| ▲ | sarchertech 2 days ago | parent [-] | | No, it’s not at all. AI just makes finding it faster. But that’s my point: AI isn’t that different from what you could already do before. Most of us didn’t do things that way before, so maybe programming like that is just a bad idea. |
|
| |
| ▲ | laserlight 3 days ago | parent | prev [-] | | > If people really counted [...] Exactly. I counted and reported my results in a previous thread [0]. [0] https://news.ycombinator.com/item?id=47272913 | | |
| ▲ | leptons 2 days ago | parent [-] | | I've started "racing" Claude when I have a somewhat simple task that I think it should be able to handle. I spend a few minutes writing out detailed instructions, which I already knew because I had to do initial discovery around the problem domain to understand what the goal was supposed to be. It took a while to be thorough enough writing it down for Claude, which is time I did not need to spend if I had just started writing the code myself - I'm sure the AI-bros aren't considering the time it takes just to write down instructions to Claude vs just starting to code. So then Claude starts dissecting the instructions. I start writing some code. After a while Claude is done, and I've written about two or three dozen lines of code. Claude is way off, so I have to think about why and then write more instructions for it to follow. Then I continue coding. After a while Claude is done, and I've written about three dozen more lines of code. Claude is closer this time, but still not right. Round 3 of thinking about how Claude got it wrong and what to tell it to do now. Then I continue coding. After a while Claude is done (yet again), and I've written a lot more code and tested it and it's working as needed. The output Claude came up with is just a little bit off, so I have it rework the output a little bit and tell it to run again. I downloaded the resulting code Claude wrote and compared it to my solution, and I will take my solution every single time. Claude wrote a bloated monstrosity. This is my experience with "AI", and I'm honestly not loving it. It does sometimes save me time converting code from one language to another (when it works), or implementing simple things based on existing code (when it works), and a few other tasks (when it works), but overall I end up asking myself over and over "Is this really how developers want the future to be?"
I'm skeptical that these LLM-based coding tools will ever get good enough to not make me feel ill about wasting my time typing instructions to them to produce code that is bloated and mostly not reusable. | | |
| ▲ | whateveracct 2 days ago | parent | next [-] | | I've done the racing thing too. Or I just reject its suggestions, do it better, and have it review and tell me why I did better. And writing those instructions when I race it..it's more cognitive effort for me than coding! | |
| ▲ | oro44 2 days ago | parent | prev [-] | | Interesting stuff. Thx for sharing! |
|
|
|
| |
| ▲ | bigstrat2003 3 days ago | parent | prev [-] | | Because the AI bros hyping it up are incapable of admitting that the hype is overblown. That would mean they have nothing to sell you, so of course they aren't going to say that. |
|
|
| ▲ | artine 3 days ago | parent | prev | next [-] |
| I gave Claude Code with Sonnet 4.6* a try a few weeks ago. I pointed it at a hobby project with less than 1kloc of C (about 26,500 characters) across ~10 modules and asked it to summarize what the project does. It used about $0.50 worth of tokens and gave a summary that was part spot on and part hallucinated. I then asked it how to solve a simple bug with an easy solution. It identified the right place to make the fix but its entire suggested solution was a one-liner invoking a hallucinated library method. I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least. * I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit. |
|
| ▲ | adriand 3 days ago | parent | prev | next [-] |
| I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at. If you try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did. |
|
| ▲ | ErroneousBosh 3 days ago | parent | prev | next [-] |
| I tried it. Either I don't know how to use it, or it just doesn't work. |
|
| ▲ | xigoi 3 days ago | parent | prev | next [-] |
| It’s only “easy to try” if you’re okay with using proprietary software and having to rely on an evil megacorporation that engages in cyber-warfare. |
| |
| ▲ | archagon 3 days ago | parent [-] | | Not to mention sucking on a monthly subscription tit that will go up in price by an order of magnitude once the market is captured. |
|
|
| ▲ | hombre_fatal 3 days ago | parent | prev | next [-] |
| I think it comes down to your own personality, appetite, and also how external factors like hype might impact you (resentment, annoyance, curiosity, excitement). |
|
| ▲ | nDRDY 3 days ago | parent | prev | next [-] |
| Then what is the point? If what I'm doing can be done by Claude, as operated by someone who "doesn't need to get up to speed", then I really need to look at another career. |
|
| ▲ | nunez 3 days ago | parent | prev [-] |
| There's no learning curve if you don't care about token spend. |