| ▲ | prescriptivist 11 hours ago |
| I don't think that people who don't want to use these tools, or who cling to old ways, are incurious. But I think these developers should face the fact that the skills and ways of working they are reticent to give up are more or less obviated at this point. Not in the future, but now. It's just that the adoption of these tools isn't evenly distributed yet. I think there's a place for thoughtful dialogue around what this means for software engineering, but I don't think that's going to change anything at this point. If developers just don't want to participate in this new world, for whatever reason, I'm not judging them, but I also don't think the genie is going back in the bottle. There will be no movement to organize labor to protect us, and there will be no deus ex machina that reverses course on this stuff. |
|
| ▲ | lll-o-lll 5 hours ago | parent | next [-] |
| > I think these developers should face the fact that those skills and those ways they are reticent to give up are more or less obviated at this point. Yes. We are this generation's highly skilled artisans, facing our own industrial revolution. Just as the skilled textile workers and weavers of early 19th-century Britain were correct when they argued that the new automated product was vastly inferior, it matters not at all. And just as they were also correct that the government of the day was doing nothing to protect the lives and livelihoods of those who had spent decades mastering a difficult set of professional skills (the middle class of the day), the government of this day will also do nothing. And it doesn't end with "IT"; anything that can be turned into a factory process with our new "thinking engines" will be. Perhaps we can do better as a society this time around. I am not hopeful. |
| |
|
| ▲ | overgard 7 hours ago | parent | prev | next [-] |
| I'm using Claude every day, and it definitely makes me faster, but... I'm also able to give it a lot of very specific instructions and correct a lot of mistakes quickly, because I look at the code and understand what it's doing; and I'm also asking it to write code in domains I understand. So I don't think these skills are obsolete at all. If anything, keeping them sharp is the only differentiator we have. "Agentic Engineering" is as much a joke as "Vibe Coding" in my mind. The tools are powerful, but they don't make up for knowing how to code, and if you're just blindly trusting them, it's going to end badly. |
| |
▲ | bryanrasmussen 7 hours ago | parent [-] | | >I'm using Claude every day, and it definitely makes me faster but.. I see a lot of posts about this, and I see a lot of studies, also on HN, that show that this isn't the case. Now of course the "this isn't the case" finding is statistical, so there can be individual developers who are faster. It can also be that an individual developer is sometimes faster and sometimes not, but the times they are faster are so clearly faster that this hides the times they're not. Statistics of performance over a number of developers can flatten things out. But I don't know that that is the case. So my question for you, and for everyone who claims it makes them so perceptibly and clearly faster: how do you know? Given all the studies showing that it doesn't make you faster, how are you so sure it does? | | |
▲ | peteforde 6 hours ago | parent | next [-] | | It's incredibly frustrating arguing these same points, over and over, every time this comes up. You're asking people who are experienced developers absolutely chewing through checklists and peeking at HN while compiling/procrastinating/eating a sandwich/waiting for a prompt to finish to not just explain but quantify what is plainly obvious to those people, every day. You want us to bring paper receipts, like we have some incentive to lie to you. From our perspective, the gains are so obvious that it really does feel like you must just be doing something fundamentally wrong not to see the same wins. So when someone says "I can't make it do the magic that you're seeing," it makes me wonder why you don't have a long list of projects that you've never gotten around to because life gets in the way. Because... if you don't have that list, to us that translates as painfully incurious. It's inconceivable that you don't have such a list, because just being a geek in this moment should be enough that you constantly notice things you'd like to try. If you don't have that, it's like when someone tells you that they don't have an inner monologue. You don't love them any less, but it's very hard not to look at them a bit differently. | | |
▲ | KronisLV 38 minutes ago | parent | next [-] | | > It's incredibly frustrating arguing these same points, over and over, every time that this comes up. You're asking people who are experienced developers absolutely chewing through checklists and peeking at HN while compiling/procrastinating/eating a sandwich/waiting for a prompt to finish to not just explain but quantify what is plainly obvious to those people, every day. You want us to bring paper receipts, like we have some incentive to lie to you. This puts what I have been feeling in recent months into words pretty concisely! To me, it really is a force multiplier: https://news.ycombinator.com/item?id=47271883 Of course, I still have to pay attention to what the AI is doing, and figure out how to automate more code checks, but the gradual trend in my own life is more AI, not less: https://blog.kronis.dev/blog/i-blew-through-24-million-token... (though letting it run unconstrained/unsupervised is a mess; I generally like to make Claude Code create a plan and iterate on it with Opus 4.6, then fire off a review. Since getting the Max subscription I don't really need Cerebras or other providers, though I still appreciate them.) At the same time, I've seen people get really bad results with AI, often on smaller models, or just expecting to give it vague instructions and get good results, with no automated linters or prebuild checks in place, or just copying snippets with no further context in some random chat session. Who knows, maybe there's a learning curve and a certain mindset you need to have to get a benefit from the technology, to where like 80% of developers will see marginal gains or even detriment, which will show up in most of the current studies. A bit like how, for a while, microservices and serverless architectures were all the rage and most people did an absolutely shit job of implementing them, before (hopefully) enough collective wisdom was gained about HOW to use the technology and when.
| |
▲ | bryanrasmussen 5 hours ago | parent | prev [-] | | >It's incredibly frustrating arguing these same points, over and over, quite frankly, there seems to be something incredibly frustrating going on in your life, but I'm not sure the underlying cause of whatever is weighing on your mind at the moment is that I asked "how do you know that what you are feeling is actually true, in comparison to what studies show should be true?" (rephrased, as it's not reasonable to quote the whole post) >From our perspective, the gains are so obvious that it really does feel like you must just be doing something fundamentally wrong not to see the same wins. From my perspective, when I think I am experiencing something that data from multiple sources tell me is not what is actually happening, I try to figure out how I can prove what I am experiencing. I reflect: have I somehow deluded myself? No? Then how do I prove it when analysis of many situations similar to my own shows a different result? You seem to think I mean people saying "Claude didn't help me, it wasn't worth it." No, just to clarify (although I thought it was really clear), I am talking about the numerous studies constantly posted on HN, which I'm sure you must have seen, where productivity gains from coding agents do not actually show up in the work of those who use them. Studies conducted by third parties observing the work, not claims made by people performing the work. I'm not going to go through the rest of your post. I get the urge to be insulting, especially as a stress release if you've had a particularly bad time recently. But frankly, statistically speaking, my life is almost certainly significantly worse than yours, and for that reason, but not that reason alone, I will also quite confidently state, with hardly any knowledge of you specifically but just my knowledge of my own life and of the people I have met throughout it, that my list dwarfs yours. |
| |
▲ | prescriptivist 5 hours ago | parent | prev | next [-] | | I'm a principal engineer; I've been working on the same set of codebases for almost 10 years. I handle the inbound work that constitutes 20% or so of my time faster than ever, and I know because that inbound volume has clearly increased and yet I have, for the first time ever, begun chipping away at the "nice to have" backlog. My biggest time sink now is interviewing and code reviews -- the latter being directly proportional to the velocity increase across the teams I work with. Actually, that's my biggest concern -- we are approaching a breaking point for code review volume. Sorry I don't have DX stats or token usage stats I can share, but based on the directives from on high, those stats are highly correlated (in the positive). [edit] And SEV rates are not meaningfully higher. | |
| ▲ | sameerds 7 hours ago | parent | prev | next [-] | | > everyone that claims it makes them so perceptively and clearly faster - how do you know? For me, AI tools act like supercharged code search and auto complete. I have been able to make changes in complex components that I have rarely worked on. It saved me a week of effort to find the exact API calls that will do what I needed. The AI tool wrote the code and I only had to act as a reviewer. Of course I am familiar with the entire project and I knew the shape of the code to expect. But it saved me from digging out the exact details. | | |
| ▲ | SquibblesRedux 5 hours ago | parent [-] | | > For me, AI tools act like supercharged ... search and auto complete. I think that is a fairly good definition of what an LLM is. I'd say the third leg of the definition is adjustable randomness. |
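The "adjustable randomness" being referred to is usually exposed as a sampling temperature. As a rough illustrative sketch (the logit values below are made up, not taken from any real model), temperature rescales the next-token logits before the softmax, sharpening or flattening the distribution that tokens are sampled from:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before the softmax:
    # T < 1 sharpens the distribution (the top token dominates),
    # T > 1 flattens it toward uniform (more randomness when sampling).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.1]

sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)

print(sharp)  # the top token holds most of the probability mass
print(flat)   # the probabilities are much closer to uniform
```

At temperature 0 this degenerates into plain argmax (greedy "autocomplete"); turning it up trades determinism for variety, which is the knob being described as the third leg.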
| |
| ▲ | anonnon 6 hours ago | parent | prev [-] | | > I see a lot of posts about this, and I see a lot studies, also on HN, that show that this isn't the case. Most of these studies were done one or more years ago, and predate the deployment and adoption of RLHF-based systems like Claude. Add to that, the AI of today is likely as bad as it's ever going to be (i.e., it's only going to get better). Though I do think the 10x claims are probably unfounded. | | |
▲ | bryanrasmussen 5 hours ago | parent [-] | | I mean, obviously what one reads about will always be a little bit behind, so one of the claims I sometimes see about these studies is that they are out of date, and that if they were done with the new models they would find that wasn't the case. But then, that is one of the continuing claims one also sees about LLMs: that the newest model fixes whatever issue one is complaining about. And then the claim gets reiterated. The thing is, when I use an AI I sort of feel these gains, but no greatness; it's like, wow, it would have taken me days to write all this reasonable albeit sort of mediocre code. I mean, that is definitely a productivity gain, because a lot of times you need to write just mediocre code. But there are parts where I would not have written it like that. So if I go through fixing all those parts, how much of a gain did I actually get? Like most posters on HN, I am a conceited jerk, so I can claim that I have worked with lots of mediocre programmers (while ignoring the points where I was mediocre, by thinking, oh, that didn't count, I followed the documentation and how it was suggested to use the API, and that was a stupid thing to do), and I certainly didn't fix everything that they did, because there just weren't enough hours in the day. And they did build stuff that worked, much of the time, so now I've got an automated version of that. Sweet. But how do I quantify the productivity, given that there are claims put forth, with statistical backing, that the productivity is illusory? This is just one of those things that tends to affect me badly: I think X is happening, a study shows X does not happen. Am I drinking too much Kool-Aid here or is X really happening!!? How do I prove it!!? It is the kind of theoretical, logical problem seemingly designed to drive me out of my gourd. |
|
|
|
|
| ▲ | _dwt 11 hours ago | parent | prev | next [-] |
| Well, no, not with that attitude there won't! I am not trying to insinuate that there is a conspiracy, or that posts like yours are part of it, but there has been a huge wave of posts and comments since February which narrow the Overton window to the distance between "it's here and it's great" and "I'm sad but it's inevitable". Humanity has possessed nuclear weapons for 80 years and has used them exactly twice in anger, at the very beginning of that span. We can in fact just NOT do things! Not every world-beating technology takes off, for one reason or another. Supersonic airliners. Eugenics. Betamax. The best time to air concerns was yesterday. The next best time is today. I think we technologists wildly overestimate public understanding and underestimate public distrust of our work and of "AI" specifically. We've got CEOs stating that LLMs are a bigger deal than nuclear weapons or fire(!) and yet getting upset that the government wants control of their use. We've got giddy thinkpieces from people (real example from LinkedIn!) who believe we'll hit 100% white collar unemployment in 5 years and wrap up by saying they're "5% nervous and 95% excited". If that's what they really think, and how they really feel, it's psychopathic! Those numbers get you a social scene that'll make the French Revolution look like a tea party. ("And honestly? I'm here for it.") So no, while I _think_ you're correct, I don't accept the inevitability of it all. There are possibilities I don't want to see closed off just yet (maybe data finally really is the new oil, and that's the basis for a planetary sovereign wealth fund. Maybe every man, woman, and child who ever wrote a book or a program or an internet comment deserves a royalty check in the mail each month!). |
| |
▲ | prescriptivist 9 hours ago | parent | next [-] | | > We can in fact just NOT do things! I agree with you on that. Not just on AI but on a lot of things that suck about this world, and in particular the United States. But capital is too powerful. And these tools are legitimately transformative for business. And business pays our bills and, more importantly, provides the health insurance for our families. The wheel is a real fucking drag, isn't it? I don't see anything short of a larger revolution against capital stopping or even stemming this. For that to really happen we would need a lot more people and interests than just those of software practitioners. Which may come yet, when trucking jobs collapse and customer service jobs disappear. I don't know. I do know that I'm taking part in something that will potentially (likely?) seed the end of my career as I know it, but it's just one of many contradictions that I live with. In the meantime the tools are impressive, and I'm just figuring out how to live with them and do good work with them, and as you can probably tell, I'm pretty convinced that's the best we can make of the situation right now. | |
▲ | overgard 7 hours ago | parent | prev [-] | | > We can in fact just NOT do things! 100% this. I don't know why we think that pouring trillions of dollars into something we barely understand, to create an economic revolution that is almost certainly awful, is at all "inevitable". We just need leaders that aren't complete idiots. I'm generally cynical, but I do see that normies (i.e., not in tech) are waking up a bit. I don't think the technology is inherently a bad thing, but the people who think we should just do this as fast as possible to win "the race" should be shot into space as far as I'm concerned. To start with, we need a working SEC that can actually punish the grifting CEOs who are using fear to manipulate markets. |
|
|
| ▲ | bandrami 6 hours ago | parent | prev | next [-] |
| I'm still going to need at least one of my vendors to speed up their release pace before I'll believe that. I'm seeing a ton of churn and no actual new product. |
|
| ▲ | archagon 10 hours ago | parent | prev [-] |
| A new technology comes out — admittedly one that’s extraordinarily capable at some things — and suddenly conventional software engineering is “more or less obviated at this point”? I’m sorry, but that’s really fucking dumb. Do you think LLMs are actually intelligent? Do you think their capabilities exceed the quality of their training corpus? Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough? After all, what’s code but a bunch of boring glue and other crap that’s used to prop up a product idea until a few bucks can be extracted from it? Of course, there’s nothing wiser than tying the entirety of your career to a $20/month subscription (that will jump 10x in price as soon as the market is captured). Is writing solved because LLMs can make something decently readable? Why say anything at all when LLMs can glob your ideas into a glitzy article in a couple of seconds? I swear, some people in this field see no value in their programming work — like they’ve been dying to be product managers their entire lives. It is honestly baffling to me. All I see is a future full of horrifying security holes, heisenbugs, and performance regressions that absolutely no one understands. The Idiocracy of software. Fuck! |
| |
▲ | prescriptivist 9 hours ago | parent | next [-] | | > Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough? All I'm saying is you're gonna have to figure out how to do this with an agent. It's not that I don't see value in the craft; it's just that that value matters less now. As for the new paradigms, the new frameworks, the new studies in computer science -- they still exist; it's just that they are going to focus on how to mitigate heisenbugs, performance regressions, and security holes in agent-written code. Who knows... in five years most of the code written may not even be readable. I'm not saying it's going to be like that, but it's entirely possible. In the meantime, there's nothing stopping you from using the agent to write code that is every bit as high quality as if you had sat down and typed it in yourself. And right now there is a category of engineers who exclusively use agents to create quality software, and they are more efficient at it than anybody who just does it themselves. And that category is growing every day. I may be out of a job in five years because of all of this. But I am seeing where this is going, and it's clear, and so I'm going to have to change with it. | |
| ▲ | bandrami 6 hours ago | parent | next [-] | | > you're gonna have to figure out how to do this with an agent I'm really not, though, any more than I "had to" learn JavaScript 20 years ago or blockchains 5 years ago (neither of which I did). Hell, I still use Perl day-to-day. | | |
▲ | KronisLV 27 minutes ago | parent [-] | | Good for you! Most people will, though. If I hadn't learnt JavaScript, I couldn't have worked on a large chunk of the projects that have put bread on the table for the past 5-10 years. If most folks don't learn AI (or its shortcomings and practicalities), then they will not be as competitive in the job market. Corpos don't care about the flaws. |
| |
| ▲ | norir 7 hours ago | parent | prev | next [-] | | > In the meantime, there's nothing stopping you from using the agent to write the code that is every bit as high quality as if you sat down and typed it in yourself. You can only speak for yourself. | |
▲ | archagon 9 hours ago | parent | prev [-] | | “When you're in Hollywood and you're a comedian, everybody wants you to do things besides comedy. They say, ‘OK, you're a stand-up comedian — can you act? Can you write? Write us a script?’ It's as though if I were a cook and I worked my ass off to become a good cook, they said, ‘All right, you're a cook — can you farm?’” —Mitch Hedberg
Agentic programming isn’t engineering: it’s a weird form of management where your workers don’t grow or learn and nobody really understands the system you’re building. That sounds like a hellish, pointless career, and it’s not what I got into the field to do. So no thanks: I’ll just keep doing the kind of monkey engineering I find invaluable. Especially while most available models are owned and trained by authoritarian, billionaire, misanthropic cultists. Fortunately, I am not beholden to some AI-pilled corporation for salary. | |
| |
| ▲ | thunky 8 hours ago | parent | prev | next [-] | | > I swear, some people in this field see no value in their programming work And others see too much value in their work. | | |
| ▲ | overgard 7 hours ago | parent [-] | | Yes, we should punish care and craftsmanship. That's a recipe for success. | | |
| |
▲ | sdf2df 9 hours ago | parent | prev [-] | | Lol. I'm a CEO, and I've revamped my hiring process so that it has nothing to do with writing code. I test for the way people think now. People like you would pass my interview. |
|