| ▲ | Agentic Coding Is a Trap (larsfaye.com) |
| 218 points by ayoisaiah 4 hours ago | 152 comments |
| |
|
| ▲ | fnordpiglet 2 hours ago | parent | next [-] |
Interestingly, I've learned more about the languages and systems and tools I use in the last few years working with agentic coding than I did in 35 years of artisanal programming. I am still vastly superior at making decisions about systems and techniques and approaches than the agentic tools, but they are like a really, really well-read intern who knows a great deal of detail about errata but has very little experience. They enthusiastically make mistakes but take feedback - at least up front - even if they often forget, because they don't totally understand and haven't internalized it. The claim that you should know everything about everything you work on is an intensely naive one. If you've worked on a team of more than one, there's a lot of stuff you don't totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you're lucky if you even understand the parts everyone considers you an expert in. I often get the impression folks making these claims are either very junior themselves or work basically alone or on some project for 20 years. No one who works in a team or larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people's code for most of my adult life, I absolutely can read the LLM's code. The fact that a machine wrote crappy code rather than a human bothers me not in the least, and at least the machine will take my feedback and act on it. |
| |
| ▲ | byzantinegene an hour ago | parent | next [-] | | You have 35 years of experience and have already built up the learning capability and general framework to acquire new knowledge, so you know how to use agentic coding as a tool to supplement your work. The juniors who start today don't have that; they over-rely on agentic coding and do not know what they don't know. | | |
| ▲ | ookblah an hour ago | parent | next [-] | | someone probably made this same argument against certain frameworks over the years, and juniors still figured it out. we need to stop trying to babysit learning for hypothetical situations. the bar to "start" is lower and the bar to actual competency is higher now; juniors who want to actually learn, instead of just pressing enter over and over again, will do so regardless of whatever you do to "help" them. | | |
| ▲ | SpicyLemonZest 26 minutes ago | parent [-] | | It's not really a hypothetical. I work with one junior who's submitted an incorrect bugfix 3 times and counting; he seems genuinely incapable of processing the idea that there's a correctness issue he has to resolve, rather than a prompt engineering issue that will allow Claude to figure it out if only he asks in the right way. |
| |
| ▲ | throwaway041207 42 minutes ago | parent | prev | next [-] | | IMO, by the time today's juniors would have 5-10 years of expected experience, the entire field will be something different altogether. Language choice distribution will collapse (if not change altogether); whole new modalities of monitoring and progressive delivery guardrails will come into play, essentially creating a 24/7 incremental rollout of pure agentic code; correctness will be determined by a mix of language features, self-monitoring by models in production, and automated testing against production snapshots in pre-production; and deep debugging will be the province of a select group of engineers. There will be a pathway to those roles for juniors, but those roles will be coveted and difficult to break into (and will probably require education and maybe even informal accreditation). | |
| ▲ | CGamesPlay 41 minutes ago | parent | prev | next [-] | | Exactly this. We need to be more precise than blanket statements like "agentic coding is a trap" and start figuring out what a "tasteful" application of agentic coding looks like. ChatGPT is destroying liberal arts curriculums because students can choose to not do any of the thinking themselves and produce mediocre work that passes the bar. I think the same problem is showing itself with agentic coding, just with more directly measurable consequences (because the pile of software ends up failing in a more spectacular way than the pile of bad writing). | | |
| ▲ | hibikir 22 minutes ago | parent [-] | | In the liberal arts, it's simply a matter of what the students want to get out of the class vs. what the teacher wants the students to do: there's a huge disconnect in goals and expectations, so there's no way for the teacher to actually win. The fact that there's such a disconnect should give the departments pause. This doesn't happen at all with agentic coding: what the programmer wants and what the boss wants are pretty well aligned. There are corner cases where someone isn't allowed to use LLMs but does it anyway, but in most cases, the organization agrees. |
| |
| ▲ | bhagyeshsp 12 minutes ago | parent | prev | next [-] | | Self-taught "junior" here. Due to English-language limitations for most of my adult life, I struggled to code. I used visual coding, etc. But of course, I can't make a living on a drag-and-drop harness. Then came GPT-3.5, which accelerated my learning. Now I'm running my incorporated company and just launched one software-hardware hybrid product. The second one is a micro-SaaS in closed beta. The point is: when people treat "juniors" as fixed-shape blobs of matter, they focus on the juniors that were going to make mistakes in any case, AI or not. That misses the key point of agentic usage. | | |
| ▲ | sterlind 8 minutes ago | parent [-] | | accelerated what learning? learning to code? learning to engineer? learning to manage? learning to market? |
| |
| ▲ | danenania 21 minutes ago | parent | prev | next [-] | | If a junior builds something with agents that turns into a mess they can’t debug, that will teach them something. If they care about getting better, they will learn to understand why that happened and how to avoid it next time. It’s not all that different than writing code directly and having it turn into a mess they can’t debug—something we all did when we were learning to program. It is in many ways far easier to write robust, modular, and secure software with agents than by hand, because it’s now so easy to refactor and write extensive tests. There is nothing magical about coding by hand that makes it the only way to learn the principles of software design. You can learn through working with agents too. | | | |
| ▲ | echelon 36 minutes ago | parent | prev [-] | | > the juniors who start today don't have that; they over-rely on agentic coding and do not know what they don't know Y'all need to stop worrying about the kids. They're smarter than us and will run circles around us. They're going to look at us like dinosaurs, and they're going to solve problems of scale and scope 10x or more beyond what we ever did. Hate to "old man yells at cloud" this, but so many people are falling into this trap because of personal biases. While the fear that "smartphones might make kids less computer literate" has some truth to it, that's because PCs are not as necessary as they once were. The kids that turn into engineers are fine and are every bit as capable. |
| |
| ▲ | jmuguy 2 hours ago | parent | prev | next [-] | | This post does not make the claim that "you should know everything about everything you work on" - it's making the claim that writing code and being able to read code effectively are intrinsically linked. | |
| ▲ | ray_v an hour ago | parent [-] | | I wonder if it's not so much the code that people don't want to write, but more the weight of all the orchestration, data engineering, and research that has to be done (or understood in the first place) to get anything off the ground these days. It feels off-the-charts complicated, and of course is now shifting rapidly. |
| |
| ▲ | grogenaut 2 hours ago | parent | prev | next [-] | | Agreed. I don't know anything about turning sand into transistors, or assembly, but I do well. So I don't know my full stack either. What is important is not being afraid to learn the rest of your system, and keeping an index. Most importantly, it's about being able to spin up on anything quickly. That's how you have wide reach. Digging in when you have to, gliding high when you have to. Appropriate level for the problem at hand. When I was in college eons ago they taught CS folks all of engineering. "When do I need to know chem-e or analog control systems?" we asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base." That holds even within just large code bases. | |
| ▲ | catlifeonmars 21 minutes ago | parent | prev | next [-] | | > The claim you should know everything about everything you work on is an intensely naive one. I disagree with this take. Personally, I pride myself on learning the code bases I work on in detail, sometimes better than the leads for those code bases. I'm not saying that everyone should do so, but it's achievable and not naive at all. | |
| ▲ | girvo an hour ago | parent | prev | next [-] | | > The claim you should know everything about everything you work on is an intensely naive one Nothing in the article made that claim. | |
| ▲ | crjohns648 an hour ago | parent | prev | next [-] | | I have also seen the learning acceleration; there's a significantly expanded set of techniques and technologies I have learned how to apply. From a human perspective, though, I'm apprehensive about the effect AI will have on the human "very well read intern." People who know a lot, very deeply, about specific areas are fascinating to talk to, but now almost everyone is able to at least emulate deep knowledge about an area through the use of AI. The productivity is there, but the human connection is missing. | |
| ▲ | i_love_retros an hour ago | parent | prev | next [-] | | I think it's important to at least have a mental model of code you directly commit to the codebase, and that doesn't happen if it was written by an agent. | |
| ▲ | beepbooptheory 2 hours ago | parent | prev [-] | | "Hey! Just popping in to say that agentic coding is actually pretty great and is making me better in all the ways; but also want to say at the same time that it's actually not all that different from anything else, so we can chalk up any critique of it to individual naivety and bias." |
|
|
| ▲ | keyle 2 hours ago | parent | prev | next [-] |
As a senior developer, 25+ years, I have recently been thrown into a meeting: "hey can you join in for 5 mins". I really don't like these meetings where you're dragged into the middle of them without any clue. The questions came flying in fast, without any introduction, and this was about one external integration out of a dozen. They have their own lingo, different from ours, to make the situation worse. I had a _very hard time_ making sense of the questions, as I had indeed relied heavily on a model to produce these integrations (extremely boring job + thick external specs provided). I'm still positive these integrations simply would not have happened, even with 10x the time, if I had not used models; however, I'm now carefully considering re-documenting the "ohhs" and "aahs" of these so that these kinds of uncomfortable moments never happen again. I haven't felt so clueless and embarrassed in a meeting, ever. All I could say was "I'll get back to you on that one, and that one, and this one". Cognitive debt is very real, and it hurts worse than technical debt on a personal level! Tech debt is shared across the team; cognitive debt is personal, and when you're the guy that built the thing, you should know better! To be continued... But from now on, the work isn't done if I don't produce a little 5-min flash-card-type markdown glossary of "what is this" and "what is that". |
| |
| ▲ | josephg an hour ago | parent | next [-] | | > As a senior developer, 25+ years, I have been thrown recently into a meeting "hey can you join in for 5 mins". This is a common thing doctors complain about. Patients come in, saying they just need a prescription for some drug or other. Good doctors often refuse to give any drugs or any advice until they understand the whole situation properly. If you're a senior developer, you're the one who has to push back against behaviour you don't like. You have the authority. "Hm, interesting question. I'm going to need more context before I can give you my point of view. Can you give me a quick overview of the system architecture / explain what actual problems you're trying to solve with this approach?" | | |
| ▲ | komali2 19 minutes ago | parent [-] | | > Can you give me a quick overview of the system architecture I think what the OP is saying is that it's the OP's job to know that, and they didn't, because they over-leveraged the LLM. It's like if a doctor was brought in on a cardio consult for their own patient, because the patient had a maybe-unrelated heart condition, and the only thing they could answer to "why did you prescribe cemidine instead of decimine" is "lemme get back to you on that." |
| |
| ▲ | ryandrake 2 hours ago | parent | prev | next [-] | | What kind of place do you work where you get dragged into a meeting halfway through and then are peppered with technical questions without context, that you're expected to answer on the spot? Please let us know because I'm sure a lot of us want to avoid such a place. "I'll need to study the docs and code to answer these questions properly" is a perfectly fine (and very diplomatic) response to treatment like that. | | |
| ▲ | pkthunder an hour ago | parent | next [-] | | Not OP, but similar context (~20yr exp.). You absolutely can get away with "I'll need to dig more into this to give you a good answer" but you are _for sure_ expected to have at least some answer ready-to-go. Especially if it's under your purview. | |
| ▲ | WD-42 an hour ago | parent | prev | next [-] | | I don't think it was made clear: the questions were about code the OP "wrote", but they used an LLM so they couldn't remember any of it. The askers probably got there from a git blame. This happens. | |
| ▲ | furyofantares 29 minutes ago | parent | prev | next [-] | | The implication is that, in the past, such a meeting would be fine, because they're an expert in what they've authored. It's "hey can you join for 5 minutes" because, in the past, they'd have had deep knowledge off the top of their head of the things they'd committed under their name. But now they're not an expert in the code they've recently committed. Maybe that's OK and expectations need to change, but I'd bet there are a lot of cases where the organization really wants to produce a (code, expert-in-the-code) pair, and should be willing to pay a little time to do that over producing just (code, guy-who-prompted-it). | |
| ▲ | solenoid0937 an hour ago | parent | prev | next [-] | | This can happen more or less anywhere if you spend enough time at a company. At some point you'll get pulled into a meeting like this because others think you're an expert in a codebase/area you're not. | |
| ▲ | marcosdumay an hour ago | parent | prev | next [-] | | From the way it's written, looks like it's his code that he wrote recently. It's quite common to search for the author of a piece of code to ask questions about that code. | |
| ▲ | keyle an hour ago | parent | prev [-] | | Startup life man. Happy to help, just rough sometimes. |
| |
| ▲ | chaidhat 25 minutes ago | parent | prev [-] | | I think that in an AI-native company, the people asking the questions should be using their own AI tools to query the codebase before coming to ask you. The problem you describe seems more relevant to an organization that has not fully embraced AI yet. |
|
|
| ▲ | enigmoid 2 hours ago | parent | prev | next [-] |
| > only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem. An additional factor: to find issues in generated code, the developer has to care. Many developers (especially at big firms) are already profoundly checked out from their work and are just looking for a way to close their tickets and pass the buck with the minimum possible effort. Those developers - even the capable ones - aren't going to put in the effort to understand their generated code well enough to find issues that the agents missed. Especially during the current AI-driven speed mania. |
| |
| ▲ | lgrapenthin 33 minutes ago | parent | next [-] | | Indeed. Generated code is also harder to read because it violates all the semantic expectations that rely on the mental model of a human author. A generated piece of code is linguistically plausible but often unknowingly imitates common idioms so incoherently that the actual bug may be accidentally disguised in a way no sane human (even a bad programmer) could have come up with. Since LLMs have no internal evaluation, as a reviewer one has to account for this and evaluate line by line, rebuilding from scratch any hidden rationale and tacit knowledge the LLM didn't have in the first place - only to be misled into non-concerns that drain costly hours. At this point, the investment is often deeper than writing from scratch. | |
| ▲ | awakeasleep an hour ago | parent | prev [-] | | There are exceptions to this, but in big firms many developers on many teams are actually punished for caring. |
|
|
| ▲ | monksy 3 hours ago | parent | prev | next [-] |
I kind of think this article misses the mark a little. There is skill loss from heavy AI use. But I want to acknowledge the awkward elephant in the room: AI is making people too fast. I don't mean that faster output is bad; it's faster output of code rather than the full understanding and experience that come from producing the code. It's rewarding people who talk about business value rather than the people who are building and making safe decisions with deep knowledge. AI: yes, it's good and it can produce some good solutions; however, it ultimately doesn't know what it's doing and in the best of cases needs strong orchestrators. We're in a cesspit of business-driven development, and the people driving it aren't getting the harsh reputational punishments they should for bad decisions. |
| |
| ▲ | zbentley 34 minutes ago | parent | next [-] | | I don't disagree with any of that, but I think the brutal truth is that the priority of most businesses was always that approximate, slipshod, business-driven development. The human engineering process was only coincidentally a check against the worst outcomes of that philosophy, not an intentional one. | |
| ▲ | wiieee 3 minutes ago | parent | next [-] | | Yeah, but all these firms are going to get destroyed by firms led by people who are more disciplined and enforce rigour in thought, which will be pushed through by discouraging over-use of LLMs. Apple didn't go from near-bankrupt to where it is today without that discipline. | |
| ▲ | beej71 4 minutes ago | parent | prev [-] | | I agree as well. And now they can make slipshod products at 10x speed. |
| |
| ▲ | hypeatei 39 minutes ago | parent | prev [-] | | > We're in a cesspit of business-driven development It's not just businesses doing it either; I regularly see big PRs get merged on open source projects that seem fine on the surface but contain a thousand paper cuts' worth of bugs (not critical, but just enough to annoy you). On top of that, the code wasn't idiomatic C++ (for this specific project) and the LLM completely ignored available APIs. Sure, it can be fixed, and maintainers should've caught it, but the amount of code being generated demands so much energy from everyone. |
|
|
| ▲ | ryandrake an hour ago | parent | prev | next [-] |
| Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature. Let's examine a feature that takes a day to code: First, you've got to plan everything, using whatever Agile or Waterfall planning ritual your company uses, get the task breakdown, file the JIRA tickets, decide who's doing the work. That all can take days or even weeks. Then you need to write a design doc with your proposed design, and get that reviewed by your peers/teammates. Again, another week for any substantial feature. If there are multiple teams involved, you need to get buy-in and design agreement among those multiple teams, let's add another week. At some places, you need approval to commence work, which can take multiple days, depending on the approver's schedule and availability. Then, you take a day and write the code and make sure it passes tests. Then, it's code review time, and this can involve a lot of back and forth with your team, resulting in multiple iterations and additional code reviews. Another "days or weeks" stretch. At bigger companies, you're going to need to pass all sorts of reviews from other departments, like legal, privacy, performance, accessibility, QA... even if done in parallel, let's add a conservative 2 weeks. Finally, you push to staging, and need to get some soak time internally among dogfooders, so you have some confidence that it's working. +1 week. Then you're ready to push from staging to prod, but since you work at a serious company, nothing goes to 100% prod right away--you need to slowly ramp up and check feedback/metrics in case you need to roll back. The ramp to fully launched could take another two weeks. So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead... |
| |
| ▲ | AdieuToLogic an hour ago | parent | next [-] | | > Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature. This reminds me of one of my software engineering axioms:
    When making software, remember that it is a snapshot of
    your understanding of the problem. It states to all,
    including your future-self, your approach, clarity, and
    appropriateness of the solution for the problem at hand.
    Choose your statements wisely.
> So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead... Well said. |
| ▲ | dilyevsky 23 minutes ago | parent | prev | next [-] | | 1. Models are now extremely good at totally automating tedious tasks such as updating dependencies, build/deploy scripts, unit tests, etc. What used to take days can now take minutes - easily a 50x speedup on this. This was a non-trivial part of every engineer's day-to-day at an established company. "Platform engineering" or whatever they call this now is dead. 2. Technically risky ideas that you never would have tried, because they didn't make sense from a risk+effort/reward standpoint, are now within reach. It isn't "go faster" per se, but the speed at which you can try something out still changes the nature of the engineering process. | | |
| ▲ | SpicyLemonZest 15 minutes ago | parent [-] | | > 1. Models are now extremely good at totally automating tedious tasks such as updating dependencies, build/deploy scripts, unit tests, etc. What used to take days can now take minutes - easily a 50x speedup on this. This was a non-trivial part of every engineer's day-to-day at an established company. "Platform engineering" or whatever they call this now is dead. I confess I can't explain why this isn't true, because it seems to be true on the micro level, but it really hasn't been my experience. The platform engineers I'm familiar with are desperately trying to tread water to keep their systems healthy against the now-higher code velocity without everything falling to pieces. (Perhaps people used to make minor day-to-day improvements while coding that Claude enables us to ignore?) |
| |
| ▲ | ajam1507 an hour ago | parent | prev | next [-] | | It very much depends on what kind of company you work for. You could never run a startup like this, for example. | |
| ▲ | CodeShmode an hour ago | parent | prev | next [-] | | Or you can have a conversation with an agent to build up a requirements/plan spec, asking it to analyse existing code patterns. When it seems like the agent has a good understanding of what needs to be done and how, ask it to implement, keeping the changes as a local spike. Ask the agent questions about all the other teams' code, reaching out to them for anything it can't answer or needs clarified. With agent capabilities at the moment this is rare, or can be done fairly asynchronously: "please confirm these things". Maybe you realise your code architecture is completely wrong. Manually code up some new abstractions that fit better, and write the learnings into the spec plan. Strip out any implementation that largely doesn't fit your updated abstractions. Ask the agent to migrate the code to the new structure. Repeat until the spike is operational and you're happy with the abstractions used. Chat with the agent to create a Design Doc for the approach in the spike. Create a single JIRA ticket for "Productionise CodeShmode's spike". Get reviews and feedback from stakeholders. Integrate feedback into your spike, or even the original spec document, and regenerate the whole thing. So much of the ritual you've outlined here is overhead from working in a large org where roles are siloed. When one person is empowered to do more, the actual work per person goes down and the overhead becomes the dominant cost. But that overhead isn't needed anymore, because one person can now do many people's work. I've whipped up spikes in a few days that would've been a month of work across a team, multiple DDs, and approvals. In the past this wasn't feasible, so we would need to justify what those people would work on. Now you can whip it up, show a working demo, and ask "should we productionise this?" |
| ▲ | ex-aws-dude an hour ago | parent | prev | next [-] | | Not every company works like that Big tech has a lot of wankery like that but smaller companies can be fast and scrappy | | |
| ▲ | ryandrake an hour ago | parent [-] | | It's a good point, and your username definitely checks out. I've also worked at a startup and some smaller companies, and still there's a LOT of "wankery" just different types. No, don't write the code yet, you need to present it to the CEO. Oh, we need to also present it to investors. Wait for a bit, because we have another deal coming down the pipe and we might have to change direction. Oh, a VIP customer just played golf with the founder and you need to go to his house to fix his setup. So much of the software engineer's job in the software industry doesn't involve writing code. |
| |
| ▲ | gwerbin an hour ago | parent | prev | next [-] | | I think it depends a lot on how automated the agent is and how long you let it run for. Full automation, where you try to build an entire piece of software with agents... yeah, no, we are not there yet. At least not if you care about maintainability. Short-lived, tightly-scoped agents can do alarmingly thorough and high-quality knowledge work, as long as the work itself is relatively mechanical and can either be carried out in independent chunks or sequentially. For example, a research agent like the Gemini "deep research" tool can save hours of digging around the web and compiling information. With careful prompting, sufficient background context, and good self-evaluation tools, an agentic loop can do very detailed data analysis, carry out serious statistics and machine learning projects, produce high-quality data visualization thereof, and put together a handy executive summary. They occasionally hallucinate, go off track, get confused, and make mistakes. But they "know" everything that's been published in English for the last 200 years, they never get tired, and the code they write is good enough for throwaway scripting. The real power of agents being able to write code is that they can be extremely self-sufficient and flexible in carrying out these kinds of tree- and sequence-structured knowledge work tasks. That's of course a different thing from "designing good software", which is neither tree-structured nor sequential, and requires a level of intelligence (for lack of a better term) that LLMs do not seem to be capable of, at least not yet. But that's a more specific thing than just writing code in order to get stuff done that happens to require code. | |
| ▲ | threethirtytwo an hour ago | parent | prev [-] | | >Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature. Let's examine a feature that takes a day to code: AI writes the plans now. I just review and modify. |
|
|
| ▲ | ex-aws-dude 3 hours ago | parent | prev | next [-] |
The thing is, the code quality is still ultimately up to you. Nothing is stopping you from iterating with the agent until the code is the exact same quality that you yourself would write. |
| |
| ▲ | kelnos 2 hours ago | parent | next [-] | | IME, it's faster and less frustrating to just write the code myself, if the goal is to get code to my quality standards. | | |
| ▲ | dilyevsky 43 minutes ago | parent | next [-] | | Respectfully, unless someone is really, really bad at articulating what the quality standards are, or works with a very niche stack, that is definitely not the case anymore with SOTA models. | |
| ▲ | throwaway894345 an hour ago | parent | prev [-] | | TL;DR — there’s a whole lot of craft in how you use agents I think that’s mostly true, but also I think there is some skill to using agents well. Specifically, work with agents to get a really good product requirements document, then task it out into very narrow user stories / vertical slices (this takes some iterating—the AI really seems to want to think in horizontal layers today), then maybe walk through the code interfaces to be super sure you are aligned. At each step, I make the agent interrogate me thoroughly with every question it can think of, and even if we stop now we will have a system design and tickets that are much higher quality than me thinking alone. I could hand those off to anyone to implement, but I think having an agent TDD their way through the code is the sweet spot. Whenever the agent is doing something I don’t like (e.g., some coding style thing), I pause and have another agent help me write a style guide that agents must read. This slows me down at first but I think it will pay off in time. |
| |
| ▲ | gerdesj 3 hours ago | parent | prev | next [-] | | "... iterating with the agent till the code is the exact same quality that you yourself would write" I don't want my code quality, I want AGI code quality - that's what I was promised and jetpacks and flying cars too! | |
| ▲ | bigstrat2003 an hour ago | parent | prev | next [-] | | Nothing is stopping you... but that's slower than just writing it yourself to begin with. AI productivity gains are a myth. | |
| ▲ | cyanydeez 3 hours ago | parent | prev [-] | | Sure, but then it's not really saving you time, is it? | |
| ▲ | hibikir 5 minutes ago | parent | next [-] | | In my experience, it sure saves time. A lot of quality has significant mechanical components that LLMs do great at. Hey, this series of 300 functional tests is reusing the same few patterns without helper methods clarifying intent. Give me an overview of possible meaningful methods that would simplify the duplication. OK, 2, 4 and 5 are good, but rename 2 to X, and change the order of parameters in 5. Implement across the tests, and make sure it all passes. Still very significant savings over all that rather mechanical work. It's ultimately cheaper than doing a code review, and it's faster, because there's less need to manage the emotional state of the person whose code is being reviewed. Maybe I am a slow developer or something, but I am getting a lot of quality changes like that done that I'd not have before, solely because of the time they'd take. And not increasing the quality just causes problems anyway. Given the same quality, more changes mean more outages than before, just by probability. An increasing rate of change demands a similar increase in quality if you don't want your production support costs to go up. So spending at least a bit of time on quality, letting the LLM do the nagging little things that you didn't do before because they took too long and were not a core part of quarterly goals, is basically mandatory. | |
| ▲ | ex-aws-dude 3 hours ago | parent | prev | next [-] | | IMO it still does save time generally but it’s not as much of a huge gain if you’re doing this. I will admit there are occasional times after iterating so much I’m not sure if I’ve even saved time because going from “it works” to “it’s up to quality” takes so long | |
| ▲ | komat 3 hours ago | parent | prev | next [-] | | It still is if the agent brings it up to quality fast. And yeah, it usually does for me. | | |
| ▲ | deadbabe 3 hours ago | parent [-] | | I mean you have to compare apples to apples. If you are coding by hand like the old days you are probably not literally writing everything from scratch anyway, you are copy pasting a bunch of shit off google and stackoverflow or installing open source libraries. | | |
| ▲ | a1o 2 hours ago | parent [-] | | I also reuse a lot of my own code. Either from libraries I built or just directly copy pasting (like boilerplate code for setting up the basics of something in my style). |
|
| |
| ▲ | 2ndorderthought 2 hours ago | parent | prev [-] | | Comment deleted because it was backwards | | |
| ▲ | brightball 2 hours ago | parent [-] | | But you’d have that coding it yourself… | | |
| ▲ | 2ndorderthought 2 hours ago | parent [-] | | Actually ignore my comment I misunderstood the premise. I meant not vibe coding is the way to save time with production issues. Not the other way around! |
|
|
|
|
|
| ▲ | dbrecht_ 17 minutes ago | parent | prev | next [-] |
I wrote something that touches on a few of the same points not too long ago (namely vendor lock-in and the inability to project cost with LLMs at the wheel). Tangential topics, but I think there's a healthy enough amount of intersection to at least join in on the discussion. I do agree that if we just rely on AI for all outputs and some reviews (at least up to a threshold, because we simply can't keep up with the AI throughput as humans), our skills will eventually atrophy. Here's where the tangents intersect: I've been working on a way to have the best of both worlds. We can still use AI to generate a large swathe of code, but use good old software engineering to do it. My project (https://salesforce-misc.github.io/switchplane/) inverts the control. Rather than having LLM-as-runtime and doing all the things, you define and write LangGraph control flows that only use the LLM when judgement is actually required. The basic principle is: if it's deterministic, write it in code; if it requires judgement, use the LLM. Switchplane itself is local-only, but the principles can be applied to deployed agentic services as well. Because the approach is code-first, we can have that vendor independence: use whatever model you want anywhere in the graph. One goes down? No problem. Swap the config without impacting the overarching control flow. Cost becoming a factor? Limit LLM loops or constrain their access however you want. It's just code that needs to be updated. You control the runtime, not the LLM. Concerned about non-deterministic behaviour when you need determinism? Don't be. It's in code. Worried about skills atrophying because we're handing off everything to an LLM? That's mitigated somewhat here, because you still need to think in systems in order to build execution graphs in the first place. It might not demo as well as a number of markdown files being executed by an LLM. It's definitely a more reliable approach in the long run, though. |
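To make that principle concrete, here is a minimal sketch in plain Python (hypothetical names throughout - this illustrates the idea, not switchplane's actual API): deterministic steps are ordinary functions, and the model is consulted only at the single point where judgement is required.
    def parse_ticket(raw: str) -> dict:
        # Deterministic: plain string handling, no model involved.
        subject, _, body = raw.partition("\n")
        return {"subject": subject.strip(), "body": body.strip()}

    def classify_priority(ticket: dict, call_llm) -> str:
        # Judgement required: ask the model, but constrain its output.
        answer = call_llm(
            "Classify this ticket as 'low', 'medium', or 'high' priority:\n"
            f"{ticket['subject']}\n{ticket['body']}"
        ).strip().lower()
        return answer if answer in {"low", "medium", "high"} else "medium"

    def route(ticket: dict, priority: str) -> str:
        # Deterministic again: routing is a lookup, not a model call.
        return {"high": "oncall", "medium": "triage", "low": "backlog"}[priority]

    def run(raw: str, call_llm) -> str:
        ticket = parse_ticket(raw)                      # code
        priority = classify_priority(ticket, call_llm)  # LLM, constrained
        return route(ticket, priority)                  # code
Swapping vendors then means swapping the call_llm callable; the control flow itself never changes. |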
|
| ▲ | slashdave 2 hours ago | parent | prev | next [-] |
| > When a sysadmin moved to AWS, they didn't feel like they were losing their ability to understand networking. Wait, is this the same AWS I have been using? |
|
| ▲ | mehagar 3 hours ago | parent | prev | next [-] |
| I've been using AI tools to brainstorm approaches and sometimes generate code, but actually doing the typing myself. That way I'm less likely to forget the mechanics and programming language over time. |
| |
| ▲ | kelnos 2 hours ago | parent | next [-] | | Same. Most of what I do is ask for an implementation plan, with minimal code, or no code, or pseudocode, and then write the actual code myself. This is for open source work, where the entire point of my enjoyment is that I write the code myself. I honestly wouldn't bother being an open source maintainer if the entire thing was just prompting an LLM to write code, and then reviewing it. That doesn't sound fulfilling at all. If this was an actual paid job, I do wonder how that would change my LLM use. The reason I'm a software developer at all is because I love the craft. The act of building, of using my brain to transform ideas into code... that's what I enjoy. If it was just prompting an LLM, would I still do that job? I don't know. I'd probably start looking into the idea of switching careers, at least. | |
| ▲ | a1o 2 hours ago | parent | prev | next [-] | | One approach you can use is to ask it to never write the code for you, which forces it to explain; then, once you try the idea by coding it yourself, you get a better understanding of it. I use this approach with code I am required to maintain. It still bites me sometimes, because the models still mix in a lot of incorrect information (usually just stuff that was correct in the past but is incorrect now). For throwaway and easy-to-verify scripts I do ask it to generate the code, but I ask it to avoid over-engineering and trying to catch all corner cases, because in scripts I prefer just letting things error, as they are better understood as a step that failed. I also avoid languages I find hard to read (like PowerShell) and prefer to generate things that are short enough to fit on the monitor so I can read everything and understand it (Python, bash, and batch are my goto scripting languages). | |
| ▲ | archargelod 2 hours ago | parent | prev | next [-] | | Same. I've also configured the system prompt to never give me a full solution or write code for me. So whenever I ask it a question, it produces a short 10-line example or even pseudocode. This is far easier for me to reason about. I still reject > 50% of AI suggestions, because they're too mediocre - moving code around for no reason - or just plain wrong. | |
| ▲ | ex-aws-dude 2 hours ago | parent | prev | next [-] | | The thing is, why would forgetting even matter if the AI can just remind you of anything that you forgot? | |
| ▲ | kelnos 2 hours ago | parent [-] | | Remembering and understanding aren't the same thing. Merely reciting facts doesn't automatically give you the ability to apply those facts to solve problems. |
| |
| ▲ | platevoltage 3 hours ago | parent | prev [-] | | This is exactly what I do. I'm glad I'm not the only one. | | |
| ▲ | castedo 3 hours ago | parent [-] | | Me too, ... more or less. I'm mostly still typing, sometimes copy-and-pasting with typed changes, and rarely copy-and-pasting verbatim. With the caveat that in some cases, like prototypes, proofs-of-concept, and porting code between languages; then maybe many lines are copy-and-pasted verbatim. |
|
|
|
| ▲ | faangguyindia 35 minutes ago | parent | prev | next [-] |
I don't use AI for everything, but I use AI to make repeatable, auditable workflows. For DevOps, I don't just ask AI to go on my production server and fix all issues. I ask it to write scripts, which I audit, then dry-run, then test, and finally approve and run on production. I was just looking through HN search for "Show HN", and I saw many fitness and calorie tracking apps. A lot of them disappeared just a few months after launch; a few survived a year, then died out as their domain names expired. People are making things, but they are not reaching their "audience". I created https://macrocodex.app/, launched on 16 Mar 2026, and reached 10,000+ monthly active users. Fitness/calorie tracking is a competitive space with tons of apps and services. I could never have built such an app on my own, because I do not know how to design pages; I can talk to a designer, but from past experience, it takes them a long time to understand what the market wants. And companies with small budgets find it very difficult to find a good one. Many of my projects never got shipped because I dreaded making landing pages, icons, UI, etc. I am not saying we did a very good job with AI on the landing pages or UI at all; that's not an area of my expertise (the domain knowledge is), but given that many people find it useful, I think I've succeeded. I've even put a ticket system in the app for support and received a few bug reports, which I resolved. Here's the latency of my other service:
https://prnt.sc/6474F4gba_he I no longer use managed services in AWS, and my costs are very low; this enables me to offer my apps and services for free to many users. |
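For illustration, that audit/dry-run gate can be as simple as a flag the generated script must honor (a hedged sketch with placeholder commands, not the actual scripts described above): the script prints its plan by default and only touches the system when explicitly applied.
    # Sketch of the "audit, dry-run, test, then apply" gate (placeholder commands).
    import argparse
    import subprocess

    COMMANDS = [
        ["systemctl", "stop", "myapp"],                 # hypothetical service name
        ["logrotate", "-f", "/etc/logrotate.d/myapp"],  # hypothetical config path
        ["systemctl", "start", "myapp"],
    ]

    def main() -> None:
        parser = argparse.ArgumentParser()
        parser.add_argument("--apply", action="store_true",
                            help="actually run the commands; default is a dry run")
        args = parser.parse_args()
        for cmd in COMMANDS:
            if args.apply:
                subprocess.run(cmd, check=True)  # fail fast on the first error
            else:
                print("DRY RUN:", " ".join(cmd))

    if __name__ == "__main__":
        main()
The human reads the dry-run plan (and the script itself) before anything runs with --apply on production. |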
|
| ▲ | BonoboIO a minute ago | parent | prev | next [-] |
| 20+ years in. For me, agentic coding has been nothing short of a godsend -- not because it writes code for me, but because it lets me explore and prototype things that would have taken days of reading docs and ramping up. The creative surface area of what I can touch in a week has expanded dramatically. Where I worry is beginners. The hard-won intuition for "this is a reasonable approach" vs. "this will bite you in six months" takes years to develop. With experience, you steer the agent. Without it, the agent steers you -- and it steers confidently in every direction, good and bad alike. |
|
| ▲ | fathermarz 2 hours ago | parent | prev | next [-] |
The slot machine lever is my least favourite opinion on the subject. Also, let's not forget: the developer is rarely the person pitching the feature, and is normally given the constraints and the PRD... Soooo people can keep tiptapping on the keyboard, but eventually they need to open their minds to the possibility that "the old way" is actually dead. |
|
| ▲ | mikert89 3 hours ago | parent | prev | next [-] |
I've come to the conclusion that if AI can do it, it's not hard. None of the complicated software I work on can be reliably written by AI yet. |
| |
| ▲ | LPisGood 2 hours ago | parent | next [-] | | What type of software are you talking about? | |
| ▲ | mock-possum 2 hours ago | parent | prev | next [-] | | I mean that’s been my line every time someone makes impressed noises when I say I’m a programmer - it’s really not that hard, it’s really just a question of whether you like it enough to put the work in, like anything else. “Don’t you have to be a math wiz?” No dude 95% of the time whatever you’re trying to do already has a very well researched approach, a lot of times you’re just picking which pre-vetted solution to adapt to your needs. | | |
| ▲ | mikert89 2 hours ago | parent [-] | | no i mean the opposite, some programming is actually hard | | |
| ▲ | Blahah an hour ago | parent [-] | | Right. Like anywhere the conceptual problems haven't all been figured out yet, or where higher-order effects happen with scale or particular shapes of data/substrate and you don't know them in advance. Sometimes hard like interesting, and you get to do really novel thinking. A load of p2p/decentralised things are hard like this. Also sometimes hard like you get to a particular challenge and it turns out to be a notoriously unsolved mathematical thing, or you push against subtle boundaries of core libraries, runtimes, systems, etc. Working with metagenome assemblies is this kind of hard. Honestly, the hard code I've done made such a difference to my brain. There's plenty of trivial stuff I'm happy to have automated, but if I can't work on the hard problems I may as well not be involved at all. |
|
| |
| ▲ | slopinthebag 2 hours ago | parent | prev [-] | | Yeah this is the same conclusion I have. I primarily use AI for UI code, and guess what, it's all basically mechanical drudgery anyways. Put a div here, or put a Box here, apply some style rules, etc. This shit should have been automated decades ago yet for some reason we're still writing the same stuff with a different "twist" today. Now if your career is built on writing out the same boilerplate code in its infinite slight variations every day, congrats, you've been automated. Thank god we can free up our intellects to focus on the actual hard problems, the ones that are somewhat cutting edge, the ones that actually push our field and humanity forward. Literally every example of AI generated code (without significant human input) is just basic stuff that is wholly unimpressive. Oh wow, you had an AI generate a Next.js app? It's writing HTML for you? It made a generic SAAS? Guess I'll become a farmer now. Or, wait, I'll continue to write my multithreaded real-time multiplayer network for a MMO, since the AI currently generates something that would get me fired 10 seconds ago if I tried to push it to production. It's amazing how you introduce just the slightest difficulty or novelty to an AI and it just craps the bed. And then you go online and apparently we're gonna be replaced -6 months ago or something. People need a reality check. | | |
| ▲ | throwaway894345 an hour ago | parent [-] | | I genuinely appreciated this comment—it made me chuckle. That said, I think there are better approaches to working with AI besides “here’s a big vague thing to work on, go write some code”. I think you have to iterate somewhat closely with the AI to write a doc describing exactly what you want the system to do and then scope out very narrow tickets and then have a separate agent do the TDD to actually produce the thing. The key insights here are (1) don’t let a code writing agent have too much scope—just a narrowly scoped ticket, (2) keep the coding agent’s context minimal, (3) don’t let the coding agent write much code without testing it. The agent should make very small changes at a time and then test that everything still works. You will still need to QA stuff and review PRs, but I think AI done properly can genuinely make some tasks better. |
|
|
|
| ▲ | notepad0x90 2 hours ago | parent | prev | next [-] |
it's a fairly new way of doing things. I predict that in the future it will be more formalized and standardized, like AGILE and SCRUM and all that boring stuff. The result of that, though, would be the establishment of development patterns that are good practices. The rule of thumb is: an agent can write it, but a human has to understand it before it gets pushed to prod. I'm still not convinced about the doom and gloom over developers being replaced. I'm not a dev as part of my main job function, but where I do use LLMs, it has been to do things I couldn't have done before because I just didn't have time and had to de-prioritize. You can ship more and better features. With LLMs being tools and all, I think there is too much focus on how the tool should be used without considering the desired and actual results. If you just want an app shipped with little hassle and that's it, just let Claude do most of the work and get it over with. If you have other requirements, well, that's where the best practices and standards would come in the future (I hope), but for now we're all just reading random blog posts, seeing how others are faring, and experimenting. |
| |
| ▲ | slashdave 2 hours ago | parent | next [-] | | > like AGILE and SCRUM Yeah, likely > development patterns that are good practices. Wait, now you lost me | |
| ▲ | kelnos 2 hours ago | parent | prev | next [-] | | > The rule of thumb is: An agent can write it, but a human has to understand it before it gets pushed to prod. The article essentially claims that no, that line of thinking is false. If the agent writes all of it (or too much of it, where "too much" is still not well defined), then your ability to understand it will atrophy with time, and you will either a) never push to prod, because you can't understand it well enough, or b) push to prod anyway, and cause bugs and outages. I think the article is correct. > I'm still not convinced about the doom and gloom over developers being replaced. Agreed. The agents are just not good enough to write code unsupervised, or supervised by people without senior-level skills. And frankly it's hard to imagine them getting there. Each new release of the coding tools/models is a mixed bag. Some things are better, some things are worse, and the gains are diminishing with each iteration. I am afraid that we're going to hit a ceiling at some point, at least with the transformer architecture. > but for now we're all just reading random blog posts and see how others are faring and experimenting. Yes, exactly, and many people are not faring well. The article cites several examples of people feeling less capable after using LLMs to write code for a while. | |
| ▲ | andrekandre an hour ago | parent | prev [-] | | > standardized like AGILE and SCRUM
perhaps too cynical, but if it's anything like Agile and Scrum in $CORPORATION, it will just add to the daily slog and gum up everything... |
|
|
| ▲ | carterschonwald 3 hours ago | parent | prev | next [-] |
the funny thing is once the llms got mostly good enough for me in november 2025, it was mind-boggling how much they helped me get stuff out of my head with ease. it's easier for me to code now, because it's like i have a 24/7 insane intern that needs to be supervised via pair programming but also understands most topics enough to be useful/dangerous. ironically i've been spending much of my time iterating on ways to improve model reasoning and reliability, and aside from the challenge of benchmark design, i've had some pretty good success!! my fork of omp: https://github.com/cartazio/oh-punkin-pi has a bunch of my ideas layered on top. ultimately it's just a bridge till i've finished the build of the proper 2nd gen harness with some other really cool stuff folded in. not sure if there's a bizop in a hosted version of what i've got planned, but the changes i've done in my forks have made enough difference that i can see the difference in per-model reasoning |
|
| ▲ | doginasuit 2 hours ago | parent | prev | next [-] |
I think of it as driver's seat vs back seat vs passenger seat. Always take the back seat and eventually you will forget how to drive. Insist on always being in the front seat and you will miss out on the occasions where the LLM happens to know the area very well, like working with an unfamiliar library or problem domain. If it is a place that you are just passing through, it's great to let it take the wheel and see where it takes you. If it is a place that you need to become familiar with, it's great to have a dependable navigator beside you. My sense is that a decade from now, the people who generally see their place as the driver's seat, but recognize when it's not, are going to be writing the code that matters. | |
| |
| ▲ | bartread 2 hours ago | parent [-] | | I tend to think of it as like the two pilots on commercial airliners: you always have one pilot flying and one pilot monitoring. You can debate, with agentic coding, who is monitoring and who is flying, but if we assume the user is monitoring, what that means in practice, for me, is that I'm reading and making sure I understand all the changes the agent is proposing to make, as well as providing instruction, guidance, correction, etc. That includes reading and understanding all the code changes. |
|
|
| ▲ | oompydoompy74 2 hours ago | parent | prev | next [-] |
| I can’t say that I’ve felt my skills atrophy, but I’ve also never found backend web development to be that difficult. 90% of my job for my entire career could be described as digital plumber. |
|
| ▲ | oxag3n an hour ago | parent | prev | next [-] |
There's a subset of software engineers who understand most of the points from the article, but it looks more and more like this train can accelerate towards the cliff on steam from burned money. The quote "The market can stay irrational longer than you can stay solvent" is usually applied to markets, but it can be applied to software engineering as well - all the jobs can be gone even as the world is submerged in a technological crisis, with single-nine availability (and I'm talking about 9% :) ) and all accounts compromised. |
|
| ▲ | yuedongze 2 hours ago | parent | prev | next [-] |
AI doesn't automatically make us better human beings; it only exposes our worst parts. Most people are not born great leaders and managers (that takes rigorous training and experience), and empowering them with AI pushes them into a spot where they suddenly need to "lead". To fight brainrot from AI overuse, we must try harder to maintain that developer's priority list. |
|
| ▲ | conqrr an hour ago | parent | prev | next [-] |
I've been having the same feeling too. At work, I try to do a hybrid prompt where I fill things in at the method level with some placeholder pseudocode and let the model fill in the blanks. This helps with remembering and keeping a memory map. But it's a lost cause keeping up with others' PRs, which are often very verbose and high volume. For a lot of backend programming without a very complex domain, I think this works fine. But I still want to stay in touch with coding by hand, and have ventured into systems programming outside of work, which I feel AI is currently less useful for. |
|
| ▲ | turtleyacht 3 hours ago | parent | prev | next [-] |
Would like to see a study of brain scans during flow, comparing manual programming to code review. If the conclusion is that different parts of the brain are activated, then orchestration is a separate activity entirely. Reading code is not the same as writing code. However, the code review study would need to compare surface scanning against reviewing long enough to get over a theoretical slough of perspective: when you assume the coding chair and are in the author's frame, whether the brain shifts into a different cognitive mode. Otherwise, just stamping "Looks good to me" is likely to lead to the same atrophy. There's no critical thought, not even a self-summary of the change or active questioning. Thoughtful, deliberate code review just plain takes longer. AI can help here a lot, although it still takes over the "get into review mode" process. |
| |
| ▲ | winwang 3 hours ago | parent | next [-] | | I absolutely feel like a "different" part of my mind is loaded when seriously engineering something myself vs vibecoding+reviewing. Even the reviewing is more annoying in the latter mental context. | |
| ▲ | hgyyy 2 hours ago | parent | prev | next [-] | | Many firms are going to go bust because of dangerous assumptions they made re: expectations of LLM improvements. And they will deserve it. | |
| ▲ | deadbabe 2 hours ago | parent | prev [-] | | It is definitely not the same parts of the brain. Code review alone is kind of like being able to understand a foreign language well enough to read it, but not to follow it in flowing conversation or speak it, much less construct a complex piece of literature. Retention also suffers, as you will quickly forget what you just reviewed. What is the last PR you remember? |
|
|
| ▲ | dirtbag__dad 2 hours ago | parent | prev | next [-] |
There's too much in this article to comment on it all, but if we zoom into the first claim: > An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism. My question is: why isn't there an effort from the author to mitigate the insane things that LLMs do? For example, I set up a hexagonal design pattern for our backend. Claude Code printed out directionally OK but actually nonsensical code when I asked it to riff off the canonical example. Then I built linters specific to the conventions I want; for example, all hexagonal features share the same directory structure, and the port.py file has a Protocol class suffixed with "Port". That was better, but there was a bunch of wheel-spinning, so then I built a scaffolder as part of the linter to print out templated code depending on what I want to do. Then I was worried it was hallucinating the data, so I wrote a fixture generator that reads from our db and creates accurate fixtures for our adapters. Since good code has never been "explained for itself 100%, without comments", I employ BDD so the LLM can print out, in a human-readable way, what the expected logical flow is. And, for example, any disable of a custom rule I wrote requires an explanation of why as a comment. Meanwhile, I'm collecting feedback from the agents along the way on where they get tripped up and what can be improved in the architecture, so we can promote more trust in the output. Like, I only have a fixture printer because an agent called out that real data (redacted, yes) would be a better source of truth than any mocks I made. Finally, code review is now less focused on the boilerplate and much more on control flow in the use_case. The stakes of having shitty code in these in-house tools are almost zero, since new rules and rule version bumps are enforced with a ratchet pattern: let the world fail on first pass. Anyway, it seems to me like, with investment, you can slap rails on your code and stay sharp along the way. I have a strong vision for what works, am able to prove it deterministically with my homespun linters, and am being challenged by the LLMs daily with new ideas to bolt on. So I don't know; it seems like the issue comes down to choosing to mistrust instead of slapping on rails. Edit: I wanted to ask if anyone is taking this approach or something similar, or has thought about things like writing linters for popular packages that would encourage a canonical implementation (I have seen some crazy, crazy modeling with ORMs just from folks not reading the docs). HMU, would love to chat: youngii.jc@gmail |
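As a rough sketch of what one of those convention linters can look like (hypothetical layout and names, not the actual in-house tool described above), here is the port.py rule - required files present, and every Protocol class in port.py suffixed with "Port" - enforced with Python's ast module:
    # Hypothetical convention linter: feature layout plus Port naming.
    import ast
    import sys
    from pathlib import Path

    REQUIRED = {"port.py", "use_case.py", "adapters"}  # assumed directory layout

    def check_feature(feature: Path) -> list[str]:
        errors = []
        missing = REQUIRED - {p.name for p in feature.iterdir()}
        if missing:
            errors.append(f"{feature}: missing {sorted(missing)}")
        port = feature / "port.py"
        if port.exists():
            for node in ast.walk(ast.parse(port.read_text())):
                if not isinstance(node, ast.ClassDef):
                    continue
                # Covers both "Protocol" and "typing.Protocol" base spellings.
                bases = {getattr(b, "id", getattr(b, "attr", None)) for b in node.bases}
                if "Protocol" in bases and not node.name.endswith("Port"):
                    errors.append(f"{port}:{node.lineno}: {node.name} "
                                  "subclasses Protocol but lacks the 'Port' suffix")
        return errors

    if __name__ == "__main__":
        root = Path(sys.argv[1] if len(sys.argv) > 1 else "src/features")
        problems = [e for f in sorted(root.iterdir()) if f.is_dir()
                    for e in check_feature(f)]
        print("\n".join(problems) or "OK")
        sys.exit(1 if problems else 0)
Run it in CI with the ratchet (fail only on new violations) and the agent gets deterministic feedback instead of a human nit. |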
|
| ▲ | 2ndorderthought 2 hours ago | parent | prev | next [-] |
| Lars we are on the same page. I use LLMs to help me scope and get a second set of eyes on the high levels of a task. Then I write the code. Often I automate boilerplate or boring objects, but sometimes it's faster/better for me to just write them. Then I will ask an LLM to, say, write some tests. Then I will focus on the cases they missed and write those myself. I have been described as a decel and a Luddite, though, so be wary of my opinions. |
|
| ▲ | orbital-decay 2 hours ago | parent | prev | next [-] |
| > and then pulls the slot machine lever over and over Does anyone really do this? You want verification and self-correction in a loop, not rerolling and cherrypicking. The non-determinism point is really tiresome to hear over and over. |
| |
| ▲ | MattDamonSpace an hour ago | parent | next [-] | | The slot machine metaphor gets thrown around a lot but it hasn’t really described my experience with LLMs since ~2024 | |
| ▲ | girvo an hour ago | parent | prev | next [-] | | > Does anyone really do this Yes, lots of people. It’s a whole issue. | |
| ▲ | bigstrat2003 an hour ago | parent | prev [-] | | > The non-determinism point is really tiresome to hear over and over. When the problem is fixed, you'll stop hearing about it. | | |
| ▲ | orbital-decay a few seconds ago | parent [-] | | That's the question: how is it even a problem? There's nothing to fix. Don't reroll; verify, and fix if incorrect. Repeat until it's right. |
|
|
|
| ▲ | est31 2 hours ago | parent | prev | next [-] |
| Re the vendor lock-in point: this is a harness issue, really. Sure, CC is restricted to Anthropic models, but it's not the only harness out there. So if one vendor has an outage or botches the quality of their models due to a compute shortage, you can switch to another vendor; LLMs are the easiest thing to switch. Of course, if hardware costs go up, so will all AI vendors' prices. The only way out for the employer would be to buy the hardware directly (or do a fixed-price deal with a cloud provider). Re the understanding-code point: you can still use LLMs to understand code. If you write the spec without knowing anything about the code, of course the architecture might suck: maybe there is already a subsystem you could modify and extend instead of adding a completely new one for the feature you are adding, etc. I use LLMs in my daily workflows and they understand code perfectly well, and much more quickly than I could by reading it myself. |
| |
| ▲ | einsteinx2 2 hours ago | parent | next [-] | | CC isn’t even limited to Anthropic models: there’s a post on the front page right now about using it with DeepSeek V4, since DeepSeek provides an Anthropic-compatible API and CC reads API URLs from env variables, so you can override them (sketch below). | |
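A sketch of that override. ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are the variables Claude Code is documented to read, but treat the endpoint URL as an assumption and check the provider's docs before relying on it:

    # Point Claude Code at an Anthropic-compatible endpoint (illustrative values).
    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
    export ANTHROPIC_AUTH_TOKEN="<provider-api-key>"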
| ▲ | fnordpiglet 2 hours ago | parent | prev | next [-] | | I’ve built a configuration transpiler for Claude Code and Codex and found I can switch pretty quickly between the two and run both at once. At the moment Codex performs better; previously CC did. There is no vendor lock-in, and this is an old canard in technology that LLMs themselves in fact make irrelevant: once you’ve got an implementation that uses X, converting it to Y is almost trivial with an LLM, because the existing implementation serves as the canonical spec. | | | |
| ▲ | slashdave 2 hours ago | parent | prev [-] | | They are also surprisingly good at finding bugs that humans often miss |
|
|
| ▲ | legerdemain 2 hours ago | parent | prev | next [-] |
| This author assumes that workforce development is a first-order priority for businesses, or at least for the health of the industry. Why make this assumption so confidently? The arrival of the electronic computer did not turn human computers into programmers; it simply eliminated them en masse. |
|
| ▲ | jmuguy 2 hours ago | parent | prev | next [-] |
| This is how I feel about things. It's like someone is demanding that I become a manager when I was perfectly happy being an IC. And now I have to figure out how to be a manager of AI agents while at the same time not losing my ability to judge their work, or plan effectively, even though I'm not supposed to be doing things "by hand" anymore. But doing things "by hand" is how I reasoned through problems and figured out the plan to begin with. |
|
| ▲ | wolttam 2 hours ago | parent | prev | next [-] |
| I try to make understanding the bottleneck and it seems to work out for me while still delivering solid productivity gains. |
|
| ▲ | logickkk1 3 hours ago | parent | prev | next [-] |
| "Don't vibe code" but here's a deadline that's impossible without it. classic |
| |
| ▲ | 2ndorderthought 2 hours ago | parent [-] | | Software engineers and their leadership have been pushing back on terrible product managers for decades. AI isn't the reason to stop our time honored tradition. If anything we can write the emails faster now |
|
|
| ▲ | 0xbadcafebee 2 hours ago | parent | prev | next [-] |
| Nope. 1) Skills don't go away; you just get better at the things you do regularly, but you still have your old skills, 2) You only have vendor lock-in if you use lock-in devices (stop using Claude Code), 3) It's not an increase in complexity, it's a replacement, in order to gain efficiency (see: the cotton gin), 4) The increased cost is negligible considering average salary and resulting productivity |
|
| ▲ | taleodor an hour ago | parent | prev | next [-] |
| Frankly, I don't buy the atrophy argument at all. I (and I believe many other people) have switched between multiple frameworks and languages over the years. For example, I was an expert in Chef more than 10 years ago, and nowadays I hardly remember anything about it. Calling this cognitive decline is a significant stretch. If you're afraid of cognitive decline, try to get to proper orchestration using multiple agents. That's a fun exercise. |
|
| ▲ | mempko 44 minutes ago | parent | prev | next [-] |
| Writing code is not the hard part of software development. This is coming from someone who has programmed for 30 years, writing an average of 100k+ lines a year. The sooner programmers focus more on modeling the domain, user mental models, architecture, and data structures, and less on the mechanics of writing code, the better. Writing code is the EASY part. LLMs have basically solved the easiest part of software development. They are, however, bad at all the stuff I mentioned. LLMs don't have a point of view; you, as a software developer, do. |
|
| ▲ | bitwize 3 hours ago | parent | prev | next [-] |
| This is exactly the same problem of "mechanical engineers' job is to design parts, not machine them, so we'll take training on machines out of the mech eng curriculum." Result: fresh mech eng grads do not know how to properly design parts because they have no idea how they are machined. |
| |
| ▲ | gavmor 2 hours ago | parent | next [-] | | Should ontogeny recapitulate phylogeny in the trades? I.e., should we teach historical techniques and graduate to modernity? | |
| ▲ | marcus_holmes 3 hours ago | parent | prev | next [-] | | How do they solve this for mechanical engineering? Or is it an ongoing problem? | | |
| ▲ | bitwize 42 minutes ago | parent [-] | | I don't know if they've addressed it. But ~15 years ago, my father was mentoring some college students and noticed that while they had been taught to machine a block (i.e., the rudiments of machining), they had no idea how to design appropriate tolerances for, e.g., a gear, because they hadn't ever made anything that complicated. So it was an issue back then. Presumably these details are learned on the job through trial and error, or by oversight from a more senior engineer who understands the requirements better. But in the past it was understood you'd start learning them from actually building parts as part of the curriculum. |
| |
| ▲ | hahn-kev 3 hours ago | parent | prev | next [-] | | Vertical integration is valuable at many different scales | |
| ▲ | dboreham 3 hours ago | parent | prev [-] | | Surely they're made on CNC machines now? (well, since the 1970s) | | |
| ▲ | Kirby64 3 hours ago | parent | next [-] | | Doesn't matter: you can still ask for features that are physically impossible to cut, since the design tools will technically let you. Or you ask for a feature that adds multiple setups to an otherwise simple part and makes it wildly expensive. | |
| ▲ | analog31 3 hours ago | parent | prev | next [-] | | The CNC doesn't know either. What usually happens is that an engineer at the CNC shop figures it out for you. Knowing some machining still lets you design parts and assemblies that are some combination of cheaper, better, etc. This is noticeable with precision or high-performance assemblies, and in how many revisions are needed. | |
| ▲ | bitwize 37 minutes ago | parent | prev [-] | | Lathe, mill, CNC, or matter transmuter, doesn't matter. Effective design only becomes possible with intimate knowledge of how it is built. |
|
|
|
| ▲ | jbethune an hour ago | parent | prev | next [-] |
| He's speaking facts. I have had the same concern about excessive cognitive offloading. As others have noted in the comments, AI tools, when used correctly, can actually make us smarter and help us learn faster. But that requires a very particular usage pattern, different from what I see with all the vibe coding going on these days. I created a project called Ninchi to force myself to read my code and understand it. Recently I also began sharing it, to see if there may be a larger need/opportunity. It's a small effort. We need to make a variety of efforts, I think, to encourage responsible AI usage before we end up drowning in slop. |
|
| ▲ | hsuduebc2 2 hours ago | parent | prev | next [-] |
| The only way I've found to cope with this is to grind LeetCode or Advent of Code. It's kinda funny how fast this all changed. The less funny part is that I'm now kinda afraid for my job somewhere down the line. |
|
| ▲ | iandanforth an hour ago | parent | prev | next [-] |
| Try this thought experiment. If, in 6 months, the agents were better coders than you are, would this argument still hold? This is a personal thought experiment so think it through for yourself. What would the consequence be if the agents really were better than you and you acknowledged that? The major premise of "It's a trap!" is that it matters if you lose your coding skill. (I'll gloss over general critical thinking and stick with coding for now) However in the world where on any given task it would be done to a higher level of quality and faster if you gave it to the agent, then what are you doing trying to do it yourself? There's plenty of room for that kind of thinking in hobbies, but in the professional world? Maybe you can add some value in code reviews, but you may also be better off never reading the code at all. Maybe the how of coding stops mattering and the what of products needs to be your top concern. I can tell you that the agents that I use today are much better coders than I am in the language we're using. I don't write it at all. I couldn't fizzbuzz in it. But with a small team we are building useful internal tools and features at a breakneck pace. I certainly feel the same feelings of getting dumber and losing my coding chops, but I have to step back and say, could what we've built have been built in 5x the time without agents? And the answer is probably no. The thing I'm mastering now is conjuring software with agents. What lets them rip, what slows them down, where they are today and where they will likely be tomorrow. I can tell you that you should re-invest in small, modular systems, because agents can build modules and greenfield projects instantly. I can tell you that there is a point at which agents fall over completely even on mid-sized projects, but that that point is receding with each new generation of model, and that Codex 5.4 XHigh Fast set to 500K context window is a beast. (5.5 has yet to win me over) I can tell you that pushing direct to main is viable, that PRs slow down fully agentic teams, and if your agents have sufficient permissions they can fix things fast enough to be let loose even knowing they may delete your service. I wouldn't do it with your main product yet (unless you're starting your startup today) and I wouldn't try it with a large legacy project. But maybe that rewrite you've always wanted to do is here and just a prompt away. Now, the sane among you will note that agents are not better today, that they might not ever be, and either way you should never trust a computer to make a decision because it can't suffer the consequences of its actions. Or more down to earth, there are some things that are too important to yolo. But I will argue that a huge swath of us work in domains where if you're willing to challenge some of the basic assumptions of software development (you should understand the code, it should be maintainable by humans, it should be built to last) then you'll be able to provide very useful software much more quickly than you would otherwise be able to do. Save the skill for your hobbies, and build things people want. |
|
| ▲ | threethirtytwo 33 minutes ago | parent | prev | next [-] |
| I think AI will evolve to the point where it produces working, bug-free code. But that code won't necessarily be all that readable, clean, or modular. In the future, the complexity or how "bad" the code is won't matter, because the LLM will deal with the complexity and clean up the messes automatically. Your code wasn't modular enough to account for a certain new feature? Well, the LLM will simply make it modular enough. Is the code too hacky to fix a bug? The LLM will make it less hacky if it was too hacky in the first place. OR the LLM can deal with the hackiness. That is the future. Your skills will atrophy the same way humanity's skills with the slide rule have atrophied. I'm going against the grain here, which statistically is more likely to be right, given how wrong HN was about self-driving and about AI being useless for coding. I think HNers, given that their identity is tied up with coding, are of course going to defend that identity till the bitter end, the same way artists did. |
|
| ▲ | komali2 34 minutes ago | parent | prev | next [-] |
| > When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing. This is really validating to read. I recently was having a call with a friend where I was arguing against 100% AI usage, and I was saying, some problems the LLM just can't solve. He asked for an example, and I tried to explain a complex chart I was trying to make at a previous gig, and in the end said "well to be fair neither the AI nor I could figure it out lol." He replied "how could you even code it if you didn't know exactly what you were trying to build? You're supposed to know exactly what you're building before you write a single line of code, that's what they teach you in school." He was poking fun at the fact that I have a boot camp background and he has a uni degree; it's been ten years for both of us now, so he's running out of ways to poke fun at that difference as we even out, but this one poke brought back the old imposter syndrome, since my entire career, I've thought via coding. When I get a ticket, I tend to jump into the codebase to figure out the context I need to know about, the current patterns, what files I'll need to worry about; and while I'm there, I tend to start writing some things, and as I do that I pull in a shared function, and in doing so just check out of curiosity where else the function is used, and in doing so discover oh, actually, we have similar functionality elsewhere, lemme just abstract this work for this ticket and the previous functionality into a shared function, and use it in both places. And so on. Before I know it, I'm looking back at the ticket checking if I've covered everything, and sending in the PR. I've never had complaints about my productivity (in fact, I'm often lauded for it), so I think it at least hasn't been a process that slows me down long term, even if it's messier. But I had been wondering if it makes me less than a "real" engineer. I'm happy to hear others may be doing it this way too. |
|
| ▲ | jdw64 3 hours ago | parent | prev | next [-] |
| How can we solve this at a more fundamental level? I think many people already recognize the problem: - “Our ability to write code is being damaged.”
- “If our ability to write code declines, our ability to recognize good code also declines.” But the problem is that the market no longer works without LLMs. Freelance rates and deadlines are now calibrated around LLM-assisted output. Even clients who write “do not vibe code” often set deadlines that are impossible to meet unless you use something like vibe coding. The client’s expectations themselves are becoming abnormal. That is the irony of the market. I honestly do not know what to do. Recent Hacker News discussions are mostly a negative echo chamber about AI use. In other places, it is often the opposite: only a positive echo. But almost nobody discusses the actual solution. The main topics I keep seeing are roughly these: 1. Is the large-repository PR system failing a fundamental stress test? Or should AI-generated code simply not be merged? If PR review is moving from handmade production to mass production, how should the PR system change? Or should it remain the same? 2. As vendor lock-in continues, can we move toward local LLMs to escape it? Are cost and harness design manageable? What level of local model is required to reach a similar coding speed? 3. If we are forced to use agentic coding, how do we avoid damaging our own ability to code?
There is a passage from Christopher Alexander that I keep thinking about: “A whole academic field has grown up around the idea of ‘design methods’—and I have been hailed as one of the leading exponents of these so-called design methods. I am very sorry that this has happened, and want to state, publicly, that I reject the whole idea of design methods as a subject of study, since I think it is absurd to separate the study of designing from the practice of design. In fact, people who study design methods without also practicing design are almost always frustrated designers who have no sap in them, who have lost, or never had, the urge to shape things.”
— Christopher Alexander, 1971 This quote feels relevant to programming now.
If we separate the study and supervision of programming from the actual practice of making, something important may be lost. In architecture, there is this idea that without practice, the architect loses meaning. But now the market is forcing the separation. People with enough symbolic capital and high status have the freedom not to use AI. But people lower in the market are under pressure to use it. So I think the discussion now needs to move beyond whether AI coding is good or bad. The real question is: how do we keep using AI, because the market demands it, while still preserving the human practice that makes programming meaningful and keeps our judgment alive? These, I think, are the important questions.
How do you maintain market value without using AI? Or, if you do use AI, how do you avoid being treated as low-quality? If you do not use AI, how can you remain more competitive than people who do use it? If you do use AI, what advantage do you have over people who do not use it, and how should you position yourself? I know that agentic coding can cause skill degradation. I can feel it happening to me already. But for someone like me, who does not have strong status, credentials, or symbolic capital, social and market pressure makes AI almost unavoidable. What frustrates me is that I do not see practical answers anywhere. |
| |
| ▲ | hunterpayne 3 hours ago | parent | next [-] | | "How can we solve this at a more fundamental level?" Stop using AI for coding. Period... there is no other solution. You can't make it work; nobody else can either. Without determinism, the entire process is useless. We need to stop acting like we don't all know this is true. We have given it a chance, it failed; time to move on to something else, no matter how much the VCs and execs don't want to. Those that do move on have a chance; the others have no future in software. | |
| ▲ | Saline9515 2 hours ago | parent | next [-] | | The issue is that you will end up without a job if the trend continues. It's similar to many cases of technical innovation: you can still have a few workers doing handcrafted work, but most of them have to use the machines, which may produce work of inferior quality but at much higher speed. The market realigns, and unless you handwrite the highest possible quality at a quick pace, you won't be competitive with the vibe-coders who can fix a hundred issues a month. It was the same with GPS-assisted driving: now most people can't orient themselves autonomously. Worse, there are no road signs with directions installed, meaning that you are stuck with using the GPS. | |
| ▲ | hunterpayne 2 hours ago | parent [-] | | "unless you handwrite the highest possible quality at a quick pace" That's exactly what I do. I know I am lucky to be gifted in this skillset. But that's not a good reason to excuse people destroying the market for everyone. |
| |
| ▲ | jdw64 2 hours ago | parent | prev [-] | | I agree with what you are saying, but if I cannot get work, I may literally have nothing to eat tomorrow. So while I agree with your point, it does not feel like a practical answer for my situation. For someone who is already well known and has enough reputation, refusing to use AI may be a matter of principle. But I am dealing with survival. I do not think your answer is bad. But because this is a survival problem, it is difficult for me to risk everything on principle. In other words, I know that your answer may be the morally correct one. If everyone boycotted this, perhaps it would not be adopted so aggressively. But I cannot do that. What I need is a way to use AI while degrading my own ability as little as possible, and while still preserving my skills. I am not saying you are wrong. I am saying that your answer is too idealistic for someone in my position. | | |
| ▲ | hunterpayne 2 hours ago | parent [-] | | I'm not being idealistic. I'm being very practical. You have the survival problem exactly backwards. Continuing to use it, that's the real danger in a practical sense. That only leads one direction and that direction isn't in your best interest. |
|
| |
| ▲ | tap-snap-or-nap 2 hours ago | parent | prev [-] | | I use AI mainly to improve the parts where I know I am pretty bad and need help, and I restrict it as I gain competency. That means choosing to say no to copy-pasting most things I wish to exercise my brain on, and instead reading, understanding, processing, and writing them more diligently. Treat the brain like any other muscle that requires a gym: we have cars and Segways that can carry us long distances quickly, but as a species we still need to walk and run. People with disabilities and from underprivileged backgrounds also need these tools, so it is a good option to have generally. Secondly, we are human and we have made very good tools; we need to stop obsessing over being more productive in output and learn to be better, more considerate, and more respectful to each other and to other species, to make this planet a generally better world. It is the only one we've got. At the same time, we should be more mindful about where and how we utilise our time and attention; they are very limited and the most valuable resources we can possibly possess. |
|
|
| ▲ | threethirtytwo an hour ago | parent | prev | next [-] |
| Same thing happened with assembly language. |
|
| ▲ | oliv__ an hour ago | parent | prev | next [-] |
| I think the best way to go about this is to start with a manually coded codebase outlining the basic structure of your app (even if ported from some other project), so that you basically define the code "palette" and THEN use AI to add features / edit stuff. It won't do everything exactly the way you would've coded it, but I find this model much better at setting and maintaining "guardrails" for your codebase, so you don't find yourself wondering how it all fits together. |
|
| ▲ | luxuryballs an hour ago | parent | prev | next [-] |
| If you just scale it back a little bit so you’re having the agent write methods, services, tests, scaffolding, etc., and keep it concise, you can get a lot of productivity gain without giving up your control of the codebase. It feels like some developers are leaning too far into the "vibe coding," but I was getting a lot of accelerated development years ago when I was still just asking the chat window for code; there is def a sort of laziness trap. |
|
| ▲ | everyone 2 hours ago | parent | prev | next [-] |
| I'm seeing the word "agentic" a lot here. Is there a difference between "Agentic Coding" and "I put a prompt into GPT or Claude and pasted the code into my file"? |
| |
| ▲ | Supermancho an hour ago | parent | next [-] | | It's a lot different than interacting with the webpage prompts. Running a client locally that can interact with your IDE, execute your test and build processes, interact with version control, write the files it suggests as a PR, and retain context memory changes how you code, for sure. If you've used an LLM with context memory (e.g., ChatGPT Plus), where it can infer things you mention or derive intent from conversations from weeks ago, it gets eerie. | |
| ▲ | marcosdumay an hour ago | parent | prev | next [-] | | Agents will read all (or some, if you set it up that way) of your code and apply the generated changes directly to as many files as needed. They can also get information from other services you run locally or execute shell commands (like tests, or git) and use the results, if you configure them to. It's quite different. | |
| ▲ | ex-aws-dude 2 hours ago | parent | prev | next [-] | | Yes, the agentic tools are much much better because they can gather their own context automatically and run feedback loops to self-correct errors. | |
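A minimal sketch of that loop, with entirely hypothetical names standing in for the model call and the harness plumbing rather than any particular tool's API:

    # Hypothetical agentic loop: propose a change, apply it, run the tests,
    # and feed failures back into the next prompt until green or out of tries.
    import subprocess

    MAX_ATTEMPTS = 5

    def run_tests() -> tuple[bool, str]:
        """Run the project's test suite; return (passed, combined output)."""
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def agent_loop(task: str, gather_context, propose_patch, apply_patch) -> bool:
        """gather_context, propose_patch, and apply_patch are injected stand-ins."""
        context = gather_context(task)   # the agent reads files/runs grep itself
        feedback = ""
        for _ in range(MAX_ATTEMPTS):
            patch = propose_patch(task, context, feedback)  # the LLM call
            apply_patch(patch)           # the harness edits files directly
            passed, output = run_tests()
            if passed:
                return True              # self-corrected to a green build
            feedback = output            # failures go back into the prompt
        return False                     # give up; hand back to the human

Pasting from a chat window gives you one turn of this loop, done by hand; an agent runs all of it unattended.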
| ▲ | bigstrat2003 an hour ago | parent | prev [-] | | Yeah, one sounds cooler. It's all just hype and vibes, no substance. |
|
|
| ▲ | slopinthebag 2 hours ago | parent | prev | next [-] |
| I think, ignoring all else, generating code is not a new layer of abstraction. It's the same abstraction; we just have codegen machines now. The same skills are important regardless of whether the person is typing the code or a machine is producing it. |
|
| ▲ | EGreg 2 hours ago | parent | prev | next [-] |
| Agents are a first-generation technology. They propose and act at the same time.
I recommend you read https://safebots.ai/agents.html |
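For what separating the two might look like, a purely illustrative sketch (not taken from the linked page): the agent returns a deferred proposal, and nothing runs until a reviewer approves it.

    # Illustrative only: an approval gate between an agent's proposal and its action.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Proposal:
        description: str            # human-readable summary of the intent
        action: Callable[[], None]  # deferred effect; nothing runs yet

    def review_and_run(proposal: Proposal, approve: Callable[[str], bool]) -> bool:
        """Execute the action only if the reviewer approves the description."""
        if approve(proposal.description):
            proposal.action()
            return True
        return False

    # Usage: a human (or a policy function) is plugged in as the approve hook.
    p = Proposal("delete build artifacts in ./dist",
                 lambda: print("removing dist/ ..."))
    review_and_run(p, approve=lambda d: input(f"OK to {d}? [y/N] ") == "y")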
| |
|
| ▲ | phendrenad2 an hour ago | parent | prev [-] |
| > This is the sentiment being hyped up around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan, and disconnect from writing any code Agentic agile > agentic waterfall (at least for now) Don't give the AI a spec, work with it every step of the way. > pulls the slot machine lever over and over (link to "One More Prompt: The Dopamine Trap of Agentic Coding") I'm sure the first cave-person to discover how to make fire was equally "addicted" to making fires. That doesn't really say anything about the underlying technology. > An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism I don't know what this means, exactly. Anyone have any ideas? > Atrophying skills for a wide swath of the population This is very real and something we're going to have to contend with. Software can't really become less complex, and there's a minimum amount of knowledge you need, with or without AIs there to help you. We may need specialized training academies for developers where they spend a few years without AI to learn to program, and then are given a few years of AI programming. > Vendor lock-in for individuals and entire teams This isn't really a big problem; you can always switch AI providers if there's frequent downtime. > only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem Agreed... > Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively. ...well, yes and no. AI tooling can help you _reduce_ cognitive debt. Picture this: There is one senior developer (Person A) on the team who understands Service X. Your other developers could schedule time with Person A to get an understanding. Or, they could ask the AI to analyze the project and explain it to them. This scales much better, and if Person A is a poor communicator (let's face it, many senior engineers are), it might be the only working option. |