| ▲ | fnordpiglet 4 hours ago |
| Interestingly, I’ve learned more about the languages, systems, and tools I use in the last few years working with agentic coding than I did in 35 years of artisanal programming. I am still vastly superior at making decisions about systems and techniques and approaches than the agentic tools, but they are like a really, really well-read intern who knows a great deal of detail about errata but has very little experience. They enthusiastically make mistakes but take feedback - at least up front - even if they often forget, because they don’t totally understand it and haven’t internalized it. The claim that you should know everything about everything you work on is an intensely naive one. If you’ve worked on a team of more than one, there’s a lot of stuff you don’t totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you’re lucky if you even understand the parts everyone considers you an expert in. I often get the impression that folks making these claims are either very junior themselves, or work basically alone, or have worked on one project for 20 years. No one who works in a team or larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people’s code for most of my adult life, I absolutely can read the LLM’s. The fact that a machine wrote crappy code vs a human bothers me not in the least, and at least the machine will take my feedback and act on it. |
|
| ▲ | byzantinegene 3 hours ago | parent | next [-] |
| You have 35 years of experience and have already built up the learning capability and general framework to acquire new knowledge. You know how to use agentic coding as a tool to supplement your work. The juniors who start today don't have that; they over-rely on agentic coding and do not know what they don't know. |
| |
| ▲ | throwaway041207 2 hours ago | parent | next [-] | | IMO, by the time today's juniors would have 5-10 years of expected experience, the entire field will be something different altogether. Language choice distribution will collapse (if not change altogether); whole new modalities of monitoring and progressive-delivery guardrails will come into play, essentially creating a 24/7 incremental rollout of pure agentic code; correctness will be determined by a mix of language features, self-monitoring by models in production, and automated testing against production snapshots in pre-production; and deep debugging will be the province of a select group of engineers. There will be a pathway to those roles for juniors, but they will be coveted and difficult to break into (and will probably require education and maybe even informal accreditation). | |
| ▲ | ookblah 2 hours ago | parent | prev | next [-] | | Someone probably made this same argument against certain frameworks over the years, and juniors still figured it out. We need to stop trying to babysit learning for hypothetical situations. The bar to "start" is lower and the bar to actual competency is higher now; juniors who want to actually learn, instead of just pressing enter over and over again, will do so regardless of whatever you do to "help" them. | | |
| ▲ | SpicyLemonZest 2 hours ago | parent [-] | | It's not really a hypothetical. I work with one junior who's submitted an incorrect bugfix 3 times and counting; he seems genuinely incapable of processing the idea that there's a correctness issue he has to resolve, rather than a prompt engineering issue that will allow Claude to figure it out if only he asks in the right way. | | |
| ▲ | jfreds an hour ago | parent | next [-] | | To be fair this was a thing before AI as well… | |
| ▲ | ookblah an hour ago | parent | prev [-] | | That's not the tooling's fault, I feel. I've used LLMs to help explore and debug issues, point me to the right documentation to investigate, etc. I WISH I had something like this 30 years ago. |
|
| |
| ▲ | CGamesPlay 2 hours ago | parent | prev | next [-] | | Exactly this. We need to be more precise than blanket statements like "agentic coding is a trap" and start figuring out what a "tasteful" application of agentic coding looks like. ChatGPT is destroying liberal arts curriculums because students can choose not to do any of the thinking themselves and produce mediocre work that passes the bar. I think the same problem is showing itself with agentic coding, just with more directly measurable consequences (because the pile of software ends up failing in a more spectacular way than the pile of bad writing). | | |
| ▲ | hibikir 2 hours ago | parent [-] | | With liberal arts, it's simply a matter of what the students want to get out of the class vs what the teacher wants the students to do: there's a huge disconnect in goals and expectations, so there's no way for the teacher to actually win. The fact that there's such a disconnect should give the departments pause. This doesn't happen at all with agentic coding: what the programmer wants and what the boss wants are pretty well aligned. There are corner cases where someone isn't allowed to use LLMs but does it anyway, but in most cases the organization agrees. |
| |
| ▲ | bhagyeshsp 2 hours ago | parent | prev | next [-] | | Self-taught "junior" here. Due to my limited English for most of my adult life, I struggled to code. I used visual coding etc., but of course I can't make a living on a drag-and-drop harness. Then came GPT-3.5, which accelerated my learning. Now I'm running my incorporated company and have just launched one software-hardware hybrid product. The second is a micro-SaaS in closed beta. The point is: when people treat "juniors" as fixed-shape blobs of matter, they focus on the juniors that were going to make mistakes in any case, AI or not. That misses the key point of agentic usage. | | |
| ▲ | sterlind 2 hours ago | parent [-] | | accelerated what learning? learning to code? learning to engineer? learning to manage? learning to market? | | |
| ▲ | bhagyeshsp an hour ago | parent [-] | | Learning the fundamentals of programming and their translation to code. I'm decent at engineering, managing and marketing solutions. |
|
| |
| ▲ | danenania 2 hours ago | parent | prev | next [-] | | If a junior builds something with agents that turns into a mess they can’t debug, that will teach them something. If they care about getting better, they will learn to understand why that happened and how to avoid it next time. It’s not all that different than writing code directly and having it turn into a mess they can’t debug—something we all did when we were learning to program. It is in many ways far easier to write robust, modular, and secure software with agents than by hand, because it’s now so easy to refactor and write extensive tests. There is nothing magical about coding by hand that makes it the only way to learn the principles of software design. You can learn through working with agents too. | | |
| ▲ | FridgeSeal an hour ago | parent | next [-] | | > that will teach them something. If they care about getting better, This pre-supposes the idea that the business is _willing_ to let that happen, which is increasingly unlikely. The current, widespread attitude amongst stakeholders is “who cares, get the model to fix it and move on”. At least, when we wrote code by hand, needing to fix things by hand was a forcing function: one that now, from the business perspective, no longer exists. | | |
| ▲ | danenania an hour ago | parent [-] | | If it’s broken and the dev can’t debug it, the business won’t have much of a choice. |
| |
| ▲ | wiieee 2 hours ago | parent | prev [-] | | “Currently an engineer at OpenAI” Don’t forget to mention that. |
| |
| ▲ | echelon 2 hours ago | parent | prev | next [-] | | > the juniors who start today don't have that, they overrely on agentic coding and do not know what they don't know Y'all need to stop worrying about the kids. They're smarter than us and will run circles around us. They're going to look at us like dinosaurs, and they're going to solve problems of scale and scope 10x or more beyond what we ever did. Hate to "old man yells at cloud" this, but so many people are falling into this trap because of personal biases. While the fear that "smartphones might make kids less computer literate" has come true, that's because PCs are not as necessary as they once were. The kids that turn into engineers are fine and every bit as capable. | |
| ▲ | jachauhan 2 hours ago | parent | prev [-] | | [flagged] |
|
|
| ▲ | jmuguy 4 hours ago | parent | prev | next [-] |
| This post does not make the claim that "you should know everything about everything you work on" - it's making the claim that writing code and being able to read code effectively are intrinsically linked. |
| |
| ▲ | ray_v 3 hours ago | parent [-] | | I wonder if it's not so much the coding that people don't want to write, but it's more about the weight of all the orchestration, data engineering and research that has to be done (or, understood in the first place) to get anything off the ground these days. It feels off the charts complicated, and of course is now shifting rapidly. |
|
|
| ▲ | grogenaut 3 hours ago | parent | prev | next [-] |
| Agreed. I don't know anything about turning sand into transistors, or assembly, but I do well anyway. So I don't know my full stack either. What's important is not being afraid to learn the rest of your system and keeping an index. Most importantly, it's about being able to spin up on anything quickly. That's how you have wide reach: digging in when you have to, gliding high when you can. The appropriate level for the problem at hand. When I was in college eons ago, they taught CS folks all of engineering. "When do I need to know chem-e or analog control systems?" we asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base." That holds even within just large code bases. |
|
| ▲ | catlifeonmars 2 hours ago | parent | prev | next [-] |
| > The claim you should know everything about everything you work on is an intensely naive one. I disagree with this take. Personally, I pride myself on learning the code bases I work on in detail, sometimes better than the leads for those code bases. I’m not saying that everyone should do so, but it’s achievable and not naive at all. |
| |
| ▲ | fnordpiglet an hour ago | parent [-] | | Knowing it better than the leads isn’t that hard - they spend most of their life in meetings and teaching people how to think. Knowing the code base in detail is important - but I’m certain that unless you wrote it all, there are parts you don’t know. I’m sure what you do is build enough scaffold understanding, and enough depth in the core parts, that you can visit any part and understand it. But I’m also certain there are parts whose details you are unaware of on pure recall. Someone else wrote them, you haven’t had to read them yet, and thus they’re a black box. Either that, or your code base is quite small relative to the team size, or the team is very unproductive. The supposition that one person is fully aware of any growing code base built by a team or organization - or a monorepo being built by 10,000 developers over 15 years - is prideful. A lot of it works because it works, and you accept that unless you need to inspect a part because it’s not working. Whether a machine wrote it or an intern did 10 years ago, it’s a black box until it has to not be. | | |
| ▲ | wiieee an hour ago | parent [-] | | The question is whether it's easier to untangle LLM code or human code. | | |
| ▲ | bhagyeshsp an hour ago | parent [-] | | The answer is similar to: whose code is easier to untangle: human-A or human-B? "Depends". |
|
|
|
|
| ▲ | girvo 3 hours ago | parent | prev | next [-] |
| > The claim you should know everything about everything you work on is an intensely naive one Nothing in the article made that claim. |
|
| ▲ | crjohns648 3 hours ago | parent | prev | next [-] |
| I have also seen the learning acceleration: there's a significantly larger set of techniques and technologies I have learned how to apply. From a personal perspective, though, I'm apprehensive about the effect AI will have on the human "very well read intern." People who know a lot very deeply about specific areas are fascinating to talk to, but now almost everyone is able to at least emulate deep knowledge about an area through the use of AI. The productivity is there, but the human connection is missing. |
|
| ▲ | i_love_retros 3 hours ago | parent | prev | next [-] |
| I think it's important to at least have a mental model of code you directly commit to the codebase, and that doesn't happen if it was written by an agent. |
|
| ▲ | beepbooptheory 3 hours ago | parent | prev [-] |
| "Hey! Just popping in to say that agentic coding is actually pretty great and is making me better in all the ways; but also want to say at the same time that it's actually not all that different from anything else, so we can chalk up any critique of it to individual naivety and bias." |