| ▲ | Waterluvian 5 hours ago |
| Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem. Some companies comprehend how short-sighted this is and invest in professional development in one way or another. They want better engineers so that their operations run better. It's an investment and arguably a smart one. Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away. Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home. |
|
| ▲ | pfisherman 4 hours ago | parent | next [-] |
This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents? I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it reflected a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won”, and using them effectively required the development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed, not so much. Edit: as an aside, I have learned plenty from reviewing coding-agent-generated implementations of various algorithms or methods. |
| |
| ▲ | MetaWhirledPeas 3 hours ago | parent | next [-] | | > what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents? Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while “able to use coding agents” will be the new “able to use Excel”. What will remain are the things that already differentiate a good developer from a bad one: - Able to review the output of coding agents - Able to guide the architecture of an application - Able to guide the architecture of a system - Able to minimize vulnerabilities - Able to ensure test quality - Able to interpret business needs - Able to communicate with stakeholders | | |
| ▲ | rkapsoro 3 hours ago | parent | next [-] | | I think you're agreeing with him. All of the things you just listed are key senior developer skills. | |
| ▲ | SoftTalker an hour ago | parent | prev | next [-] | | None of those things will be necessary if progress continues as it has. The AI will do all of that. In fact it will generate software that uses already-proven architectures (instead of inventing new ones for every project as human developers like to do). The testing has already been done: they work. There are no vulnerabilities. They are able to communicate with stakeholders (management) using their native language, not the technobabble that human developers like to use, so they understand the business needs natively. | | |
| ▲ | array_key_first an hour ago | parent | next [-] | | If this is the case then none of us will have jobs; we will be completely useless. I think, most likely, you'll still need developers in the mix to make sure the development is going right. You can't just have only business people, because they have no way to gauge if the AI is making the right decisions in regards to technical requirements. So even if the AI DOES get as good as you're saying, they wouldn't know that without developers. | |
| ▲ | bobthepanda an hour ago | parent | prev | next [-] | | Humans are still in the loop as the final signoff responsible for liability, and to do an audit you’ll need someone who knows what they’re looking at. | | | |
| ▲ | lelandbatey an hour ago | parent | prev [-] | | > They work For some definition of work, yes, not every definition. Their product is not without flaw, leaving room for improvement, and room for improvement by more than only other AI. > There are no vulnerabilities That's just not true. There are loads of vulnerabilities, just as there are plenty of vulnerabilities in human-written code. Try it: point a vuln-hunting AI at the output of an AI that's been through the highest-intensity, highest-scrutiny workflow, even code that has already been AI-reviewed for vulnerabilities. |
| |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | jnovek 3 hours ago | parent | prev [-] | | > Able to review the code output of coding agents That probably won’t be necessary in a few years. | | |
| ▲ | circlefavshape 3 hours ago | parent | next [-] | | It's necessary for devs right now, no matter how good they are, and it's those devs' code the models are trained on | | |
| ▲ | prewett 2 hours ago | parent [-] | | Even worse, the training set probably includes a lot of code that needed review but didn't get it... | | |
| ▲ | keeda 34 minutes ago | parent [-] | | If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data. Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues. |
|
| |
| ▲ | rafterydj 3 hours ago | parent | prev | next [-] | | I see this line of thought put out there many times, and I've been thinking: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society? I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry. | | |
| ▲ | ndriscoll 3 hours ago | parent [-] | | You could e.g. write specs and only review high level types plus have deterministic validation that no type escapes/"unsafe" hatches were used, or instruct another agent to create adversarial blackbox attempts to break functionality of the primary artifact (which is really just to say "perform QA"). As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant. | | |
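The "deterministic validation that no type escapes/unsafe hatches were used" idea above can be sketched as a plain pattern scan over agent-generated code. This is a minimal illustration, not anything from the thread: the `ESCAPE_HATCHES` list is a hypothetical policy, and a real setup would likely hook a type checker or linter instead.

```python
import re

# Hypothetical escape-hatch patterns to reject in agent-generated Python;
# the exact list would depend on your own codebase's rules.
ESCAPE_HATCHES = [
    r"#\s*type:\s*ignore",   # silencing the type checker
    r"\bcast\(",             # typing.cast, bypasses inference
    r"\bAny\b",              # the catch-all type
    r"\bgetattr\(",          # dynamic attribute access evades types
]

def find_escape_hatches(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern) pairs for every escape hatch found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in ESCAPE_HATCHES:
            if re.search(pattern, line):
                hits.append((lineno, pattern))
    return hits

generated = "def f(x: int) -> int:\n    return x  # type: ignore\n"
print(find_escape_hatches(generated))  # reports the hatch on line 2
```

A check like this can run in CI as a hard gate, so the human review budget goes to the spec and the high-level types rather than every generated line.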
| ▲ | surajrmal 2 hours ago | parent | next [-] | | Code quality will impact the effectiveness of AI. Less code to read and change in subsequent changes is still useful. There was a period where I became more of a paper architect and stopped coding, and I realized I wasn't able to do sufficient code reviews anymore because I lacked context. I went back into the code at some point, realized the mess my team was making, and spent a long while cleaning it up. This improved the productivity of everyone involved. I expect AI to fall into a similar predicament. Without first-hand knowledge of the implementation details we won't know about the problems we need to tell the AI to address. There are also many systems which are constrained in terms of memory and compute, and more code likely puts you up against those limits. | | |
| ▲ | ndriscoll an hour ago | parent [-] | | I don't disagree that code quality is currently more important than it's ever been (to get the most out of the tools). I expect that quality will increase though as people refine either training or instructions. I was able to get much better (well factored, aligned to business logic) output that I'm generally happy-ish with a couple months ago with some coding guidelines I wrote. It's possible that newer models don't even need that, but they work well enough with it that I haven't touched those instructions since. |
| |
| ▲ | rafterydj 2 hours ago | parent | prev [-] | | I mean, sure, for programming macros. Or programming quick scripts, or type-safe or memory-safe programs. Or web frontends, or a11y, or whatever tasks for which people are using AI. But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful. When you stop being specific about what the AI is doing and switch to the general case, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say that details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking lies. |
|
| |
| ▲ | falkensmaize 3 hours ago | parent | prev [-] | | They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works. I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction. | | |
| ▲ | stevepotter 2 hours ago | parent [-] | | I keep hearing that they "aren't intelligent" and "spit out crap code". That's not been my experience. LLMs have prevented and also caught intricate concurrency issues that would have taken me a long time. I just went "hmmm, nice" and moved on. The problem there is that I didn't get that sense of accomplishment I crave, and I really didn't learn anything. Those are "me" problems, but I think programmers are collectively grappling with this. |
|
|
| |
| ▲ | rekrsiv 3 hours ago | parent | prev | next [-] | | The endgame in programming is reducing complexity before the codebase becomes impossible to reason about. This is not a solved problem, and most codebases the LLMs were trained on are either just before that phase transition or well past it. Complexity is not just a matter of reducing the complexity of the code, it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done during a frank discussion with stakeholders. A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet. | | |
| ▲ | trollbridge 3 hours ago | parent [-] | | No kidding. So far the complexity introduced by LLM-generated code in my current codebase has taken far more time to deal with than the hand-written code. Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult. | | |
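The "silo" idea described above — LLM-generated code behind a well-defined interface so it can be thrown away and regenerated — can be sketched with a structural interface. The names here (`InvoiceRenderer` and friends) are invented for illustration, not from the commenter's actual codebase:

```python
from typing import Protocol

# Hypothetical boundary: callers depend only on this interface, never on
# the generated module's internals, so the implementation can be thrown
# away and regenerated without touching the rest of the codebase.
class InvoiceRenderer(Protocol):
    def render(self, invoice_id: str) -> str: ...

# An LLM-generated implementation lives in its own silo; it only has to
# satisfy the interface, and hand-written code never imports its guts.
class GeneratedRenderer:
    def render(self, invoice_id: str) -> str:
        return f"<invoice id='{invoice_id}'>"

def print_invoice(renderer: InvoiceRenderer, invoice_id: str) -> str:
    # Hand-written caller: it is written against the Protocol, not the silo.
    return renderer.render(invoice_id)

print(print_invoice(GeneratedRenderer(), "A-1001"))
```

The design point is that regenerating `GeneratedRenderer` from scratch is safe as long as it still satisfies the Protocol, which is exactly the "disposable implementation" property the comment is after.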
| ▲ | tpdly an hour ago | parent [-] | | Yeah, same. I like the silo idea, I'll have to explore that. I'm relieved to hear this because the LLM hype in this thread is seriously disorienting. Deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system is quite unconvinced though, killing me. |
|
| |
| ▲ | dspillett 3 hours ago | parent | prev | next [-] | | > This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents? If it does go as far in that direction as many seem to expect (or, indeed, want), then most people will be able to do it, there will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum-wage job, or so close to that it'll make no odds. If I'm earning minimum wage it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all, I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary; if the salary goes because “AI” turns it into a race-to-the-bottom job then I'm off. Conversely: if that doesn't happen then I can continue to do what I want, which is program, and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish. | |
| ▲ | falkensmaize 3 hours ago | parent [-] | | What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable. No one will be eager to employ “ai-natives” who don’t understand what the llm is pumping out, they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants, they’ll hire fewer seasoned accountants who can confidently review llm output. | | |
| ▲ | ArnoVW an hour ago | parent [-] | | And those that do have not yet understood what will happen when those seasoned workers retire, and there are no juniors or mid that can grow because they have been replaced by AI |
|
| |
| ▲ | bonoboTP 3 hours ago | parent | prev | next [-] | | I also remember a similar wave around 10-15 years ago regarding ML tooling and libraries becoming more accessible, more open source releases etc. People whose value add was knowing MATLAB toolboxes and keeping their code private got very afraid when Python numpy, scikit-learn, Theano etc. came to the forefront. And people started releasing the code with research papers on GitHub. Anyone could just get that working code and start tweaking the equations, putting different tools and techniques together, even if you didn't work in one of those few companies or didn't do an internship at a lab who were in the know. Or other people who just kept their research dataset private and milked it for years, training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit. Usually there are a million little tricks and an oral culture around how to use various datasets, configurations, hyperparameters etc., and papers often only gave away the high-level ideas and math. But when the code started to become open, it freaked out many who felt they wouldn't be able to keep up and just wanted to carry on until retirement by simply guarding their knowledge and skill from becoming too widely known. Many of them were convinced it was going to go away. "Python is just a silly, free language. Serious engineers use Matlab, after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVM which has real math backing it up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.) I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it and build a life on a skill they learned in their youth, and then just do it 9 to 5 and focus on family etc. I get it. But wishing won't make it so. Talented, skilled people with good intuition and judgement will be needed for a long time, but that will still require adapting to changing tools and workflows. And the bulk of the workforce is not that. | |
| ▲ | poody 2 hours ago | parent [-] | | This is so true... I am having issues with the change right now.. being older and trying to incorporate agentic workflow into MY workflow is difficult as I have trust issues with the new codebase.. I do have good people skills with my clients, but my secret sauce was my coding skilz.. and I built my identity around that.. | | |
| ▲ | dgb23 an hour ago | parent [-] | | The cure for me has been to write an agent myself from first principles. Tailored to my workflow, style, goals, projects and as close as possible to what I think is how an agent should work. I’m deliberately only using an existing agent as a rubber duck. It’s a very empowering learning experience. |
|
| |
| ▲ | tonyedgecombe 3 hours ago | parent | prev | next [-] | | Using a coding agent seems quite low skill to me. It’s hard to see it becoming a differentiator. Just look at the number of people who couldn’t code before and are suddenly churning out work to confirm that. | | |
| ▲ | bachmeier 3 hours ago | parent [-] | | > Using a coding agent seems quite low skill to me. I agree if that's all you can do. Using a coding agent to complement a valuable domain-specific skill is gold. | | |
| ▲ | nunez 2 hours ago | parent [-] | | Thus why many technical business-facing people are super excited about AI (at the cost of developers) |
|
| |
| ▲ | veidr 2 hours ago | parent | prev | next [-] | | It absolutely is, but the fundamental misunderstanding around this seems to be that "effectively using coding agents" is a superset of the 2023-era general understanding of "Senior Software Engineer". At least when you're talking about shipping software customers pay for, or debugging it, etc. Research, narrow specializations, etc may be a different category and some will indeed be obsoleted. | |
| ▲ | mcdeltat 4 hours ago | parent | prev | next [-] | | I think your argument is predicated on LLM coding tools providing significant benefit when used effectively. Personally I still think the answer is "not really" if you're doing any kind of interesting work that's not mostly boilerplate code writing all day. | | |
| ▲ | dasil003 4 hours ago | parent | next [-] | | Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job. | |
| ▲ | xeromal 2 hours ago | parent | prev [-] | | How much of software programmer work is interesting? A fraction of a percent? I'd argue most of us including most startups work on things that help make businesses money and that's pretty "boring" work. |
| |
| ▲ | windward 3 hours ago | parent | prev | next [-] | | Many of those skills have temporary value before they're incorporated into the models/harnesses | |
| ▲ | ozozozd 3 hours ago | parent | prev | next [-] | | There was a moment we thought JS had won. And then crypto. I personally believed low-level development was done. | | |
| ▲ | nunez 2 hours ago | parent | next [-] | | Claude Code is written in TypeScript, which compiles to JS, so I think JS _did_ win... | |
| ▲ | underlipton 3 hours ago | parent | prev [-] | | Crypto did win, just not where you're looking. |
| |
| ▲ | MrDarcy 4 hours ago | parent | prev | next [-] | | Not sure why this would catch heat rationally speaking. It is quite clear in a professional setting effective use of coding agents is the most important skill to develop as an individual developer. It’s also the most important capability engineering orgs can be working on developing right now. Software Engineering itself is being disrupted. | |
| ▲ | anticorporate 3 hours ago | parent | prev | next [-] | | I'd offer an edit that the most important skill may be knowing when the agent is wrong. There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work. | |
| ▲ | nunez 2 hours ago | parent | prev | next [-] | | > This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents? Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve. "Effectively" using agents means that you're writing specs and reading code (in batches, through change diffs) instead of writing code directly. This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO). The way that you read code is different with agents as well. Agents can produce a smattering of tests alongside implementation in a single turn. This is usually a lot of code. Thus, instead of red-green-refactor'ing a single change that you can cumulatively map in your head, you're prompt-build-executing entire features all at once and focusing on the result. Code itself loses its importance as a result. See also: projects that are moving towards agentic-first development, using agents for maintenance and PR review. Some maintainers don't even read their codebases anymore. They have no idea what the software is actually doing. Need security? Have an agent that does nothing but security look at it. DevOps? Use a DevOps agent. This isn't too far off from what I was doing as a business analyst a little over 20 years ago (and what some technical product managers do now for spikes/prototypes). I wrote FRDs [^0] describing what the software should do. Architects would create TRDs [^1] from those FRDs. These got sent off to developers to get developed, then to QA to get bugs hammered out, then back to my team for UAT. If agents had existed back then, there would've been way fewer developers/QA in the middle. Architects would probably still do a lot of what they did. I foresee that this is the direction we're heading in, but with agents powered by staff engineers/Enterprise Architects in the middle. > Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods. People learn differently. I (and others) learn from doing. Typing code from Stack Overflow/Expertsexchange/etc. instead of pasting it, then modifying it, is how I learned to code. Some can learn from reading alone. [^0]: https://www.modernanalyst.com/Resources/Articles/tabid/115/I... |
| ▲ | mxkopy 3 hours ago | parent | prev | next [-] | | I don’t think it could be the most important skill to have. The most common, and the most standardized one for sure, but if coding agents are doing fundamental R&D or running ops then nobody needs skills anyway. > As it turns out, neural nets “won” > The people who scoffed at neural nets and never got up to speed not so much. I get the feeling you don’t know what you’re talking about. LLMs are impressive, but what have they “won” exactly? A decade after their debut they still require millions of dollars of infrastructure to run, and we’re really having trouble using them for anything all that serious. Now I’m sure in a few decades’ time this comment will read like silly cynicism, but I bet that will only be after those old-school machine learning losers come back around and start making improvements again. | |
| ▲ | shmerl 2 hours ago | parent | prev [-] | | I'd say viewing it as most important is pretty unprofessional. But isn't it the point of this extreme AI push? To replace professional skills with dummy parrots. |
|
|
| ▲ | simonw 5 hours ago | parent | prev | next [-] |
| > Improving developer skills is not valuable to your company Every company I've ever worked at has genuinely believed in and invested in improving developer skills. |
| |
| ▲ | Supermancho 4 hours ago | parent | next [-] | | I've worked for 35ish companies (contract and fulltime), largely on the west coast of the US. I have experienced the lip service, from the vast majority. I have experienced maybe 2 or 3 earnest attempts at growing engineer skills through subsidized admission/travel to talks, tools, or invited instructors. | | |
| ▲ | tasuki 4 hours ago | parent | next [-] | | > I've worked for 35ish companies It seems they were correct not to invest in your skills. I've worked for six companies over almost 20 years. The majority invested in my skills, and I hope that investment has paid off for them! | | |
| ▲ | dspillett 3 hours ago | parent | next [-] | | I've worked for five companies, on the same products (well, variations there-of over time), for 25 years, due to take-overs (I nearly left ~10 years ago due to management numskullery, but a timely buy-out of the bit I worked for fixed my problems while the rest of the company died off). Hanging around for a while (a long while) doesn't necessarily mean dedication worth investing in, it could just be that I have a shocking lack of ambition :) | |
| ▲ | an hour ago | parent | prev | next [-] | | [deleted] | |
| ▲ | ojbyrne 3 hours ago | parent | prev | next [-] | | Perhaps the lack of investment in their skills was the cause for the commenter’s job hopping, not the effect. | | |
| ▲ | shagie 3 hours ago | parent | next [-] | | Consider the rate of job hopping that would be evident on that resume. I'm not sure how many companies would be willing to invest in sending an FTE who stays somewhere for likely less than a year to a conference, or to say "Ok, you can spend 20% of your time improving your skills." What is more likely with the 35 number is that these are multiple simultaneous contracts. When working as a contractor you're fixing that problem or that project. The company isn't going to have you around for longer than a month after it's been fixed and documented. There's no reason to spend company resources on training a person, any more than there's reason for you to pay a plumber to be reading "learn to be an electrician in 10 days" while they're supposed to be working on fixing the sink or doing the plumbing for new construction. | |
| ▲ | kjksf 2 hours ago | parent | prev [-] | | It's all so vague. "Lack of investment in their skills." You just spent $250k and 5 years in college learning stuff. You get hired to do a job for money. What "investment" do you expect the company to make? Give me a number of weeks and an amount of dollars per year, and tell me how it stacks up against the $250k and 5 years you just spent. If you want to learn on the job, shouldn't YOU be paying the company for teaching you, like you pay a college to teach you? | |
| ▲ | mixmastamyk an hour ago | parent | next [-] | | Continuing education is recognized and required in many fields. | |
| ▲ | rafterydj 2 hours ago | parent | prev [-] | | This argument falls apart if you consider what field we're talking about. At what point would going to school for 5 years give you the whole education you actually needed? Does learning C in 1995-2000 prepare you for Rust in 2026? No, and it shouldn't, but work needs done, so _yes_ there is a dollar amount of value for educating your workforce that has already been vetted and already knows the context for your business goals. Asking what that number is completely misses the point. | | |
| ▲ | ndriscoll an hour ago | parent | next [-] | | Actually I found that if you have a pretty good understanding of the core parts of the C standard (e.g. the idea of the abstract machine, storage durations, unspecified vs undefined behavior, etc.) and working experience with the language, Rust is then quite natural. To first approximation, Rust basically makes lifetime management/ownership semantics that would be "good practice" in C into mandatory parts of the type system. | | |
| ▲ | rafterydj an hour ago | parent [-] | | I agree - I was mostly trying to think of an example against OP's rather facetious attitude towards the time and effort required to maintain engineering performance. In my experience, a lot of the Rust fighting with the borrow checker is really just enforcing better quality code I should've been writing anyway. |
| |
| ▲ | SoftTalker an hour ago | parent | prev [-] | | If all you got out of a Computer Science undergrad program was "learning C" you were severely shortchanged. An 8-week bootcamp could have done that. | | |
| ▲ | rafterydj an hour ago | parent [-] | | Point still stands. You're going to take up the mantle for suggesting a computer science degree from 2000 completely qualifies someone for work in 2026? No further education needed? |
|
|
|
| |
| ▲ | oblio 4 hours ago | parent | prev [-] | | If you include consulting that could easily be 10 companies a year... | | |
| ▲ | lsaferite 3 hours ago | parent | next [-] | | Why would a company you are consulting for invest in training you up exactly? They are paying a consultant with the expectation that they are bringing the knowledge. | | |
| ▲ | 21asdffdsa12 3 hours ago | parent [-] | | Eh, consultants are brought in not for the knowledge or advice! Management already knows what to do and where to go - they just want somebody external to sanctify the decision! |
| |
| ▲ | tasuki 3 hours ago | parent | prev [-] | | Could easily be, yes. And they'd be right not to invest in OP's skills. (To explicitly state the obvious: I'm not saying OP's a bad person for doing this, just saying the employers were right not to invest in them...) |
|
| |
| ▲ | ndriscoll 4 hours ago | parent | prev | next [-] | | What exactly do you have in mind? The large companies I've worked at had book subscriptions, internal training courses, and would pay for school. Personally I don't see the point of any of it. For software engineering, the info you need is all online for free. You can go download e.g. graduate-level CS courses on YouTube. MIT OCW has been around for almost a quarter century now. IME no one's going to stop you from spending a couple hours a week of work time watching lectures (at least if you're fulltime). Now at least at my company, we have unlimited use of codex, which you can ask to explain things to you. I also don't really see how attending conferences relates to skill improvement. Meanwhile, I've been explicitly told by managers that spending half my time mentoring people sounds reasonable. I can't understand what people are looking for when they talk about lack of investment into training for engineers. It's not the kind of job where someone can train you. It's like an executive complaining they aren't trained. You're the one who's supposed to be coming up with answers and making decisions. You need to spend time on self-motivated learning/discovering how to better do your work. Every company I've been at, big or small, assumes that's part of the job. | |
| ▲ | adrianN 27 minutes ago | parent | next [-] | | Putting people on projects they’re only partly qualified for, ideally with mentoring, and letting them learn even though it takes longer than having the mentor do it by themselves. Allowing people to fail and try again without risking their ratings or their career. Book subscriptions and conference travel are quite cheap in comparison. | |
| ▲ | PurpleRamen 3 hours ago | parent | prev [-] | | > For software engineering, the info you need is all online for free. Guided learning with instant feedback can be much more efficient than just consuming and tinkering on your own. Depends on the topic, the teacher and situation of course. The quality of available material is also all over the place, and not every topic has enough material, or anything at all. | | |
| ▲ | ndriscoll 2 hours ago | parent [-] | | For foundational knowledge, there's been high quality information for free from MIT, Harvard, Stanford, Yale, etc. out there for years. Just look there first. If you're beyond that, you're beyond the canon that you can "learn" and closer to needing to follow/participate in SOTA R&D. And if you need a more structured environment, that's why people go to school. Engineering jobs expect you're at the level of someone who's completed undergrad, minimum. Part of an undergrad degree is getting used to seeking out resources yourself and learning from them instead of having a teacher spoon-feed it. Again I just don't have any idea of what training people expect. The entire job is basically "we might have some idea of what we want to do, but no one here knows the details. Go figure it out." What kind of guided learning would you want? How to solve problems? That's what 16 years of school was for! | | |
| ▲ | mixmastamyk an hour ago | parent [-] | | Often doesn’t matter. Fancy degree gets an interview in this job market. Not, “I read a bunch for free.” The explosion of stacks means it’s hard to keep everything in your head at once. Lookup is routine but will sink you as a candidate. Personally not great under the gun in adversarial interviews, so my extensive self learning is not well highlighted. |
|
|
| |
| ▲ | kjksf 2 hours ago | parent | prev | next [-] | | What is your expectation, exactly? In the US you go to college for 4-5 years and pay $50k per year. Or more. You pay to learn. A lot of money, a lot of time. Then you get a job, where the idea is that you get paid for doing work, and you expect the employer to do what? You seem to expect that not only will you not be doing the things you're being paid to do, but that the employer will pay for your education on the company's time. How many weeks per year of time off do you expect to get from a company? You'll either say a reasonable number, like 1 or 2, which is insignificant compared to the time you supposedly spent learning (5 years). You just spent 250 weeks supposedly learning; is 1 or 2 weeks a year supposed to make a difference? Or you'll say an unreasonable number (anything above 2 weeks), because employment is not free education. | |
| ▲ | PurpleRamen 3 hours ago | parent | prev | next [-] | | Care to explain a bit more? With 35 companies, that would be around 1-2 years per company on average if you are retired or near retirement. I doubt any company is seriously investing in a worker who would likely be gone the next year. Getting lip service already seems like a good deal at that point. | | |
| ▲ | Supermancho 43 minutes ago | parent | next [-] | | > I doubt any company is seriously investing in a worker who would likely be gone the next year. There is a mismatch between how you would expect industry to work and what my last 30 years have taught me. > With 35 companies, that would be around 1-2 years per company on average if you are retired or near retirement. I have been at 4 companies for around 2 years or more. The rest of the positions were either contract, startup, or contract-to-hire. The vast majority of engineers seem to settle in and suffer at terrible companies, rather than make moves to better jobs. They also tend to settle at whatever they are assigned and grow their skillsets by their employer's needs, rather than on their own. Over the last 2 decades, if you stayed somewhere for over 2 years, you'd better have added concrete skills to your resume and increased your compensation by over 10%. If that's not on track, look for another job, imo. Contract-to-hire has been very popular, e.g. JPMC, credit, medical, adtech, games, big retail, subcontractor shops, to startups (4 of which were acquired). All initiatives to progress the careers of developers are applied more or less company-wide, because the line between contract-to-hire and fulltime is considered an engineering issue if there is more than one hub. If you are a sole contributor, on some satellite project, or still considered in training, you might not participate due to scheduling that had already been arranged, but the idea that contractors are excluded is more a possibility than a certainty. Most of the initiatives are little more than maybe someone talking with you every quarter, anyway. > Getting lip service seems already good deal at that point. It's strange that people are assuming engineers are treated special because of a resume that nobody looks at after an offer is made (I've conducted hundreds of interviews). This must be a very rare thing some people may do. | |
| ▲ | pc86 3 hours ago | parent | prev [-] | | I mean the comment says "contract" right there; you can easily be on a contract with multiple companies simultaneously. When I was freelancing full-time ca. 2010-2013 or so I often had 5-6 active contracts running simultaneously. I probably worked for 15-20 different companies total in that 3-4 year span. | | |
| ▲ | PurpleRamen 3 hours ago | parent [-] | | Yes, likely, but that makes even less sense, as you can't expect support for education as a freelancer. I mean, a freelancer's whole purpose is to sell skill and be gone when the job is finished. From the beginning you are just an expendable tool they don't want to polish outside the scope of the job. |
|
| |
| ▲ | threetonesun 4 hours ago | parent | prev | next [-] | | These two statements go hand in hand though. While I do believe companies could take the altruistic route of training people whether or not they stay, and some places do, they're certainly not going to make the effort for someone who has clear markers of being someone who will leave anyway. | |
| ▲ | Supermancho 30 minutes ago | parent [-] | | That's not how these initiatives are executed, unless the shop is very small, in which case there's no concrete training offered anyway. If it's large, they'd rather start a new hiring round than allocate much training budget. I would say the lack of on-the-job developer training (or resourcing) is due to multiple factors that produce a consistent pattern, rather than companies specifically targeting individuals. It's not like I don't speak with ex-coworkers or run into them at times - e.g. one guy I taught Java to (at a position where Java wasn't required except for a tiny tool) is the team lead at Blizzard now. If I had been made a pariah, I would have heard about it over the years. |
| |
| ▲ | 4 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | bdangubic 4 hours ago | parent | prev | next [-] | | This percentage is probably right on the money! | |
| ▲ | aduwah 4 hours ago | parent | prev [-] | | Hard same over 20 years |
| |
| ▲ | tonyedgecombe 4 hours ago | parent | prev | next [-] | | Every company I worked for didn’t give a shit about my skills. They just wanted to solve the problem in front of them and if they couldn’t then they would hire someone in with the right skills. Improving my skills was seen as a risk as I might leave. | | | |
| ▲ | Waterluvian 5 hours ago | parent | prev | next [-] | | That’s been my experience, too. But now I get a sort of, “I dunno. Maybe don’t use AI on Fridays?” There doesn’t seem to be a plan for maintaining that culture. | |
| ▲ | jasomill 3 hours ago | parent | prev | next [-] | | Given the rest of the paragraph, I believe the parent is trying to say that merely improving developer skills is not valuable to the company, not that improving developer skills cannot provide value in terms of improved work product, morale, retention, etc. | |
| ▲ | kajaktum 4 hours ago | parent | prev | next [-] | | You must be lucky then. | | |
| ▲ | simonw 2 hours ago | parent [-] | | Realizing now that I've been both lucky and selective - I've always picked the kind of employers where this culture is baked in. |
| |
| ▲ | 01284a7e 4 hours ago | parent | prev [-] | | The opposite is true in my case - though one organization had a small budget for things like AWS certs. I remember that almost everyone who got those certificates never really learned anything from it either. They would just take the exams. |
|
|
| ▲ | KronisLV 3 hours ago | parent | prev | next [-] |
| > Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem. Doesn't credentialism kinda throw a spanner in that - where it's not enough to have people with a good track record of solving issues, but then someone along the way says "Yeah, we'd also like the devs who'll work on the project to have Java certs." (I've done those certs, they're orthogonal to one's ability to produce good software) Might just be govt. projects or particular orgs where such requirements are drawn up by dinosaurs, go figure (as much as I'd love software development to be "real" engineering with best practices spanning decades, it's still the Wild West in many respects). Then again, the same thing more or less applies to security, a lot of it seems like posturing and checklists (like how some years back the status quo was that you'd change your password every 30-90 days because IT said so) instead of the stuff that actually matters. Not to detract from the point too much, but I've very much seen people care less about solving problems and shipping fast than about stuff like that, or about covering their own asses by paying for Oracle support or whatever (even when it gets in the way of actually shipping, like ADF and WebLogic and the horror that is JDeveloper). But yeah, I think many companies out there don't care that much about the individual growth of their employees, unless they have the ability to actually look further into the future - which most don't, given how they prefer not to train junior devs into mid/senior ones over years. |
|
| ▲ | lopsotronic 41 minutes ago | parent | prev | next [-] |
| Pour yourself a drink, as I have a longish story that might be a useful metaphor. Back in the day, there were more or less two consumer flight sims: MS Flight Simulator and XPlane. MSFS was and has always been the much prettier one, much easier to work with; xplane is kludgy, very old-school *NIX, and chonky in terms of resource usage. I was doing some work integrating flight systems data (FDAU/FDR outputs) into a cheaper flight re-creation tool, since the aircraft OEM's tool cost more than my annual salary. Hmm, actually, ten years of my salary. So why use xplane at all, then? The difference was that MSFS flight dynamics was driven from a model using table-based lookup that reproduced performance characteristics for a given airframe, whereas xplane (as you might be able to tell from the company name, Laminar Research) does fluid and gas simulation over the actual skin of the airframe, and then does the physics for the forces and masses and such. I caught some flack for going with xplane: "Why not MSFS!? It's so much prettier!" Unless the airframe is in a state that is near-equivalent with tabular lookup model, the actual flight is not going to be actually re-created. A plane in distress is very often in a boundary state- at best. OR you might be flying a plane that doesn't really have a model, like, say, a brand new planform (like the company was trying to develop). Without the aerodynamic fundamentals, the further away you get from the model represented by the tabular lookups, the greater the risk gets. And how does this relate? Those fundamentals- aerodynamic or mathematical or electrical- will be able to deal with a much broader range than models trained on existing data, regardless of whether or not they are LLMs or tabular lookups. 
If we rely on LLMs for aerodynamics, for chemistry, for electrical engineering, we are setting ourselves up for something like the 2008 Econopalypse except now it affects ALL the physical sciences; a Black Swan event that breaks reality. I am genuinely worried we're working ourselves into just such an event, where the fundamentals are all but forgotten, and a new phenomenon simply breaks the nuts and bolts of the applied sciences. As for my xplane selection, it helped in other ways. Often the FDR data is just plain wrong, but with xplane you could actually tell, because a control surface sticking out one way, while the flight instruments say another, lights up a "YOU GOT PROBLEMS" light in the cockpit as the aircraft inexplicably lurches to the right. |
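The table-lookup vs. first-principles distinction in the story above is easy to see in code. Here is a toy Python sketch of the two styles of model; the numbers and the simplified lift formula are invented for illustration and are not real aerodynamic data or either simulator's actual implementation:

```python
import bisect

# Table-based model (MSFS-style): interpolate the lift coefficient from
# precomputed (angle-of-attack, CL) pairs measured for one airframe.
# Illustrative numbers only -- not real aerodynamic data.
CL_TABLE = [(-5.0, -0.2), (0.0, 0.3), (5.0, 0.8), (10.0, 1.2), (15.0, 1.4)]

def cl_lookup(aoa_deg):
    """Linear interpolation over the table; clamps outside its range,
    i.e. the model is blind beyond the data it was built from."""
    angles = [a for a, _ in CL_TABLE]
    if aoa_deg <= angles[0]:
        return CL_TABLE[0][1]
    if aoa_deg >= angles[-1]:
        return CL_TABLE[-1][1]
    i = bisect.bisect_right(angles, aoa_deg)
    (a0, c0), (a1, c1) = CL_TABLE[i - 1], CL_TABLE[i]
    t = (aoa_deg - a0) / (a1 - a0)
    return c0 + t * (c1 - c0)

# First-principles model (X-Plane-style, vastly simplified): compute lift
# from air density, velocity, wing area, and CL, so behavior can still be
# derived in states the table never covered.
def lift_newtons(rho, v, wing_area, cl):
    # Standard lift equation: L = 0.5 * rho * v^2 * S * CL
    return 0.5 * rho * v ** 2 * wing_area * cl

# Inside the table's range the two approaches agree; past 15 degrees the
# lookup model just flatlines, while a physical model could capture the
# stall -- exactly the boundary-state problem described above.
print(round(cl_lookup(2.5), 2))
print(round(lift_newtons(1.225, 60.0, 16.0, cl_lookup(2.5)), 1))
```

The point of the sketch: `cl_lookup` is only as good as `CL_TABLE`, while `lift_newtons` keeps producing physically meaningful numbers for inputs nobody tabulated.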
| |
|
| ▲ | TomasBM an hour ago | parent | prev | next [-] |
| I agree with the sentiment, but I think the problem is much wider. Managers at companies are just doing what they've optimized their careers for: maintaining some edge over some competition, at some cost. What is pure FOMO to you or me, is good strategy to anyone trying to win [1]. In other words, FOMO was always the strategy. This self-reinforcing loop is also not going away. There hasn't been any real evidence that any part of knowledge work, including coding, cannot be automated [2]. Even if human-level quality or cost-effectiveness takes 10 more years, all tasks are functionally solved or about to be. I don't like it, but it's true. The big problem is that the people who are removed from this loop, who have the time to understand its effects and the power to make changes, are doing fuck-all. So, whether the loop stops for a while or speeds up even more, we're fucked until we figure out how to detach full-time employment from survival. [1] I believe this is called meta in PvP games; even if you want to subvert the meta, you gotta know it well first. [2] Although it could just be my impression, and I'd be happy to be proven otherwise. |
| |
| ▲ | ModernMech an hour ago | parent [-] | | The evidence that software development cannot be automated is that we already tried to do it in the 90s with OOP, UML, and outsourcing. It didn’t work out for the same reasons vibe coding isn’t working out — because building the system is the same as specifying it, and that is a creative iterative process. We are at the point where, sure, AI can write code, but we could always do that; lack of code writing ability was not what killed the OOP automation efforts. There was plenty of ability to code back then as well. The distinction of whether it’s an offshore team in India or Claude writing the code doesn’t change things as far as the larger picture of building the software. |
|
|
| ▲ | v3xro 3 hours ago | parent | prev | next [-] |
| > I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home. It could hardly have been a hobby if people were willing to pay you for it (and good rates too)? I will rephrase it like this - the market has shifted away from providing value to the customers of said companies to pumping itself instead and it does not need to employ people for that. Simple as. |
|
| ▲ | coldtea 3 hours ago | parent | prev | next [-] |
| >Improving developer skills is not valuable to your company What's valuable to a company is not necessarily what's valuable to the customers or even more so, to a civilization at large. |
|
| ▲ | catlifeonmars 4 hours ago | parent | prev | next [-] |
| Maybe I’m just getting extremely lucky, but I don’t use AI to code at work and I’m still keeping up with my peers who are all Clauded up. I do a lot of green field network appliance design and implementation and have not felt really felt the pressure in that space. I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud). |
| |
| ▲ | lioeters 3 hours ago | parent | next [-] | | We're witnessing a divergence between Coders and Clauders, with the latter dominating the market at a lower cost of labor + subscription fee to the almighty AI providers. Coders may be called in, hopefully with better remuneration, to review and debug the massive amount of code being generated. Either that or they will also be replaced by specially trained/prompted language models doing the review. | | |
| ▲ | Izkata 31 minutes ago | parent [-] | | > + subscription With how much some people spend on tokens that they've shared on here, and concerns about raising prices, I've kind of been wondering if we're actually heading to a point where seniors who don't use AI are going to be cheaper than juniors who do. |
| |
| ▲ | Bridged7756 3 hours ago | parent | prev | next [-] | | In the future Claude will keep a tight ship on dissenters. If your monthly quota doesn't exceed the 10k worth of tokens your employer will be notified and you will be flagged as a "dissenter". Your lease will be cancelled, because who would trust someone ignorant enough to not use LLMs in their daily life, and you'll be vetoed from the field for life, for clanker companies will proclaim that anyone who doesn't use LLM-assisted coding should be culled and so they'll run a tight ship. And executives will get millions in bonuses for figuring it out, and the remaining programmers, probably one or two, will raise their necks over who's the best prompter and how everyone else was dumber than them for not figuring it out. | | |
| ▲ | ej88 2 hours ago | parent [-] | | ai skeptic fanfic evolves in fascinating ways every day | | |
| |
| ▲ | bigstrat2003 an hour ago | parent | prev | next [-] | | Yeah, the AI productivity gains are a myth in my experience. | |
| ▲ | jmmv 3 hours ago | parent | prev [-] | | > the generated code just annoys me and the agents are too chatty I’ve eyerolled way less with Codex CLI and the GPT models than with Claude. | | |
|
|
| ▲ | clvx 3 hours ago | parent | prev | next [-] |
| There's a catch. Do not break customer trust. Many people are just tinkering with solving the problem but the indirect effects have not been tackled either by the tool, processes or just some human thinking. |
|
| ▲ | stingraycharles 4 hours ago | parent | prev | next [-] |
| > Improving developer skills is not valuable to your company. Yet every company does it, except the worst sweatshops. |
|
| ▲ | bluecheese452 3 hours ago | parent | prev | next [-] |
| What about a company with high security reqs that do bot alloellms? Like gov type work. |
|
| ▲ | titzer 5 hours ago | parent | prev | next [-] |
| The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on. There should be thousands or tens of thousands people worldwide that can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us. God I hope it doesn't all crash at once. |
| |
| ▲ | tuvang 4 hours ago | parent | next [-] | | There is a deadly game of chicken going on. Junior recruiting already stopped for the most part. Only way this doesn’t end in a catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. Which I doubt very much but don’t find completely impossible. | | |
| ▲ | theshrike79 4 hours ago | parent | next [-] | | And the irony is that AI usage should make onboarding juniors easier. Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors. Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something. And no, it doesn't mean Juniors or anyone else get to make 10k line PRs of code they haven't read nor understand. That's a very different issue that can be solved by slapping people over the head. | | |
| ▲ | bragr 3 hours ago | parent [-] | | The problem is that juniors given access to AI don't seem to learn as much. AI just gives them fish over and over instead of learning how to fish. | | |
| ▲ | andrekandre 2 hours ago | parent | next [-] | | > The problem is that juniors given access to AI don't seem to learn as much.
i see this first-hand; they don't even know what they don't know, so they circle over and over with ai leading them down rabbit holes and code that breaks in weird ways they can't even guess how to fix... stuff that if you were a real programmer you would have written in a few minutes, let alone hours or days... | |
| ▲ | theshrike79 2 hours ago | parent | prev [-] | | Yea, giving people a blank Claude with no setup will get you that. What you could do is encourage (or force, with IT's assistance) them to use a prompt (or hook or whatever) that refuses to do the work for them and instead tells them where to make changes and what to change. |
|
| |
| ▲ | flir 4 hours ago | parent | prev | next [-] | | Or if code quality stops mattering, in a kind of "ok, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it, and build a clean room version" kind of way. It's also not impossible we go that route. | |
| ▲ | 4 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | turlockmike 4 hours ago | parent | prev | next [-] | | How many kernel devs does the world need? A dozen or two? It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer). Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone. It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely registered yet; we need engineers who can build and integrate all sorts of systems. Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation? | |
| ▲ | rafterydj an hour ago | parent | next [-] | | How wildly dismissive of the foundation of the $X billion software industry. You think humans just stumbled into writing code by accident or something? How does building agentic systems, a "really hard" problem, not just end up a "regular code" problem? Because that is what it is. A distributed systems problem with non-deterministic run lengths. How do you switch agent contexts? Similar to how you solve regular program context switching. How do you search tool capabilities and verify them? How do you effectively manage scheduled tasks? Oh, look, you've just invented the operating system kernel. Suddenly, those 'dozen or two' experts don't seem so archaic after all! | |
| ▲ | vdqtp3 2 hours ago | parent | prev | next [-] | | > How many kernel devs does the world need? A dozen or two? You're low by several orders of magnitude. "The 2025 development cycle saw 2,134 developers contribute to [Linux] kernel 6.18" [1] [1] https://commandlinux.com/statistics/linux-kernel-contributor... | |
| ▲ | oblio 29 minutes ago | parent | prev [-] | | Does it even make sense to build everything on top of machines that are 70% reliable? The sheer orchestration and validation overhead at scale risks being more expensive than just keeping most software engineers and having them manage a few AI agents. Also, 200 years ago we didn't have bike mechanics. Car mechanics. Boat mechanics. Plumbers. Electricians. Not all new professions fade away. |
| |
| ▲ | qsera 3 hours ago | parent | prev | next [-] | | Trust me. All those people do it for the love of doing it, so I don't think they will outsource the jobs to some automation. I have been coding since long before the internet, and before there was huge demand for software devs, and I would keep coding even after there is no demand for it. | |
| ▲ | nicksergeant 4 hours ago | parent | prev | next [-] | | I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past. I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for concurrently working through large numbers of tasks, etc.) Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]). | |
| ▲ | titzer 3 hours ago | parent | next [-] | | Learning calculus by watching the professor solve integrals on the board for an hour doesn't result in the same level and depth of understanding as working through homeworks every week for a semester. If you ran off to your TA to solve every problem in your homework, you just won't learn calculus. I've vibe coded plenty. I mostly don't look at the crap coming out. Don't want to. When I do I absorb a tiny bit, but not enough to recreate the thing from scratch. I might have a modicum more surface-level knowledge, but I don't have deep understanding and I don't have skills. To the extent that I've fixed or tweaked AI-generated code, it's not been to restructure, rearchitecture, or refactor. If this is all I did day in and day out, my entire skillset would atrophy. | | |
| ▲ | nicksergeant 3 hours ago | parent [-] | | "I mostly don't look at the crap coming out." This is pretty much my point. I use LLMs to code _and_ to learn. I read everything that comes out. Half of it is wrong or incomplete. The other half saved me a bunch of time and taught me things. |
| |
| ▲ | Waterluvian 4 hours ago | parent | prev | next [-] | | I think there's a considerable difference in its ability to help with breadth vs. depth of expertise. | |
| ▲ | tripledry 4 hours ago | parent | prev | next [-] | | For me both are true at the same time. I vividly remember understanding how calculus works after watching some 3blue1brown videos on youtube, but once I looked at some exercises I quickly realized I was not able to solve them. A similar thing happens with LLMs and programming. Sure, I understand the code, but I'm not intimately familiar with it as if I had programmed it "old school". So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D | |
| ▲ | mwigdahl 3 hours ago | parent [-] | | It's not just you. I feel the same thing, and I saw it in practice helping my son study for a chemistry test just last night. He had worked through a bunch of problems by following the steps in his notes and got the right answers, but couldn't solve them without the notes because his comprehension of why he was taking all the steps wasn't solid. Once we addressed that, he did great solo. Working the mechanics of the problems with the notes helped, but it was getting independent understanding of the reason for each step that put everything together for him. |
| |
| ▲ | zozbot234 4 hours ago | parent | prev | next [-] | | What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking. | | | |
| ▲ | agentultra 3 hours ago | parent | prev | next [-] | | > Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems How many bytes is a pointer in C? How many bytes is a shared pointer in C++? What does sysctl do? What about fsync? What is a mutex lock? How is it different from a spin lock? You want to find the n nearest points to a given point on a 2-D Cartesian plane. Could you write the code to solve that on your own? Can you answer any of these questions without searching for the answer? I don't use LLMs and I learn things fine. Always have. For several decades. I care deeply about the underlying code and systems. It annoys me when people say they do and they cannot even understand how the computer works. I'm fine with people having domain-specific knowledge of programming: maybe you've only been interested in web development and scripting DOM elements. But don't pretend that your expertise in that area means you understand how to write an operating system. Or worse: that it prevents you from learning how to write an operating system. You can do that without an LLM. There's no royal road. You have to understand the theory, read the books, read the code, write the code, make mistakes, fix mistakes, read papers, talk to other people with more experience than you... and just write code. And rewrite it. And do it all again. I find the opposite is true: those who use LLM coding exclusively never enjoyed programming to begin with, only learned as much as they needed to, and want the end results. | | |
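For what it's worth, the nearest-points question posed above has a compact answer. A minimal Python sketch, showing one reasonable approach of several (for many repeated queries against the same point set, a k-d tree would pay off instead):

```python
import heapq

def n_nearest(points, origin, n):
    """Return the n points closest to origin on a 2-D plane.

    Comparing squared distances avoids a needless sqrt, and
    heapq.nsmallest keeps only a size-n heap, so it beats a full
    sort when n is small relative to the input.
    """
    ox, oy = origin
    return heapq.nsmallest(
        n, points, key=lambda p: (p[0] - ox) ** 2 + (p[1] - oy) ** 2
    )

pts = [(1, 1), (3, 4), (-2, 0), (0, 5), (2, 2)]
print(n_nearest(pts, (0, 0), 2))  # -> [(1, 1), (-2, 0)]
```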
| ▲ | nicksergeant 3 hours ago | parent [-] | | Agree with pretty much everything you wrote here, I guess with the addendum that LLMs can be a part of the learning experience you're describing. It's as easy as telling the LLM "don't write a single line of code nor command, I want to do everything, your goal is to help me understand what we're doing here." There are always going to be people who just want the end result. The only difference now is that LLM tools allow them to get much closer to the end result than they previously were able to. And on the other side, there are always going to be people who want to _understand_ what's happening, and LLMs can help accelerate that. I use LLMs as a personalized guide to learning new things. | | |
| ▲ | tpdly an hour ago | parent | next [-] | | I know it sounds extreme to dismiss that workflow, but I don't think people are talking enough about the subtle psychological consequences of LLM writing for this kind of thing. In the same way that googling for an SEO article's superficial answer ends up meaning you never really bother to memorize it, "ask chat" seems to lead to never really bothering to think hard about it. Of course I google things, but maybe I should be trying to learn in a way that minimizes the need. Maybe it's important to learn how to learn in a way that minimizes exposure to sycophantic average-blog-speak. | |
| ▲ | agentultra an hour ago | parent | prev [-] | | Best of luck in your journey! To those reading this thread though, be wary of the answers LLMs generate: they're plausible sounding and the LLM's are designed to be sycophants. Be wary, double check their answers to your queries against credible sources. And read the source! |
|
| |
| ▲ | anovikov 4 hours ago | parent | prev [-] | | This. I never had the patience to figure out how to build a from-scratch iOS app because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language, and learned a lot of iOS (and Mac) APIs. | |
| ▲ | JustResign 3 hours ago | parent [-] | | But it isn't "from scratch", is it? It's "from Claude". | | |
| ▲ | nicksergeant 3 hours ago | parent [-] | | If you build a house from scratch but you didn't mill the lumber, did you build it from scratch? If you make a pizza from scratch but you used canned sauce was it from scratch? What if you used store bought dough? What if you made the sauce and the dough but you didn't grow the tomato? |
|
|
| |
| ▲ | hnthrow0287345 4 hours ago | parent | prev | next [-] | | >But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us. That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there. Why do people worry about a potential, temporary loss of skill? | | |
| ▲ | doctorwho42 4 hours ago | parent | next [-] | | Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover/reinvent how to do things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles without the ability to relearn the way to maintain the systems it relies upon, fast enough to stop it from continued degradation. Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers; they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book. | |
| ▲ | hnthrow0287345 4 hours ago | parent [-] | | And we're still here right? We have more books and knowledge and capabilities than ever. Despite theoretically losing knowledge along the way, we're okay (mostly). Society can replace the systems it relies on. The replacement might not be the best, but it'll probably handle things until we can reinvent a newer, better system. It probably won't be easy, but you can't convince me that humanity suddenly cannot adapt and fix problems right in front of them. How long does history have us doing that? These are extraordinary claims that all of society will just become dumb and not be able to do any of this. History is also littered with people fretting about the next generation not being smart enough or whatever, and those fears rhyme pretty closely with what we're talking about here. | | |
| ▲ | Tomis02 2 hours ago | parent [-] | | You could have lived 200 years. But instead, people decided they'd rather invest in crypto or LLMs instead. Maybe humans will still be here in a century. But you won't be. It didn't have to be this way. | | |
| ▲ | bit-anarchist 2 hours ago | parent [-] | | I don't see how they are actually exclusive in the long-term. Crypto investment isn't that big, and LLMs, or AI in general, may provide support for better treatments, thus possibly allowing people to reliably live onto 200 years. |
|
|
| |
| ▲ | Waterluvian 4 hours ago | parent | prev | next [-] | | I imagine it being a "does anybody know COBOL?!" but much sooner than sixty years from now. | | |
| ▲ | RhysU 4 hours ago | parent [-] | | COBOL also came to mind. The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools. Arguably, that outcome was the right creative destruction. Market economics doesn't long-term incentivize any other outcomes. We'll see the arc of COBOL play out again with LLM coding. | | |
| ▲ | jerf 3 hours ago | parent | next [-] | | I've been waiting for the article talking about how AI is affecting COBOL. Preferably with quotes from actual COBOL programmers since I can already theorize as well as the next guy but I'm interested in the reports from the field. While LLMs have become pretty good at generating code, I think some of their other capabilities are still undersold and poorly understood, and one of them is that they are very good at porting. AI may offer the way out for porting COBOL finally. You definitely can't just blindly point it at one code base and tell it to convert to another. The LLMs do "blur" the code, I find, just sort of deciding that maybe this little clause wasn't important and dropping it. (Though in some cases I've encountered this, I sometimes understand where it is coming from, when the old code was twisty and full of indirection I often as a human have a hard time being sure what is and is not used just by reading the code too...) But the process is still way, way faster than the old days of typing the new code in one line at a time by staring at the old code. It's definitely way cheaper to port a code base into a new language in 2026 than it was in 2020. In 2020 it was so expensive it was almost always not even an option. I think a lot of people have not caught up with the cost reductions in such porting actions now, and are not correctly calculating that into their costs. It is easier than ever to get out of a language that has some fundamental issue that is hard to overcome (performance, general lack of capability like COBOL) and into something more modern that doesn't have that flaw. | |
| ▲ | jlokier 2 hours ago | parent | prev [-] | | I know it's just anecdotal, but I looked for COBOL salaries a couple of years ago, curious about this "paid well". The salaries were ok but not good for COBOL. Here's an anecdotal Reddit thread about it. https://www.reddit.com/r/developpeurs/comments/1ixfpsx/le_sa... |
|
| |
| ▲ | FpUser 4 hours ago | parent | prev [-] | | >"That's only a brief moment in time. We learned it once, we can learn it again if we have to." Yes we can, but there is a big problem here. We will "learn it again" after something breaks, and the way the world currently functions, there might not be time to react. It is like growing food on an industrial scale. We have slowly learned it over time. If it breaks now, with the knowledge gone, and we have to learn it again, it will end civilization as we know it. | |
| ▲ | hnthrow0287345 4 hours ago | parent [-] | | >It is like growing food on industrial scale. How many people do you think know how to do that today? It's in the millions (probably 10s to 100s), scattered all across the globe because we all need to eat. Not to mention all of the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday and at that point we'll all be dead anyway. >If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it. I don't think there is a single piece of technology that is so critical to civilization that everyone alive easily forgets how to do it and there is also zero documentation on how it works. These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me. |
|
| |
| ▲ | kingkawn 4 hours ago | parent | prev | next [-] | | If a catastrophic failure occurs we will have to return to first principles and re-derive the solutions. Not so bad, probably enlivening even to get to spin up the mind again after a break. | | |
| ▲ | cdetrio 3 hours ago | parent [-] | | We found 500 zero-days in ten-year-old, widely used open-source projects. Was that not a demonstration of the catastrophic failure of human debugging capability? | |
| |
| ▲ | anon291 3 hours ago | parent | prev [-] | | I mean there should be. But there's not. Despite the millions of CS grads produced, many people could not reasonably be expected to produce many 'standard' parts of a software stack. |
|
|
| ▲ | qsera 4 hours ago | parent | prev [-] |
| > I got to do my hobby as a career for the past 15 years, but that’s ending. Frankly, I don't think so. AI built on LLMs is the perpetual motion machine scam of our time, but it is cloaked in unimaginable complexity, and thus it is the perfect scam. Even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, and the machine must come to a complete stop once that source runs out. |
| |
| ▲ | Waterluvian 4 hours ago | parent | next [-] | | I love the perpetual motion machine / thermodynamics analogy. It kind of feels like companies are being fooled into outsourcing/offshoring their jr. developer level work. Then the companies depend on it because operational inertia is powerful, and will pay as the price keeps going up to cover the perpetual motion lie. Then they look back and realize they're just paying Microsoft for 20 jr. developers but are getting zero benefit from in-house skill development. | |
| ▲ | colechristensen 4 hours ago | parent | prev [-] | | This is silly. I can build products in a weekend that would take me a year by myself. I am still necessary 1% of the time for debugging, design, and direction, and those are not at all shallow skills. I have some graduate algebra texts on the way that my math friend is guiding me through, because I have found a publishable result and need to shore up my background before writing the paper... It's not perpetual motion, it's very real capability, you just have to be able to learn how to use it. | | |
| ▲ | qsera 4 hours ago | parent | next [-] | | No one is saying that it cannot do what you say now. What I am saying is that once the high-quality training data runs out, it will drop in its capabilities pretty fast. That is how I compare it to perpetual motion machine scams. In the case of a perpetual motion machine, it appears that it will continue to run indefinitely. That is analogous to the impression that you have now. You feel that this will go on and on forever, and that is the scam you are falling for. | |
| ▲ | WarmWash 3 hours ago | parent | next [-] | | >What I am saying is that once the high quality training data runs out, it will drop in its capabilities pretty fast. That's more a misunderstood study that over time turned into a confidently stated fact. Yes, the model collapses if you loop the output to the input. But no, that's not how it's done. The reality is that all the labs are already using synthetic training data, and have been for at least a year now. It basically turned out to be a non-issue if you have robust monitoring and curation in place for the generated data. | | |
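The generate-then-curate loop described above can be sketched minimally. Everything here (the sample format, the scoring, the threshold) is a hypothetical stand-in for illustration, not any lab's actual pipeline:

```python
import random

# Hypothetical stand-in for a model emitting candidate training
# samples, each with a quality score from some external judge.
def generate_candidates(n):
    return [{"text": f"sample-{i}", "score": random.random()} for i in range(n)]

def curate(candidates, threshold=0.7):
    # The "robust monitoring and curation" step: keep only samples
    # that pass the quality gate, so low-quality model output is
    # never mixed back into the training data.
    return [c for c in candidates if c["score"] >= threshold]

candidates = generate_candidates(1000)
kept = curate(candidates)
# Only the filtered subset would be reused for training.
print(f"kept {len(kept)} of {len(candidates)} synthetic samples")
```

The point of the sketch is just that the loop is gated, not closed: output only re-enters training after passing an independent filter.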
| ▲ | qsera 3 hours ago | parent [-] | | >using synthetic training data Yeah, look up how it is done. It is exactly how a perpetual motion machine scam would project an appearance of working: using a generator to drive a motor, and the motor driving the generator, something that obscures the fact that there is energy loss happening along the way. | |
| ▲ | WarmWash 2 hours ago | parent [-] | | I'm confused with the point you are trying to make, because they are using synthetic data, and the models are getting stronger. There is no "conservation of fallacy" law (bad data must conserve its level of bad), so I'm struggling to connect the dots on the analogy, unless I ignore the fact that training on synthetic data works, is being used, and the models are getting better. | |
| ▲ | qsera an hour ago | parent | next [-] | | If the training that did not use synthetic data failed to capture some aspect of the information contained, then using data synthesized from the original data could help to capture it, and thus result in the models getting better. But that is because the synthetic data helped the model capture what was already there in the training data. After all such information has been extracted, it would not be possible to use synthetic data, or anything derived from the original data, to create "new" information for training. | |
| ▲ | dgb23 an hour ago | parent | prev [-] | | Better by which metrics? |
|
|
| |
| ▲ | _aavaa_ 4 hours ago | parent | prev [-] | | Why would the capabilities drop instead of stagnate? | | |
| ▲ | qsera 4 hours ago | parent [-] | | Because technologies, programming languages, and best practices won't stay frozen. If LLMs cannot keep up with them, I think that can be considered a drop in capability. No? | |
| ▲ | coldtea 3 hours ago | parent [-] | | Close, but no. What will happen is that "technologies, programming languages, best practices" will stay frozen because human innovation will drop, and the whole field will stagnate. | | |
| ▲ | californical 2 hours ago | parent [-] | | This is the biggest fear! I don’t see an easy fix. Will the developer of a new programming language be able to reach out to model companies to give a huge amount of training data, ensuring that the models are good at that new language? I don’t think a small team can write enough code; the models already struggle in medium-popularity languages that have years of history. They hallucinate Lua functionality sometimes, for example, even though I’m sure there is lots of Lua code out there. So if most people use coding agents, we’re stuck with the current most popular languages, because no new language will get past the barrier of having enough code that models can write it well, meaning nobody adopts the new language, etc. Same thing with libraries and frameworks: technical decisions are already being made based on “is this popular enough that the agents can use it well?” rather than a newer library that meets our needs perfectly but isn’t in the training data. |
|
|
|
| |
| ▲ | askafriend 4 hours ago | parent | prev | next [-] | | You can see their ego trying to protect itself. | |
| ▲ | coldtea 3 hours ago | parent | prev | next [-] | | >This is silly. I can build products in a weekend that would take me a year by myself Is the world any better for them existing? And is the decline of coding and software engineering skills in humans, from outsourcing the practice to AI, worth it and sustainable long term? | |
| ▲ | colechristensen 2 hours ago | parent [-] | | >Is the world any better for them existing? The decline of coding and sw engineering skills in humans from outsourcing the practice of it to AI is it worth it and sustainable long term? The world is going to be no worse than it was when humans transitioned from writing assembly to writing compilers for high-level languages. Assembly is still necessary, but not that often. In the same way, writing code is going to become less necessary: tools will be specified at a higher level, in standards and requirements documents instead of code most of the time, with specific, exact coding only occasionally. Programmers were mostly solving the same plumbing problems over and over in secret because of "proprietary" needs to hide your code, but one million separate integrations of your billing backend with Stripe didn't really add to humanity. We're cutting out the boring middle drudgery, and human effort is going to be freed up to work on the edges of human knowledge instead of tromping around in the middle. | |
| ▲ | coldtea an hour ago | parent [-] | | >The world is going to be no worse than it was when humans transitioned from writing assembly to writing compilers for high level languages When I open some Electron apps I wish we stopped right about there. |
|
| |
| ▲ | tpdly 4 hours ago | parent | prev [-] | | You're fooling yourself. People yeeting a (shitty) GitHub clone with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM rat's nest that will require increasingly expensive tokens to (frustratingly) modify. | |
| ▲ | mikkupikku 4 hours ago | parent | next [-] | | You're fooling yourself. It's very easy to get demonstrably working results in an afternoon that would take weeks at least without coding agents. Demonstrably working, as in you can prove the code actually works by then putting it to use. I had a coding agent write an entire declarative GUI library for mpv userscripts, rendering all widgets with ASS subtitles, then proceeded to prove to my satisfaction that it does in fact work by using it to make a node editor for constructing ffmpeg filter graphs and an in-mpv nonlinear video editor. All of this is stuff I already knew how to do in practice, had intended to do one day for years now, but never bit the bullet because I knew it would turn into weeks of me poring over auto-generated ASS doing things it was never intended to do to figure out why something is rendering subtly wrong. Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing. Fooling myself? The code works, I'm using it, you're fooling yourself. | |
| ▲ | zozbot234 3 hours ago | parent | next [-] | | > Demonstrably working, as in you can prove the code actually works by then putting it to use. That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case. You need actual proof that's driven by the code's overall structure. Humans do this at least informally when they code; AIs can't do that with any reliability, especially not for non-trivial projects (for reasons that are quite structural and hard to change), so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology. | |
| ▲ | mikkupikku an hour ago | parent | next [-] | | > That's not how you prove that code works properly Yes it is. What do you expect, formal verification of a toy GUI library? Get real. > and isn't going to fail due to some obscure or unforeseen corner case. That's called "a bug", they get fixed when they're found. This isn't aerospace software; failure is not only an option, it's an expected part of the process. > You need actual proof that's driven by the code's overall structure. I literally don't. > Humans do this at least informally when they code, AIs can't do that with any reliability Sounds like a borderline theological argument. Coding agents one-shot problems a lot more often than I ever did. Results are what matters, demonstrable results. | |
| ▲ | coldtea 3 hours ago | parent | prev [-] | | >That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case. So? We didn't prove human code "isn't going to fail due to some obscure or unforeseen corner case" either (aside from the tiny niche of formal verification). So from that aspect it's quite similar. >so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology. You seem to imply they do some sort of random iteration until the tests pass, which is not the case. Usually they can see the test failing, and describe the issue exactly in the way a human programmer would, then fix it. | |
| ▲ | zozbot234 3 hours ago | parent [-] | | > describe the issue exactly in the way a human programmer would Human programmers don't usually hallucinate things out of thin air, AIs like to do that a whole lot. So no, they aren't working the exact same way. | | |
| ▲ | coldtea 3 hours ago | parent [-] | | >Human programmers don't usually hallucinate things out of thin air Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested x when they didn't, misremembering things, confidently saying that X is the bottleneck and spending weeks refactoring without measuring (to turn out not to be), the list goes on. >So no, they aren't working the exact same way. However they work internally, most of the time, current agents (of say, last year and above) "describe the issue exactly in the way a human programmer would". | |
| ▲ | qsera 3 hours ago | parent [-] | | That is not hallucinating... LLM hallucinating is not an edge case. It is how they generate output 100% of the time. Mainstream media only calls it "hallucination" when the output is wrong, but from the point of view of an LLM, it is working exactly as it is supposed to. | |
| ▲ | coldtea 2 hours ago | parent [-] | | >LLM hallucinating is not an edge case. It is how they generate output 100% time If it matches reality enough of the time (which it does), it doesn't matter. Especially in a coding setup, where you can verify the results, have tests you wrote yourself, and the end goal is well defined. And conversely, if a human is a bullshitter, or ignorant, or a liar, or stupid, it doesn't matter if they end up with useless stuff "in a different way" than an LLM hallucinating. The end result regarding the low utility of their output is the same. Besides, one theory of cognition (pre-LLM, even) is of the human brain as a prediction machine. In which case, it's not that different from an LLM in principle, even if the scope and design is better. |
|
|
|
|
| |
| ▲ | bachmeier 3 hours ago | parent | prev [-] | | > Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing. One might argue that this is a substitute for metaprogramming, not software developers. | | |
| ▲ | trollbridge 2 hours ago | parent [-] | | It's interesting more people haven't talked about this. A lot of so-called agentic development is really just a very roundabout way to perform metaprogramming. At my own firm, we generally have a rule we do almost everything through metaprogramming. | | |
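For readers unfamiliar with the comparison, here is a toy metaprogramming sketch in Python: boilerplate accessor methods are generated mechanically from a declared schema rather than typed out (or prompted for) one by one. The class and field names are hypothetical stand-ins:

```python
def make_accessors(cls):
    # Class decorator that generates a getter for every declared
    # field: the kind of repetitive code an agent is often asked
    # to produce by hand.
    for field in cls.FIELDS:
        def getter(self, _f=field):  # default arg binds field per-loop
            return self._data[_f]
        setattr(cls, f"get_{field}", getter)
    return cls

@make_accessors
class Invoice:
    FIELDS = ("amount", "currency")  # hypothetical schema
    def __init__(self, **kwargs):
        self._data = kwargs

inv = Invoice(amount=100, currency="USD")
print(inv.get_amount())    # 100
print(inv.get_currency())  # USD
```

The design point: once the pattern is captured in one place, adding a field means changing a declaration, not regenerating (or re-prompting for) a pile of near-identical methods.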
|
| |
| ▲ | colechristensen 2 hours ago | parent | prev [-] | | I also did a native implementation of git so I could use an S3-compatible data store; your Rails guru can't do that. Objectively, my GitHub clone is still shitty, BUT it got several ways GitHub is shitty out of my way and allowed me to add several features I wanted, no small one of which was GitHub not owning my data. I don't know the shit out of Rails and I don't want to; I know the shit out of other things, and I want the tools I'm using to be better, and Claude is making that happen. The skepticism is a little odd, to the point that people keep telling me I'm delusional for being satisfied that I've created something useful for myself. The opposition to AI/LLMs seems to be growing into a weird morality cult trying to convince everybody else that they're leading unhappy immoral lives. I'm exaggerating, but it's looking like things are going in that direction... and in my house, so to speak, here on HN there are factions. Like programming language zealots but worse. | |
| ▲ | tpdly an hour ago | parent [-] | | Hey, I understand you've gotten something out of it. You hired a robot to 3d-print a mug that fits your hand. There's a place for that. You understand that it might poison you a little bit? You understand that this doesn't make ceramics irrelevant? Hobby-project vibe coding is pretty cool (if I'm being honest, it's fucking miraculous; this tech is wild), but isn't it clear that there's a problem with the linkedincels, the investors, the management that are all convinced this will remove say 50% of programming jobs? I understand these things have legitimate uses, but I'm at my wits' end hearing about how deep understanding, craftsmanship, patience and hard work aren't "results oriented". There's definitely zealotry developing against AI, but I suspect it is a proportional (if unhelpful) response to the hype machine. Is it really zealotry to insist on the value of your mind and your competence? These people saying you should never "hand write" your code -- how the fuck did the discourse move so much that this isn't a laughably stupid thing to say? "I'm a CEO, and if you aren't using consultants to make your decisions you've already lost" | |
| ▲ | colechristensen 30 minutes ago | parent [-] | | >isn't it clear that there's a problem with the linkedincels, the investors, the management that are all convinced this will remove say 50% of programming jobs These people have always been doing this. Starting in the 90s it was outsourcing programming jobs, they were right then, they got more work for less money and you could have less expertise on staff farming out work somewhere else that was cheaper. You also got worse results sometimes. So it goes. LLMs are making people more powerful and sucking a lot of income off to the people who provide them. Yup. It makes idiot shysters more powerful just the same as it makes experts more powerful. People are acting like the software engineering industry is full of fine artistry building the finest bespoke tools instead of duct taping rocks to sticks. I'm sorry but there is a tremendous amount of crap out there by people who barely know what they're doing. Yes new technology empowers idiots, but it also empowers smart people and if you use it well it'll lead to more quality. Yes you're going to have the same problems you had before of someone doing something cheaply competing with someone trying to be careful to build something well. There also will continue to be idiots spouting off about it. Nothing changed but the tools got more powerful and people are whining complaining about this change this time ruining everything. Like they always have forever. |
|
|
|
|
|