| ▲ | thunky 3 hours ago |
| > What exactly is being de-valuated for a profession You're probably fine as a more senior dev... for now. But if I were a junior, I'd be very worried about the longevity I could expect as a dev. It's already easier in many/most cases to assign work to an LLM than to handhold a human through it. Plus, as an industry we've been exploiting our employers' lack of information to extract large salaries while producing largely poor-quality output, imo. And as that ignorance moat gets smaller, this becomes harder to pull off. |
|
| ▲ | spicyusername 2 hours ago | parent | next [-] |
| > assign work to an LLM This is just not happening anywhere around me, and I don't know why it keeps getting repeated in every one of these discussions. Every software engineer I know is using LLM tools, yet every team around me is still hiring new developers. Zero firing due to LLMs is happening in any circle near me. LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google. |
| |
| ▲ | neom an hour ago | parent | next [-] | | I can tell you where I am seeing it change things for sure: at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years ago, because the calculus at the early stage is slightly different. At seed/pre-seed, your go-to-market and discovery processes are often either not working well yet, nonexistent, or decoupled from prod and eng; the goal, obviously, is to bring it all together over time into a complete system (a business). As long as I've been around early-stage startups there has always been tension between engineering and growth over budget division, and the dance of placing resources across them so they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. Where before they would look at hiring a junior, now they will just buy some AI tools, or invest more time in AI scaffolding, etc., allowing them to go a little bit faster, though it's understood: not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and I've been watching to see if the teams who did this then "graduate" out of it to hiring a jr. So far only one team has hired, and it seems they skipped jr and went straight to a more sr dev. | |
| ▲ | cjbgkagh an hour ago | parent | prev | next [-] | | Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80% of the work, but now I use LLMs instead. It's far cheaper, better quality, and zero hassle. That's 3 junior/mid-level jobs that are gone now. Since the hard stuff is combinatorially complex, I think by the time LLMs are good enough to do it they'll probably be good enough to do just about everything, and we'll be living in an entirely different world. | | |
| ▲ | scarface_74 16 minutes ago | parent [-] | | Exactly this. I lead cloud consulting + app dev projects. Before, I would have staffed my projects with at least me leading them, doing the project management + stakeholder meetings and some of the work, and bringing a couple of others in to do the grunt work. Now with gen AI, even just using ChatGPT and feeding it a lot of context - diagrams I put together, statements of work, etc. - I can do it all myself without the coordination effort of working with two other people. On the other hand, when I was staffed to lead a project that did have another senior developer, one level below me, I tried to split up the actual work, but it became a coordination nightmare once we started refining the project: he could just use Claude Code and it would make all of the modifications needed for a feature, from the front-end work to the backend APIs to the Terraform and the deployment scripts. I would have actually slowed him down. |
| |
| ▲ | vladimirralev an hour ago | parent | prev | next [-] | | Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very weak (i.e. verifying web page layout and component reactivity). Once the visual models are good enough to look at the screen pixels and understand them, they will instantly replace junior devs. Currently, if you have only text output, all the new LLMs can iterate flawlessly and solve problems with it. A new backend from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy-code comprehension. | |
| ▲ | grumbel 2 hours ago | parent | prev | next [-] | | > This is just not happening anywhere around me. Don't worry about where AI is today; worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding-edge than the underlying AI systems themselves. And speaking of the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot; a lot of classic programs will no longer need to exist. | | |
| ▲ | danaris 2 hours ago | parent [-] | | > Don't worry about where AI is today; worry about where it will be in 5-10 years. And where will it be in 5-10 years? Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations." Yes, LLMs experienced a period of explosive growth over the past 5-8 years. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau. If we want the next 5-10 years to look anything like the last 5-10, we're going to need a new breakthrough. And those don't come on command. | | |
| ▲ | CuriouslyC an hour ago | parent | next [-] | | Right about where it is today, with better integrations? One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year. The key thing to keep in mind is that LLMs are a giant bag of capabilities, and hitting diminishing returns on one capability says little, if anything, about the ability to scale the others. | | |
| ▲ | catlifeonmars 12 minutes ago | parent | next [-] | | You buried the lede with "exponential capex scaling". How is this technology not like oil extraction? The bulk of that capex is chips, and those chips are straight-up depreciating assets. | |
| |
| ▲ | lupire 44 minutes ago | parent | prev | next [-] | | It's a trope: people say this, and then someone points out that while the comment was being drafted, another model or product was released that took a substantial step up in problem-solving power. | |
| ▲ | enraged_camel 29 minutes ago | parent | prev [-] | | I use LLMs all day, every day. There is no plateau. Every generation of models has brought substantial gains in capability. The types of tasks (in both complexity and scope) that I can assign to an LLM with high confidence are frankly absurd, and I could not even have dreamed of them eight months ago. |
|
| |
| ▲ | chud37 an hour ago | parent | prev | next [-] | | Completely agree. I use LLMs like I used Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid. I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, it's just another Google or Stack Overflow. | |
| ▲ | raw_anon_1111 36 minutes ago | parent | prev | next [-] | | Well, your anecdote is clearly at odds with absolutely all of the macroeconomic data. | |
| ▲ | carrychains an hour ago | parent | prev | next [-] | | It's me. I'm the LLM being assigned the work that junior devs used to get. I'm actually just a highly proficient BA who has almost always read code and followed and understood news about software development here and on /., but generally avoided writing code out of sheer laziness. It was always more convenient to find something easier and more lucrative in those moments of decision when I actually considered shifting to coding as my profession. But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way, guess what: I can magically do it myself now. The LLM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring jr dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable. | | | |
| ▲ | queenkjuul 2 hours ago | parent | prev [-] | | You're mostly right, but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI; it's just a bad market) |
|
|
| ▲ | HarHarVeryFunny an hour ago | parent | prev | next [-] |
| > But if I were a junior, I'd be very worried about the longevity I could expect as a dev. It's already easier in many/most cases to assign work to an LLM than to handhold a human through it. This sounds kind of logical, but really isn't. In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and to learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and a mentor, and this is part of the learning process - something DESIRABLE, since it leads to the developer getting better. In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM. Also, given the inability of LLMs to learn on the job, using an LLM as a tool to get things done is going to be a Groundhog Day experience of micro-managing the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and become, in the future, an independent developer that tasks can indeed be assigned to. |
| |
| ▲ | enraged_camel 25 minutes ago | parent | next [-] | | >> e.g. coming back to you and asking for help Funny you mention this, because Opus 4.5 did exactly this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict, and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It's exactly how I would expect a mid-level developer to operate, except much faster and more thorough. | |
| ▲ | HarHarVeryFunny 9 minutes ago | parent [-] | | Yes, they continue to get better, but they are not yet at human level (and jr devs are humans too), and I doubt the next-level "AGI" that people like Demis Hassabis project to still be 10 years away will be human level either. |
| |
| ▲ | lupire an hour ago | parent | prev [-] | | Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interrupting questions for you about easy things. | |
| ▲ | HarHarVeryFunny 26 minutes ago | parent [-] | | Sure, LLMs are a useful tool, and fast, but the point is they don't have human-level intelligence, can't learn, and are not autonomous beyond an agent that will attempt to complete a narrow task (with no ownership or guarantee of eventual success). We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language-task automation. If you want to ASSIGN a task to something/someone, then you need a human or an artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately, if you want to get more work done in parallel, you need more entities that you can assign tasks to, and for the time being that means humans. |
|
|
|
| ▲ | walt_grata an hour ago | parent | prev | next [-] |
| > LLMs vs human Handholding the human pays off more in the long run than handholding the LLM, which requires more handholding anyway. Claude doesn't get better as I explain concepts to it the way a jr engineer does. |
| |
| ▲ | cjbgkagh an hour ago | parent | next [-] | | I had hired 3 junior/mid-level devs and paid them to do nothing but study to improve their skills; it was my investment in their future, since I had a big project on the horizon that I needed help with. After 6 months I let them go: the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then, LLMs have completely surpassed them. I think it's reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to do the best I can while I can. | |
| ▲ | eithed 26 minutes ago | parent [-] | | But this is an issue with the worker you're hiring. I've worked with senior engineers who a) did nothing (as in, really didn't write anything within the sprint), b) worked only on things they wanted to work on, c) did ONLY the things they were assigned in the sprint (i.e. if there were 10 tickets in the sprint and they were assigned 1 of them, they would finish that ticket and not pick up anything else), d) worked only on tickets whose requirements were explicitly stated step by step (open file a, change line 89 to be `checkBar` instead of `checkFoo`... - having to write this took longer than doing the changes yourself, as I was really writing in the Jira ticket what I wanted the engineer to code; otherwise they would come back with "not enough spec, can't proceed"). All of these cases - senior people! Sure, LLMs will do what they're told (for a specific value of "do" and "what they're told") | |
| ▲ | raw_anon_1111 9 minutes ago | parent [-] | | If you are a "senior" engineer who is doing nothing but pulling well-defined Jira tickets off the board, you're horribly mistitled. |
|
| |
| ▲ | sebasvisser an hour ago | parent | prev | next [-] | | Maybe see it less as a junior or a replacement for humans, and more as a tool for you! A tool that lets you do the stuff you used to delegate/dump on a junior yourself. | |
| ▲ | lupire 41 minutes ago | parent | prev [-] | | Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows. |
|
|
| ▲ | xtiansimon 2 hours ago | parent | prev | next [-] |
| > “…exploiting our employer's lack of information…” I agree in the sense that those of us who work in for-profit businesses have benefited from employer’s willingness to spend on dev budgets (salaries included)—without having to spend their own _time_ becoming increasingly involved in the work. As “AI” develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (ie. out of the class of educated programmers to, I dunno, philosophy majors) then you’re in trouble. |
|
| ▲ | singpolyma3 2 hours ago | parent | prev [-] |
| If one is a junior, the goal is to become a senior, though, not to remain a junior. |
| |
| ▲ | solids an hour ago | parent [-] | | Yes, but the barrier to becoming a senior is what's currently in dispute. |
|