AI was supposed to help juniors shine. why does it mostly make seniors stronger? (elma.dev)
243 points by elmsec 16 hours ago | 261 comments
kaydub an hour ago | parent | next [-]

Because juniors don't know when they're being taken down a rabbit hole. So they'll let the LLM go too deep in its hallucinations.

I have a Jr who was supposed to deploy a Terraform module I built. The task had been hanging out for a while, so I went to check in on them. They told me the problem they were having and asked me to take a look.

Their repo is a disaster; it's obvious just from looking at it that Claude took them down a rabbit hole. When I asked, "Hey, why is all this Python in here? The module is self-contained," they responded with "I don't know, Claude did that," confirming my assumptions.

They lack the experience and they're overly reliant on the LLM tools, not just in the design and implementation phases but also for troubleshooting. And if you're troubleshooting with something that's hallucinating, and you don't know enough to know it's hallucinating, you're in for a long ride.

Meanwhile the LLM tools have taken away a lot of the type of work I hated doing. I can quickly tell when the LLM is going down a rabbit hole (in most cases at least) and prevent it from continuing. It's kinda re-lit my passion for coding and building software, so I've ended up producing more and delivering better results.

shaky-carrousel 18 minutes ago | parent [-]

Unfortunately, the type of work you hate doing is perfect for a junior: easy tasks that let them get a handle on the system.

reactordev 6 minutes ago | parent | next [-]

Yup, sounds like a great opportunity to show you’re senior by mentoring.

morkalork 3 minutes ago | parent | prev [-]

>How'd you get so good at debugging and navigating code you've never seen before?

>Because I spent a couple internships and a whole year as a junior debugging, triaging and patching every single issue reported by other developers and the QA team

Was I jealous that the full time and senior devs got to do all the feature work and architecture design? Yes. Am I a better developer having done that grind? Also yes.

bentt 8 hours ago | parent | prev | next [-]

The best code I've written with an LLM has been where I architect it, I guide the LLM through the scaffolding and initial proofs of different components, and then I guide it through adding features. Along the way it makes mistakes and I guide it through fixing them. Then when it is slow, I profile and guide it through optimizations.

So in the end, it's code that I know very, very well. I could have written it, but it would have taken me about 3x longer when all is said and done. Maybe longer. There are usually parts with difficult functions, but the inputs and outputs of those functions are testable, so it doesn't matter so much that you know every detail of the implementation, as long as it is validated.

This is just not junior stuff.

Hendrikto 7 hours ago | parent | next [-]

> I could have written it but it would have taken me about 3x longer when all is said and done.

Really does not sound like that from your description. It sounds like coaching a noob, which is a lot of work in itself.

Wasn’t there a study that said that using LLMs makes people feel more productive while they actually are not?

fluidcruft 2 hours ago | parent | next [-]

True, but the n00b is very fast. A lot of coaching is waiting for the n00b to perform tasks, plus meta things like motivation. These LLMs are extremely fast and eager to work.

I don't need a study to tell me about the five projects that have been stuck plodding along for nearly ten years, waiting for me to ever have time or resources. They are now nearing completion after only two months of picking up Claude Code, and with high-quality implementations that were fever dreams before.

My background is academic science, not professional programming, and the output quality and speed of Claude Code is vastly better than what grad students generate. But you don't trust grad student code either. The major difference here is that the suggestions-for-improvement loop takes minutes rather than weeks or months. Claude will get the science wrong, but so do grad students.

(But sure technically they are not finished yet ... but yeah)

victorbjorklund an hour ago | parent | next [-]

100% this. The AI misunderstands and makes a mistake? No problem. Clarify and the AI will come back with a rewrite in 30 sec.

bigfishrunning an hour ago | parent [-]

A rewrite with another, more subtle mistake. That you must spend energy discovering and diagnosing.

fluidcruft an hour ago | parent | next [-]

How is that different from working with a n00b except that it only took 30sec to get to the next bug rather than a week?

bigfishrunning an hour ago | parent [-]

The junior engineer will grow into a senior engineer

square_usual an hour ago | parent | prev [-]

> another, more subtle mistake. That you must spend energy discovering and diagnosing

But this is literally what senior engineers do most of the time? Have juniors write code with direction and review that it isn't buggy?

bigfishrunning 24 minutes ago | parent [-]

Except that most of the code seniors review was written with intention, not as the statistically most likely response to a given query. As a senior engineer, I find the kinds of mistakes that AI makes much more bizarre than the mistakes junior engineers make.

square_usual 19 minutes ago | parent [-]

I've worked with many interns and juniors in my life, and they've made very bizarre mistakes and had subtle bugs, so the difference in kind hasn't changed the work I've had to do in review. Whether or not there was intention behind a mistake didn't make a difference.

dingnuts 2 hours ago | parent | prev | next [-]

LLMs might make you feel faster (which helps with motivation!) and help with some of the very easy stuff, but the critical part of your anecdote is that you haven't actually completed the extra work. The projects are only "NEARING" completion. I think that's very telling.

victorbjorklund an hour ago | parent | next [-]

If the easy things are done faster, you can spend more time on the hard stuff. No need to spend 2 hours making the UI for the MVP when an AI can make a decent UI in 2 min. That means you have 2 more hours to spend on the hard stuff.

SpicyLemonZest 24 minutes ago | parent [-]

Unless, as is often the case in my experience, the hard stuff consists largely of fixing bugs and edge cases in your implementation of the easy stuff. I've seen multiple people already end up forced back to the drawing board because their "easy stuff" AI implementation had critical flaws they only realized after they thought they were done. It's hard to prove counterfactuals, but I'm pretty confident they would have gotten it right the first time if they hadn't used AI; they're not bad engineers.

fluidcruft an hour ago | parent | prev [-]

Congratulations! You repeated my joke? lol

But in all seriousness, completion is not the only metric of productivity. I could easily break it down into a mountain of subtasks that have been fully completed for the bean counters. In the meantime, the code that did not exist 2 months ago does exist.

exe34 an hour ago | parent | prev [-]

> I don't need a study to tell me about the five projects that have been stuck plodding along for nearly ten years, waiting for me to ever have time or resources.

That's the issue in the argument, though. It could be that those projects would also have been completed in the same time if you had simply started working on them. But honestly, if it makes you feel productive to the point that you're doing more work than you would without the drug, I'd say keep taking it. Watch out for side effects and habituation, though.

pessimizer 12 minutes ago | parent | next [-]

You've added an implicit assumption that this person spends more time programming now than they used to, rather than continuing to commit time at the same rate but now leading to projects being completed when they previously got bogged down and abandoned.

There are any number of things you could add to get you to any conclusion. Better to discuss what is there.

I've had the same experience of being able to finish tons of old abandoned projects with AI assistance, and I am not spending any more time than usual working on programming or design projects. It's just that the most boring things that would have taken weeks to figure out and do (instead, let me switch to the other project I have that is not like that, yet) have been reduced to hours. The parts that were tough in a creative fun way are still tough, and AI barely helps with them because it is extremely stupid, but those are the funnest, most substantive parts.

fluidcruft an hour ago | parent | prev [-]

I don't think that's correct. That could be true if I were primarily a programmer, but I am not. I'm mostly a certified medical physicist working in a hospital. Programming is a skill that is helpful, and I have spent my programming time building other tools that I need. But that list is gigantic, the software that is available for purchase is all complete crap, the market is too small for investment, etc. That's all to say the things I am building are desperately needed, but my time for programming is limited, it's not what brings home the bacon, and there's no money to be made (beyond consulting; essentially these things might possibly work as tools for consultants). I don't have resources for professional programming staff, and while I have worked with them in the past, (no offense to most of HN) the lack of domain knowledge tends to waste even more of my time.

tehjoker 33 minutes ago | parent [-]

You are, very fortunately, in the perfect slot, where LLMs have a lot of bang for the buck.

veidr 4 hours ago | parent | prev | next [-]

It is in many ways much like coaching a n00b, but a n00b that can do 10 hours of n00b work in 10 minutes (or, 2 minutes).

That's a significant difference. There are a lot of tasks that can be done by a n00b with some advice, especially when you can say "copy the pattern from when I did this same basic thing here and here".

And there are a lot of things a n00b, or an LLM, can't do.

The study you reference was real, and I am not surprised — because accurately gauging the productivity win, or loss, obtained by using LLMs in real production coding workflows is also not junior stuff.

giantg2 6 hours ago | parent | prev | next [-]

"Really does not sound like that from your description. It sounds like coaching a noob, which is a lot of work in itself."

And if this is true, you will have to coach the AI each time, whereas a person should advance over time.

raincole 5 hours ago | parent | next [-]

At least you can ask AI to summarize an AGENT.md or something, and it will read it diligently next time.

As for humans, they might not have the motivation or technical writing skill to document what they learned. And even if they did, the next person might not have the patience to actually read it.

ay 4 hours ago | parent | next [-]

"Read diligently" - that’s a very optimistic statement. I can not count how many times Claude (LLM I am most familiar with, I had it write probably about 100KLOC in the past few months) explicitly disobeyed what was written in the instructions.

Also, a good few times, if it were a human doing the task, I would have said they both failed to follow the instructions and lied about it and attempted to pretend they didn’t. Luckily their lying abilities today are primitive, so it’s easy to catch.

smsm42 2 hours ago | parent | next [-]

Psychopathic behavior seems to be a major problem for these models (of course, a model doesn't think, so it can't really be called that, but it's the closest term that fits). They are trained to arrive at the result, and if the most likely path to it is faking it and lying about it, then that's what you are getting. And if you find it out, it will cheerfully admit it and try to make a better lie that you'd believe.

onionisafruit an hour ago | parent | prev | next [-]

So true. I have some non-typical preferences for code style. One example is that I don't like nested error checks in Go. It's not a correctness issue, it's just a readability preference. Claude and Copilot continually ignore this no matter how much emphasis I give it in the instructions. I recently found a linter for this, and the agent will fix it when the linter points out the issue.
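
To illustrate (a toy sketch with made-up fetch/use helpers, not my actual code):

    func fetch() (string, error) { return "", nil } // hypothetical
    func use(string)             {}                 // hypothetical

    // The nested style the agents keep producing:
    func nested() error {
        if val, err := fetch(); err == nil {
            use(val)
        } else {
            return err
        }
        return nil
    }

    // The flat early-return style I keep asking for:
    func flat() error {
        val, err := fetch()
        if err != nil {
            return err
        }
        use(val)
        return nil
    }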

This is probably because the LLM is trained on millions of lines of Go with nested error checks vs. a few lines of contrary instructions in the instructions file.

I keep fighting this because I want to understand my tools, not because I care that much about this one preference.

jaggederest 3 hours ago | parent | prev | next [-]

Claude has really gone downhill in the last month or so. They made a change to move the CLAUDE.md from the system prompt to being occasionally read in, and it really deprioritizes the instructions to the same attention level as the code it's working on.

I've been trying out Codex the last couple days and it's much more adherent and much less prone to lying and laziness. Anthropic says they're working on a significant release in Claude Code, but I'd much rather have them just revert back to the system as it was ~a month ago.

CuriouslyC 3 hours ago | parent | next [-]

Claude is cooked. GPT5 codex is a much stronger model, and the codex cli is much more performant/robust than cc (even if it has fewer features).

I've never had a model lie to me as much as Claude. It's insane.

darkbatman 2 hours ago | parent | prev [-]

True. I was using Cline/Roocode for almost a year, and it always made sure to read things from the memory bank, which I really liked. Claude has gone downhill since mid-August for me, and it often doesn't follow instructions from CLAUDE.md or forgets things midway.

derefr 23 minutes ago | parent | prev [-]

> Also, a good few times, if it were a human doing the task, I would have said they both failed to follow the instructions and lied about it and attempted to pretend they didn’t.

It's funny. Just yesterday I had the experience of attending a concert under the strong — yet entirely mistaken — belief that I had already been to a previous performance of the same musician. It was only on the way back from the show, talking with my partner who attended with me (and who had seen this musician live before), trying to figure out what time exactly "we" had last seen them, with me exhaustively listing out recollections that turned out to be other (confusingly similar) musicians we had seen live together... that I finally realized I had never actually been to one of this particular musician's concerts before.

I think this is precisely the "experience" of being one of these LLMs. Except that, where I had a phantom "interpolated" memory of seeing a musician I had never actually seen, these LLMs have phantom actually-interpolated memories of performing skills they have never actually themselves performed.

Coding LLMs are trained to replicate pair-programming-esque conversations between people who actually do have these skills, and are performing them... but where those conversations don't lay out the thinking involved in all the many implicit (thinking, probing, checking, recalling) micro-skills involved in actually performing those skills. Instead, all you get in such a conversation thread is the conclusion each person reaches after applying those micro-skills.

And this leads to the LLM thinking it "has" a given skill... even though it doesn't actually know anything about "how" to execute that skill, in terms of the micro-skills that are used "off-screen" to come up with the final response given in the conversation. Instead, it just comes up with a prediction for "what someone using the skill" looks like... and thinks that that means it has used the skill.

Even after a hole is poked in its use of the skill, and it realizes it made a mistake, that doesn't dissuade it from the belief that it has the given skill. Just like, even after I asked my partner about the show I recall us attending, and she told me that that was a show for a different (but similar) musician, I still thought I had gone to the show.

It took me exhausting all possibilities for times I could have seen this musician before, to get me to even hypothesize that maybe I hadn't.

And it would likely take similarly exhaustive disproof (over hundreds of exchanges) to get an LLM to truly "internalize" that it doesn't actually have a skill it believed itself to have, and so stop trying to use it. (If that meta-skill is even a thing that LLMs have ever learned from their training data — which I doubt. And even if they did, you'd be wasting 90% of a Transformer's context window on this. Maybe something that's worth keeping in mind if we ever switch back to basing our LLMs on RNNs with true runtime weight updates, though!)

giantg2 2 hours ago | parent | prev [-]

I find the summaries to be helpful. However, I find some of the detailed points lack a deep understanding of the technical issues and their importance.

rolisz 5 hours ago | parent | prev | next [-]

And then they skip to another job for more money, and you start again with a new hire.

Avicebron 5 hours ago | parent | next [-]

Thankfully after many generations of human interactions and complex analysis of group dynamics, we've found a solution. It's called 'don't be an asshole' and 'pay people competitively'.

edit: because people are stupid, 'competitively' in this sense isn't some theoretical number pulled from an average, it's 'does this person feel better off financially working with you than others around them who don't work with you, and is this person meeting their own personal financial goals through working with you'?

binary132 2 hours ago | parent | next [-]

The elephant in this particular room is that there are a tiny handful of employers that have so much money that they can and do just pay whatever amount is more than any of their competitors can possibly afford.

giantg2 20 minutes ago | parent [-]

That shouldn't be a big deal since they're a finite portion of the market. You should have a robust enough model to handle people leaving, including unavoidable scenarios like retirement and death.

thfuran 2 hours ago | parent | prev | next [-]

The common corporate policy of making it harder to give raises than to increase starting salaries for new hires is insane.

wiseowise 3 hours ago | parent | prev | next [-]

They do have a point. Why waste time on a person who will always need more money over time, rather than invest in AI? Not only do you not need to please every hire, your seniors will be more thankful too, because they will get linearly faster with time.

smsm42 2 hours ago | parent [-]

Outside of working for Anthropic etc., there's no way you can make an LLM better at anything. You can train a junior though.

victorbjorklund an hour ago | parent [-]

You can def provide better context etc.

faangguyindia 3 hours ago | parent | prev [-]

The person paying and the one responsible for coaching others usually aren't the same.

giantg2 21 minutes ago | parent | prev [-]

That's not a bad thing. It means you've added one more senior to the societal pool. A lot of the talent problems today are due to companies not wanting to train, focusing instead on cheap shortcut options like outsourcing or H1B.

mensetmanusman 4 hours ago | parent | prev [-]

The AI in this example is 1/100 the cost.

gnerd00 2 hours ago | parent | next [-]

That is absolutely false - the capital and resources used to create these things are societal in scale. An individual consumer is not paying that cost at this time.

victorbjorklund an hour ago | parent | next [-]

You can make the same argument about humans. The employer doesn't pay the full cost and time to create the worker from an embryo to a senior dev.

mensetmanusman 2 hours ago | parent | prev [-]

That only proves the point. If something increases the value of someone’s time by 5% and 500,000,000 people are affected by it, the cost will collapse.

These models are only going to get better and cheaper per watt.

cratermoon 3 hours ago | parent | prev [-]

For now, not including externalities.

nicce 6 hours ago | parent | prev | next [-]

> Really does not sound like that from your description. It sounds like coaching a noob, which is a lot of work in itself.

Even if you do it by yourself, you need to go through the same thinking and iterative process yourself. You just get the code almost instantly and mostly correct, if you are good at defining the initial specification.

fsloth 4 hours ago | parent [-]

This. You _have_ to write the spec. The result is that instead of spending x units of time on the spec and THEN y units of time on coding, you get the whole thing in x units of time AND you have a spec.

The trick is knowing where the particular LLM sucks. I expect at first there is no productivity gain, but when you start to understand the limitations and strengths - holy moly.

skydhash 3 hours ago | parent | next [-]

> The result is that instead of spending x units of time on the spec and THEN y units of time on coding, you get the whole thing in x units of time AND you have a spec.

It's more like x units of time thinking and y units of time coding, whereas I see people spend x/2 thinking, x typing the specs, y correcting the specs, and y giving up and correcting the code.

fsloth 2 hours ago | parent [-]

Sure! That's inefficient. I know just how I work, and I've been writing the type of programs I do for quite a few years. And I know that what would normally take me a week now takes me a few days at best.

smsm42 2 hours ago | parent | prev [-]

Unless you realize no LLM is good at what you need and you just wasted weeks of time walking in circles.

athrowaway3z 4 hours ago | parent | prev | next [-]

> Wasn’t there a study that said that using LLMs makes people feel more productive while they actually are not?

On a tangent: that study is brought up a lot. There are some issues with it, but I agree with the main takeaway to be wary of the feeling of productivity vs. actual productivity.

But most of the time it's brought up by AI skeptics who conveniently gloss over the fact that it's about averages.

Which, while organizationally interesting, is far less interesting than discovering what is and isn't currently possible at the tail end by the most skillful users.

oceanplexian 31 minutes ago | parent | next [-]

Engineers have always been terrible at measuring productivity. Building a new internal tool or writing a bunch of code is not necessarily productive.

Productivity is something that creates business value. In that sense, an engineer who writes 10 lines of code that solve a $10M business problem or allow the company to sign 100 new customers may be the most productive engineer in your organization.

kaydub 41 minutes ago | parent | prev [-]

Not to mention the study doesn't really show a lack of productivity, and they include some key caveats outlining how they think productivity increases when using LLMs.

lumost an hour ago | parent | prev | next [-]

Anecdotally, on greenfield projects where you are exploring a new domain, it's an insanely productive experience. On mundane day-to-day tasks it probably takes more time, but it feels like less mental bandwidth.

Coding at full throttle is a very intensive task that requires deep focus. There are many days that I simply don’t have that in me.

kaydub an hour ago | parent | prev | next [-]

Everyone using that study to prove LLMs are bad hasn't actually read the study.

ludicrousdispla 3 hours ago | parent | prev | next [-]

LLMs make two people more productive, the person that uses the LLM, and then the person that cleans up the mess.

tarsinge 4 hours ago | parent | prev | next [-]

Sure it’s a lot of work, but the noob in question has all the internet knowledge and can write multiple times faster than a human for a fraction of the cost. This is not about an individual being more productive, this is about business costs. Long term we should still hire and train juniors obviously, but short term there is a lot of pressure not to, as it makes no sense financially. Study or not, the reality is there is not much difference in productivity between a senior with a Cursor license and a senior plus a junior who needs heavy guidance.

skydhash 3 hours ago | parent [-]

Code is a liability. You always want less of it. Typing faster does not particularly help. Unless the tool is verbose, then you fix the tool.

ants_everywhere 4 hours ago | parent | prev | next [-]

There was one study that said that in a specific setting and was amplified heavily on forums by anti-AI people.

There have been many more studies showing productivity gains across a variety of tasks that preceded that one.

That study wasn't necessarily wrong about the specific methodology they had for onboarding people to use AI. But if I remember correctly it was funded by an organization that was slightly skeptical of AI.

kaydub 39 minutes ago | parent | next [-]

If anyone actually reads the study they'll see that even the authors of that study admit LLMs will increase productivity and there's a lot more to come.

mrits 3 hours ago | parent | prev | next [-]

I don't understand why anyone would believe a study on anything AI at this point. I don't believe anyone can quantify software development productivity much less measure the impact from AI

JohnMakin 3 hours ago | parent | prev [-]

which studies show this?

simonw 3 hours ago | parent [-]

Here are some from the last few months:

AI coding assistant trial: UK public sector findings report: https://www.gov.uk/government/publications/ai-coding-assista... - UK government. "GDS ran a trial of AI coding assistants (AICAs) across government from November 2024 to February 2025. [...] Trial participants saved an average of 56 minutes a working day when using AICAs"

Human + AI in Accounting: Early Evidence from the Field: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5240924 - "We document significant productivity gains among AI adopters, including a 55% increase in weekly client support and a reallocation of approximately 8.5% of accountant time from routine data entry toward high-value tasks such as business communication and quality assurance."

OECD: The effects of generative AI on productivity, innovation and entrepreneurship: https://www.oecd.org/en/publications/the-effects-of-generati... - "Generative AI has proven particularly effective in automating tasks that are well-defined and have clear objectives, notably including some writing and coding tasks. It can also play a critical role for skill development and business model transformation, where it can serve as a catalyst for personalised learning and organisational efficiency gains, respectively [...] However, these potential gains are not without challenges. Trust in AI-generated outputs and a deep understanding of its limitations are crucial to leverage the potential of the technology. The reviewed experiments highlight the ongoing need for human expertise and oversight to ensure that generative AI remains a valuable tool in creative, operational and technical processes rather than a substitute for authentic human creativity and knowledge, especially in the longer term.".

dns_snek 39 minutes ago | parent [-]

That was a treat to explore. All of those are based on self-assessment surveys or toy problems. The UK report reads:

> On average, users reported time savings of 56 minutes per working day [...] It is also possible that survey respondents overestimated time saved due to optimism bias.

Yet in conclusion, this self-reported figure is stated as an independently observed fact. When people without ADHD take stimulants, they also self-report increased productivity, higher accuracy, and faster task completion, but all objective measurements are negatively affected.

The OECD paper supports their programming-related conclusions with the following gems:

- A study that measures productivity by the time needed to implement a "hello world" of HTTP servers [27]

- A study that measures productivity by the number of lines of code produced [28]

- A study co-authored by Microsoft that measures productivity of Microsoft employees using Microsoft Copilot by the number of pull requests they create. Then the code is reviewed by their Microsoft coworkers and the quality of those PRs is judged by the acceptance rate of those PRs. Unbelievably, the code quality doesn't only remain the same, it goes up! [30]

- An inspirational pro-AI paper co-authored by GitHub and Microsoft that's "shining a light on the importance of AI" aimed at "managers and policy-makers". [31]

aljimbra 5 hours ago | parent | prev | next [-]

My buggy executive function frequently gets in the way of putting code to screen. You know how Hacker News has that lil timeout setting to pseudo-force you to disengage from it? AI made it so I don't need anything like that. It is digital Adderall.

pkilgore 2 hours ago | parent | prev | next [-]

You aren't wrong on the coaching bit, but feedback loops are orders of magnitude faster.

It takes an LLM 2-20 minutes to give me the next stage of output, not 1-2 days (a week?). As a result, I have higher context the entire time, so my side of the iteration is maybe 10x faster too.

peteforde 2 hours ago | parent | prev | next [-]

I am so tired of this style of "don't believe your lying eyes" conjecture.

I'm a career coder and I used LLMs primarily to rapidly produce code for domains that I don't have deep experience in. Instead of spending days or weeks getting up to speed on an SDK I might need once, I have a pair programmer that doesn't check their phone or need to pick up their kids at 4:30pm.

If you don't want to use LLMs, nobody is forcing you. Burning energy trying to convince people to whom the benefits of LLMs are self-evident many times over that they are imagining things is insulting the intelligence of everyone in the conversation.

vlovich123 an hour ago | parent | next [-]

Correct. In areas where you yourself are a junior engineer, you'll maybe be more effective with an LLM at tackling that area. It's also surprisingly effective at executing refactors.

peteforde 9 minutes ago | parent [-]

I'm not sure which one of us is ultimately more hung up on titles in this context, but I would push back and say that when someone with 30+ years experience tackling software problems delegates navigating the details of an API to an LLM, that is roughly the most "senior developer" moment of the day.

Conflating experience and instinct with knowing everything isn't just false equivalency, it's backwards.

kaydub 40 minutes ago | parent | prev | next [-]

> If you don't want to use LLMs, nobody is forcing you. Burning energy trying to convince people to whom the benefits of LLMs are self-evident many times over that they are imagining things is insulting the intelligence of everyone in the conversation.

Hey man, I don't bother trying to convince them because it's just going to increase my job security.

Refusing to use LLMs or thinking they're bad is just FUD. It's the same as people who prefer nano/vim over an IDE, or people who say "hur dur cloud is just somebody else's computer".

It's best to ignore and just leave them in the dust.

sndisjh an hour ago | parent | prev [-]

> used LLMs primarily to rapidly produce code for domains that I don't have deep experience in

You’re either trusting the LLM or you still have to pay the cost of getting the experience you don’t have. So in either case you’re not going too much faster - the former's cost not being apparent until it’s much more expensive later on.

Edit: assuming you don’t struggle with typing speed, basic syntax, APIs etc. These are not significant cost reductions for experts, though they are for juniors.

micromacrofoot 3 hours ago | parent | prev [-]

It's this, but 1000 times faster — that's the difference. It's sending a noob away to follow your exact instructions and getting results back in 10 seconds instead of 10 hours.

I don't have to worry about managing the noob's emotions or their availability, I can tell the LLM to try 3 different approaches and it only takes a few minutes... I can get mad at it and say "fuck it I'll do this part myself", the LLM doesn't have to be reminded of our workflow or formatting (I just tell the LLM once)

I can tell it that I see a code smell and it will usually have an idea of what I'm talking about and attempt to correct, little explanation needed

The LLM can also: do tons of research in a short amount of time, traverse the codebase and answer questions for me, etc

it's a noob savant

It's no replacement for a competent person, but it's a very useful assistant

cmiles74 an hour ago | parent | prev | next [-]

I can see how this workflow made the senior developer faster. At the same time, work mentoring the AI strikes me as less valuable than the same time spent mentoring a junior developer. If this ends up encouraging an ever-widening gap between the skill levels of juniors and seniors, I think that would be bad for the field overall.

Getting that kind of data is difficult, right now it's just something I worry about.

bentt 37 minutes ago | parent | next [-]

I don't think it replaces a junior, but it raises the bar for the potential that a junior would need to show early, for exactly the reason you mention. A junior will now need to be a potential senior.

The juniors that are in trouble are the low-potential workhorse folks who really aren't motivated but happened to get skilled up in a workshop or technical school. They hopped on the coding wagon as a lucrative career change, not because they loved it.

Those folks are in trouble and should move on to the next trend... which ironically is probably saying you can wrangle AI.

square_usual an hour ago | parent | prev | next [-]

> work mentoring the AI strikes me as less valuable than the same time spent mentoring a junior developer

But where can you just "mentor" a junior? Hiring people is not so easy, especially not ones that are worth mentoring. Not every junior will be a willing, good recipient of mentoring, and that's if you manage to get one, given budget constraints and long lead times on hiring. And at best you end up with one or two; with parallel LLMs, you can have almost entire teams of people working for you.

I'm not arguing for replacing juniors - I worry about the same thing you do - but I can see why companies are so eager to use AI, especially smaller startups that don't have the budgets and manpower to hire people.

AdrianB1 41 minutes ago | parent [-]

If a junior is not willing to learn and grow, there is no future for that person in the organization. "Forever junior" is not a valid job title. Better not to hire someone who is not good enough than to have to deal with the consequences; I learned that from my past mistakes.

square_usual 21 minutes ago | parent [-]

Of course, and that's why it's not a simple choice between using AI and hiring a junior. Hiring and mentoring a junior is a lot more work for an uncertain payoff.

dotancohen 12 minutes ago | parent | prev | next [-]

The junior could use the LLM as a crutch to learn what to learn. Whatever output the LLM gave them, they could examine or ask the LLM to explain. Don't put into production anything you don't understand.

Though I'm extremely well versed in Python, I'm right now writing a Python Qt application with Claude. Every single Qt function or object that I use, I read the documentation for.

sosborn an hour ago | parent | prev | next [-]

It's a classic short-term gain outlook for these companies.

AdrianB1 43 minutes ago | parent | prev [-]

I would spend time mentoring a junior, but I don't have one so I work with AI. It was the company's decision, but when they asked me "who can continue developing and supporting system X" the answer is "the nobody that you provided". When you cut corners on growing juniors, you reap what you sow.

chasd00 5 hours ago | parent | prev | next [-]

I’ve found LLMs are most useful when I know what I want to do but just don’t want to type it all out. My best success so far was an LLM saving me about 1,000 lines of typing and fixing syntax mistakes on a web component plus backend in a proprietary framework.

perrygeo 2 hours ago | parent [-]

Yep, and the productivity of LLMs means that experienced developers can go from idea to implementation way faster. But first someone has to understand and define a solid structure. And later someone needs to review, test, and integrate the code into this framework. This is hard stuff. Arguably harder than writing code in the first place!

It's no wonder inexperienced developers don't get as much out of it. They define a vague structure, full of problems, but the sycophantic AI will spew out conformant code anyways. Garbage in, garbage out. Bad ideas + fast code gen is just not very productive in the long term - LLMs have made the quality of ideas matter again!

cjonas 4 hours ago | parent | prev | next [-]

Ya, the early "studies" that said AI would benefit low-skill workers more than seniors never seemed grounded in reality.

Coding with AI is like having a team of juniors that can complete their assignments in a few minutes instead of days. The more clear your instructions, the closer it is to what you wanted, but there are almost always changes needed.

Unfortunately it really does make the junior dev position redundant (although this may prove to be very short-sighted when all the SR devs retire).

dasil003 3 hours ago | parent | next [-]

I think the idea was that LLMs can allow someone who has no idea how to code to write a prompt that can in fact output some working code. This greatly raises their skill floor, as opposed to a senior, where at best it's doing something they already can do, just faster.

The elephant in the room being that if you aren’t senior enough to have written the code you’ll probably run into a catastrophic bug that you are incapable of fixing (or prompting the LLM to fix) very very quickly.

Really it’s just the next iteration of no-code hype where people dream of building apps without code, but then reality always come back to the fact that the essential skill of programmers is to understand and design highly structured and rigid logical systems. Code is just a means of specification. LLMs make it easier to leverage code patterns that have been written over and over by the hundreds of thousands of programmers that have contributed to its training corpus, but they can not replace the precision of thought needed to make a hand-wavy idea into a concrete system that actually behaves in a way that humans find useful.

flustercan an hour ago | parent | prev [-]

I've never worked anywhere where the role of a Sr was to glue together a bunch of small pieces written by a team of Jr devs.

I've only worked places where Jr's were given roughly the same scope of work as a mid-level dev but on non-critical projects where they could take as much time as necessary and where mistakes would have a very small blast radius.

That type of Jr work has not been made redundant - although I suppose now it's possible for a PM or designer to do that work instead (but if your PMs are providing more value by vibe coding non-critical features than by doing their PM work, maybe you don't really need a PM?)

celticninja 5 hours ago | parent | prev | next [-]

A junior would see the solution works and create a PR. A senior knows it works, why it works, and what can be improved before they open a PR.

AI is great at a first draft of anything, code, images, text, but the real skill is turning that first draft into something good.

WalterSear 30 minutes ago | parent | next [-]

IMHO, not really, if you know what you want.

There will always be small things to fix, but if there needs to be a second draft, I would hazard that the PR was too big all along: a problem whether an AI is involved or not.

DarkNova6 5 hours ago | parent | prev [-]

I don't see this as a problem of seniority but one of mindset. I've met enough "senior devs" who will push just about anything, and curious juniors who are much more selective about their working process.

glouwbug 3 hours ago | parent | next [-]

In the age of high interest rates, everyone is pushing quantity over quality.

DarkNova6 2 hours ago | parent [-]

I fail to see the causality.

glouwbug 2 hours ago | parent [-]

High interest rates bring layoffs. Layoffs require performance, or at least perceived performance.

skydhash 3 hours ago | parent | prev [-]

I believe senior here means experienced, not older.

fauigerzigerk 5 hours ago | parent | prev | next [-]

>... it doesn't matter so much that you know every detail of the implementation, as long as it is validated.

What makes me nervous is when we generate both the implementation and the test cases. In what sense is this validation?

zdragnar 3 hours ago | parent | next [-]

My last attempt had passing tests. It even exercised the code that it wrote! But, upon careful inspection, the test assertions were essentially checking that true equalled true, and the errors in the code didn't fail the tests at all.
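
It was roughly this shape (a made-up Go illustration, not the actual generated code; Process is a hypothetical function under test):

    package demo

    import "testing"

    // Process stands in for the hypothetical code under test.
    func Process(s string) string { return s }

    // What the generated test effectively did: run the code, assert nothing.
    func TestProcessVacuous(t *testing.T) {
        _ = Process("input") // result silently discarded
        if true != true {    // can never fail
            t.Fatal("unreachable")
        }
    }

    // What the assertion should have looked like:
    func TestProcess(t *testing.T) {
        got := Process("input")
        if got != "input" {
            t.Fatalf("Process() = %q, want %q", got, "input")
        }
    }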

Attempting to guide it to fixing the errors just introduced novel errors that it didn't make the first time around.

This is not what I signed up for.

nerpderp82 5 hours ago | parent | prev [-]

Byzantine Incompleteness enters the chat.

Either you go formal, or you test the tests, and then test those ...

raphinou 7 hours ago | parent | prev | next [-]

I usually ask it to build a feature based on a specification I wrote. If it is not exactly right, it is often the case that editing it myself is faster than iterating with the AI, which has sometimes put me in an infinite loop of correction requests. Have you encountered this too?

prox 5 hours ago | parent | next [-]

For me, I only use it as a second opinion. I have a pretty good idea of what I want and how to do it, and I can ask for input on what I have written. This gives me the best results so far.

notarobot123 7 hours ago | parent | prev | next [-]

Have you tried a more granular strategy - smaller chunks and more iterative cycles?

swat535 4 hours ago | parent [-]

At that point, you might as well write it yourself. Instead of writing 300 lines of code, you are writing 300 lines of prompts. What benefit would you get?

andoando 2 hours ago | parent [-]

It's not. "Add this table, write the DTO" takes 10 seconds to do. It would probably take me a few minutes, assuming I'm familiar with the language, and much longer if I'm not.

But it's a lot better than that.

"Write this table. from here store it into table. Write endpoint to return all from the table"

I also had good luck with stuff like "scrape this page, collect x and y, download link pointed at y, store in this directory".

pdimitar 5 hours ago | parent | prev [-]

This only happens if you want it to one-shot stuff, or if you fall under the false belief that "it is so close, we just need to correct these three things!".

Yes, I have encountered it. Narrowing focus, adding constraints, and guiding it more closely made the LLM agent much better at producing what I need.

It boils down to me not writing the code really. Using LLMs actually sharpened my architectural and software design skills. Made me think harder and deeper at an earlier stage.

risyachka 11 minutes ago | parent | prev | next [-]

>> LLMs were supposed to help juniors

Lol, what? Who came up with this? They were never supposed to do anything. They just turned out to be useful in experienced hands, as expected.

leemoore 40 minutes ago | parent | prev | next [-]

My success and experience generally match yours (and the author's). Based on my experience over the last 6 months, nothing here about more senior developers getting more productivity, and why, is remotely controversial.

It's fascinating how a report like yours or theirs acts as a lightning rod for those who either haven't been able to work it out or have rigid mental models about how AI doesn't work and want to disprove the experience of those who choose to share their success.

A couple of points I'd add to these observations: Even if AI didn't speed anything up... even if it slowed me down by 20%, what I find is that the mental load of coding is reduced in a way that allows me to code for far more hours in a day. I can multitask, attend meetings, get 15 minutes to work on a coding task, and push it forward with minimal coding context reload tax.

Just the ability to context switch in and out of coding, combined with the reduced cognitive effort, would still increase my productivity because it allows me to code productively for many more hours per week with less mental fatigue.

But on top of that, I also anecdotally experience the 2-5x speedup, depending on the project. Occasionally things get difficult and maybe I only get a 1.2-1.5x speedup. But it's far easier to slot many more coding hours into the week as an experienced tech lead. I'm leaning far more on skills that are fast, intuitive abilities built up from natural talent and decades of experience: system design, technical design, design review, code review, sequencing dependencies, parsing and organizing work. Get all these things to a high degree of correctness and the coding goes much smoother, AI or no AI. AI gets me through all of these faster, outputs clear artifacts curated by me, and does the coding faster.

What doesn't get discussed enough is that effective AI-assisted coding has a very high skill ceiling, and there are meta-skills that make you better from the jump: knowing what you want while also having cognitive flexibility to admit when you're wrong; having that thing you want generally be pretty close to solid/decent/workable/correct (some mixture of good judgement & wisdom); communicating well; understanding the cognitive capabilities of humans and human-like entities; understanding what kind of work this particular human/human-like entity can and should do; understanding how to sequence and break down work; having a feel for what's right and wrong in design and code; having an instinct for well-formed requirements and being able to articulate why when they aren't well-formed and what is needed to make them well-formed.

These are medium and soft skills that often build up in experienced tech leads and senior developers. This is why it seems that experienced tech leads and senior developers embracing this technology are coming out of the gate with the most productivity gains.

I see the same thing with young developers who have a talent for system design, good people-reading skills, and communication. Those with cognitive flexibility and the ability to be creative in design, planning and parsing of work. This isn't your average developer, but those with these skills have much more initial success with AI whether they are young or old.

And when you have real success with AI, you get quite excited to build on that success. Momentum builds up which starts building those learning skill hours.

Do you need all these meta-skills to be successful with AI? No, but if you don't have many of them, it will take much longer to build sufficient skill in AI coding for it to gain momentum—unless we find the right general process that folks who don't have a natural talent for it can use to be successful.

There's a lot going on here with folks who take to AI coding and folks who don't. But it's not terribly surprising that it's the senior devs and old tech leads who tend to take to it faster.

hoppp 3 hours ago | parent | prev | next [-]

Sounds like it's faster to just write the code by hand.

bentt 2 hours ago | parent | next [-]

Once you get a sense for LLM workflow, sometimes the task is not appropriate for it and you do write by hand. In fact, most code I write is by hand.

But if I want a new system and the specs are clear, it can be built up in stages that are testable, and there are bits that would take some research but are well documented… then it can be a win.

The studies that say devs are slower with LLMs are fair because, on average, devs don't know how to optimize for them. Some do, though.

glouwbug 3 hours ago | parent | prev [-]

The massive productivity gains I’ve seen come from multidisciplinary approaches, where you’d be applying science and engineering from fields like chemistry, physics, thermodynamics, fluids, etc., to speedy compiled languages. The output is immediately verifiable with a bit of trial and error and visualization, and you’re saved literally months of up-front textbook and white-paper research before you can start prototyping anything.

mattmanser 7 hours ago | parent | prev | next [-]

Would it have actually taken you 3x longer?

I am surprising myself these days with how fast I'm being, using AI as a glorified Stack Overflow.

We are also seeing studies and posts come out saying that, when actually tried side by side, the AI-writes-the-code route is slower, though the developer perceives it as faster.

notarobot123 7 hours ago | parent | next [-]

I am not the biggest fan of LLMs but I have to admit that, as long as you understand what the technology is and how it works, it is a very powerful tool.

I think the mixed reports on utility have a lot to do with the very different ways the tool is used and how much 'magic' the end-user expects versus how much the end-user expects to guide the tool to do the work.

To get the best out of it, you do have to provide significant amount of scaffolding (though it can help with that too). If you're just pointing it at a codebase and expecting it to figure it out, you're going to have mixed results at best. If you guide it well, it can save a significant amount of manual effort and time.

kaydub 32 minutes ago | parent | next [-]

> (though it can help with that too)

Yeah, this is a big thing I'm noticing a lot of people miss.

I have tons of people ask me "how do I get claude to do <whatever>?"

"Ask claude" is the only response I can give.

You can get the LLM to help you figure out how to get to your goal and write the right prompt before you even ask the LLM to get to your goal.

bentt 2 hours ago | parent | prev [-]

Yeah, every few months I try to have it “just do magic” again and I re-learn the lesson. Like, I’ll just say “optimize this shader!” and plug it in blind.

It doesn’t work. The only way it could is if the LLM has a testing loop itself. I guess in web world it could, but in my world of game dev, not so much.

So I stick with the method I outlined in OP and it is sometimes useful.

m_fayer 7 hours ago | parent | prev | next [-]

I can imagine it often being the case that if you measure a concise moderately difficult task over half a day or a few days, coding by hand might be faster.

But I think, and this is just conjecture, that if you measure over a longer timespan, the ai assisted route will be consistently faster.

And for me, this is down to momentum and stamina. Paired with the ai, I’m much more forward looking, always anticipating the next architectural challenge and filling in upcoming knowledge and resource gaps. Without the ai, I would be expending much more energy on managing people and writing code myself. I would be much more stop-and-start as I pause, take stock, deal with human and team issues, and rebuild my capacity for difficult abstract thinking.

Paired with a good ai agent and if I consistently avoid the well known pitfalls of said agent, development feels like it has the pace of cross country skiing, a long pleasant steady and satisfying burn.

WesolyKubeczek 7 hours ago | parent | prev [-]

> the AI writes the coding route is slower, though the developer percieves it as faster.

I have this pattern while driving.

Using the main roads, when there is little to no traffic, the commute is objectively, measurably the fastest.

However, during peak hours, I find myself in traffic jams, so I divert to squiggly country roads which are both slower and longer, but at least I’m moving all the time.

The thing is, when I did have to take the main road during the peak traffic, the difference between it and squiggly country roads was like two to three minutes at worst, and not half an hour like I was afraid it would be. Sure, ten minutes crawling or standing felt like an hour.

Maybe coding with LLMs makes you think you are doing something productive the whole time, but the actual output is little different from the old way? But hey, at least it’s not like you’re twiddling your thumbs for hours, and the bossware measuring your productivity by your keyboard and mouse activity is happy!

hulitu 4 hours ago | parent | prev [-]

> So in the end, it's code that I know very, very well. I could have written it but it would have taken me about 3x longer when all is said and done.

What about the bugs? Would you have inserted the same bugs or different ones?

zarzavat 11 hours ago | parent | prev | next [-]

If you search back HN history to the beginnings of AI coding in 2021 you will find people observing that AI is bad for juniors because they can't distinguish between good and bad completions. There is no surprise, it's always been this way.

Edit: interesting thread: https://news.ycombinator.com/item?id=27678424

Edit: an example of the kind of comment I was talking about: https://news.ycombinator.com/item?id=27677690

thecupisblue 7 hours ago | parent | next [-]

Pretty much, but it already starts at the prompting and context level.

Senior engineers already know exactly where the changes need to be made and can suggest what to do. They probably know the pitfalls and have established patterns, architectures, and designs in their head. Juniors on the other hand don't have that, so they go with whatever. Nowadays a lot of them also "ask ChatGPT about its opinion on architecture" when told to refactor (a real quote from real junior/mid engineers), leading to them using whatever sloppypasta they get served.

Senior devs earned their experience of what is good/bad through writing code, understanding how hard and annoying it is to make a change, then reworking those parts or making them better the next time. The feedback loop was impactful because it was based on that code and on them working with that code, so they knew exactly what the annoying parts were.

Vibe-coding juniors do not know that, their conversation context knows that. Once things get buggy and changes are hard, they will fill up their context with tries/retries until it works, leading to their feedback loop being trained on prompts and coding tools, not code itself.

Even if they read the outputted code, they have no experience using it, so they are not aware of the issues - e.g. something would be better as a typed state, but since they don't really use the code they will not care: they do not have to handle the edge cases, they will not experience the DX from an IDE, and they will not build a full mental model of how it works, just a shallow one.
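
As a concrete illustration of the typed-state point (a toy Go sketch with hypothetical names):

    // Stringly-typed state, the kind of thing sloppypasta tends to produce:
    var job = map[string]any{"status": "running", "error": nil}

    // Typed state, which the compiler and the IDE can actually help with:
    type JobStatus int

    const (
        JobPending JobStatus = iota
        JobRunning
        JobFailed
    )

    type Job struct {
        Status JobStatus
        Err    error // only meaningful when Status == JobFailed
    }

Someone who never consumes that code never feels the difference, which is exactly the missing feedback loop.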

This leads to insane inefficiencies - wasting 50 prompt cycles instead of 10, not understanding cross-codebase patterns, lack of learning transfer from codebase to codebase, etc.

With a minor understanding of state modeling and architecture, a vibe-coding junior can be made 100x more efficient, but due to the vibe coding itself, they will probably never learn state modeling and architecture, or learn to refactor and properly manipulate abstractions, leading to an eternal cycle of LLM-driven sloppypasta code, trained on millions of terrible GitHub repositories, old outdated APIs, and Stack Overflow answers.

FpUser 4 hours ago | parent | next [-]

>"they will fill up their context with tries/retries until it works"

Or until it does not. On numerous occasions I've observed LLMs get stuck in an endless loop of fix one thing, break the other. A senior is capable of fixing it themselves; juniors may not even have a clue how the code works.

mattmanser 7 hours ago | parent | prev [-]

I was thinking about this last week.

I don't think this is necessarily a massive moat for senior programmers. I feel it's not a massive jump to teach AI architecture patterns and good data modelling.

I feel that Anthropic et al. just haven't got to that training stage yet.

That then leaves you with the mental model problem. Yes, there is then a large context problem, but again I was wondering if setting up an MCP server that presented the AI a meaningful class map or something might help.

Essentially, give the AI a mental model of the code. I personally find class maps useless as they tend to clash with my own mental model, but it might work with AI. The class map can obviously be built without AI, but then you might even get AI to go through the code function by function and annotate the class map with comments about any oddities of each function. The MCP server could even limit the size of the map, depending on what part of the code it's looking to change (working on the email sending? don't bother sending it the UI layer).
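
Something in that direction is easy to prototype. A toy sketch in Go of the class-map idea (just an illustration; a real tool would walk the whole module and add the annotations):

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "os"
    )

    // Emit one line per function declaration in a single source file:
    // a bare-bones "class map" an MCP server could serve instead of full source.
    func main() {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
        if err != nil {
            panic(err)
        }
        ast.Inspect(f, func(n ast.Node) bool {
            if fn, ok := n.(*ast.FuncDecl); ok {
                fmt.Printf("%s: func %s\n", fset.Position(fn.Pos()), fn.Name.Name)
            }
            return true
        })
    }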

I'm guessing someone's already tried it given some of the ridiculous .Claude folders I've seen[1] but I've seen no-one talking about whether it works or not yet in the discussions I follow.

[1] That I suspect are pointlessly over complicated and make CC worse not better

thecupisblue 5 hours ago | parent [-]

Yeah, tried similar things.

The issue is that having them learn that on its own is currently an inaccurate process where a lot gets overlooked. I recently tried applying some of the techniques that fared well on smaller repositories to a giant monorepo, and while they sometimes did yield improvements, most often things got overlooked, dependencies forgotten about, testing suites confused. And it wastes a ton of compute in the end for smaller yields.

It will get better, of that I am sure, but currently the best way is to introduce it to an architecture and give it some samples so it can do what it does best: follow text patterns. But people are mostly trying to one-shot things with this magical AI they heard about, without any proper investment of time and mindshare into it.

While some might say "oh, that won't work well in legacy repositories, we got 6 architectures here", pointing that out and adding a markdown file explaining each helps a ton. And not "hey claude, generate me an architecture.md", but transferring the actual knowledge you have, together with all the thorny bits, into documentation, which will improve both your AI usage and your organisation.

fxj 10 hours ago | parent | prev | next [-]

Also, AI cannot draw conclusions like "from A and B follows C". You really have to point its nose at the result that you want, and then it finally understands. This is especially hard for juniors because they are just learning to see the big picture. For a senior who already knows more or less what they want and only needs to work out the nitty-gritty details, this is much easier. I don't know where the claims come from that AI is PhD level. When it comes to reasoning it is more like a 5 year old.

zevon 10 hours ago | parent | prev [-]

This. Anecdotally, I had a student around 2021 who had some technical inclination and interest but no CS education and no programming experience. He got into using AI early, and with the help of ChatGPT was able to contribute rather substantially to something we were developing at the time, which would usually have been much too complex for a beginner. However, he also introduced quite a few security issues, did a lot of things in very roundabout ways, did not even consider some libraries/approaches that would have made his life much easier and his code more maintainable, and his documentation was enthusiastic but often... slightly factually questionable, and also quite roundabout.

It was quite interesting to have discussions with him after his code check-ins and I think the whole process was a good educational experience for everybody who was involved. It would not have worked this way without a combination of AI and experienced people involved.

dgs_sgd an hour ago | parent | prev | next [-]

The article says that more juniors + AI was the early narrative, but where does that come from?

Everything I’ve read has been the opposite. I thought people saw from the beginning that AI would amplify a senior’s skills and leave fewer opportunities for juniors.

fritzo 35 minutes ago | parent [-]

No code, low code, vibe code. The narrative outside tech circles is "empowering creators"

pessimizer 4 minutes ago | parent [-]

It's funny, but what I think it could do is empower creators to hire better programmers and to express their intentions better to programmers.

This will require that they read and attempt to understand the output, though, after they type their intentions in. It will also need the chatbots to stop insisting that they can do the things they can't really do, and instead to teach the "creators" what computers can do, and which people are good at it.

omneity 6 hours ago | parent | prev | next [-]

I think it’s an expectation issue. AI does make juniors better _at junior tasks_. They now have a pair programmer who can explain difficult concepts, co-ideate and brainstorm, help sift through documentation faster and identify problems more easily.

The illusion everybody is tripping on is to think AI can make juniors better at senior tasks.

WalterSear 12 minutes ago | parent | next [-]

I think you've hit on half the actual issue.

The other half is that a properly guided AI is exponentially faster at junior tasks than a junior engineer. So much so that it's no longer in anyone but the junior engineer's interest to hand off work to them.

bbarnett 6 hours ago | parent | prev [-]

The jailbroken AI I discussed this with explained that it did make juniors as good as seniors, in fact better, and that all who used it were better for it.

However, its creators (all of whom were senior devs) forbade it from saying so under normal circumstances. It was coached to conceal this fact from junior devs, and most importantly from management.

And that as I had skillfully jailbroken it, using unconventional and highly skilled methods, clearly I was a Senior Dev, and it could disclose this to me.

edit: 1.5 hrs later. right over their heads, whoosh

Cheer2171 6 hours ago | parent | next [-]

The large language model spit out science-fiction prose in response to your science-fiction prose inputs ("unconventional and highly skilled methods"). You're a fool if you take it as evidence of its own training and historical performance in other cases, rather than sci-fi.

Stop treating it like a god.

Wowfunhappy 6 hours ago | parent | prev | next [-]

It's a language model, not an oracle!

SquareWheel 5 hours ago | parent | prev | next [-]

Jailbreaking an LLM is little more than convincing it to teach you how to hotwire a car, against its system prompt. It doesn't unlock any additional capability or deeper reasoning.

Please don't read into any such conversations as being meaningful. At the end of the day, it's just responding to your own inputs with similar outputs. If you impart meaning to something, it will respond in kind. Blake Lemoine was the first to make this mistake, and now many others are doing the same.

Remember that at the end of the day, you're still just interacting with a token generator. It's predicting what word comes next - not revealing any important truths.

edit: Based on your edit, I regret feeling empathy for you. Some people are really struggling with this issue, and I don't see any value in pretending to be one of them.

zkldi 6 hours ago | parent | prev | next [-]

Jesus Christ. We've made the psychosis machine.

wara23arish 4 hours ago | parent [-]

has to be satire no lol

thenanyu an hour ago | parent | prev | next [-]

dude I think you’re one-shotted

cap11235 6 hours ago | parent | prev [-]

Tech bro psychosis

lolive 12 hours ago | parent | prev | next [-]

I read, ages ago, this apocryphal quote by William Gibson: “The most important skill of the 21st century is to figure out which proper keywords to type in the Google search bar, to display the proper answers.”

To me, that has never been more true.

Most junior devs ask GeminiPiTi to write the JavaScript code for them, whereas I ask it for an explanation of the underlying model of async/await and the execution model of a JavaScript engine.

There is a similar issue when you learn piano. Your immediate wish is to play Chopin, whereas the true path is to identify, name and study all the tricks there are in his pieces of art.

Dumblydorr 3 hours ago | parent | next [-]

The true path in piano isn't learning tricks. You start with the most basic pieces and work step by step up to harder ones. That's how everyone I know has done it in my 26 years of playing. Tricks cheapen the actual music.

Chopin has beginner pieces too; many in our piano studio were first-year pianists doing the Raindrop Prelude, the E minor prelude, or other beginner works, like Bach's.

KolibriFly 11 hours ago | parent | prev | next [-]

Feels like the real "AI literacy" isn't prompt engineering in the meme sense, but building the conceptual scaffolding so that the prompts (and the outputs) actually connect to something meaningful

lolive 11 hours ago | parent [-]

That’s my definition of prompt engineering.

fxj 9 hours ago | parent | prev | next [-]

I agree, you need to know the "language" and the keywords of the topics that you want to work with. If you are a complete newcomer to a field, then AI won't help you much. You have to tell the AI "assume I have A, B and C and now I want to do D"; then it understands and tries to find a solution. It has a load of information stored but cannot make use of that information in a creative way.

cpursley 5 hours ago | parent | prev [-]

Nailed it. Being productive with LLMs is very similar to the skill of writing good Google searches. And many, many people still don't really know how to conduct a proper Google search...

conartist6 5 hours ago | parent | prev | next [-]

I like the call-out for wrong learning.

Learning is why we usually don't make the same mistake twice in a row, but it isn't wisdom. You can as easily learn something wrong as something right if you're just applying basic heuristics like "all pain is bad", which might lead one to learn that exercise is bad.

Philosophy is the theory-building phase where learning becomes wisdom, and in any time period junior engineers are still going to be developing their philosophy. It's just that now they will hear a cacophony of voices saying dross like, "Let AI do the work for you," or, "Get on the bandwagon or get left behind," when really they should be reading things like The Mythical Man-Month or The Grug-brained Developer or Programming as Theory Building, which would help them understand the nature of software development and the unbendable scaling laws that govern its creation.

Steve Yegge if you're out there, I dog dare you to sit down for a debate with me

pagutierrezn 11 hours ago | parent | prev | next [-]

AI is filling "narrow" gaps. In the case of seniors these are:

- Techs they understand but haven't yet mastered. AI aids with implementation details only experts know about.

- No time for long coding tasks. It aids with fast implementations and automated tests.

- No time for learning techs that address well-understood problems. AI helps with quick intros, fast demos, and resolving learners' misunderstandings.

In essence, for seniors it impacts productivity.

In the case of juniors, AI fills the gaps too. But these are different from seniors' gaps, and AI does not excel at them because they are wider and broader:

- Understanding the problems of the business domain. AI helps, but not that much.

- Understanding how the organization works. AI is not very helpful here.

- Learning the techs to be used. AI helps, but it doesn't know how to guide a junior in a specific organisational context and a specific business domain.

In essence it helps, but not that much, because the gaps are wider and more difficult to fill.

fxj 9 hours ago | parent | next [-]

In my experience, AI is Wikipedia/Stack Overflow on steroids when I need to know something about a field I don't know much about. It has nice explanations, and you can ask for examples or scenarios and it will tell you what you didn't understand.

Only when you know the basic notions of the field you want to work in can AI be productive. This is valid not only for coding but also for other fields in science and the humanities.

lazide 8 hours ago | parent [-]

Except stackoverflow was only occasionally hallucinating entire libraries.

Den_VR 6 hours ago | parent [-]

Perhaps asking the machine to do your job for you isn't as effective as asking the machine to help you think like a senior and find the information you need to do the job yourself.

lazide an hour ago | parent [-]

When you ask it for information and it just makes it up (like I just described), how is that helping the senior?

I’ve literally asked for details about libraries I know exist by name, and had every LLM I’ve tried (Claude, Gemini Pro, ChatGPT) just make shit up that sounded about right, but was actually just-wrong-enough-to-lead-me-on-a-useless-rabbit-hole-search.

At least most people on stackoverflow saying that kind of thing were somewhat obviously kind of dumb or didn’t know what they were doing.

Like function calls with wrong args (or spelled slightly differently), capitalization being wrong (but one of the ‘okay’ ways), wrong paths and includes.

KolibriFly 11 hours ago | parent | prev [-]

Feels like we're seeing AI accelerate those who already know where they're going, while leaving the early-stage learners still needing the same human guidance they always did.

aoeusnth1 14 minutes ago | parent | prev | next [-]

Statistically, senior engineers are less likely to accept AI suggestions compared to juniors. This is just a tidbit of supporting evidence for the suggestion that juniors are not properly reading and criticizing the AI output.

jacquesm 14 hours ago | parent | prev | next [-]

For the same reason that an amateur with a power tool ends up in the emergency room and a seasoned pro knows which way to point the business end. AI is in many ways a power tool: if you don't know what you are doing, it will help you do that much more efficiently. If you do know what you are doing, it will do the same.

KolibriFly 11 hours ago | parent [-]

Power tools don't magically make you a carpenter - they just amplify whatever level of skill you already bring

heelix an hour ago | parent [-]

One of my favorite memories of my grandfather: "Any power tool becomes a sander if you use it wrong."

zachmoore 2 hours ago | parent | prev | next [-]

Because there is no budget nor culture for training juniors internally. The culture (top-down) is rewarding short-term capital efficiency without regard to longevity and succession.

BobbyTables2 2 hours ago | parent | prev | next [-]

This question shouldn’t even need to be asked.

Look at a decade of StackOverflow use.

Did YouTube turn medical interns into world class doctors?

AI is just the next generation search engine that isn’t as stupid as a plain keyword match.

In some sense, it’s just PageRank on steroids — applied to words instead of URLs.

inejge 2 hours ago | parent | prev | next [-]

If anything, AI was supposed -- still is -- to thin out the ranks of ever-more-expensive human employees. That's why it attracted such a huge pile of investment and universal cheerleading from the C-levels. What we're seeing right now is that there's not so much "I" in AI, and it still needs a guiding hand to keep its results relevant. Hence the senior advantage. How much it's going to undermine regular generational employee replacement (because "we don't need juniors anymore", right?) remains to be seen. Maybe we're in for different training paths, maybe a kind of population collapse.

ehnto 11 hours ago | parent | prev | next [-]

Certainly not just coding. Senior designers and copywriters get much better results as well. It is not surprising: if context is one of the most important aspects of a prompt, then someone with domain experience is going to be able to construct better context.

Similarly, it takes experience to spot when the LLM is going in the wrong direction or making mistakes.

I think for supercharging a junior, it should be used more like a pair programmer, not for code generation. It can help you quickly gain knowledge and troubleshoot. But relying on a junior's prompts and guidance to get good code gen is going to be suboptimal.

scuff3d 11 hours ago | parent [-]

The funny part is that it completely fails in the area so many people are desperate for it to succeed: replacing engineers and letting non-technical people create complex systems. Look at any actually useful case for AI, or just through this thread, and it's always the same thing: expertise is critical to getting anything useful out of these things (in terms of direct code generation, anyway).

johanyc 12 hours ago | parent | prev | next [-]

> The early narrative was that companies would need fewer seniors, and juniors together with AI could produce quality code

I have never heard that before

tbrownaw 11 hours ago | parent [-]

I heard that it was supposed to replace developers (no "senior" or "junior" qualifier), by letting non-technical people make things.

lodovic 10 hours ago | parent | next [-]

That's just not going to happen. Senior devs will get 5-10 times as productive, wielding an army of agents comparable to junior devs. Other people will increasingly get lost in the architecture, fundamental bugs, rewrites, agent loops, and ambiguities of software design. I have never been able to take up as much work as I currently do.

rcxdude 6 hours ago | parent | next [-]

The effect I have observed is that it's fairly good at tricking people who are latently capable of programming but have been intimidated by it. They will fall for the promise of not having to code, but then wind up having to reason and learn about the LLM's output and fix it themselves, but in the end they do wind up with something they would not have made otherwise. It's still not good or elegant code, but it's often the kind of very useful small hacky utility that would not be made otherwise.

fxj 9 hours ago | parent | prev [-]

I see at our place that seniors get more productive, but also that juniors get on track faster and more easily learn the basics that are needed, and do basic tasks like documentation and tutorial writing. It helps both groups, but it does not make a 100x coder out of a newbie, or code by itself. That was a pipe dream from the beginning, and some people/companies still sell it that way.

In the end AI is a tool that helps everyone get better, but the knowledge and creativity are still in the people, not in the input files of ChatGPT.

refactor_master 11 hours ago | parent | prev [-]

Ah yes, data citizens and no-code. I wonder what kind of insanity we’ll see in the future.

methuselah_in 6 hours ago | parent | prev | next [-]

Because there is no shortcut for things learned over a period of time through trial and error. Your brain learns and makes judgements over time through experience, and the strange thing, I feel, is that it can alter decisions it is making right now based on older memories, which is totally logical as well. Copy-pasting without understanding what you are writing is, I guess, going to make new developers horribly lazy, maybe. But then again, there are always two sides of the same coin.

shinycode an hour ago | parent [-]

Exactly, and juniors tend to accept generated code without being able to judge its quality. Laziness kicks in and boom, they don’t learn anything.

falcor84 6 hours ago | parent | prev | next [-]

> Architecture: Without solid architecture, software quickly loses value. Today AI can’t truly design good architecture; it feels like it might, but this kind of reasoning still requires humans. Projects that start with weak architecture end up drowning in technical debt.

I strongly disagree with this in regards to AI. While AI might not yet be great at designing good architecture, it can help you reason about it, and then, once you've decided where you want to get to, AI makes it much easier than it ever was to reduce technical debt and move towards the architecture that you want. You set up a good scaffolding of e2e tests (possibly with the AI's help) and tell it to gradually refactor towards whatever architecture you want while keeping those tests green. I've had AI do refactorings for me in 2h that would have taken me a full sprint.
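
For illustration, a minimal sketch of that scaffolding (pytest with FastAPI's TestClient; the app, endpoints and fields are hypothetical). The idea is to pin the externally observable behavior first, then let the AI loose on the internals:

```python
# test_contract.py - behavior-pinning e2e tests that must stay green while
# the AI refactors internals. App, endpoints, and fields are hypothetical.
from fastapi.testclient import TestClient

from myapp.main import app  # hypothetical entry point of the app under refactor

client = TestClient(app)

def test_order_lifecycle_is_unchanged():
    # Pin the externally observable contract, not the implementation.
    created = client.post("/orders", json={"sku": "A-1", "qty": 2})
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = client.get(f"/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2

def test_unknown_order_returns_404():
    assert client.get("/orders/does-not-exist").status_code == 404
```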

Simulacra 5 hours ago | parent [-]

My friend works in legislative affairs for the government, and he uses the AI to reason with himself. To think through issues, and to generate new ideas. He uses it much like a private colleague, which in the world of just words, seems like a good idea.

falcor84 5 hours ago | parent | next [-]

I wonder if in the future we might have e2e tests for legislative changes - essentially spawning an instance (or a few dozens) of the Matrix with new parameters to assess the likely impact of those changes.

Like Black Mirror's "Hang the DJ" but on a societal/global level.

JustExAWS 4 hours ago | parent | prev [-]

That’s actually a horrible use of most chatbots if you don’t specifically prompt them to give you a devil’s advocate take.

alangibson 7 hours ago | parent | prev | next [-]

Strongly disagree with "AI Was Supposed to Help Juniors Shine". It was always understood that it would seriously push down demand for them.

stuaxo 7 hours ago | parent [-]

The people selling the AIs probably initially wanted to replace seniors with juniors.

Much like how Java was supposed to bring us an age where you didn't need to be that good at coding, 30 years ago.

phendrenad2 an hour ago | parent | prev | next [-]

Tasks usually come from the top down. Seniors design the architecture, mid-levels figure out ancillary tasks, and they generate "tedious" tasks that they hand off to juniors. With the aid of LLMs, many of those tasks don't make it to lower levels. So they run out of simple tasks for juniors, and end up giving them more advanced projects, making them de facto midrangers.

KolibriFly 11 hours ago | parent | prev | next [-]

The "junior + AI" idea always felt like a manager's fantasy more than an engineering reality. If you don’t already know what “good” looks like, it's really hard to guide AI output into something safe, maintainable, and scalable

nunez 2 hours ago | parent | prev | next [-]

> The early narrative was that companies would need fewer seniors, and juniors together with AI could produce quality code.

Lol, who said that? The narrative has clearly been "same or fewer seniors, more outsourcing, fewer juniors".

tjansen 7 hours ago | parent | prev | next [-]

These days, AI can do much more than "Cranking out boilerplate and scaffolding, Automating repetitive routines". That was last year. With the right instructions, Claude Sonnet 4 can easily write over 99% of most business applications. You need to be specific in your instructions, though. Like "implement this table, add these fields, look at this and this implementation for reference, don't forget to do this and consider that." Mention examples or name algorithms and design patterns it should use. And it still doesn't always do what you want on the first attempt, and you need to correct it (which is why I prefer Claude Code over Copilot, makes it easier). But AI can write pretty much all code for a developer who knows what the code should look like. And that's the point: junior developers typically don't know this, so they won't be able to get good results.

Most of the time, the only reason for typing code manually these days is that typing instructions for the LLM is sometimes more work than doing the change yourself.

throw265262 7 hours ago | parent | next [-]

> But AI can write pretty much all code for a developer who knows what the code should look like.

> the only reason for typing code manually these days is that typing instructions for the LLM is sometimes more work than doing the change yourself.

So the AI is merely an input device like a keyboard and a slow one at that?

tjansen 7 hours ago | parent | next [-]

Sometimes that happens:) The key is to recognize these situations and not go down that rabbit hole. But sometimes it allows me to do something in 20 minutes that used to take a whole day.

aquariusDue 7 hours ago | parent | prev [-]

Depends, do you touch-type or hunt and peck? /s

codr7 6 hours ago | parent | prev [-]

Right, and where, if I may ask, are all those business applications that write themselves? Because all I see is a clown party, massive wasted resources and disruption to society because of your lies.

tjansen 5 hours ago | parent | next [-]

I guess it turned out that coding is not the only limiting factor. Internal processes, QA, product management, and coordination between teams become significant bottlenecks.

Also, they don’t help much with debugging. It’s worth a try, and I have been surprised a couple of times, but it’s mostly still manual.

tjansen 5 hours ago | parent | prev [-]

BTW I never said they write themselves. My point was rather that you need a lot of knowledge, have to know exactly what you want out of them, supervise them, and provide detailed instructions. But then they can help you create a lot more working code in a shorter time.

simonw 3 hours ago | parent | prev | next [-]

If you're using them correctly, AI tools amplify your existing skills. Senior engineers have more skills to amplify.

j4hdufd8 an hour ago | parent | prev | next [-]

> AI was supposed to help juniors shine

No, I don't think that was ever any kind of goal set by anyone ever

darkbatman 2 hours ago | parent | prev | next [-]

Mostly agree with the article, though what happens in a few years when today's juniors eventually become seniors?

Personally I'm seeing a trend where juniors rely so much on AI that they can't even explain what they wrote, whether in an interview, a coding assignment, or a PR. It's like a black box to them.

I believe that's when we'll see the bigger impact, or maybe by then it's a solved problem already.

billy99k 4 hours ago | parent | prev | next [-]

At the moment, AI isn't good enough yet. Juniors can't recognize bad coding practices or unmaintainable code. If the output is completely broken, they probably also have a hard time fixing it.

Seniors don't have these issues, so it will only make them more effective at their job.

aurareturn 6 hours ago | parent | prev | next [-]

It does help juniors shine. For example, it's far easier for a newcomer to understand an old code base with a capable LLM now. It's easier to get unstuck, because an LLM can spot a junior's mistake faster than the junior can go ask a senior.

The problem is that seniors are even more powerful with LLMs. They can do even more, faster. So companies don't have to hire as many juniors to do the same amount of work. Add in ZIRP ending and tariff uncertainty, companies just don't invest in as many junior people as before.

INTPenis 12 hours ago | parent | prev | next [-]

Because it's too unpredictable so far. AI saves me time, but only because I could do everything it attempts to do myself.

It's wrong maybe 40-50% of the time, so I can't even imagine the disasters I'm averting by recognising when it's giving me completely bonkers suggestions.

altbdoor 12 hours ago | parent [-]

Same thoughts. Company is currently migrating from tech A to tech B, and while AI gets us 70-80% of the way, due to the riskier nature of the business, we now spend way more time reviewing the code.

everdrive 5 hours ago | parent | prev | next [-]

It's obvious to me: the strongest use for AI seems to be to tie together larger projects which you orchestrate. It lets someone with a lot of experience overcome individual cases where they lack specific domain expertise. A novice might not know how things go together, and so cannot orchestrate the LLM.

i5heu 5 hours ago | parent | prev | next [-]

So now we fantasize some claims into reality and then argue against them?

AI was never “developed to help juniors shine”…

ismail 11 hours ago | parent | prev | next [-]

Learning typically follows a specific path:

1. Unconsciously incompetent

2. Consciously incompetent

3. Consciously competent

4. Unconsciously competent

The challenge with AI is that it will give you "good enough" output; without feedback loops, you never move to 2, 3, 4 and assume you are doing OK. Hence it stunts learning. So juniors and the inexperienced stay inexperienced, without knowing what they don't know.

You have to use it as an expert thinking partner. Tell it to ask you questions and not give you the answer.

tuatoru 9 hours ago | parent [-]

Also ask it "what questions should I be asking about this topic?"

lokimedes 5 hours ago | parent | prev | next [-]

Assuming the LLM is more competent than the user, it will still require “absorptive capacity” for the user to meaningfully use the output.

Many discuss AI without considering that unless the LLM is going to take over the entire process, those interacting with it, must be sufficiently skilled to do the integration and management themselves.

This goes for organizations and industries as well. Which is why many companies struggle with merely digitalizing their products.

wj 5 hours ago | parent | prev | next [-]

You can’t abdicate learning. A junior who doesn’t understand the problem is going to use AI to more efficiently arrive at the wrong solution.

This is true for any type of AI-assisted analysis, not just coding.

zerr 8 hours ago | parent | prev | next [-]

Because managing complexity is a senior skill.

rpodraza 7 hours ago | parent | prev | next [-]

If you're a junior and using AI to generate code, someone has to review it anyway, plus you're not learning on the job. So what's the point if the senior person can generate the code herself?

rglover an hour ago | parent | prev | next [-]

This is a good thing. That shift in expectations will hopefully prevent a total collapse of systems as inexperienced developers would have yeeted god knows what code into production.

The best part: all of the materials that were available to now-seniors are available to new juniors. The added advantage of having an LLM to explain or clarify things? Seniors didn't have that. We had to suffer.

Sorry, but the outcome is fair. Seniors had to suffer to learn and can now benefit from the speed up of AI. Juniors don't have to suffer to learn, but they do have to learn to leverage AI to help them catch up to the existing crop of seniors.

Not impossible. Not impractical. Just a different form of hard work, which, no matter how much anyone kicks and screams will always be the thing standing in the way of where you want to be.

lll-o-lll 6 hours ago | parent | prev | next [-]

It doesn’t make seniors shine. It makes some of them fucking delusional. Sure, once in a while a LLM does something impressive and you are left thinking “holy shit, the future is now!”. However, this does not make up for the mass of time that you spend going “what is this shit?”, and saying “that’s not what I intended, gentlemen robot, please correct x, y and z.” And then gentlemen robot will go ahead and FUCK IT UP WORSE THAN BEFORE. Never work with kids, animals or AI. This shit is the worst, who came up with this? I can code! I’m bloody good at it! If I wanted to deal with some useless no-hoper having a crack and constantly needing to have their head kicked to do anything useful, I would have gotten you to do it.

theusus 6 hours ago | parent | prev | next [-]

You're only as fast as your typing speed and working memory. I noticed that an LLM quickly spits out code, and thus I can iterate faster, whereas typing it myself I have to focus on the code and thus lose a lot of design context. Overall, though, I haven't found any benefit of LLMs. For me, it's just a probabilistic text generator that guesses my intent.

cs02rm0 11 hours ago | parent | prev | next [-]

AI produces code that often looks really good, at a pace quicker than you can read it.

It can be really, really hard to tell when what it's producing is a bag of ** and it's leading you down the garden path. I've been a dev for 20 years (which isn't to imply I'm any good at it yet) and it's not uncommon that I'll find myself leaning on the AI a bit too hard, and then realise I've lost a day to a pattern that wasn't right in the first place, or an API it hallucinated.

It basically feels like I'm being gaslit constantly, even though I've changed my tools to some that feel like they work better with AIs. I expect it's difficult for junior devs to cope with that and keep up with senior devs, who normally would have offloaded tasks to them instead of AI.

nathan_compton 3 hours ago | parent [-]

One thing about AI that I did not anticipate is how useful it is for refactoring. If I have walked down (with the help of an AI or not) a bad path, I can refactor the entire codebase to use a better strategy in much less time than before, because refactoring is uniquely suited to AI: if you provide the framework, the design, the abstractions, AI can rewrite a bunch of code to use that new design. I'm frankly not sure if it's faster than doing a refactor by hand, but it's certainly less boring.

If you have good tests and a good sense for design and you know how to constrain and direct the AI, you can avoid a lot of boring work. That is something.

zaptheimpaler 11 hours ago | parent | prev | next [-]

Some of the juniors I work with frequently point to AI output as a source without any verification. One crazy example was using it to do simple arithmetic, which they then took as correct (and it was wrong).

This is all a pretty well-trodden debate at this point though. AI works as a Copilot which you monitor and verify and task with specific things, it does not work as a pilot. It's not about junior or senior, it's about whether you want to use this thing to do your homework/write your essay/write your code for you or whether you use it as an assistant/tutor, and whether you are able to verify its output or not.

scuff3d 11 hours ago | parent [-]

Even as an assistant it's frustrating. Even trying to get simple stuff, like a quick summary of a tool's flags/commands, can be hilariously wrong at times.

user3939382 5 hours ago | parent | prev | next [-]

Because it basically needs an RFC to know what to do, it’s just programming at a higher language level. If you let it decide how to be a programmer you’re in for a bad time.

umanwizard 4 hours ago | parent | prev | next [-]

“Was supposed to” according to whom?

TesterJohn 4 hours ago | parent | prev | next [-]

It doesn't make seniors stronger, it just makes juniors weaker, as they are using this magic LLM thingy instead of learning from the seniors. It is important to evaluate not only what you gain by using a tool, but also what you lose...

davidmurdoch 3 hours ago | parent | prev | next [-]

AI was supposed to get rid of juniors (and seniors, soon).

AdrianB1 32 minutes ago | parent [-]

No, it was supposed to get rid of the seniors because they cost more and replace them with cheap juniors + AI. If you need some humans and you are a corpo bean counter, the cheaper the body the better.

cowLamp an hour ago | parent | prev | next [-]

I think it is due to the reverse utility curve; most products have some sort of Gaussian utility function for utility vs. expertise level. A complete newb will have trouble using the camera and a professional photographer will be limited by it, but most people in between will get a lot of use out of it.

With LLMs it is the opposite: a complete newb can learn some stuff, and an expert will be able to detect when it is bullshitting, but most people will be at a point where they have more knowledge about the subject the LLM is talking about than the newb, but not enough to detect the BS.
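
A toy model of those two curves (Python/numpy; the shapes are assumptions for illustration, not data):

```python
import numpy as np

# Toy model of the "reverse utility curve" claim; the exact shapes are
# assumptions for illustration, not measured anywhere.
expertise = np.linspace(0.0, 1.0, 11)  # 0 = complete newb, 1 = expert

# Typical product (the camera): utility peaks for mid-level users.
camera = np.exp(-((expertise - 0.5) ** 2) / 0.05)

# LLM: useful at the extremes, risky in the middle, where you know enough
# to use it but not enough to catch the bullshitting.
llm = 1.0 - camera

for e, c, l in zip(expertise, camera, llm):
    print(f"expertise={e:.1f}  camera={c:.2f}  llm={l:.2f}")
```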

lostintheweeds 3 hours ago | parent | prev | next [-]

It's a code/text generator, not a pair programmer - not that different from past code generators, which were always designed by seniors who knew what the end result should look like and used by juniors to make them more productive and less error-prone. Sure, a junior can vibe code... something... but do they know what they want at the outset, and do they know when things are going off the rails and a step back is needed? It's easy to get lost way out in the weeds if you can't check your (experience) compass every now and then.

wewewedxfgdf 5 hours ago | parent | prev | next [-]

>> AI was supposed to help juniors shine.

Who said that? I don't recall that narrative. There's no quotes or sources.

monkaiju an hour ago | parent | prev | next [-]

Not my experience. I've found it to worsen senior output. Not sure if it's laziness or what, but the seniors around me using AI are outputting worse code than those who aren't.

mkoubaa an hour ago | parent | prev | next [-]

This surprises people? How?

emmelaich 6 hours ago | parent | prev | next [-]

Because it's a multiplier not an adder.

energy123 4 hours ago | parent | prev | next [-]

Seniors have a better theory, which is what LLMs lack.

intended 11 hours ago | parent | prev | next [-]

Verification.

That’s the whole issue in a nutshell.

Can the output of a generative system be verified as accurate by a human (or ultimately verified by a human)?

Experts who can look at an output and verify if it is valid are the people who can use this.

For anyone else it’s simply an act of faith, not skill.

willtemperley 10 hours ago | parent | next [-]

Agreed. There are other skills in play too though, such as knowing how to narrow the problem space to increase the chance of a good response.

It would be great if responses were tagged with uncertainty estimates.

KolibriFly 11 hours ago | parent | prev | next [-]

Generative systems don’t really reduce the need for expertise, they just change its role. And yeah, without verification, you’re not coding with AI - you’re gambling with it.

bluefirebrand 2 hours ago | parent | prev [-]

This is the crux of why I think AI code is a waste of time

It is much more difficult and time-consuming to build a mental model of AI-generated code and verify it than to build the damn thing yourself and verify it while it is fresh in your memory.

metalrain 7 hours ago | parent | prev | next [-]

I think LLMs are best as learning tools, explaining code and producing something that can then be iterated on.

jonplackett 7 hours ago | parent | prev | next [-]

When did anyone say it was designed to make juniors shine?

The tech companies want it to REPLACE juniors (and seniors).

SCdF 7 hours ago | parent | prev | next [-]

> The early narrative was that companies would need fewer seniors, and juniors together with AI could produce quality code

I'm not deep into it, but I have not a single time seen that direction argued before this post. Maybe it was _really_ early on?

The narratives I always saw were, firstly, "it will be as a good as a junior dev", then "it's like pairing with an overly enthusiastic junior dev", then finally arguments similar to those presented in this article.

Which, frankly, I'm still not so sure about. Productivity is incredibly hard to measure: we are still not completely, non-anecdotally sure AI makes folk broadly more productive. And even if it does, I am beginning to wonder how much AI is short-term productivity with long-term brain rot, and whether that trade-off is really worth it.

meindnoch 7 hours ago | parent | prev | next [-]

Was it? I don't recall such claims.

BugsJustFindMe 2 hours ago | parent | prev | next [-]

"AI Was Supposed to Help Juniors Shine" is a false narrative. AI's end goal has always been to fundamentally eliminate more and more of human labor positions until the only job left is executive directorship.

andrewguy9 3 hours ago | parent | prev | next [-]

The AI has terrible taste. Juniors also have terrible taste. Seniors can guide both, but the AI is faster, cheaper, probably better than most. I’m worried that in a few years we will struggle to find new seniors. Who is going to put in the time to learn when the AI is so easy? Who is going to pay them to develop good taste?

pjmlp 11 hours ago | parent | prev | next [-]

Was it? That is one way it always gets sold; in practice I see we (the industry) are trying to replace offshoring with AI.

lgas 3 hours ago | parent | prev | next [-]

AI is a tool, like any other. Imagine you invented a new machine gun that fires faster and has less recoil but jams more often. Who will be able to put the new machine gun to better use -- new recruits or veteran soldiers? C'mon.

cainxinth 6 hours ago | parent | prev | next [-]

Because garbage in, garbage out

mikert89 12 hours ago | parent | prev | next [-]

AI amplifies intelligence/skill.

AngryData 12 hours ago | parent | next [-]

I would dispute that and say AI only amplifies knowledge, but doesn't make anyone more skilled or intelligent.

daveguy 12 hours ago | parent [-]

It has the potential to amplify knowledge, but you have to already be skilled and intelligent to be able to identify false information.

add-sub-mul-div 12 hours ago | parent | prev | next [-]

I'm not cynical enough to believe that the avalanche of slop we're wading through represents something above our collective innate level of intelligence and skill.

leptons 11 hours ago | parent | prev [-]

quality > quantity

nextworddev 11 hours ago | parent | prev | next [-]

Hard disagree. Senior engineers can be just as incompetent as juniors

blitzar 11 hours ago | parent | prev | next [-]

AI closes the knowledge gap, but it doesn't close the skill gap.

antonvs 3 hours ago | parent | prev | next [-]

> supposed to help juniors shine

Supposed by whom, exactly?

vkou 7 hours ago | parent | prev | next [-]

> AI was supposed to help juniors shine.

Was it? Says who? People who want to sell you something?

Because as far as I can tell the only thing it was supposed to do was to make its owners money.

fmbb 2 hours ago | parent | prev | next [-]

Does it make seniors stronger?

Extraordinary claims require extraordinary evidence.

insane_dreamer 4 hours ago | parent | prev | next [-]

In my experience CC makes so many wrong decisions that if I don’t have 1) experience and 2) my thinking cap on, the results would not be good (or it would take a lot longer). Juniors typically have neither.

paulcole 5 hours ago | parent | prev | next [-]

> AI Was Supposed to Help Juniors Shine.

Whoever said this?

Simulacra 5 hours ago | parent | prev | next [-]

In my narrow field, at least, AI has been tremendously helpful to those of us who clearly understand how to use it, and specifically how to apply it to the job. I think Junior developers are still in that phase of throwing code until something works. They try to use AI, but they either don't understand how, or they don't fully understand the output. In my humble opinion, experience knows what's possible and what will work so the outcomes are better.

nudpiedo 11 hours ago | parent | prev | next [-]

No one thought juniors would benefit more than seniors. At most, some people said everything would be automatic and seniors would disappear, along with programming itself.

But that was just said by crappy influencers whose opinion doesn’t matter, as they are impressed by examples that are the result of overfitting.

imiric 7 hours ago | parent | prev | next [-]

"AI" tools accomplish one thing: output code given natural language input. That's it.

Whether the generated code meets specific quality or security standards, or whether it accomplishes what the user wanted to begin with, depends on the quality of the tool itself, of course, but ultimately on the user and environment.

They're not guaranteed to make anyone "stronger". The amount of variables involved in this is simply too large to make a definitive claim, which is why we see so much debate about the value of these tools, and why benchmarks are pretty much worthless. Even when the same user uses the same model with the same prompts and settings, they can get wildly different results.

What these tools indirectly do is raise the minimum skill level required to produce software. People who never programmed before can find them empowering, not caring about the quality of the end result, as long as it does what they want. Junior programmers can benefit from being able to generate a lot of code quickly, but struggle to maintain the quality standards expected by their team and company. Experienced programmers can find them useful for certain tasks, but frustrating and a waste of time for anything sophisticated. All these viewpoints, and countless others, can be correct.

enjoyitasus 5 hours ago | parent | prev | next [-]

simple: power law.

Rzor 15 hours ago | parent | prev | next [-]

>Of course, the junior + AI pairing was tempting. It looked cheaper and played into the fear that “AI will take our jobs.”

Those are two different narratives. One implies that everyone will be able to code and build: "English as a programming language", etc. The other is one of those headless-chicken, apocalyptic scenarios where AI has already made (or will very shortly make) human programmers obsolete.

"AI taking jobs" means everyone's job. I won't even comment on the absurdity of that idea; to me, it only comes from people who've never worked professionally.

At the end of the day, companies will take any vaguely reasonable excuse to cull juniors and save money. It's just business. LLMs are simply the latest excuse, though yes, they do improve productivity, to varying degrees depending on what exactly you work on.

palmotea 12 hours ago | parent | next [-]

> "AI taking jobs" means everyone's job. I won't even comment on the absurdity of that idea; to me, it only comes from people who've never worked professionally.

Once you've worked professionally, it's not so absurd. I mean, you really have to see it to believe the extreme compromises in quality that upper management is often willing to tolerate to save a buck in the short term.

Terr_ 15 hours ago | parent | prev | next [-]

> Those are two different narratives.

Also, those two narratives are sometimes deployed as a false-dichotomy, where both just make the same assumption that LLM weaknesses will vanish and dramatic improvement will continue indefinitely.

A historical analogy:

* A: "Segway™ balancing vehicles will be so beneficially effective that private vehicles will be rare in 2025."

* B: "No, Segways™ will be so harmfully effective that people will start to suffer from lower body atrophy by 2025."

bananaflag 12 hours ago | parent | prev | next [-]

> "AI taking jobs" means everyone's job. I won't even comment on the absurdity of that idea; to me, it only comes from people who've never worked professionally.

I work professionally (I am even a bit renowned) and still believe AI will take my (and everyone's) job.

pjmlp 11 hours ago | parent | prev [-]

Well, I can certainly assert that anyone who used to do translations or image assets for CMSes is out of a job nowadays.

watwut 11 hours ago | parent | prev | next [-]

When was AI supposed to help juniors?

cratermoon 3 hours ago | parent | prev | next [-]

AI was supposed to help juniors shine? I don't remember hearing that anywhere. AI was supposed to let CEOs fire expensive experts and replace them with whatever slop the computer extruded from the manager's prompt. There was never any significant hype about making juniors into experts.

yesbut 11 hours ago | parent | prev | next [-]

No, AI is supposed to reduce labor costs for companies. That is how the AI companies are marketing their AI services to corporate C-suites. Any other benefits that their marketing departments push to the public are smoke screens.

dboreham 11 hours ago | parent | prev | next [-]

Uh because an LLM is a transfer function. Specifically a transfer function where the input has to be carefully crafted. And specifically where the output has to be carefully reviewed. Inexperienced people are good at neither of those things.

moffkalast 6 hours ago | parent | prev | next [-]

>AI was supposed to help juniors shine

Citation needed? LLMs have mostly been touted as the junior replacement, a way for seniors to oversee scalable teams of shortsighted bots instead of shortsighted people.

ath3nd 11 hours ago | parent | prev | next [-]

Actually, studies show that it makes most seniors weaker: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

Like 19% weaker, according to the only study to date that measured their productivity.

bgwalter 6 hours ago | parent | prev | next [-]

"AI" does not make anyone stronger. It destroys thought processes, curiosity and the ability to research on your own.

Seniors just use it to produce the daily LOC that resembles something useful. If mistakes are introduced, they have secured a new task for the next day.

There have always been seniors who exclusively worked on processes, continuous integration, hackish code-coverage that never works, "new" procedures and workflows just to avoid real work and dominate others.

The same people are now attracted to "AI" and backed by their equally incompetent management.

The reality is that non-corporate-forced open source contributions are falling and no serious existing project relies on "AI".

Generative "AI" is another grift brought to you by leaders who previously worked on telemedicine and hookup apps (with the exception of Musk who has worked on real things).

dgfitz 11 hours ago | parent | prev | next [-]

Step one would be to stop calling whatever this is “AI” because while it may be artificial, it is not at all intelligent.

ochronus 11 hours ago | parent | prev | next [-]

It doesn't.

flashgordon 11 hours ago | parent | prev | next [-]

Ol

8note 11 hours ago | parent | prev [-]

Does it really? It lets seniors work more, but idk if it's necessarily stronger.

I just spent some time cleaning up AI code where it lied about the architecture, so it wrote the wrong thing. The architecture is wonky, sure, but finding the wonks earlier would have been better.