heartbreak 7 hours ago

It remains unclear to me why my ability to read and review code (the majority of my job for years now) will atrophy if I continue doing it while writing even less code than I was before.

If my ability to write code somehow atrophies because I stop doing it, does that matter if I continue with the architecture and strategy around coding?

The act of writing code by hand seems to be on a trajectory of irrelevance, so as long as I maintain my ability to reason about code (both by continuing to read it and instruct tools to write it), what’s the issue?

Edit to add: the vast majority of the code I’ve worked on in my career was not written by me. A significant portion of it was not written by someone still employed by my employer. I think that’s true for a lot of us, and we all made it work. And we made it work without modern coding assistants helping out. I think we’ll be fine.

encomiast 7 hours ago | parent | next [-]

"so as long as I maintain my ability to reason about code…what’s the issue?"

It seems like that is the open question. The article suggests that people don't maintain this ability:

"The AI group scored 17% lower on conceptual understanding, debugging, and code reading. The largest gap was in debugging, the exact skill you need to catch what AI gets wrong. One hour of passive AI-assisted work produced measurable skill erosion."

From my own (anecdotal) experience, I am seeing a lot more cases of what I call developer bullshit, where developers can't even talk coherently about the work they are vibe-coding. Management doesn't notice this since it's all techno-babble to them and sounds fancy, but other developers do.

mirsadm 6 hours ago | parent | next [-]

This used to be the most embarrassing thing that could happen: a team member asks you during a PR why you did something a certain way, and you can't provide an answer. It seems to be becoming the norm now.

tisdadd 5 hours ago | parent [-]

It also used to be an indicator that potentially someone was outsourcing their work overseas.

Edit: I once had an instance where, about once a month, another developer would ask me about workplace setup. I mentioned it to someone and was told maybe they were the English speaker of the group. Upon further investigation, that seemed to be the case.

lenkite 5 hours ago | parent | prev | next [-]

What happens when more and more people cannot explain their PRs? They already use AI to create the "explanation" as well and ping you. Ask them questions and they will delegate again to the AI and copy-paste whatever it answers.

logicprog 6 hours ago | parent | prev | next [-]

The problem is that that is an incorrect interpretation of the study. The entire task of the study was specifically to learn a brand-new asynchronous library that the participants had no prior experience with. On average, the group that used AI failed to learn how to use, explain, and debug that async library as well as the group that hadn't, but that doesn't mean they lost pre-existing skills. It's literally in the study's title: "skill formation", not skill practice, maintenance, or deterioration.

I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI both provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who asked the AI questions about the code after it generated it. That seems to indicate that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing code bases, you're not going to see this problem.

I'm so extremely tired of people like you who want to engage in this moral panic by completely misinterpreting these studies.

encomiast 5 hours ago | parent [-]

Point taken. Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer? Haven’t we all complained at some point about companies hiring "React developers", because we all know the real skill is the ability to pick up new things? And to be clear, this isn’t moral panic; it’s a concern that we may end up in a future where people don’t know how systems work anymore and we are dependent on two or three companies and their data-center moats to maintain any technology.

logicprog 3 hours ago | parent [-]

> Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer?

Yeah, this is fair. However, I think the same study points to a way out of this dilemma. Through agent coding I feel like I've actually learned a lot I didn't know about the languages and libraries I use, just by watching the agent work, having it explain and summarize things to me, having it find documentation, and so on.

> And to be clear, this isn’t moral panic, it’s a concern that we may end up in a future where people don’t know how systems work anymore and we are dependent on two or three companies and their data center moats to maintain any technology.

I don't think this concern in itself constitutes a moral panic, no. I think it's a reasonable thing to think about and worry about how to avoid. However, I think there are definitely very prominent features of the general discourse and concern around this that do constitute a moral panic:

1. First of all, and most importantly, this issue is very often moralized: whether you've done a proper amount of work, or suffered enough, to be justified in getting the results you're getting, or to be considered a real programmer. This moralization has existed in the community for a really long time, treating programmers who work at a higher abstraction level than you as lesser. So I would argue this isn't a natural response to these concerns so much as an extension of a moralistic attitude to a new topic, and it's not necessary in order to have these discussions.

2. The hyperbolic misinterpretation of studies, and the fact that these concerns are raised as if they were a certain, and huge, consequence when we have very little evidence so far to that effect.

3. The fact that most of the discussion isn't about how to use these tools in a way that avoids these problems, but is instead a binary framing: either you suffer from this problem or you boycott.

4. The way ideas like "cognitive debt" are used to frame anyone using AI as having their brain "turned to mush", as if it were some general, debilitating cognitive injury instead of just getting rusty at skills you aren't using, complete with scaremongering terms like "skill atrophy". That framing is then used to preemptively dismiss the thoughts of anyone the people most concerned about this disagree with.

I also think: if the argument is that people get good at what they do a lot and rusty at what they don't, I really don't see why using AI a lot wouldn't make you better at high-level architectural decisions, spotting problems with architectures and approaches before they become problems, organizing your thoughts and tasks, and reading code and spotting issues.

The thing is that using coding agents doesn't actually hide whether an algorithm isn't performant enough from you. It'll either be slow or it won't be.
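That claim is easy to check in practice; here's a toy sketch (all names hypothetical, not from the thread) of how a slow algorithm announces itself no matter who, or what, wrote it:

```python
import time

def has_duplicate_quadratic(xs):
    # O(n^2): compares every pair of elements
    n = len(xs)
    return any(xs[i] == xs[j] for i in range(n) for j in range(i + 1, n))

def has_duplicate_linear(xs):
    # O(n): one pass, remembering what we've seen in a set
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5000))  # no duplicates: worst case for both versions

start = time.perf_counter()
has_duplicate_quadratic(data)
slow = time.perf_counter() - start

start = time.perf_counter()
has_duplicate_linear(data)
fast = time.perf_counter() - start

print(f"quadratic: {slow:.3f}s, linear: {fast:.6f}s")
```

Both functions return the same answers; only the wall-clock time gives the quadratic one away, which is the point: performance problems surface regardless of whether a human or an agent produced the code.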

It doesn't hide if you've come up with a bad architecture, because that will also confuse the agent and make it difficult for it to make future modifications without breaking other things.

It doesn't hide a lack of DRY either because if you've got the same code in multiple places and you want to change how it behaves then you've got to change it in multiple places and they can get out of sync and that will bite you in the ass.
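A minimal illustration of that failure mode (hypothetical pricing functions, not from the thread): the same rule pasted in two places, with only one copy updated:

```python
# Duplicated pricing logic. A "change discount to 15%" edit landed in
# the checkout path but not in the stale invoice copy, so the two
# code paths now silently disagree.

def checkout_total_cents(price_cents: int) -> int:
    return price_cents * 85 // 100  # updated: 15% discount

def invoice_total_cents(price_cents: int) -> int:
    return price_cents * 90 // 100  # stale duplicate: still 10% discount

print(checkout_total_cents(10_000))  # 8500
print(invoice_total_cents(10_000))   # 9000 -- out of sync
```

No agent hides this: the invoices stop matching the checkout totals, and the bug bites whether the duplicate was typed by hand or generated.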

And then there's the fact that with coding agents, there's an obvious and direct reward for using more tests and more advanced testing, better linters, more compiled and typed languages, better CI/CD, better documentation, etc. So people will probably get better at that.

dangus 6 hours ago | parent | prev [-]

Per your last paragraph, I also think we are in an awkward middle period where developers are embarrassed to admit how much of their code is vibe-coded with very little review before they submit it.

The embarrassment is understandable. It feels wrong, because in many ways it is wrong.

The only way I’ve made this feel any better is by using it on a non-critical internal tool. I can confidently say, “I didn’t write any of this code, because it’s a quality-of-life tool that only lives on developer machines and is not required at any point in our workflow.”

I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real.

Will we need those senior engineers if AI keeps getting better? I don’t know. Maybe one day the AI systems are going to just be trusted to be able to untangle complex architectural problems.

If it weren’t for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might wish I’d been born earlier.

orphea 7 hours ago | parent | prev | next [-]

> The act of writing code by hand seems to be on a trajectory of irrelevance

It does not. English (or any human language) is an awful language to write specifications in, because it is not as precise as code. Each time you "compile" your prompt into a program, the LLM spits out something a little bit different. How is that a good thing?

> so as long as I maintain my ability to reason about code (both by continuing to read it and instruct tools to write it), what’s the issue?

The post mentions this. You need to write code yourself to keep your review skill sharp, to know what's good and what's bad. Why do you think that, if you want to learn something, you're better off getting a pen and paper and writing notes by hand, like in those ancient times? You'd think that in 2026 you could grab an iPad, watch some videos, and become an expert? No. You need to get your hands dirty. By writing some damn code.

heartbreak 6 hours ago | parent [-]

> Each time you "compile" your prompt into a program, LLMs spit up something a little bit different. How is it a good thing?

Because that’s not how it works. How can we have a discussion about this topic if we don’t have a mutual understanding of how the tools even work?

The code is not replaced by English prompts. The code still exists.

orphea 5 hours ago | parent | next [-]

Yes, it exists, but are you going to edit it by hand if you didn't write it in the first place, or would you rather throw another prompt at it to update it? People tend to do the latter, thinking of the code as a generated artifact, like object files.

skydhash 6 hours ago | parent | prev [-]

> The code is not replaced by English prompts. The code still exists

If you can guarantee that it does what you say it does, then all is fine. The core issue since the advent of ChatGPT has always been reliability: whether the end result, the code, actually addresses the change request.

It turned out that you need to be an expert programmer to vet the code as well as supervise its evolution, whatever the tool used to write it.

xantronix 5 hours ago | parent | prev | next [-]

How do we maintain best practices when the compiler outputs a different result for the spec at any given time? How do we obtain reproducible builds? Do we pin to a specific version of our compiler (ie, snapshot of the model; is this possible anywhere except local currently?), and vigorously test changes after any updates in our "toolchain"? How do we have control over our "toolchain" (again, apart from local), especially when said "toolchain" can, for all its users simultaneously, fold to political pressure from state regimes? And, if the code generated by LLMs is the build artifact, why is it now okay to check the build artifact into source control?

There may come a day when we, as an industry, decide that simply doing it by hand is more expedient when it comes to resolving urgent production issues. We may not know the pain we are causing ourselves until well into the future when it has become too much to bear without a visit to the proverbial doctor.

kccqzy 6 hours ago | parent | prev [-]

Even before AI, I’ve witnessed at Google plenty of L6 and L7 software engineers atrophy. They stop writing code, start reviewing code, until they find that their code reviews catch fewer issues than a junior engineer’s reviews. They have become accustomed to thinking only at a high-level, and when met with low-level details they can’t tell good from bad any more. Their coding skills, both reading and writing, have atrophied.

heartbreak 6 hours ago | parent [-]

Do they also stop providing value to Google as a result?

I don’t get paid to write code, and you probably don’t either.

grayhatter 6 hours ago | parent | next [-]

> Do they also stop providing value to Google as a result?

In the context of a software engineer, yes obviously?

> I don’t get paid to write code, and you probably don’t either.

I feel like you're rejecting the premise of the argument. You're talking about becoming a manager, as if that track were somehow relevant to software engineers. I used to be a nurse; I'm not anymore. My skills have definitely atrophied, and I would now be a shitty and dangerous nurse. How does that apply to my skills at software engineering? When you stop being a software engineer, it's expected that your skills at interacting with code will fall away. But the article you're arguing against isn't written for nurses, and it equally isn't written for engineering managers.

skydhash 6 hours ago | parent | prev [-]

You are not paid to only write code.