encomiast 5 hours ago

"so as long as I maintain my ability to reason about code…what’s the issue?"

It seems like that is the open question. The article suggests that people don't maintain this ability:

"The AI group scored 17% lower on conceptual understanding, debugging, and code reading. The largest gap was in debugging, the exact skill you need to catch what AI gets wrong. One hour of passive AI-assisted work produced measurable skill erosion."

From my own (anecdotal) experience, I am seeing a lot more cases of what I call developer bullshit, where developers can't even talk coherently about the work they are vibe-coding. Management doesn't notice this, since it's all techno-babble to them and sounds fancy, but other developers do.

mirsadm 5 hours ago | parent | next [-]

This used to be the most embarrassing thing that could happen: a team member asks you why you did something a certain way during a PR and you can't provide an answer. It seems to be becoming the norm now.

tisdadd 4 hours ago | parent [-]

It also used to be an indicator that potentially someone was outsourcing their work overseas.

Edit: I had an instance once where, about once a month, another developer would ask me about workplace setup. I mentioned it to someone and was told maybe he was the English speaker of the group. Upon further investigation, that seemed to be the case.

lenkite 3 hours ago | parent | prev | next [-]

What happens when more and more people cannot explain their PRs? They already use AI to create the "explanation" as well and ping you. Ask them questions and they will delegate again to AI and copy-paste whatever the AI answers.

logicprog 4 hours ago | parent | prev | next [-]

The problem is that this is an incorrect interpretation of the study. The entire task in that study was specifically to learn a brand-new asynchronous library the participants had no prior experience with. On average, those who used AI failed to learn how to use, explain, and debug that async library as well as those who hadn't used AI, but that doesn't mean they lost pre-existing skills. It's literally in the study title: "skill formation", not skill practice, maintenance, or deterioration.

I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI both provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who asked the AI questions about the code after it was generated. This seems to indicate that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing codebases, you're not going to see this problem.

I'm so extremely tired of people like you who want to engage in this moral panic by completely misinterpreting these studies.

encomiast 4 hours ago | parent [-]

Point taken. Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer? Haven’t we all complained at some point about companies hiring React developers, because we all know the real skill is the ability to pick up new things? And to be clear, this isn’t moral panic; it’s a concern that we may end up in a future where people don’t know how systems work anymore and we are dependent on two or three companies and their data-center moats to maintain any technology.

logicprog 2 hours ago | parent [-]

> Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer?

Yeah, this is fair. However, I think the same study indicates a way out of this dilemma. I feel like I've actually learned a lot about the languages and libraries I use, things I didn't know before, through agent coding: just by watching the agent work, having it explain and summarize things to me, having it find documentation, and so on.

> And to be clear, this isn’t moral panic, it’s a concern that we may end up in a future where people don’t know how systems work anymore and we are dependent on two or three companies and their data center moats to maintain any technology.

I don't think this concern in itself constitutes a moral panic, no. It's a reasonable thing to think about and to worry about how to avoid. However, there are definitely very prominent features of the general discourse and concern around this that do constitute a moral panic:

1. First of all, and most importantly, this issue is very often moralized: whether you've done a proper amount of work, or suffered enough, to be justified in getting the results you're getting, or to be considered a real programmer. This moralization has existed in the community for a really long time, treating programmers who work at a higher abstraction level than you as lesser, so I would argue this isn't a natural response to these concerns so much as an extension of a moralistic attitude to a new topic, and it's also not necessary in order to have these discussions.

2. The hyperbolic misinterpretation of studies, and the fact that these concerns are being raised as if the consequences are certain and huge, when we have very little evidence so far to that effect.

3. The fact that most of the discussion isn't about how to use these tools in a way that avoids these problems, but instead takes a binary framing: either you suffer from this problem or you boycott.

4. The way ideas like "cognitive debt" are used to frame anyone using AI as having their brain "turned to mush", as if it's some kind of general, debilitating cognitive injury instead of just getting rusty at skills you aren't using, complete with scaremongering terms like "skill atrophy", which are then used to preemptively dismiss the thoughts of anyone the people most concerned about this disagree with.

I also think... if the argument is that people get good at what they do a lot and get rusty at what they don't, I really don't see why using AI a lot wouldn't make you better at high-level architectural decisions, spotting possible problems with architectures and approaches before they become problems, organizing your thoughts and tasks, reading code and spotting issues, etc.

The thing is that using coding agents doesn't actually hide from you whether an algorithm is performant enough. It'll either be slow or it won't be.

It doesn't hide if you've come up with a bad architecture, because that will also confuse the agent and make it difficult for it to make future modifications without breaking other things.

It doesn't hide a lack of DRY either: if you've got the same code in multiple places and you want to change how it behaves, you've got to change it in multiple places, the copies can get out of sync, and that will bite you in the ass.
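To make the DRY point concrete, here's a minimal, hypothetical sketch (the function names and the discount rule are illustrative, not from any real codebase): the same rule pasted in two places drifts out of sync the moment only one copy is edited, while a single shared function can't.

```python
# Duplicated logic: the same discount rule pasted into two call sites.
def checkout_total(prices):
    subtotal = sum(prices)
    return subtotal * 0.9 if subtotal > 100 else subtotal  # 10% off over 100

def invoice_total(prices):
    subtotal = sum(prices)
    return subtotal * 0.9 if subtotal > 100 else subtotal  # must be kept in sync by hand

# DRY version: one source of truth, so the rule can only change in one place.
def apply_discount(subtotal):
    """Single source of truth for the discount rule."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

def checkout_total_dry(prices):
    return apply_discount(sum(prices))

def invoice_total_dry(prices):
    return apply_discount(sum(prices))
```

If the discount later changes to 15% and only `checkout_total` is edited, checkout and invoices silently disagree; with `apply_discount` that failure mode simply doesn't exist.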

And then there's the fact that with coding agents, there's an obvious and direct reward for using more tests and more advanced testing, better linters, more compiled and typed languages, better CI/CD, better documentation, etc. So people will probably get better at that.

dangus 5 hours ago | parent | prev [-]

Per your last paragraph, I also think we are in an awkward middle period where developers are embarrassed to admit how much code is vibes, with very little review before they submit.

The embarrassment is understandable. It feels wrong, because in many ways it is wrong.

The only way I’ve had this feel any better is by using it on a non-critical internal tool. I can confidently say, “I didn’t write any of this code, because it’s a quality-of-life tool that only lives on developer machines and is not required at any point in our workflow.”

I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real.

Will we need those senior engineers if AI keeps getting better? I don’t know. Maybe one day the AI systems are going to just be trusted to be able to untangle complex architectural problems.

If it weren’t for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might be wishing I was born earlier.