| ▲ | encomiast 5 hours ago |
"so as long as I maintain my ability to reason about code…what’s the issue?" It seems like that is the open question. The article suggests that people don't maintain this ability: "The AI group scored 17% lower on conceptual understanding, debugging, and code reading. The largest gap was in debugging, the exact skill you need to catch what AI gets wrong. One hour of passive AI-assisted work produced measurable skill erosion." From my own (anecdotal) experience I am seeing a lot more cases of what I call developer bullshit where developers can't even talk about the work they are vibe-coding on in a coherent way. Management doesn't notice this since it's all techno-bable to them and sounds fancy, but other developers do. | |||||||||||||||||
| ▲ | mirsadm 5 hours ago | parent | next [-] |
This used to be the most embarrassing thing that could happen: a team member asks during a PR why you did something a certain way, and you can't provide an answer. That seems to be becoming the norm now.
| ▲ | lenkite 3 hours ago | parent | prev | next [-] |
What happens when more and more people cannot explain their PRs? They already use AI to generate the "explanation" as well and ping you with it. Ask them questions and they will delegate to the AI again and copy-paste whatever it answers.
| ▲ | logicprog 4 hours ago | parent | prev | next [-] |
The problem is that that is an incorrect interpretation of the study. The task in that study was specifically to learn a brand-new asynchronous library the participants had no prior experience with. On average, the group that used AI failed to learn to use, explain, and debug that async library as well as the group that hadn't, but that doesn't mean they lost pre-existing skills. It's literally in the study's title: "skill formation", not skill practice, maintenance, or deterioration.

I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who asked the AI questions about the code after it generated it. That seems to indicate that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing codebases, you're not going to see this problem.

I'm so extremely tired of people like you completely misinterpreting these studies in order to engage in this moral panic.
| ▲ | dangus 5 hours ago | parent | prev [-] |
Per your last paragraph, I also think we are in an awkward middle period where developers are embarrassed to admit how much code is vibes with very little review before they submit. The embarrassment is understandable; it feels wrong because in many ways it is wrong. The only way I've made it feel any better is by using it on a non-critical internal tool, where I can confidently say "I didn't write any of this code, because it's a quality-of-life tool that only lives on developer machines and is not required at any point in our workflow."

I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real. Will we need those senior engineers if AI keeps getting better? I don't know; maybe one day AI systems will just be trusted to untangle complex architectural problems.

If it weren't for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might wish I had been born earlier.