> Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer?
Yeah, this is fair. However, I think the same study indicates a way out of this dilemma. I feel like I've actually learned a lot about the languages and libraries I use through agent coding, things I didn't know before, just by watching the agent work, having it explain and summarize things for me, find documentation, and so on.
> And to be clear, this isn’t moral panic, it’s a concern that we may end up in a future where people don’t know how systems work anymore and we are dependent on two or three companies and their data center moats to maintain any technology.
I don't think this concern in itself constitutes a moral panic, no. I think it's a reasonable thing to think about, and to work out how to avoid. However, I think there are some very prominent features of the general discourse and concern around this that do constitute a moral panic:
1. First of all, and most importantly, this issue is very often moralized: the idea that you have to have done a proper amount of work, or suffered enough, to be justified in getting the results you're getting, or to be considered a real programmer. This moralization has existed in the community for a really long time, treating programmers who work at a higher abstraction level than you as lesser. So I would argue this isn't a natural response to these concerns so much as an extension of a pre-existing moralistic attitude to a new topic, and it also isn't necessary in order to have these discussions.
2. The hyperbolic misinterpretation of studies, and the fact that these concerns are raised as if they were certain, and huge, consequences, when we have very little evidence to that effect so far.
3. The fact that most of the discussion isn't about how to use these tools in ways that avoid these problems, but is instead a binary framing: either you suffer from the problem, or you boycott the tools.
4. The way ideas like "cognitive debt" are used to frame it as if anyone using AI has their brain "turned to mush", as if it's some kind of general, debilitating cognitive injury, instead of just getting rusty at skills you aren't using. This even extends to scaremongering terms like "skill atrophy", which are then used to preemptively dismiss the thoughts of anyone the people most concerned about this disagree with.
I also think... if the argument is that people get good at what they do a lot and rusty at what they don't do, I really don't see why using AI a lot wouldn't make you better at high-level architectural decisions, spotting possible problems with architectures and approaches before they become problems, organizing your thoughts and tasks, reading code and spotting issues, etc.
The thing is, using coding agents doesn't actually hide from you whether an algorithm is performant enough. It'll either be slow or it won't be.
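To make that concrete, here's a toy sketch (hypothetical function names, not from any real codebase) of the point: a quadratic implementation announces itself through wall-clock time no matter who, or what, wrote it.

```python
import time

def contains_duplicate_quadratic(items):
    # O(n^2): compare every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def contains_duplicate_linear(items):
    # O(n): remember what we've already seen
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(3000))  # no duplicates, so both must scan everything

start = time.perf_counter()
slow_result = contains_duplicate_quadratic(data)
quadratic_time = time.perf_counter() - start

start = time.perf_counter()
fast_result = contains_duplicate_linear(data)
linear_time = time.perf_counter() - start

# The performance gap is visible regardless of whether a human
# or an agent produced the quadratic version.
print(f"quadratic: {quadratic_time:.4f}s, linear: {linear_time:.4f}s")
```

The feedback loop is the same one you'd get from your own hand-written code: you run it, and it's slow.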
It doesn't hide if you've come up with a bad architecture, because that will also confuse the agent and make it difficult for it to make future modifications without breaking other things.
It doesn't hide a lack of DRY either, because if you've got the same code in multiple places and you want to change how it behaves, you've got to change it in every place, and the copies can drift out of sync, and that will bite you in the ass.
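A minimal sketch of that failure mode (all names hypothetical): the same limit copy-pasted in two places, then updated in only one of them.

```python
# The same "max upload size" rule lives in two places.

def validate_upload(size_bytes):
    return size_bytes <= 10 * 1024 * 1024  # updated to 10 MB

def upload_error_message():
    return "Files must be under 5 MB"  # never updated -- drifted out of sync

# An 8 MB file now passes validation while the message still claims 5 MB.
eight_mb = 8 * 1024 * 1024
print(validate_upload(eight_mb))   # True
print(upload_error_message())      # "Files must be under 5 MB"
```

The bug surfaces as visibly contradictory behavior whether the duplication was typed by you or generated by an agent.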
And then there's the fact that coding agents create an obvious, direct reward for more tests and more advanced testing, better linters, compiled and statically typed languages, better CI/CD, better documentation, etc. So people will probably get better at those.
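For instance, even a tiny regression test like this sketch (hypothetical function, not from any particular project) pins down a behavioral contract, so an agent refactoring nearby code gets immediate, mechanical feedback if it breaks that contract:

```python
def normalize_username(name: str) -> str:
    """Lowercase and strip whitespace: the contract callers rely on."""
    return name.strip().lower()

def test_normalize_username():
    # If a refactor changes this behavior, the test fails immediately,
    # whether the refactor came from a human or an agent.
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("BOB") == "bob"

test_normalize_username()
print("ok")
```

The better your test suite, types, and linters, the more of this feedback the agent (and you) get for free, which is exactly the incentive to invest in them.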