| ▲ | pnathan 5 days ago |
| I, a senior, can patch an application in an unknown language and framework with the AI. I know enough to tell it to stop the wildly stupid ideas. But I don't learn. That's not what I'm trying to do; I'm trying to fix the bug. Hmm. I'm pretty sure AI is going to lead us to a deskilling crash. Food for thought. |
|
| ▲ | omnimus 5 days ago | parent | next [-] |
| I think the temptation to use AI is so strong that it will be those who keep learning who will be valuable in the future. Maybe by asking AI to explain/teach instead of asking for the solution directly. Or by not using AI at all. |
| |
| ▲ | thunky 4 days ago | parent [-] | | > I think the temptation to use AI is so strong that it will be those who keep learning who will be valuable in the future. AI is an excellent teacher for someone who wants to learn. |
|
|
| ▲ | zahlman 5 days ago | parent | prev | next [-] |
> But I don't learn. That's not what I'm trying to do; I'm trying to fix the bug. Hmm. I'm pretty sure AI is going to lead us to a deskilling crash. Nothing is preventing you from studying how the bugfix works once it's in place. Nor is there any reason this use of AI should cause you to lose skills you already have. |
| |
▲ | golly_ned 4 days ago | parent | next [-] | | I haven't seen things work like this in practice, where heavy AI users end up being able to generate a solution, then later grasp it and learn from it, with any kind of effectiveness or deep understanding. It's like reading the solution to a math proof instead of proving it yourself, or reading a summary of a book instead of the book itself. The effort of seeing the design space and choosing a particular solution doesn't exist; you only see the result, not the other ways it could have been. You don't get a feedback loop to learn from either, since that'll be AI-generated too. It's true there's nothing stopping someone from going back and trying to solve it themselves to get the same kind of learning, but learning the bugfix (or whatever change) by studying it once in place just isn't the same. And things don't work like that in practice any more than "we'll add tests later" ends up being followed through on with any regularity. If you fix a bug, the next thing for you to do is to fix another bug, build another feature, write another doc, etc., not dwell on work that was already 'done'. | | |
▲ | hyperadvanced 4 days ago | parent | next [-] | | Ironically, AI is really good at the "adding tests later" thing. It can really help round out test coverage for a piece of code and create some reusable stuff that can inspire you to test even more. I'm not a super heavy AI user, but I've vibe coded a few things for the frontend with it. It has helped me understand how to lay out React apps a little better, and how the legos that React gives you work. Probably far less than if I had done it from scratch and read a book, but sometimes a working prototype is so much more valuable to a product initiative than learning a programming language that you would be absolutely burning time and value not to vibe code the prototype. | |
▲ | rudnevr 4 days ago | parent | prev [-] | | That's true, and a nice comparison with tests. |
| |
▲ | Karliss 4 days ago | parent | prev [-] | | Often it's less about learning from the bugfix itself than about the journey: learning how various pieces of software operate and fit together, and learning the tools you tried while investigating and debugging the problem. |
|
|
| ▲ | deepspace 5 days ago | parent | prev | next [-] |
> I'm pretty sure AI is going to lead us to a deskilling crash. That's my thought too. It's going to be a triple whammy: 1. Most developers (junior and senior) will be drawn in by the temptation of "let the AI do the work", leading to less experience in the workforce over the long term. 2. Students will be tempted to use AI to do their homework, resulting in new grads who don't know anything. I have observed this happen first-hand. 3. AI-generated (slop) code will start to pollute GitHub and other sources used for future LLM training, resulting in a quality collapse. I'm hoping that we can avoid the collapse somehow, but I don't see a way to stop it. |
|
| ▲ | pphysch 5 days ago | parent | prev | next [-] |
On the contrary, being able to access (largely/verifiably) correct solutions to tangible & relevant problems is an excellent way to learn by example. It should probably be supplemented with some good old RTFM, but it does get us somewhat beyond the "blind leading the blind" Stack Overflow paradigm of most software engineering. |
|
| ▲ | JeremyNT 5 days ago | parent | prev | next [-] |
| I think seniors know enough to tell whether they need to learn or not. At least that's what I tell myself! The thing with juniors is: those who are interested in how stuff works now have tools to help them learn in ways we never did. And then it's the same as before: some hires will care and improve, others won't. I'm sure that many juniors will be happy to just churn out slop, but the stars will be motivated on their own to build deeper understanding. |
|
| ▲ | BeFlatXIII 5 days ago | parent | prev [-] |
| On the other hand, if it's a one-off, you'll have forgotten what you learned by the time you'd need to use that skill again. |
| |
▲ | PaulStatezny 5 days ago | parent [-] | | But without AI, there are neural connections formed while determining the correct one-off solution. Those neural connections (or the lack of them) have longer-term comprehension-building implications. | | |
▲ | pnathan 5 days ago | parent [-] | | This is it. Without the AI, I'd know more about $that_technology than I do now. |
|
|