grim_io 3 days ago

Why is it so hard for people to understand or appreciate the higher layers of abstraction that lie above their preferred layers of comfort?

Think back to assembly programmers scoffing at C programmers.

Same arguments, probably same outcomes.

globnomulous 2 days ago

I disagree. When you add an abstraction layer, the user of that layer continues to write code. That's not the case when people rely heavily on LLMs. They're at best reading and tweaking the model's output.

That's not the only way to use an LLM. One can instead write a piece of code and then ask the tool for analysis, but that's not the scenario that people like me are criticizing -- and it's not how most people imagine LLMs will be used in the future, if models and tools continue to improve. People are predicting that the models will write the software. That's what the person I agreed with and I are concerned about.

I'm uncomfortable with the idea not because it's outside my area of comfort but because people don't understand code they read the way they understand code they write. Writing the code familiarizes the writer with the problem space (the pitfalls, for instance). If you've only read the code rather than written it, you haven't worked through the problems. You don't know the problem space or the reasons for the choices the author made.

To put this another way: you can learn to read a language or understand it by ear without learning to speak it. The skills are related, but they're separate. In turn, people acquire and develop the skills they practice: you don't learn to speak by reading. Junior engineers and young people who learn to code with AI, and don't write code themselves, will learn, in essence, how to read but not how to write or 'speak'; they'll learn how to talk to the AI models, and maybe how to read code, but not how to write software.