| ▲ | theptip 5 days ago |
| Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But almost nobody thinks it really matters for the actual project of building things. If you stop thinking, then of course you will learn less. If instead you think at the next level of abstraction up, then perhaps the details don’t always matter. The whole problem with college is that there is no “next level up”; it’s a hand-curated sequence of ideas that have been demonstrated to induce some knowledge transfer. It’s not the same as starting a company and trying to build something, where freeing up your time lets you tackle bigger problems. And of course this might not work for all PhDs; maybe learning the details is what matters in some fields - though with how specialized we’ve become, I could easily see this being a net win. |
|
| ▲ | PhantomHour 5 days ago | parent | next [-] |
| > Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But almost nobody thinks it really matters for the actual project of building things. One of the other replies alludes to it, but I want to say it explicitly: the key difference is that you can generally drill down to assembly; there is infinitely precise control to be had. It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code from your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT compiler acting up? Disable it entirely if you want more predictable, understandable execution of the code. And while people used to higher-level languages don't know the finer details of assembly or even C's memory management, they can learn incrementally. Assembly programming is hard, but it is still programming, and the foundations you learn from other programming do help you there. Yet AI is corrosive to those foundations. |
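| To make "you can just do that" concrete, a minimal sketch for a HotSpot JVM (Main is a placeholder class name; -XX:+PrintAssembly additionally requires the hsdis disassembler plugin to be installed): |

    # Watch the JIT compile methods as the program runs
    java -XX:+PrintCompilation Main
    # Dump the actual assembly the JIT emits (needs hsdis)
    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Main
    # JIT acting up? Run fully interpreted instead
    java -Xint Main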
| |
| ▲ | theptip 5 days ago | parent | next [-] | | I don't follow; you can read the code that your LLM produces as well. It's way easier to drill down this way than across the bytecode/assembly vs. high-level-language divide. | | |
| ▲ | rstuart4133 5 days ago | parent | next [-] | | > I don't follow; you can read the code that your LLM produces as well. You can. You can also read the code a compiler produces perfectly well. In fact https://godbolt.org/ is a web site dedicated to letting programmers do just that. But ... how many programmers do you know who look at the assembler their compiler produces? In fact, how many programmers do you know who understand the assembler? Now let's extrapolate a bit. I've seen people say they've vibe coded some program, yet they can't program. Did they read the code the LLM produced? Of course not. Did it matter? Apparently not for the program they produced. Does the fact that they can vibe code but not read code alter the types of programs they can produce? Of course it does. They're limited to the sorts of programs an LLM has seen before. Does that matter? Possibly not, if the only programs they write are minor variations of what has already been posted to the internet. Now take two people: one who can only vibe code, and another who knows how to program and understands computers at a very deep level. Ask yourself, who is going to be paid more? Is it the one who can only write programs an LLM has seen many times before, or the one who can produce something truly new and novel? | | |
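| To make "look at the assembler" concrete, this is the kind of thing Godbolt shows you (a hedged sketch; output varies by compiler and version, this is roughly what gcc -O2 emits for x86-64): |

    int square(int x) { return x * x; }

| compiles to roughly: |

    square:
            imul    edi, edi
            mov     eax, edi
            ret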
| ▲ | yesbut 5 days ago | parent [-] | | Salary aside, the vibe coders are exposing themselves to increased cognitive decline, which should be a strong enough incentive to avoid AI to begin with. Maybe they already had a cognitive impairment before reading this MIT study and can't understand the risk. |
| |
| ▲ | PhantomHour 5 days ago | parent | prev [-] | | The distinction is that you cannot make the LLM do the drilling. And the way these tools are designed is to train the user to rely on the LLM rather than their own brain, so they'll never learn it themselves. A big problem with the "just read the code" approach is that reading code deeply enough to truly understand it is at minimum as time-consuming as writing the code in the first place. (In practice it tends to be significantly worse.) Anyone who claims they're reading the LLM's code output properly is on some level lying to themselves. Human brains are simply bad at consistently monitoring output like that, especially if the output is consistently "good", and especially when the errors appear to be "good" output on the surface. This is universal across all fields and tools. |
| |
| ▲ | Cthulhu_ 3 days ago | parent | prev [-] | | The other one is that code-to-assembly is exact and repeatable. Code will (well, should, lmao) behave the same way, every time. A prompt to generate code won't. Some prompts / AI agents will write all the validations and security concerns when prompted to write an API endpoint (or whatever). Others may not, because you didn't specify it. But if someone who doesn't actually know about security just trusts that the AI will do it for them - like a developer using a framework might - they'll run into issues fast. |
|
|
| ▲ | Jensson 5 days ago | parent | prev | next [-] |
| > Just beware the “real programmers hand-write assembly” fallacy. All previous programming abstractions preserved correctness: a Python program produces no less reliable results than a C program running the same algorithm; it just takes more time. An LLM doesn't preserve correctness: I can write a correct prompt and get incorrect results. Then you are no longer programming; you are a manager overseeing a senior programmer with extreme dementia, who forgets what they were doing a few minutes ago while you try to convince them to write what you want before they forget that as well and restart the argument. |
| |
| ▲ | invalidptr 5 days ago | parent [-] | | > All previous programming abstractions kept correctness That's not strictly true, since most (all?) high-level languages have undefined behavior, and their behavior varies between compilers/architectures in unexpected ways. We did lose a level of fidelity. It's still smaller than the loss of fidelity from LLMs, but it is there. | | |
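| The classic example of that loss of fidelity is signed integer overflow in C, which is undefined, so the optimizer may assume it never happens (a sketch; wraps is a made-up name, and exact results depend on compiler and flags): |

    int wraps(int x) {
        /* Signed overflow is UB, so gcc -O2 folds this whole function
           to "return 1"; at -O0 on typical two's-complement hardware,
           wraps(INT_MAX) actually wraps around and returns 0. */
        return x + 1 > x;
    }

| Same source, two different answers, both permitted by the spec. |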
| ▲ | pnt12 5 days ago | parent | next [-] | | That's a bit pedantic: lots of Python programs will work the same way on every major OS. If they don't, someone will likely debug the specific error and fix it. But LLMs frequently hallucinate in non-deterministic ways. Also, it seems like there's little chance for knowledge transfer. If I work with dictionaries in Python all the time, eventually I'm better prepared to go under the hood and understand their implementation. If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering? There's no such direct connection, surely! | | |
| ▲ | theptip 5 days ago | parent [-] | | > That's a bit pedantic It's a pedantic reply to a pedantic point :) > If I'm prompting a LLM, what's the bridge from prompt engineering to software engineering? A sibling also made this point, but I don't follow. You can still read the code. If you don't know the syntax, you can ask the LLM to explain it to you. LLMs are great for knowledge transfer, if you're actually trying to learn something - and they are strongest in domains where you have an oracle to test your understanding, like code. |
| |
| ▲ | ashton314 5 days ago | parent | prev | next [-] | | Undefined behavior does not violate correctness. Undefined behavior is just wiggle room for compiler engineers to not have to worry so much about certain edge cases. "Correctness" must always be considered with respect to something else. If we take e.g. the C specification, then yes, there are plenty of compilers that are, in almost every way people will encounter, correct according to that spec, UB and all. Yes, there are bugs, but they are bugs, and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness. I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM. [1]: https://github.com/AliveToolkit/alive2 | |
| ▲ | ndsipa_pomu 5 days ago | parent | prev | next [-] | | However, the undefined behaviours are specified and known about (or at least some people know about them). With LLMs, there's no way to know ahead of time that a particular prompt will lead to hallucinations. | |
|
|
|
| ▲ | nitwit005 5 days ago | parent | prev | next [-] |
| I'd caution that people not familiar with working at the low level are often missing a bunch of associated knowledge that is useful day to day. You run into Python/JavaScript/etc. programmers who have no concept of which operations might execute quickly or slowly. There isn't a mental model of what the interpreter is doing. We're often insulated from the problem because the older generation used fairly low-level languages on very limited computers and remembers lessons from that era. That's not true of younger developers. |
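| A toy illustration of the missing mental model (a sketch; absolute timings are machine-dependent): the two membership tests below look interchangeable, but one is a linear scan and the other a hash lookup. |

    import timeit

    haystack_list = list(range(100_000))
    haystack_set = set(haystack_list)

    # 'in' on a list scans element by element: O(n) per lookup
    print(timeit.timeit(lambda: 99_999 in haystack_list, number=1_000))
    # 'in' on a set is a hash probe: O(1) on average
    print(timeit.timeit(lambda: 99_999 in haystack_set, number=1_000))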
| |
| ▲ | Cthulhu_ 3 days ago | parent [-] | | Depends on the operation tbh, and on whether one or the other is a micro-optimization or actually significant. It's better to focus on high-level optimizations, core architecture decisions, and the right algorithms than on the level of individual operations. Unless those operations are executed billions of times and the difference becomes significant, of course. | | |
| ▲ | nitwit005 3 days ago | parent [-] | | You're kind of side stepping the issue. The problem is, when it does matter, they won't know how to make things performant. |
|
|
|
| ▲ | daemin 5 days ago | parent | prev | next [-] |
| I would agree with the statement that you don't need to know or write assembly to build programs, but what you end up with is usually slow and inefficient. The curiosity to examine the platform your software runs on, and to look at what the compiler generates, is a skill worth having. Even if you never write raw assembly yourself, being able to see what the compiler generated and how data is laid out does matter. It helps you make better decisions about which patterns of code to use in your higher-level language. |
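| Data layout is a concrete example: field order determines padding, so the same fields can cost different amounts of memory (a hedged sketch; sizes assume a typical x86-64 ABI): |

    #include <stdio.h>

    /* The compiler pads each field to its alignment, so ordering matters. */
    struct scattered { char a; int b; char c; };  /* typically 12 bytes */
    struct grouped   { int b; char a; char c; };  /* typically 8 bytes  */

    int main(void) {
        printf("%zu %zu\n", sizeof(struct scattered), sizeof(struct grouped));
        return 0;
    }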
|
| ▲ | Cthulhu_ 3 days ago | parent | prev | next [-] |
| I think the real fear is that AI will generate code that is subtly broken, but people will have lost the skills to understand why it's broken; a fear that it's too much abstraction. And the other difference is that code-to-assembly is a 'hard' conversion - extensively tested, verified, predictable - while prompt-to-code is a 'loose' conversion, where repeating the same prompt in the same agent can produce a different outcome every time. |
|
| ▲ | MobiusHorizons 5 days ago | parent | prev [-] |
| I have never needed to write assembly in a professional context because of the changes you describe, but that does not mean I have no need to understand what is going on at that level of abstraction. I _have_ had occasion to look at disassembly while debugging, and it was important that I was not completely lost when I had to. You don't have to do something all the time for the capacity to do it to be useful. At the end of the day, engineering is about choosing the right tradeoffs given constraints, and in a professional environment, cost is almost always one of them. |