PhantomHour 5 days ago

> Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.

One of the other replies alludes to it, but I want to say it explicitly:

The key difference is that you can generally drill down to assembly, there is infinitely precise control to be had.

It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code in your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT Compiler acting up? Disable it entirely if you wish for more predictable & understandable execution of the code.
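For instance, on HotSpot all of this is a command-line flag away (MyApp is a placeholder class name; -XX:+PrintAssembly additionally needs the hsdis disassembler plugin installed):

    java -XX:+PrintCompilation MyApp    # log which methods the JIT compiles
    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyApp    # dump the JIT's generated assembly
    java -Xint MyApp    # disable the JIT entirely and run interpreted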

And while people used to higher-level languages don't know the finer details of assembly or even C's memory management, they can learn incrementally. Assembly programming is hard, but it is still programming, and the foundations you build in other programming do help you there.

Yet AI is corrosive to those foundations.

theptip 5 days ago | parent | next [-]

I don't follow; you can read the code that your LLM produces as well.

It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.

rstuart4133 5 days ago | parent | next [-]

> I don't follow; you can read the code that your LLM produces as well.

You can. You can also read the code a compiler produces perfectly well. In fact https://godbolt.org/ is a website dedicated to letting programmers do just that. But ... how many programmers do you know who look at the assembler their compiler produces? In fact, how many programmers do you know who even understand the assembler?
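The drilling itself is easy to try: paste the classic example into the site and you get something like this (exact output varies by compiler and flags; this is roughly what gcc -O2 emits for x86-64):

    int square(int num) {
        return num * num;
    }

    square:
        imul    edi, edi
        mov     eax, edi
        ret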

Now let's extrapolate a bit. I've seen people say they've vibe coded some program, yet they can't program. Did they read the code the LLM produced? Of course not. Did it matter? Apparently not for the program they produced.

Does the fact that they can vibe code but not read code alter the types of programs they can produce? Of course it does. They're limited to the sorts of programs an LLM has seen before. Does that matter? Possibly not, if the only programs they write are minor variations of what has already been posted to the internet.

Now take two people, one who can only vibe code, and another who knows how to program and understands computers at a very deep level. Ask yourself, who is going to be paid more? Is it the one who can only write programs that an LLM has seen many times before, or is it the one who can produce something truly new and novel?

yesbut 5 days ago | parent [-]

Salary aside, the vibe coders are exposing themselves to increased cognitive decline, which should be a strong enough incentive to avoid AI to begin with. Maybe they already had a cognitive impairment before reading this MIT study and can't understand the risk.

PhantomHour 5 days ago | parent | prev [-]

The distinction is that you cannot make the LLM do the drilling for you. And the way these tools are designed trains the user to lean on the LLM rather than their own brain, so they'll never learn to do it themselves.

A big problem with the "just read the code" approach is that reading code deeply enough to truly understand it is at minimum as time-consuming as writing it in the first place. (And in practice it tends to be significantly worse.) Anyone who claims they're properly reading the LLM's code output is on some level lying to themselves.

Human brains are simply bad at consistently monitoring output like that, especially when the output is consistently "good", and doubly so when the errors look like "good" output at the surface level. This is universal across all fields and tools.

Cthulhu_ 3 days ago | parent | prev [-]

The other difference is that compiling code to assembly is exact and repeatable. Code will (well, should, lmao) behave the same way every time. A prompt to generate code won't.
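Concretely (square.c is any fixed source file; same compiler version and flags assumed):

    gcc -O2 -S square.c -o first.s
    gcc -O2 -S square.c -o second.s
    diff first.s second.s    # no output: the two runs are byte-identical
    # Ask an LLM for the same function twice and you get no such guarantee.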

Some prompts / AI agents will include all the validation and security checks when asked to write an API endpoint (or whatever). Others won't, because you didn't specify them.
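For instance, here's the kind of check that only shows up if someone thinks to ask for it; a minimal sketch using the JDK's built-in com.sun.net.httpserver, with a made-up /items endpoint and id parameter:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class Endpoint {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/items", exchange -> {
                String query = exchange.getRequestURI().getQuery(); // e.g. "id=42"
                String id = (query != null && query.startsWith("id=")) ? query.substring(3) : null;
                // The part an LLM may silently skip unless you specify it:
                if (id == null || !id.matches("\\d{1,9}")) {
                    exchange.sendResponseHeaders(400, -1); // reject malformed input, no body
                    return;
                }
                byte[] body = ("item " + id).getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }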

But if someone who doesn't actually know about security just trusts that the AI will do it for them - like a developer using a framework might - they'll run into issues fast.