theptip 5 days ago

I don't follow; you can read the code that your LLM produces as well.

It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.

rstuart4133 5 days ago | parent | next [-]

> I don't follow; you can read the code that your LLM produces as well.

You can. You can also read the code a compiler produces perfectly well. In fact, https://godbolt.org/ is a website dedicated to letting programmers do just that. But... how many programmers do you know who look at the assembler their compiler produces? For that matter, how many programmers do you know who even understand the assembler?

Now let's extrapolate a bit. I've seen people say they've vibe coded some program, yet they can't program. Did they read the code the LLM produced? Of course not. Did it matter? Apparently not for the program they produced.

Does the fact that they can vibe code but not read code alter the types of programs they can produce? Of course it does. They're limited to the sort of programs an LLM has seen before. Does that matter? Possibly not, if the only programs they write are minor variations of what has already been posted on the internet.

Now take two people, one who can only vibe code, and another who knows how to program and understands computers at a very deep level. Ask yourself, who is going to be paid more? Is it the one who can only write programs an LLM has seen many times before, or is it the one who can produce something truly new and novel?

yesbut 5 days ago | parent [-]

salary aside, the vibe coders are exposing themselves to increased cognitive decline, which should be a strong enough incentive to avoid AI to begin with. maybe they already had a cognitive impairment before reading this MIT study and can't understand the risk.

PhantomHour 5 days ago | parent | prev [-]

The distinction is that you cannot make the LLM do the drilling. And the way these tools are designed is to train the user to use the LLM rather than their own brain, so they'll never learn it themselves.

A big problem with the "just read the code" approach is that reading code deeply enough to truly understand it is at minimum as time-consuming as writing it in the first place. (And in practice it tends to be significantly worse.) Anyone who claims they're properly reading all of the LLM's code output is on some level lying to themselves.

Human brains are simply bad at consistently monitoring output like that, especially when the output is consistently "good", and even more so when the errors look like "good" output at the surface level. This is universal across all fields and tools.