gtowey 4 hours ago
Not even remotely close. Compilers are deterministic. The people who write them test that they produce correct results. You can expect the same code to compile to the same assembly. With LLMs, two people giving the exact same prompt can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decided to pass credentials in plain text. LLMs can and will do those things.
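A toy sketch of that difference (illustrative Python, assuming ordinary temperature sampling; not any particular vendor's model or API): with temperature above zero the decoder draws each token from a probability distribution, so identical inputs can produce different outputs, whereas a compiler is a pure function of its input.

    import numpy as np

    # Toy illustration only: the same "prompt state" (same logits) sampled
    # twice with temperature > 0 can yield different tokens. A compiler, by
    # contrast, maps the same source to the same output every time.
    rng = np.random.default_rng()

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature, then draw one token index at random.
        scaled = np.array(logits) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    logits = [2.0, 1.5, 0.3]          # identical input both times
    print(sample_next_token(logits))  # e.g. 0
    print(sample_next_token(logits))  # may print 1: same input, different output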
alecbz 4 hours ago
Even ignoring determinism, with traditional source code you have a durable, human-readable blueprint of what the software is meant to do, one that other humans can understand and tweak. There's no analog in the case of "don't read the code" LLM usage. No artifacts exist that humans can read or verify to understand what the software is supposed to be doing.
knowknow 4 hours ago
Not only that, but compiler optimizations are generally based on rigorous mathematical proofs, so even without testing them you can be pretty sure the compiler will generate equivalent assembly. From the little I know of LLMs, no one has figured out what mathematical principles LLMs generate code from, so you can't be sure it's going to be right other than by testing it.
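A minimal illustration of the kind of rewrite a compiler can justify by proof rather than by testing (toy Python, not a real optimizer): strength reduction replaces a multiply by a power of two with a shift, and the equivalence holds for every integer input by arithmetic, so no per-program testing is needed.

    # Toy example of a provably semantics-preserving rewrite (strength reduction).
    # The justification is the algebraic fact that x * 2**k == x << k for all
    # Python ints x, not a finite set of spot checks.

    def original(x: int) -> int:
        return x * 8

    def optimized(x: int) -> int:
        return x << 3

    # Spot checks only build confidence; the proof is what makes the rewrite
    # safe to apply everywhere.
    assert all(original(x) == optimized(x) for x in range(-1000, 1000))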