fn-mote 6 hours ago
“Needs to be” is a strong claim. The skill of debugging a complex problem by stepping through disassembly to find a compiler bug is very specialized; few people have it. Most applications don’t need that “introspection”. They need “encapsulation”: faith that the lower layers work well 99.9+% of the time, plus knowing who to call when they fail. I’m not saying generative AI meets that standard, but it’s a different standard from the one you’re describing.
smj-edison 6 hours ago
Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer has to be able to introspect the lower layers, but that capability needs to exist. Now, I suppose you can read the code an LLM generates, so maybe that layer does exist. But that's why I don't like the idea of a programming language made for LLMs, by LLMs, that's inscrutable to humans. A lot of the intermediate layers in a compiler are designed for humans; only the final assembly generation is made for the CPU.
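
To illustrate (a minimal sketch, assuming clang; gcc has equivalent flags, and the file name is just an example), you can ask the compiler to stop at each of those human-oriented layers and print them:

    /* hello.c -- a trivial program to walk through the pipeline */
    #include <stdio.h>

    int main(void) {
        printf("hello\n");
        return 0;
    }

    /* Every stage below emits a representation meant for human eyes;
       only the final object code is consumed by the machine.
         clang -E hello.c                               # preprocessed source
         clang -fsyntax-only -Xclang -ast-dump hello.c  # the AST
         clang -S -emit-llvm hello.c -o hello.ll        # LLVM IR, still readable
         clang -S hello.c -o hello.s                    # assembly text
         clang -c hello.c -o hello.o                    # object code, for the CPU */

Each of those dump flags exists because a person wanted to check the compiler's work, which is exactly the kind of introspection I mean.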