▲ usrnm 5 hours ago

> LLM-driven development could land on a safer language

Why does an LLM need to produce human-readable code at all? Especially in a language optimized around preventing humans from making human mistakes. For now, sure, we're in the transitional period, but in the long run? Why?
▲ jerf 5 hours ago

From a footnote in my post at https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ :

> It has been lost in the AI money-grabbing frenzy, but a few years ago we were talking a lot about AIs being "legible", that they could explain their actions in human-comprehensible terms. "Running code we can examine" is the highest grade of legibility any AI system has produced to date. We should not give that away.
>
> We will, of course. The Number Must Go Up. We aren't very good at this sort of thinking.
>
> But we shouldn't.
▲ mjr00 4 hours ago

Because the traits that make code easy for LLMs to work on are the same ones that make it ideal for humans: predictable patterns, clearly named functions and variables, one canonical way to accomplish a task, logical separation of concerns, clear separation of layers of abstraction, etc. Ultimately, human readability costs very little.
▲ mandevil 5 hours ago

I can't even imagine what "next token prediction" would look like generating x86 asm. Feels like 300 buffer overflows wearing a trench coat, honestly.
▲ davorak 4 hours ago

> For now, sure, we're in the transitional period, but in the long run? Why?

Assuming that after the transitional period it will still be humans working with AI tools to build things, i.e. that humans actually add value to the process: will the human+AI pairing where the AI can explain what it built in detail, and the human leverages that to build something better, be more productive than the pairing where the human does not leverage those details?

That 'explanation' will be, or can act as, the human-readable code or its equivalent. It does not need to be any coding language we know today, however. The languages we have today are already abstractions and generalizations over architectures, OSs, etc.; that 'explanation' will be different, but in the same vein.
▲ recursive 5 hours ago

So humans can verify that the code is behaving in the interests of humanity.
▲ ttd 4 hours ago

Well, IMO there's not much reason for an LLM to be trained to produce machine language, nor for a functional binary blob to appear fully formed from its head. If you take your question and look into the future, you might consider the existence of an LLM specifically trained to take high-level language inputs and produce machine code. We already have that technology: we call it a compiler. Compilers exist, are (frequently) deterministic, and are generally exceedingly good at their job. Leaving that behind in favor of a complete English -> binary-blob black box doesn't make much sense to me, logically or economically.

I also think there is utility in humans being able to read the generated output. At the end of the day, we're the conscious ones here, we're the ones operating in meatspace, and we're the ones driving the goals, outputs, etc. Reading and understanding the building blocks of what's driving our lives feels like a good thing to me.

(I don't have many well-articulated thoughts about the concept of singularity, so I leave that to others to contemplate.)
▲ zadikian 5 hours ago

LLMs are better at dealing with human-readable code on their own, too.
▲ IncreasePosts 5 hours ago

For one thing, because it would be trained on human-readable code.