appease7727 7 days ago:
The way it works for me, at least, is that I can fit a huge amount of context in my head. This works because the text itself is irrelevant and gets discarded immediately. Instead, my brain parses code into something like an AST, which is then represented as a spatial graph. I model the program as a logical structure instead of a textual one. When you look past the language, you can work on the program itself; the two are utterly disjoint. I think LLMs fail at software because they're focused on text and can't build a mental model of the program logic. It takes a huge amount of effort and brainpower to truly architect something and understand large swathes of a system, and LLMs just don't have that type of abstract reasoning.
starlust2 6 days ago:
It's not that they can't build a mental model, it's that they don't attempt to build one. LLMs jump straight from text to code with little to no time spent trying to architect the system. | ||||||||
taminka 6 days ago:
i wonder why nobody has bothered feeding llms the ast instead (not sure in what format), but it seems only logical, since that's how compilers understand code after all...
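As a rough illustration of what "feed the model the AST instead of the text" could mean (purely a sketch of one possible serialization, not something any model provider is known to do), Python's standard ast module can turn source into a nested structural form:

    # Illustrative only: serialize a Python AST as an s-expression-like string,
    # one conceivable structural input format for a model instead of raw text.
    import ast

    def to_sexpr(node: ast.AST) -> str:
        """Render an AST node and its children as a nested s-expression."""
        parts = [type(node).__name__]
        for field, value in ast.iter_fields(node):
            if isinstance(value, ast.AST):
                parts.append(f"({field} {to_sexpr(value)})")
            elif isinstance(value, list):
                inner = " ".join(to_sexpr(v) for v in value if isinstance(v, ast.AST))
                parts.append(f"({field} {inner})")
            elif value is not None:
                parts.append(f"({field} {value!r})")
        return "(" + " ".join(parts) + ")"

    source = "def add(a, b):\n    return a + b\n"
    print(to_sexpr(ast.parse(source)))
    # prints something like:
    # (Module (body (FunctionDef (name 'add') (args ...) (body (Return (value (BinOp ...)))))))

Whether a structural encoding like this actually helps a text-trained model reason about programs is an open question; the snippet only shows that producing such a representation is cheap.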