tovej | 3 hours ago
As someone who has implemented a fair few DSLs: lexical and syntactic analysis is pretty much the same everywhere, and the structure of the lexer/parser does not really depend on the grammar of the language. Even semantic analysis is very similar across most PLs, DSLs included, assuming you're using concepts like variables and functions. Things only start to diverge at codegen / the interpreter runtime, and even that depends on the use case. More often than not a DSL is a one-to-one map onto an existing language, with syntactic sugar on top. I'm curious, what's the DSL you're working on?
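To illustrate the point about lexer structure being grammar-independent, here is a minimal table-driven lexer sketch in Python. The token names and patterns are purely illustrative (not from any language discussed in the thread); the idea is that the scanning loop never changes, only the token table does:

```python
import re

# Table-driven lexer: the token spec is plain data, so the same
# scanning loop serves any language -- swap the table, keep the loop.
# (Token names and patterns are hypothetical, for illustration only.)
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),  # whitespace: matched but not emitted
]

MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    tokens = []
    pos = 0
    while pos < len(src):
        m = MASTER_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}: {src[pos]!r}")
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenize("x = 40 + 2"))
# → [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

A parser can be made generic in the same way (e.g. a table-driven LL/LR parser), which is why parser generators exist at all.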
3371 | 28 minutes ago | parent
It's still very much a WIP, but if you're interested, here's the repo: https://github.com/No3371/zoh

The points you brought up are all valid. Lexers, parsers, and the general concepts are language-agnostic, yes, and I wasn't talking about how the implementation differs. When I said "you can tell they sometimes get confused and have trouble complying with the foreign language spec and design", I was thinking of the many times they simply fail to write in my language even when provided with the full language spec. LLMs don't "think": boilerplate is easy for them because highly similar syntactic structures, even identical code, exist in their training data, so they're kind of just copying stuff. That doesn't work nearly as well when they're tasked with writing in an original language that is... too creative.