wslh 3 hours ago

What is great about the Ohm approach compared to typical lex/yacc/ANTLR parsers is that it avoids ambiguity by using ordered choice (the first matching rule wins), instead of requiring you to resolve conflicts explicitly. This makes working with Ohm/PEGs less painful in the initial phase of a project.
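
The ordered-choice idea can be sketched in a few lines. This is a hypothetical mini-PEG combinator, not Ohm's actual API: each parser takes `(text, pos)` and returns the new position on success or `None` on failure, and `choice` commits to the first alternative that matches.

```python
# Hypothetical PEG-style combinators (not Ohm's API).

def lit(s):
    """Match the literal string s at the current position."""
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def choice(*alts):
    """Ordered choice: the FIRST alternative that matches wins."""
    def parse(text, pos):
        for alt in alts:
            result = alt(text, pos)
            if result is not None:
                return result
        return None
    return parse

# Alternative order matters: "<=" must come before "<",
# or the longer operator would never be matched fully.
op = choice(lit("<="), lit("<"))
assert op("<= 3", 0) == 2   # first alternative wins
assert op("< 3", 0) == 1
```

Note that swapping the alternatives to `choice(lit("<"), lit("<="))` silently changes the language recognized: that is the flip side of resolving ambiguity by ordering.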

It's also important to highlight that this can make parsing slower: backtracking over ordered choices can take exponential time in the worst case, and packrat memoization buys linear time back at the cost of memoizing every rule at every input position.
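
A sketch of the memoization trick (this is an illustrative recognizer I wrote for the grammar `S <- 'a' S 'a' / 'a'`, not Ohm's machinery). The memo table keyed on position is what packrat parsing adds: each rule is evaluated at most once per position, which guarantees linear time even when backtracking would otherwise re-parse the same suffixes.

```python
from functools import lru_cache

text = "a" * 20

# S <- 'a' S 'a' / 'a'
# Memoizing each (rule, position) pair -- "packrat parsing" --
# guarantees each position is parsed at most once.
@lru_cache(maxsize=None)          # the packrat memo table
def S(pos):
    # First alternative: 'a' S 'a'
    if pos < len(text) and text[pos] == "a":
        mid = S(pos + 1)
        if mid is not None and mid < len(text) and text[mid] == "a":
            return mid + 1
    # Ordered choice falls through to the second alternative: 'a'
    if pos < len(text) and text[pos] == "a":
        return pos + 1
    return None

assert S(len(text) - 1) == len(text)   # a lone 'a' matches via the 2nd alt
assert S(len(text)) is None            # nothing to match at end of input
```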

ogogmad 2 hours ago | parent [-]

I had the impression that PEG and Earley/GLR all fully solved the parsing problem, but in different ways. But then recently, I found this guy's blog: https://www.oilshell.org/blog/tags.html?tag=parsing#parsing

Now I don't know what to think. The author's got a ton more experience than me. It seems there's a big enough market out there for people wanting non-ambiguity proofs and linear running-time proofs.

Then again, the more I think about parsing, the more I think it's a completely made-up problem. I'm pretty sure there's a middle ground between Lisp (or, even worse, Forth) and Python. Fancy parsing has robbed us of the possibilities of metaprogramming and instead opened up a market for Stephen Wolfram, whose product features a homoiconic language.

I've been gorging on Formal Language Theory literature recently. I am now fully convinced that Regular Languages are a very good idea: they are precisely the string-search problems that can be solved in constant space. If you had to find occurrences of certain patterns in a massive piece of text, you would naturally try to keep the memory usage of your search program independent of the text's size. But the theory of Regular Languages only really gets off the ground when you wonder whether Regular Languages are closed under concatenation. It turns out they are, but the standard proof represents them as Non-Deterministic Finite-State Automata (NFAs) - which in turn leads to the Kleene Star operation and then to Regular Expressions. This is a non-obvious piece of theory which solves a well-formulated problem. So now I suspect that if history were to repeat itself, Regular Expressions would still have been invented and used for the same reasons.
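
The constant-space claim can be made concrete: a DFA reads the text in one pass and keeps only its current state. A hand-rolled example (this particular 3-state automaton is mine, recognizing texts containing the substring "ab"):

```python
def step(state, ch):
    # state 0: no progress; 1: last char was 'a'; 2: "ab" already seen.
    if state == 2:
        return 2                     # accepting sink state
    if ch == "a":
        return 1                     # (re)start a potential match
    if state == 1 and ch == "b":
        return 2                     # completed "ab"
    return 0

def contains_ab(stream):
    state = 0
    for ch in stream:                # one pass; memory is one integer,
        state = step(state, ch)      # independent of the text's length
    return state == 2

assert contains_ab("xxaybzab")
assert not contains_ab("aaaa")
```

The same loop works unchanged on a gigabyte file read character by character, which is exactly the "memory independent of the text's size" property.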

By contrast, I find Context-Free Grammars much more dubious, and LR almost offensive. The problem with LR is that I can't find a description of what it is that isn't just a gory walkthrough of how it works. And furthermore, it doesn't appear to relate to anything other than parsing: there's no account anywhere of how any of its ideas could be used in any other area.

DevelopingElk an hour ago | parent [-]

The issue with Regex for parsing is that it can't handle balanced parentheses (https://en.wikipedia.org/wiki/Regular_expression); more generally, it can't handle nested structure. Context-Free Grammars are the most natural extension that can: they add recursive substitution (nonterminals) to Regex, which makes them powerful enough to recognize nesting. So Regex would be reinvented if history were rerun, but so would Context-Free Grammars. Part of the complexity in parsing is attaching semantic meaning to the parse; Regex mostly avoids this by not caring how a string matches, just whether it matches.
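
A small sketch of why recursion is the missing ingredient: here is the CFG `S -> '(' S ')' S | ε` as a recursive-descent recognizer (the function names are mine). The call stack supplies the unbounded memory for nesting depth that a finite automaton, and hence a true regular expression, cannot have.

```python
def balanced(s, pos=0):
    """Return the position after the longest S derivable from pos."""
    while pos < len(s) and s[pos] == "(":
        inner = balanced(s, pos + 1)       # the S inside the parens
        if inner >= len(s) or s[inner] != ")":
            return pos                     # unmatched '(' -- stop here
        pos = inner + 1                    # then another S after ')'
    return pos

def is_balanced(s):
    return balanced(s) == len(s)

assert is_balanced("(()())")
assert not is_balanced("(()")
```

Each recursive call remembers one pending ')', so recognizing nesting of depth n takes memory proportional to n - exactly what a constant-space machine cannot do.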

Now, I do agree that LR grammars are messy. Nowadays they have mostly fallen out of favor; instead, people use simpler parsers (often hand-written recursive descent) that work for the restricted grammars actual programming languages have.

IIRC there is some research into formalizing the type of unambiguous grammar that always uses () or [] as nesting elements, but can use Regex for lexing.