grokx 6 days ago

When I studied compiler theory, a large part of compilation involved a lexical analyser (e.g. `flex`) and a syntax analyser (e.g. `bison`) that would produce an internal representation of the input code (the AST), which was then used to generate the compiled files.

It seems that the terminology has evolved, as we speak more broadly of frontends and backends.

So, I'm wondering if Bison and Flex (or equivalent tools) are still in use by modern compilers? Or are these stages built directly into GCC, LLVM, ...?

eslaught 6 days ago | parent | next [-]

The other answers are great, but let me just add that C++ cannot be parsed with conventional LL/LALR/LR parsers, because the syntax is ambiguous and requires disambiguation via type checking (i.e., there may be multiple parse trees but at most one will type check).
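
For example, the same token sequence below has two legal parse trees, and the parser can't commit to one without looking up what `x` names (an illustrative snippet, compiles as C++):

    namespace as_type {
        struct x {};          // here `x` names a type...
        void f() { x * y; }   // ...so this declares `y` as a pointer to x
    }
    namespace as_value {
        int x = 2, y = 3;
        void f() { x * y; }   // ...here it's a multiplication whose result is discarded
    }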

There was some research on parsing C++ with GLR but I don't think it ever made it into production compilers.

Other, more sane languages with unambiguous grammars may still choose to hand-write their parsers for all the reasons mentioned in the sibling comments. However, I would note that, even when using a parsing library, almost every compiler in existence will use its own AST, and not reuse the parse tree generated by the parser library. That's something you would only ever do in a compiler class.

Also, I wouldn't say that frontend/backend is an evolution of the previous terminology; it's just that parsing is not considered an "interesting" problem by most of the community, so the focus has moved elsewhere (to everything from AST design through optimization and code generation).

nextaccountic 6 days ago | parent | next [-]

Note that, depending on what parsing lib you use, it may produce nodes of your own custom AST type.

Personally I love the (Rust) combo of logos for lexing, chumsky for parsing, and ariadne for error reporting. Chumsky has options for error recovery and good performance; ariadne is gorgeous (there is another alternative for Rust, miette; both are good).

The only thing chumsky is lacking is incremental parsing. There is a chumsky-inspired library for incremental parsing called incpa, though.

estebank 6 days ago | parent [-]

If you want something more conservative for error reporting, annotate-snippets is finally at parity with rustc's current custom renderer and will soon become the default for both rustc and cargo.

nextaccountic 5 days ago | parent [-]

Will migrating to annotate-snippets change rustc/cargo formatting of errors in any way?

Also, in what sense is it more conservative?

estebank 5 days ago | parent [-]

The output will cause no user-visible change.

It uses ASCII for all output and replaces ZWJs to get consistent terminal output in the face of multi-codepoint emoji, to name two differences off the top of my head.

ajb 6 days ago | parent | prev | next [-]

GLR C++ parsers were for a short time in use on production code at Mozilla, in refactoring tools: Oink (and its fork, pork). Not quite sure what ended that, but I don't think it was any issue with parsing.

fithisux 6 days ago | parent | prev | next [-]

I disagree. It is interesting; that is why there are many languages out there without an LSP.

ricudis 4 days ago | parent | prev [-]

Not just C++. Even C parsing is context-dependent because of typedef. It requires a bit of hackery (the classic "lexer hack") to parse with a conventional LL/LALR/LR parser.
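
The usual trick is to feed the parser's symbol table back into the lexer, so identifiers that name typedefs come out as a different token kind and the grammar stays conventional. A minimal sketch of the idea (hypothetical names, not any real compiler's code):

    #include <string>
    #include <unordered_set>

    enum class Tok { Identifier, TypeName };

    // Filled in by the parser whenever it sees a typedef declaration.
    std::unordered_set<std::string> typedef_names;

    // Feedback from parser to lexer: with this, `T * x;` can be tokenized
    // as TypeName '*' Identifier instead of staying ambiguous.
    Tok classify(const std::string& ident) {
        return typedef_names.count(ident) ? Tok::TypeName : Tok::Identifier;
    }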

umanwizard 6 days ago | parent | prev | next [-]

"Frontend" as used by mainstream compilers is slightly broader than just lexing/parsing.

In typical modern compilers "frontend" is basically everything involving analyzing the source language and producing a compiler-internal IR, so lexing, parsing, semantic analysis and type checking, etc. And "backend" means everything involving producing machine code from the IR, so optimization and instruction selection.
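
A rough sketch of that split as a pipeline (the names are illustrative, not any real compiler's API):

    #include <string>

    struct Ast {};          // produced by lexing + parsing
    struct TypedAst {};     // after semantic analysis / type checking
    struct Ir {};           // the compiler-internal IR
    struct MachineCode {};

    Ast parse(const std::string& source);        // frontend...
    TypedAst check(const Ast&);                  // ...frontend
    Ir lower(const TypedAst&);                   // frontend ends here
    Ir optimize(Ir);                             // backend begins
    MachineCode select_instructions(const Ir&);  // backend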

In the context of Rust, rustc is the frontend (and it is already a very big and complicated Rust program, much more complicated than just a Rust lexer/parser would be), and then LLVM (typically bundled with rustc though some distros package them separately) is the backend (and is another very big and complicated C++ program).

pklausler 6 days ago | parent | prev | next [-]

Table-driven parsers with custom per-statement tokenizers are still common in surviving Fortran compilers, with the exception of flang-new in LLVM. I used a custom parser combinator library there, inspired by a prototype in Haskell's Parsec, to implement a recursive descent algorithm with backtracking on failure. I'm still happy with the results, especially with the fact that it's all very strongly typed and coupled with the parse tree definition.
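
The core idea, as a minimal C++ sketch (not flang's actual combinator API): a parser is a function from input to an optional (value, rest-of-input) pair, and failing without consuming anything is what makes backtracking cheap:

    #include <optional>
    #include <string_view>
    #include <utility>

    template <typename T>
    using Result = std::optional<std::pair<T, std::string_view>>;

    // Parser for one expected character.
    auto ch(char c) {
        return [c](std::string_view in) -> Result<char> {
            if (!in.empty() && in.front() == c)
                return std::pair{c, in.substr(1)};
            return std::nullopt;  // fail without consuming input
        };
    }

    // Ordered choice: try p; on failure, backtrack (input untouched) and try q.
    template <typename P, typename Q>
    auto orElse(P p, Q q) {
        return [p, q](std::string_view in) {
            if (auto r = p(in)) return r;
            return q(in);
        };
    }
    // e.g. orElse(ch('a'), ch('b'))("bc") yields {'b', "c"}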

brooke2k 6 days ago | parent | prev | next [-]

Not sure about GCC, but in general there has been a big move away from using parser generators like flex/bison/ANTLR/etc, and towards using handwritten recursive descent parsers. Clang (which is the C/C++ frontend for LLVM) does this, and so does rustc.
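
For a feel of why people like this style, here is a minimal hand-rolled recursive descent parser/evaluator (illustrative only): one function per grammar rule, with operator precedence encoded directly in which function calls which:

    #include <cctype>
    #include <cstddef>
    #include <string_view>

    struct Parser {
        std::string_view src;
        std::size_t pos = 0;

        char peek() const { return pos < src.size() ? src[pos] : '\0'; }

        long primary() {   // primary ::= digit+
            long v = 0;
            while (std::isdigit(peek())) v = v * 10 + (src[pos++] - '0');
            return v;
        }
        long term() {      // term ::= primary ('*' primary)*
            long v = primary();
            while (peek() == '*') { ++pos; v *= primary(); }
            return v;
        }
        long expr() {      // expr ::= term ('+' term)*
            long v = term();
            while (peek() == '+') { ++pos; v += term(); }
            return v;
        }
    };
    // Parser{"1+2*3"}.expr() == 7; precedence falls out of the call structure.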

afdbcreid 6 days ago | parent | next [-]

I don't know a single mainstream language that uses parser generators. Python used to, and even they have moved on.

AFAIK the reason is solely error messages: the customization available with handwritten parsers is just way better for the user.

muvlon 6 days ago | parent | next [-]

I'll let you decide whether it counts as "mainstream", but the principal implementation of Nix has a very old school setup using bison and flex:

https://github.com/NixOS/nix/blob/master/src/libexpr/parser....

https://github.com/NixOS/nix/blob/master/src/libexpr/lexer.l

nullpoint420 5 days ago | parent [-]

It shows, even as a Nix fan: the error messages are abysmal.

sanxiyn 6 days ago | parent | prev [-]

Ruby also used to use Bison; it uses its own https://github.com/ruby/lrama these days.

gpderetta 6 days ago | parent | prev [-]

I believe that GCC also moved to a handwritten parser, at least for C++, a couple of decades ago.

jojomodding 6 days ago | parent | prev | next [-]

This was in the olden days, when your language's type system would maybe look like C's if you were serious, and be even less of a thing if you were not.

The hard part about compiling Rust is not really parsing; it's the type system, including parts like borrow checking, generics, trait solving (which is Turing-complete by itself), name resolution, and drop checking, and of course all of these features interact in fun and often surprising ways. Also macros. Also all the "magic" types in the StdLib that require special compiler support.

This is why e.g. `rustc` has several different intermediate representations. You no longer have "the" AST; you have token trees, HIR, THIR, and MIR, and then that's lowered to LLVM or Cranelift or libgccjit. Important parts of the type system are handled at each stage.

astrange 6 days ago | parent | prev | next [-]

Compiler theory a) doesn't seem to have much to do with production compilers and b) is unnecessarily heavyweight and scary about everything.

In particular, it makes parsing everything look like a huge difficult problem. This is my main problem with the Dragon Book.

In practice everyone uses hacky informal recursive-descent parsers because they're the only way to get good error messages.

quamserena 6 days ago | parent | prev | next [-]

Not really. Here’s a comparison of different languages: https://notes.eatonphil.com/parser-generators-vs-handwritten...

Most roll their own for three reasons: performance, context, and error handling. Bison/Menhir et al. make it easy to write a grammar and get started, but in exchange you get less flexibility overall. It becomes difficult to handle context-sensitive parts, do error recovery, and give the user meaningful errors that describe exactly what's wrong. Usually if there's a small syntax error we want to try to tell the user how to fix it instead of just producing "Syntax error", and that requires being able to fix the input and keep parsing, as in the sketch below.
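
Concretely, a handwritten parser can do things like this when it hits a bad token: emit a specific message, then skip to a synchronization point and resume, so one typo doesn't cascade into dozens of errors. A sketch (hypothetical token-stream type, for illustration):

    #include <cstddef>
    #include <cstdio>
    #include <string_view>
    #include <vector>

    struct Tokens {
        std::vector<std::string_view> toks;
        std::size_t pos = 0;
        bool at(std::string_view s) const { return pos < toks.size() && toks[pos] == s; }
        bool done() const { return pos >= toks.size(); }
        void advance() { if (!done()) ++pos; }
    };

    void expect_semicolon(Tokens& t) {
        if (t.at(";")) { t.advance(); return; }
        // A specific, actionable message instead of a bare "Syntax error"...
        std::fprintf(stderr, "error: expected ';' after statement\n");
        // ...then panic-mode recovery: skip to the next ';' and resume,
        // so the rest of the file still gets parsed and diagnosed.
        while (!t.done() && !t.at(";")) t.advance();
        if (t.at(";")) t.advance();
    }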

Menhir has a new mode where the parser is driven externally; this allows your code to drive the entire thing, which requires a lot more machinery than fire-and-forget but also affords you more flexibility.

wrs 6 days ago | parent | next [-]

If you're parsing a new language that you're trying to define, I do recommend using a parser generator to check your grammar, even if your "real" parser is handwritten for good reasons. A parser generator will insist on your grammar being unambiguous, or at least tell you where it is ambiguous. Without this sanity check, your unconstrained handwritten parser is almost guaranteed to not actually parse the language you think it parses.
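
The classic example of what that check catches is the dangling else: a naive grammar with stmt ::= "if" "(" expr ")" stmt ["else" stmt] admits two parse trees for the code below. Bison reports it as a shift/reduce conflict, while a handwritten parser just silently picks one reading:

    void f(bool a, bool b, void (*g)(), void (*h)()) {
        if (a)
            if (b) g();
            else h();      // reading 1: else binds to the inner if (what C and C++ pick)

        // the other parse tree, made explicit with braces:
        if (a) {
            if (b) g();
        } else {
            h();           // reading 2: else binds to the outer if
        }
    }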

peterfirefly 6 days ago | parent | prev | next [-]

Mostly because that's the part that had the best-developed theory, so that's what tended to be taught.

The rest of the f*cking owl is the interesting part.
