blakepelton | 2 days ago
I wonder how tricks that rely on compiler extensions (e.g., computed goto, musttail, and preserve_none) compare against the weval transform. The weval transform involves a small language extension backed by a larger change to the compiler implementation. I suppose the downside of the weval transform is that it is only helpful for interpreters, whereas the other extensions could have other use cases.

Academic paper about weval: https://dl.acm.org/doi/pdf/10.1145/3729259

My summary of that paper: https://danglingpointers.substack.com/p/partial-evaluation-w...
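For concreteness, the compiler-extension family of tricks looks roughly like this: a minimal computed-goto dispatch loop using the GCC/Clang "labels as values" extension, with a made-up opcode set purely for illustration (not code from any real interpreter). The musttail/preserve_none variant instead puts each handler in its own function and guarantees tail calls between them, with preserve_none (newer clangs) freeing up more registers for VM state.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of computed-goto dispatch (GCC/Clang "labels as values"
       extension). Opcode set and program are invented for illustration. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code) {
        /* One label address per opcode; &&label is the GCC/Clang extension. */
        static void *dispatch[] = { &&do_push, &&do_add, &&do_print, &&do_halt };
        int64_t stack[16];
        int sp = 0;
        const uint8_t *pc = code;

        /* Each handler jumps straight to the next opcode's handler
           instead of looping back through a central switch. */
        #define NEXT() goto *dispatch[*pc++]
        NEXT();

    do_push:  stack[sp++] = *pc++;                       NEXT();
    do_add:   sp--; stack[sp - 1] += stack[sp];          NEXT();
    do_print: printf("%lld\n", (long long)stack[--sp]);  NEXT();
    do_halt:  return;
        #undef NEXT
    }

    int main(void) {
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);   /* prints 5 */
        return 0;
    }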
ivankra | 2 days ago
Well, runtime/warmup costs seem like one obvious downside to me - weval would add some non-trivial compilation overhead to your interpreter (unrolling of the interpreter loop, dead code elimination, optimizing across opcode boundaries - probably a major source of the speedup). Great if you have the time to precompile your script - you only have to pay those costs once. It also helps if your host language's runtime ships with an optimizing compiler/JIT you can piggyback on (the WASM runtime in weval's paper, the JVM in Graal's case) - these things take space.

But sometimes you might just have a huge pile of code that's not hot enough to be worth optimizing, and you would be better off with a basic interpreter (one that can benefit from computed gotos or tail-call dispatch with zero runtime overhead). Octane's CodeLoad or TypeScript benchmarks are such examples - GraalJS does pretty poorly there.
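To make the "optimizing across opcode boundaries" point concrete, here's a toy sketch (a made-up 3-opcode VM, not weval's actual input or output) of what specializing an interpreter loop on a known program conceptually produces:

    #include <stdint.h>

    enum { OP_PUSH, OP_ADD, OP_HALT };

    /* Generic interpreter: dispatch overhead on every opcode. */
    static int64_t run(const uint8_t *code) {
        int64_t stack[16];
        int sp = 0;
        for (const uint8_t *pc = code;;) {
            switch (*pc++) {
            case OP_PUSH: stack[sp++] = *pc++;              break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }

    /* Hand-written stand-in for the residual function a weval-style
       specializer could derive from run() plus the fixed program
       {PUSH 2, PUSH 3, ADD, HALT}: the loop is unrolled, dispatch and
       the stack shuffling fall away as dead code, and constant folding
       across opcode boundaries collapses what's left. */
    static int64_t run_specialized(void) {
        return 2 + 3;
    }

    int main(void) {
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
        return (int)(run(prog) - run_specialized());   /* both yield 5 */
    }

Deriving that residual code per script is where the compile-time overhead goes, which is why it only pays off for code that's hot or that you can precompile.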
naasking | 2 days ago
Partial evaluation subsumes a lot of other compiler optimizations, like constant folding, inlining, and dead code elimination, so it wouldn't just find application in interpreters.
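A toy non-interpreter example (names made up for illustration): partially evaluating power() with respect to a known exponent is effectively inlining, loop unrolling, constant folding, and dead code elimination rolled into one pass.

    #include <stdio.h>

    /* Generic version: nothing is known at compile time. */
    static long power(long base, unsigned exp) {
        long r = 1;
        while (exp--) r *= base;
        return r;
    }

    /* Residual function after specializing power() on exp == 3:
       the loop is unrolled, the exp bookkeeping is dead code, and
       what remains is straight-line arithmetic that a constant
       folder can reduce further once base is also known. */
    static long power_3(long base) {
        return base * base * base;
    }

    int main(void) {
        printf("%ld %ld\n", power(2, 3), power_3(2));   /* both print 8 */
        return 0;
    }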