vidarh | 17 hours ago
I sympathise with the motivations for this, though I don't use Python much. I occasionally work on a toy Ruby compiler that started as a long blog series. More recently I've picked it up again with heavy AI use - I set Claude working on improving Rubyspec pass rates (which are atrocious). It's chugging along right now, actually.

One of the things I've spent a lot of time thinking about is ways to avoid a lot of the dynamic features of Ruby without affecting actual, real code much. There's a lot that can be done there - e.g. all of the research on Self and JS VMs is highly applicable. But I say "real code" because a lot of the "worst" dynamic features of Ruby (and Python) either don't appear in production code very often, or at all (there are still aspects of Ruby I have never seen used in real-life code despite having used Ruby for 20 years), or could be mitigated trivially, so I still believe you can do quite decently without a lot of the more complex optimisations.

As an example (from Ruby): you can re-open the Integer class and override +:
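Something along these lines - the exact override is just for illustration, any redefinition of Integer#+ makes the point:

    class Integer
      alias_method :original_plus, :+

      # Make every addition silently off by one. Perfectly legal Ruby.
      def +(other)
        original_plus(other).original_plus(1)
      end
    end

    puts 1 + 1   # prints 3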
(Don't do this in IRB; it crashes.) But nobody does that. The problem with a lot of these features isn't that people use them, but that people might use them. That leaves two main avenues: we can do like the author of this post and strip away the costly features that are rarely used, or we can provide ways of promising not to use them through code or options.

The first option is perfectly valid, but I quite like the second. In Ruby, it turns out a lot of the optimisation challenges go away if you get an app to freeze the most important system classes after setup, because even the most horrific examples of Ruby monkeypatching tend to do most of it only during startup. You then tend to reach a stable state where applications can opt in to additional optimisations just by calling "freeze" on a number of objects (see the sketch at the end).

Ruby programs will also do things like dynamically decide which files to load based on reading the directory, but if you compile an application, most of the time you want that to happen ahead of time, with a few exceptions (e.g. plugins). So similarly, if you freeze as many classes as possible at a given point, you can partially evaluate manipulation of the runtime up until that point, treat it as mostly static afterward, fall back to slow paths for anything you can't statically resolve the names of, and still end up with lots of optimisation potential for most of the low-level code.

I think a lot of the same would work for Python, and might bridge the gap between the categories of alternative implementations the author mentions, with more predictability than relying on a JIT doing the right analysis. E.g. your compiler can at least potentially guarantee under which circumstances it can inline the fast path for Integer arithmetic once Integer is frozen, so that you can in fact reason about the code.
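Roughly this kind of thing - the class list and the point at which you freeze are purely illustrative, and it's plain Ruby, nothing compiler-specific:

    # At the end of boot, once all setup-time monkeypatching is done,
    # promise not to touch the core classes again:
    [Integer, Float, String, Symbol, Array, Hash].each(&:freeze)

    # Any later attempt to modify them raises FrozenError, so a
    # compiler or VM is free to treat their method tables as fixed:
    class Integer
      def +(other)   # raises FrozenError here
        42
      end
    end

The nice part from a compiler's point of view is that the freeze call is an explicit, checkable promise rather than a heuristic about what the program probably won't do.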