lioeters 3 hours ago

So interesting, thank you, I'm glad I asked.

I've been looking into tree calculus, which can "dissolve" lambda calculus and SKI combinators into natural (unlabelled) binary trees. I was wondering if you had any thoughts on it. It's a curious model of computation with unique properties that I think can also be converted to interaction nets, or vice versa. I saw that Taelin tweeted about it once, dismissing it as misguided somehow ("not expressive nor efficient"), but I feel like there's more to it.
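To make the "unlabelled binary trees" point concrete, here is a minimal runnable sketch of tree calculus reduction, assuming the standard presentation (Jay's three rules): every term is a tree built from a single node △ by application, and a redex fires by "triage" on whether the head's first argument has degree 0, 1, or 2.

```python
# A sketch of tree calculus reduction. Terms are kept in spine form:
# the node △ applied to a list of arguments. The three rules are:
#   △ △ y z         → y            (degree 0)
#   △ (△ x) y z     → y z (x z)    (degree 1)
#   △ (△ w x) y z   → z w x        (degree 2)

class T:
    """A term in spine form: △ applied to a list of arguments."""
    def __init__(self, *kids):
        self.kids = list(kids)

    def __call__(self, arg):          # application appends to the spine
        return T(*self.kids, arg)

    def __eq__(self, other):
        return isinstance(other, T) and self.kids == other.kids

    def __repr__(self):
        return "t" + "".join(f"({k!r})" for k in self.kids)

def step(t):
    """Perform one leftmost reduction step; return (term, changed)."""
    if len(t.kids) >= 3:
        a, y, z, rest = t.kids[0], t.kids[1], t.kids[2], t.kids[3:]
        r = None
        if len(a.kids) == 0:          # degree 0:  △ △ y z → y
            r = y
        elif len(a.kids) == 1:        # degree 1:  △ (△ x) y z → y z (x z)
            r = y(z)(a.kids[0](z))
        elif len(a.kids) == 2:        # degree 2:  △ (△ w x) y z → z w x
            w, x = a.kids
            r = z(w)(x)
        if r is not None:
            for extra in rest:        # reattach any remaining spine arguments
                r = r(extra)
            return r, True
    for i, k in enumerate(t.kids):    # otherwise reduce inside a subterm
        k2, changed = step(k)
        if changed:
            kids = list(t.kids)
            kids[i] = k2
            return T(*kids), True
    return t, False

def normalize(t, fuel=10_000):
    """Reduce to normal form (with a fuel bound, since terms may diverge)."""
    changed = True
    while changed and fuel > 0:
        t, changed = step(t)
        fuel -= 1
    return t

L = T()                # the single node △
K = L(L)               # △ △ behaves like K:  K y z → y
I = L(L(L))(L(L))      # △ (△ △) (△ △) behaves like the identity
```

With these, `normalize(K(y)(z))` returns `y` for any normal forms `y` and `z`: the K behaviour falls straight out of the degree-0 rule, which is the sense in which the system matches SKI in power.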

EDIT: Oh I see you've commented on it before. Right, it's equivalent to SKI but worded differently. And as for this "intensionality", I guess no one has yet found a practical use for that property.

> It's not based on a single combinator. It's based on the two standard basis combinators S and K, and a non-combinator that can inspect normalized terms to distinguish between degrees 0, 1, and 2.

I quoted his tweet below; I'll keep it in case it's of any interest.

---

it is not even related to kind or interaction nets, this calculus is simply not expressive nor efficient. it comes from a reasonable but incorrect intuition: "the less axioms you have, the more natural it must be!". that's not true. take nand logic, for example; it is complete, but expressing circuits with nand gates only will take more space than separating and/or/not, because you're just forcing ands/ors/nots to be inserted and computed where they're not needed. SKI combinators suffer from a similar trap; they're inherently slower and less expressive than the λ-calculus, because there are λ-terms that can't be expressed without "a lot of bloat" in them
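The nand claim above is easy to check directly. A minimal sketch (gate counts are for this particular construction; other encodings trade differently): each recovered gate spends extra NANDs that a basis with separate and/or/not would not need.

```python
# NAND is functionally complete, but recovering the usual gates from it
# alone inserts extra gates, which is the "bloat" being described.

def nand(a, b):
    return not (a and b)

def not_(a):
    # 1 NAND, versus 1 native NOT gate
    return nand(a, a)

def and_(a, b):
    # 2 NANDs (NAND then invert), versus 1 native AND gate
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    # 3 NANDs (invert both inputs, then NAND), versus 1 native OR gate
    return nand(nand(a, a), nand(b, b))
```

Checking the truth tables confirms the constructions; the point is the gate count, not the expressiveness.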

this calculus does a similar thing: it packs many operations into a single one, and, by doing so, it bloats the program space and forces useless computations. yes, this makes the language feel more natural, because there are fewer axioms; but the axiom is just an amalgamation of many different functions in one. you want it the other way around: you want each axiom to add a key "expressivity" to your language, as tersely as possible, and in a way that makes the programs written on top of them as clean as possible, because everything you need to express is expressible from these axioms. yes, it is good to have as few axioms as possible (to avoid redundancy), but the expressivity of your axioms is what matters the most. the add function in tree calculus is a great example of a program that has been forced to be far from optimal due to the underlying language's constraints - that's terrible.
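The same bloat pattern shows up in the classic translation from λ-terms to S/K/I: naive bracket abstraction eliminates each binder by pushing S and K through the whole body, so combinator terms grow quickly with nesting. A small sketch (the term encoding and size metric here are illustrative; optimized abstraction algorithms do considerably better):

```python
# Terms are ('var', name) or ('app', f, a). Bracket abstraction [x]t
# produces a combinator term with the same behaviour as λx.t.

def free_in(x, t):
    """Is variable x free in term t?"""
    if t[0] == 'var':
        return t[1] == x
    return free_in(x, t[1]) or free_in(x, t[2])

def abstract(x, t):
    """Naive bracket abstraction [x]t using the S/K/I rules."""
    if t == ('var', x):
        return ('var', 'I')                    # [x]x        = I
    if not free_in(x, t):
        return ('app', ('var', 'K'), t)        # [x]t        = K t   (x not free)
    _, f, a = t
    return ('app',                             # [x](f a)    = S ([x]f) ([x]a)
            ('app', ('var', 'S'), abstract(x, f)),
            abstract(x, a))

def size(t):
    """Number of leaves in a term."""
    return 1 if t[0] == 'var' else size(t[1]) + size(t[2])

# λx.λy.x compiles to S (K K) I: four combinator leaves for a
# one-variable body, and the growth compounds with deeper nesting.
kxy = abstract('x', abstract('y', ('var', 'x')))
```

Every application under a binder spends an S, and every constant subterm spends a K, which is where the "a lot of bloat" comes from.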

interaction combinators do the opposite: yes, the core language is extremely small and elegant - in fact, smaller than tree calculus - but, what is most important is that its axioms allow terms written in it to be expressed in their shortest, cleanest, and most efficient formulation. interaction combinators provide exactly 3 interaction rules: erasure, which erases data; commutation, which copies data incrementally; and annihilation, which observes and moves data. from these 3, all else follows. loops and recursion, for example, can be optimally implemented (both in time AND space) by commutations. beta-reduction (i.e., λ-calculus) and pattern-matching (i.e., branching) can also be done optimally through annihilations. tuples and datatypes can also be encoded space-optimally with the same axioms. there is no waste, there is no bloat, which is what makes it so good to compute and to search.

in fact, you can't invent an axiomatic interaction system that is more efficient than interaction combinators, because they're optimal. and if, for some reason, interaction combinators scare you, the same holds for the recursive linear λ-calculus with the addition of incremental copying, for example. in many of these senses, even this language is better to express algorithms in than this proposed tree calculus