qsort 8 hours ago

What the hell is this?

The linked paper (https://arxiv.org/pdf/2505.20314) claims that the squiggles it introduces are a model that solves Lévy-optimal parallel reduction of lambda terms.

But the author has no affiliation, it's very weird that they're calling this "lambda-reduction", and it smells heavily of AI slop.

I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?

danaugrs 3 hours ago | parent | next [-]

Author here. Other experts in this field have also used the term "lambda reduction", including Lévy himself [1] and Lamping [2], both of whom are referenced in the Delta-Nets paper. "Lambda-reduction" is clearly an abbreviation of "lambda-calculus reduction".

[1] https://dl.acm.org/doi/abs/10.1145/143165.143172

[2] https://dl.acm.org/doi/pdf/10.1145/96709.96711

arethuza 8 hours ago | parent | prev | next [-]

HN Guidelines: "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

qsort 7 hours ago | parent [-]

I'm not being "generically" negative, I'm being very specifically negative.

We have a paper from someone not working in the field, with no affiliation, whose abstract claims to "solve the longstanding enigma with groundbreaking clarity" (a sentence never before uttered by a human being in flesh and blood) and takes 4 (four) citations to justify that lambda calculus is Turing-complete, a fact well known to every undergraduate student.

I'm sorry if this gives reviewer #2 vibes, but this doesn't look right to me and I'm asking if someone with actual expertise in the field can clarify what's happening.

koolala 6 hours ago | parent [-]

The AI slop statement is harsh. The website looks nice.

papes_ 8 hours ago | parent | prev | next [-]

The author, "Daniel Augusto Rizzi Salvadori", and the GitHub user https://github.com/danaugrs align. Couldn't comment on the actual content, though.

forgotpwd16 7 hours ago | parent [-]

By affiliation they mean an institute/company funding the research/work. It's quite rare, if it ever happens, for someone to find an innovative algorithm, let alone write a technical paper, as a hobbyist.

psychoslave 4 hours ago | parent [-]

What a strange expectation. Obviously humans didn't wait for these institutions to create ideas and notations. The fact that these institutions now exist doesn't make individuals with their own desire to express novel ideas disappear. All the more so when these institutions, with their elitist mindset, will actively reject whatever goes too far off the rails of their own social bubbles.

mrkeen 7 hours ago | parent | prev | next [-]

The interactive lambda-calculus interpreter looks like it does the right thing, not that I've tried to push it too hard.

Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin

I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)
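For anyone comparing, textbook β-reduction with capture-avoiding substitution can be sketched in a few lines. This is my own minimal sketch (the `Term` shape and function names are mine, not the repo's):

```typescript
// A minimal sketch of textbook β-reduction. Term shape and names are
// illustrative, not taken from the paper or its repository.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

let fresh = 0; // counter used to generate fresh names for α-renaming

// Free variables of a term.
function fv(t: Term): Set<string> {
  switch (t.tag) {
    case "var": return new Set([t.name]);
    case "lam": { const s = fv(t.body); s.delete(t.param); return s; }
    case "app": return new Set([...fv(t.fn), ...fv(t.arg)]);
  }
}

// Capture-avoiding substitution t[x := s], α-renaming when needed.
function subst(t: Term, x: string, s: Term): Term {
  switch (t.tag) {
    case "var": return t.name === x ? s : t;
    case "app": return { tag: "app", fn: subst(t.fn, x, s), arg: subst(t.arg, x, s) };
    case "lam": {
      if (t.param === x) return t; // x is shadowed; stop here
      if (fv(s).has(t.param)) {    // α-convert to avoid capturing a free var of s
        const y = `${t.param}'${fresh++}`;
        const body = subst(t.body, t.param, { tag: "var", name: y });
        return { tag: "lam", param: y, body: subst(body, x, s) };
      }
      return { tag: "lam", param: t.param, body: subst(t.body, x, s) };
    }
  }
}

// One β-step at the root: (λx.M) N → M[x := N]; null if no redex there.
function betaStep(t: Term): Term | null {
  return t.tag === "app" && t.fn.tag === "lam"
    ? subst(t.fn.body, t.fn.param, t.arg)
    : null;
}
```

So `betaStep` applied to `(λx.x) y` yields `y`. The whole debate above is essentially about doing this (and sharing work across redexes) optimally on a graph instead of a tree.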

But if I were to go sniffing for red flags:

From the first commit:

lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.

What?

util.ts: Why is this full of Gaussian elimination of matrices? The paper doesn't mention it.

koolala 7 hours ago | parent [-]

Weak vs. "strong" lambda calculus maybe? Typed vs untyped?

anonnon 5 hours ago | parent | prev [-]

> it's very weird they're calling this "lambda-reduction"

That was my reaction as well, only ever having heard of β-reduction, α-conversion (renaming bound variables to prevent collisions), and η-reduction (the equivalence of an abstraction λx. M x with the term M itself, provided x does not occur free in M). Sloppy use of nomenclature is absolutely a red flag.
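For reference, the three standard rules mentioned above, written out (my paraphrase in conventional notation, not the paper's):

```latex
\begin{align*}
\beta:\quad  & (\lambda x.\, M)\, N \;\to\; M[x := N] \\
\eta:\quad   & \lambda x.\, (M\, x) \;\to\; M
               \qquad (x \notin \mathrm{FV}(M)) \\
\alpha:\quad & \lambda x.\, M \;\to\; \lambda y.\, M[x := y]
               \qquad (y \notin \mathrm{FV}(M))
\end{align*}
```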

marvinborner 5 hours ago | parent | next [-]

The annihilating interaction between abstraction and application nodes is well-known in the area of interaction net research to ~correspond to β-reduction, as is also explained in the associated research paper [1].

α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].

[1] https://arxiv.org/pdf/2505.20314

[2] https://www.sciencedirect.com/science/article/pii/S030439750...
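For readers unfamiliar with the annihilation rule being referenced: when a λ-node and an @-node meet on their principal ports, both nodes are erased and their auxiliary ports are wired together, performing one β-reduction purely by rewiring. A toy sketch, using my own simplified port layout rather than the paper's Δ-Nets:

```typescript
// Toy interaction-net annihilation for λ/@ pairs. The port layout is my
// own simplification: slot 0 is the principal port; for a λ-node slots
// 1/2 are variable/body, for an @-node slots 1/2 are argument/result.
type Port = { node: INode; slot: number };
interface INode { kind: "lam" | "app" | "wire"; ports: (Port | null)[] }

// Connect two ports symmetrically.
function link(a: Port, b: Port): void {
  a.node.ports[a.slot] = b;
  b.node.ports[b.slot] = a;
}

// λ/@ annihilation: both nodes vanish; the λ's body is wired to the
// @'s result, and the λ's variable to the @'s argument. This rewiring
// is the graph-level counterpart of one β-reduction step, with no
// α-conversion needed because variables are just wires.
function annihilate(lam: INode, app: INode): void {
  link(lam.ports[2]!, app.ports[2]!); // body ↔ result
  link(lam.ports[1]!, app.ports[1]!); // variable ↔ argument
}
```

The other rule pair (sharing/duplication nodes) is where all the difficulty of optimal reduction lives; this sketch only covers the easy annihilation case.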

qsort 5 hours ago | parent | prev [-]

Yes.

To be transparent: I don't understand this stuff all that well and it's entirely possible I'm missing something, but everything here is weird AF.

- Who is the author? Why does he have no affiliation?

- What is the main result of the paper? How does it improve on the state of the art? Even for stuff that's way beyond my pay grade, I can usually tell from the abstract. I'm completely baffled here.

- Why do they introduce graphical notation without corresponding formal definitions?

- Why is it written in this weird style where theorems are left implicit? Usually, there's at least a sketch of proof.

- Why does it not address that the thing they're claiming to do isn't elementary recursive as per https://doi.org/10.1006/inco.2001.2869?

Again, it's entirely possible that it's a skill issue on my part and I'd love to be corrected, but I'm completely baffled and I still have absolutely no idea of what I'm looking at. Am I the stupid one and it's obvious to everyone else?

etiamz 4 hours ago | parent [-]

Note that, in the interaction net literature, it is pretty common to introduce graphical notation without corresponding textual counterparts. See the works of Lafont, Lamping, Asperti, Guerrini, and many others [1]. (However, the textual calculus can be absolutely crucial for formal proof.)

The absence of proofs and benchmarks undermines the paper for me as well. I find it absolutely critical to demonstrate how the approach works in comparison with already established software, such as BOHM [2].

Parallel beta reduction not being elementarily recursive is not a thing to address, nor a thing one can address. Lamping's abstract algorithm already performs the minimal amount of work to reduce a term -- one cannot improve it further.

From my understanding, the paper aims to optimize the bookkeeping overhead present in earlier implementations of optimal reduction. However, as I said, the absence of formal/empirical evidence of correctness and extensive benchmarks makes the contributions of the paper debatable.

[1] https://github.com/etiamz/interaction-net-resources

[2] https://github.com/asperti/BOHM1.1