Syzygies 3 hours ago

We're in agreement. I understand how much harder it is to "think with AI"; the last year of my life has been a brutal struggle to figure this out.

I also agree that neural net LLMs are not the inevitable way to implement AI. I'm most intrigued by the theoretical underpinnings of mathematical proof assistants such as Lean 4. Computer scientists know the word problem for strings to be undecidable. The word problem for typed trees with an intrinsic notion of induction is harder still, but constructing proofs amounts to finding paths through this tree space. Just as mechanical computers floundered in base ten even though Boole had already developed base-two logic, I see these efforts merging. Neural nets struggle to simulate recursion; in proof assistants, recursion is baked in. Stare at these tree paths and one sees thought at the atomic level, begging to be incorporated into AI. For now the river runs the other way, using AI to find proofs. That river will reverse flow.
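
To make the "paths in tree space" picture concrete, here is a minimal Lean 4 sketch (the theorem name is mine, purely illustrative): each tactic step extends a partial proof, and the finished proof is a finite tree of constructors and recursor applications that the kernel checks.

    -- A proof by induction over Nat. The elaborated proof term is a
    -- tree assembled from Nat.rec and the constructors; the tactic
    -- script below is one path through the space of such trees.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]

    -- #print zero_add'  -- displays the underlying tree-shaped term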

wizzwizz4 3 hours ago

Lean 4 is not a theoretically interesting proof assistant. If you're interested in such things, look into Rocq (which uses CoIC, like Lean, but is more rigorous about it), the HOL logic, Isabelle/HOL's automation suite (though Isabelle proper is fairly mediocre, apart from being the thing everyone's standardised around), Lean-auto (https://arxiv.org/abs/2505.14929), and whatever SAT solvers are state-of-the-art this week. Like the tools for symbolic integration and frequentist statistics, there isn't any magic: the power comes from handling enough uninteresting special cases to get broad coverage. (Personally, I think a lot of power is still being left on the table by using overly general algorithms: sledgehammer is used to crack a lot of nuts, even when that takes quadratic time or longer.)
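
To illustrate the "no magic, just broad coverage of uninteresting special cases" point, a hedged Lean 4 sketch (Lean only because it came up above; assuming a recent toolchain where the omega tactic is available): each goal is closed by a different special-purpose procedure, not by anything resembling insight.

    -- Evaluation of a decidable proposition, rewriting with the default
    -- simp set, and a linear-arithmetic decision procedure, respectively.
    example : 2 + 2 = 4 := by decide
    example (n : Nat) : n + 0 = n := by simp
    example (a b : Nat) : a + b = b + a := by omega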

While CoIC has recursion "baked in", HOL does not. It turns out that structural recursion can be treated as a derived property, even over coinductively defined types. We don't even need a notion of ordinals for this! (See https://www.tcs.ifi.lmu.de/staff/jasmin-blanchette/card.pdf and https://matryoshka-project.github.io/pubs/bindings.pdf.)
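
For contrast, a small Lean 4 sketch of what "baked in" means on the CoIC side (illustrative only; the HOL construction in the linked papers is the derived alternative): Nat.rec is a kernel primitive generated when Nat is declared, and definitions can be written directly against it.

    -- Addition written directly against the primitive recursor, with no
    -- `match` or equation compiler. Marked noncomputable because Lean's
    -- code generator does not compile bare recursor applications; the
    -- kernel still reduces it.
    noncomputable def add (m : Nat) : Nat → Nat :=
      Nat.rec (motive := fun _ => Nat) m (fun _ ih => ih + 1)

    example : add 3 4 = 7 := rfl  -- checked by kernel reduction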

Lean 2 used HoTT, which was theoretically interesting, but not enough was known about HoTT at the time (in particular, whether it was a constructive logic; I think we now have all the pieces for an explicit construction via cubical type theory, but I don't know that anyone has put them together), so that direction has been mostly abandoned. I think there's useful work to be done in that direction, but with the current state of HoTT pedagogy, I doubt I'd ever be able to keep on top of it enough to contribute; and with Lean 4 taking so much of the funding, I don't think we'll see much work in this direction until HoTT is easier to learn.

I still think you're overgeneralising. What actual thing does your poetic tree / thought / river analogy correspond to?