ogogmad a month ago

The article also suggests that you could use a proof verifier like Lean instead. Using a verifier to generate synthetic training data helps too (a toy Lean example is appended at the end of this comment). Very large context windows are known to help with programming, and should help with mathematical reasoning as well. None of this gives you AGI, I suppose, but the important thing is that it makes LLMs more reliable at mathematics.

Anyone have a link to an article exploring Lean plus MCP?

EDIT: Here's a recent arXiv paper: https://arxiv.org/abs/2404.12534v2. The keyword is "neural theorem proving".

I've just remembered: AlphaEvolve showed that LLMs can design their own "learning curricula" to train themselves to do better at reasoning tasks. As I recall, this involves the AI suggesting problems that have the right amount of difficulty to be useful to train on (a rough sketch of such a loop is also appended below).

I'll ramble a tiny bit more: anybody who learns maths comes to understand that it helps to grasp the "guts" of how things work. It helps to see proofs, write proofs, do homework, challenge yourself with puzzles, etc. I wouldn't be surprised if the same were true for LLMs. As such, I think having the LLM call out to symbolic solvers could ultimately undermine its intelligence, but using Lean to ensure rigour probably helps.
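To make the Lean point concrete, here's a toy Lean 4 sketch of why a verifier makes such a good filter for synthetic data: the kernel either accepts a proof term or rejects it, so only checked proofs survive into the training set. The theorem below is just an illustration of mine, not something from the paper.

    -- Toy example: the kind of statement an LLM might emit as a
    -- candidate for synthetic training data. Lean's kernel checks the
    -- proof term; only proofs that typecheck are kept.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- By contrast, a bogus proof fails to elaborate and gets filtered out:
    -- theorem bad (a : Nat) : a + 1 = a := rfl   -- rejected by the kernel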
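And here's a rough Python sketch of the curriculum idea as I understand it. The function names and the difficulty model are hypothetical placeholders of mine, not AlphaEvolve's actual interface: the model proposes a problem, the solver samples a few attempts, and the problem is kept for training only if its empirical success rate sits near a target, i.e. it is neither trivial nor hopeless.

    import random

    # Hypothetical stand-ins for the model's two roles (proposer and solver).
    # These are illustrative assumptions, not AlphaEvolve's real API.

    def propose_problem(requested_difficulty: float) -> float:
        """Return a problem, represented here only by its latent difficulty."""
        # The proposer is imperfect: add noise around the requested level.
        return requested_difficulty + random.gauss(0.0, 0.1)

    def attempt(problem_difficulty: float) -> bool:
        """Simulate one solve attempt; harder problems succeed less often."""
        return random.random() > problem_difficulty

    TARGET_SUCCESS = 0.5   # "right amount of difficulty": solved about half the time
    SAMPLES = 16           # attempts used to estimate a problem's success rate
    TOLERANCE = 0.2        # how close to the target a kept problem must be

    def curriculum_step(requested: float) -> tuple[bool, float]:
        """Propose one problem, estimate its success rate, decide whether
        to keep it for training, and nudge the requested difficulty."""
        problem = propose_problem(requested)
        rate = sum(attempt(problem) for _ in range(SAMPLES)) / SAMPLES
        keep = abs(rate - TARGET_SUCCESS) <= TOLERANCE
        # If problems are too easy (high rate), ask for harder ones, and vice versa.
        requested += 0.05 * (rate - TARGET_SUCCESS)
        return keep, requested

    if __name__ == "__main__":
        difficulty, kept = 0.1, 0
        for _ in range(200):
            ok, difficulty = curriculum_step(difficulty)
            kept += ok
        print(f"kept {kept} problems; settled near difficulty {difficulty:.2f}")

In the real setting, attempt would be the model actually trying the problem, with something like Lean deciding whether the attempt counts as a success, which is where the two halves of this comment meet.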
The article also suggests that you could use a proof-verifier like Lean instead. Using that capability to generate synthetic data on which to train helps too. Very large context windows have been known to help with programming, and should help with mathematical reasoning too. None of this gives you AGI, I suppose, but the important thing is it makes LLMs more reliable at mathematics. Anyone have a link to an article exploring Lean plus MCP? EDIT: Here's a recent Arxiv paper: https://arxiv.org/abs/2404.12534v2, the keyword is "neural theorem proving" I've just remembered: AlphaEvolve showed that LLMs can design their own "learning curricula", to help train themselves to do better at reasoning tasks. I recall these involve the AI suggesting problems that have the right amount of difficulty to be useful to train on. I'll ramble a tiny bit more: Anybody who learns maths comes to understand that it helps to understand the "guts" of how things work. It helps to see proofs, write proofs, do homework, challenge yourself with puzzles, etc. I wouldn't be surprised if the same thing were true for LLMs. As such, I think having the LLM call out to symbolic solvers could ultimately undermine their intelligence - but using Lean to ensure rigour probably helps. |