behnamoh 2 days ago

So, we've come full circle to symbolic AI! This article essentially suggests that LLMs could be effective translators of our requests into command-line code or input for symbolic AI software, which would then yield precise solutions. However, this approach feels overly mechanical, and I don't believe AGI will be achieved by running thousands, if not millions, of MCP servers on our machines. Especially since MCP scales poorly: anyone who has had to send more than three or four function schemas to a language model knows that piling on JSON schemas confuses the model and degrades its performance.
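
For anyone who hasn't hit this: by "function schemas" I mean the JSON Schema tool definitions that ride along with every request, roughly in this shape (OpenAI-style tools format; the two tools themselves are made up for illustration):

    # Illustrative only: two OpenAI-style tool (function) schemas.
    # Every entry below is resent with every request; each additional
    # one eats context and adds one more way to confuse the model.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "solve_equation",
                "description": "Solve a polynomial equation for one variable.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {"type": "string"},
                        "variable": {"type": "string"},
                    },
                    "required": ["expression", "variable"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "run_query",
                "description": "Run a read-only SQL query.",
                "parameters": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            },
        },
    ]

Now imagine dozens of these per request, one set per MCP server.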

bwfan123 13 hours ago | parent | next

We've come full circle, back to precise, "narrow interfaces".

Long story short: it's great when humans interact with LLMs on imprecise queries, because we can ascribe meaning to the LLM's output. But for precise queries, the human or the LLM needs to speak a narrow interface to another machine.
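
A minimal sketch of what such a narrow interface could look like, with SymPy standing in as the symbolic engine (the SolveRequest/dispatch names are hypothetical, not from the article):

    # Hypothetical narrow interface: the LLM may only emit a small,
    # typed request; anything else is rejected before execution.
    from dataclasses import dataclass
    import sympy as sp

    @dataclass(frozen=True)
    class SolveRequest:
        expression: str   # e.g. "x**2 - 4"
        variable: str     # e.g. "x"

    def dispatch(req: SolveRequest) -> list:
        # The symbolic engine, not the LLM, produces the precise answer.
        var = sp.Symbol(req.variable)
        expr = sp.sympify(req.expression)
        return sp.solve(expr, var)

    # An imprecise human query ("what are the roots of x squared minus
    # four?") gets translated by the LLM into the narrow, checkable form:
    print(dispatch(SolveRequest("x**2 - 4", "x")))  # [-2, 2]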

Precision requires formalism, since what we mean by "precise" involves symbolism and operational definitions. The genius of the human brain, not yet captured in LLMs, lies in the insight into what it means to precisely model a world via symbolism; i.e., the brain is where symbolism originates. As an example, humans operationally and precisely model the shared experience of "space" using the symbolism and theory of Euclidean geometry.
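
To make that example concrete: the Euclidean model reduces "distance in space" to an operational, symbol-level definition that any human or machine can compute and check,

    d(p, q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + (p_3 - q_3)^2}

Two observers who disagree can settle the matter by calculation; that is the kind of precision a narrow interface is meant to carry.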

pona-a a day ago | parent | prev | next

I'm reminded of what happened in the later years of Cyc. They found their logical framework didn't address certain common problems, so they kept adding specialized hard-coded solutions in Lisp. Bolting ever more special-case tooling onto LLMs has the same flavor: LLMs are headed for an AI autumn.

godelski a day ago | parent

I think the problem here is that we keep making promises we can't keep. It causes us to put too many eggs in one basket, which, ironically, often prevents us from filling in those gaps. We'd make much more progress without the railroading.

There's only so much money, but come on: we're dumping trillions into highly saturated research directions where several already well-funded organizations have years' worth of a head start. You can't tell me there's enough money to throw at another dozen OpenAI competitors and another dozen Copilot competitors, but not enough for a handful of alternative paradigms that already show promise yet will struggle to grow without funding. These are not only much cheaper investments but much less risky than betting on a scrappy startup beating the top dog at its own game.

ogogmad 17 hours ago | parent | prev

The article also suggests that you could use a proof verifier like Lean instead. Using that capability to generate synthetic data to train on helps too. Very large context windows are known to help with programming, and should help with mathematical reasoning as well. None of this gives you AGI, I suppose, but the important thing is that it makes LLMs more reliable at mathematics.
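
For concreteness, the unit of currency here is a machine-checkable proof: a statement plus a proof term that Lean either accepts or rejects outright. A trivial Lean 4 example, using only the core library:

    -- A precise claim plus its proof term. The verifier either
    -- accepts this or rejects it; there is no "plausible but
    -- wrong" middle ground for an LLM to exploit.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

That binary accept/reject signal is exactly what makes verifier output usable as synthetic training data.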

Anyone have a link to an article exploring Lean plus MCP? EDIT: Here's a recent arXiv paper: https://arxiv.org/abs/2404.12534v2; the keyword is "neural theorem proving".

I've just remembered: AlphaEvolve showed that LLMs can design their own "learning curricula" to train themselves to do better at reasoning tasks. As I recall, this involves the AI suggesting problems with the right amount of difficulty to be useful to train on.
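
A toy sketch of that loop, under heavy assumptions (the "model" is a deliberately flaky adder, the difficulty knob is digit count; none of this is AlphaEvolve's actual machinery):

    import random

    def propose_problem(difficulty: int) -> tuple[int, int]:
        # Problem size tracks the difficulty knob (digit count).
        a = random.randint(1, 10 ** difficulty)
        b = random.randint(1, 10 ** difficulty)
        return a, b

    def attempt(a: int, b: int, skill: int) -> int:
        # Stand-in "model": reliable on small sums, noisy past its skill.
        answer = a + b
        if len(str(answer)) > skill:
            answer += random.choice([-1, 0, 1])
        return answer

    difficulty, skill = 1, 3
    for _ in range(20):
        a, b = propose_problem(difficulty)
        solved = attempt(a, b, skill) == a + b  # ground-truth verifier
        # Push harder after a success, back off after a failure, so the
        # curriculum hovers near the frontier of the model's ability.
        difficulty = min(difficulty + 1, 6) if solved else max(difficulty - 1, 1)
    print("settled difficulty:", difficulty)

The point is just the shape: propose, attempt, verify, then keep the problems hovering where the training signal is richest.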

I'll ramble a tiny bit more: anybody who learns maths comes to understand that it helps to know the "guts" of how things work. It helps to see proofs, write proofs, do homework, challenge yourself with puzzles, etc. I wouldn't be surprised if the same were true for LLMs. As such, I think having the LLM call out to symbolic solvers could ultimately undermine its intelligence, but using Lean to ensure rigour probably helps.