ryandv 10 hours ago

> This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you.

What tool exactly are you referring to? If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article. There is a reason mathematics is no longer done with long-form prose and instead uses its own, more economical notation that is sufficiently precise as to even be evaluated and analyzed by computers.
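To illustrate (a toy example in Python with sympy; the particular identity is just for show): a fact that takes a sentence of prose is one line of notation, and a computer can check it.

    # Toy illustration: formal notation is precise enough for a machine to check.
    # "The square of a sum equals the sum of squares plus twice the product."
    from sympy import symbols, expand

    a, b = symbols('a b')
    print(expand((a + b)**2) == a**2 + 2*a*b + b**2)  # True: the identity holds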

Natural languages have a lot of ambiguity, and their grammars allow nonsense to be expressed in them ("colorless green ideas sleep furiously"). Moreover, two people can read the same word and attach two different senses or ideas to it ("si duo idem faciunt, non est idem" - if two do the same thing, it is not the same).

Practice expressing thoughts in a formal language is essential for actually patterning your thinking on the structures of logic. You would not say that someone who is completely ignorant of Nihongo understands Japanese culture, custom, and manner of expression; similarly, you cannot say that someone ignorant of the language of syllogism and modus tollens actually knows how to reason logically.
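For example, modus tollens itself can be stated and verified mechanically; a small sketch, again assuming Python with sympy:

    # Modus tollens: from (P implies Q) and not-Q, infer not-P.
    # An argument form is valid iff its premises together with the
    # negated conclusion are unsatisfiable.
    from sympy import symbols
    from sympy.logic.boolalg import Implies, And, Not
    from sympy.logic.inference import satisfiable

    P, Q = symbols('P Q')
    premises = And(Implies(P, Q), Not(Q))
    conclusion = Not(P)
    print(satisfiable(And(premises, Not(conclusion))))  # False, so the form is valid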

You can, of course, get a translator - and that is what some people perhaps think the LLM can do for you, both with Nihongo, and with programming languages or formal mathematics.

Otherwise, if you already know how to express what you want with sufficient precision, you're going to express your ideas in the symbolic, formal language itself; you're not going to throw in nondeterminism at the end by leaving the output up to the caprice of some statistical model, or allow something to get "lost in translation."

PaulRobinson 9 hours ago

You need to see the comment I was replying to in order to understand the context of my point.

LLMs are part of what I was thinking of, but not the totality.

We're pretty close to Generative AI - and by that I don't just mean LLMs, but the entire space - being able to use formal notations and abstractions more usefully and correctly, and therefore to improve reasoning.

The comment I was replying to complained that this shifts value away from fundamentals, and called that a tragedy. My point is that this is just human progress. It's what we do. You buy a microwave, you don't build one yourself. You use a calculator app on your phone; you don't work out the fundamentals of multiplication and division from first principles when you're splitting the bill at dinner.

I agree with your general take on all of this, but I'd add that AI will get to the point where it can express "thoughts" in formal language, and then provide appropriate tools to get the job done, and that's fine.

I might not understand Japanese culture without knowledge of Nihongo, but if I'm trying to get across Tokyo in rush hour traffic and don't know how to, do I need to understand Japanese culture, or do I need a tool to help me get my objective done?

If I care deeply about understanding Japanese culture, I will want to dive deep. And I should. But for many people, that's not their thing, and we can't all dive deep on everything, so having tools that do it for us better than existing tools is useful. That's my point: abstractions and tools let people get stuff done, which ultimately leads to better tools and better abstractions, and so on. Complaining that people don't have a first-principles grasp of everything isn't useful.

TuringTest 5 hours ago

> If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article.

LLMs are not used for notation; you are right that they're not precise enough to represent knowledge accurately.

What LLMs do as a tool is address the Frame Problem: they give a reasoning system efficient access to the "common sense" knowledge needed in a specific situation, retrieving it from a humongous corpus of diverse background knowledge.

Classic AI based on logical inference was never able to achieve this kind of retrieval; hence the unfulfilled promises of the 2000s to build autonomous agents on ontologies. Those promises seem approachable now thanks to the huge statistical databases of knowledge on every topic compressed into LLM models.

A viable problem-solving system should combine the precision of symbolic reasoning with the breadth of generative models, creating checks and heuristics that guide autonomous agents to interact with the real world in ways that make sense given the relevant background cultural knowledge. A toy sketch of that combination follows the link below.

https://en.wikipedia.org/wiki/Frame_problem
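To make that concrete, here is a minimal, purely illustrative sketch in Python: propose_candidates is a hypothetical stand-in for a generative model and the rule base is a toy; no real system's API is implied.

    # A minimal sketch of the hybrid idea, not any specific system:
    # a generative model proposes candidates broadly (breadth), and a
    # symbolic checker filters them against hard constraints (precision).

    def propose_candidates(situation: str) -> list[str]:
        # Hypothetical stand-in for an LLM's broad but fallible retrieval.
        return ["water boils at 100 C at sea level",
                "water boils at 50 C at sea level",  # plausible-sounding noise
                "boiling water can cook pasta"]

    KNOWN_FALSE = {"water boils at 50 C at sea level"}  # toy symbolic rule base

    def symbolically_consistent(fact: str) -> bool:
        # Stand-in for real inference (ontology checks, a theorem prover, ...).
        return fact not in KNOWN_FALSE

    def grounded_answer(situation: str) -> list[str]:
        # Breadth from the generative side, precision from the symbolic side.
        return [f for f in propose_candidates(situation)
                if symbolically_consistent(f)]

    print(grounded_answer("cooking dinner"))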