TuringTest | 5 hours ago
> If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article.

LLMs are not used as notation; you're right that they are not precise enough to serve as accurate knowledge. What LLMs do as a tool is address the Frame Problem: they give a reasoning system efficient access to the "common sense" knowledge relevant to a specific situation, retrieving it from a humongous background corpus of diverse knowledge. Classic AI based on logical inference was never able to achieve this kind of retrieval, hence the unfulfilled promises of the 2000s to build autonomous agents on ontologies. Those promises now seem approachable thanks to the huge statistical database of knowledge on all topics stored, in compressed form, in LLM weights.

A viable problem-solving system should combine the precision of symbolic reasoning with the breadth of generative models, creating checks and heuristics that guide autonomous agents to interact with the real world in ways that make sense given the relevant background cultural knowledge.
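
To make that last point concrete, here is a toy sketch of the kind of combination I mean: the generative model supplies broad candidate actions, and a symbolic layer filters them against hard constraints before anything reaches the agent. Every name in it (propose_candidates, satisfies_constraints, the stubbed candidate list) is a hypothetical placeholder, not any particular library's API.

    # Toy hybrid loop: generative breadth proposes, symbolic precision disposes.
    # The LLM call is stubbed out; in a real system it would query a model.

    def propose_candidates(goal: str) -> list[str]:
        """Stand-in for a generative-model call: plausible actions for a goal."""
        return ["book_flight", "book_train", "teleport"]

    def satisfies_constraints(action: str) -> bool:
        """Symbolic check: only actions present in the known ontology pass."""
        ontology = {"book_flight", "book_train", "book_bus"}
        return action in ontology

    def plan(goal: str) -> list[str]:
        # Breadth from the generative model, precision from the symbolic filter.
        return [a for a in propose_candidates(goal) if satisfies_constraints(a)]

    if __name__ == "__main__":
        print(plan("travel to Berlin"))  # -> ['book_flight', 'book_train']

The point of the toy example is only the division of labour: the statistical model is never trusted as the source of truth, it just narrows an enormous space of culturally plausible options down to a few that the symbolic side can afford to verify.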