| ▲ | hackinthebochs 3 hours ago |
| What are neurosymbolic systems supposed to bring to the table that LLMs can't in principle? A symbol is just a vehicle with a fixed semantics in some context. Embedding vectors of LLMs are just that. |
|
| ▲ | logicprog 2 hours ago | parent [-] |
| Pre-programmed, hard-and-fast rules for manipulating those symbols, which can automatically be chained together according to other preset rules. This makes it reliable and observable. Think Datalog. IMO, symbolic AI is way too brittle and case-by-case to drive useful AI on its own, but as a memory and reasoning system for more dynamic and flexible LLMs to call out to, it's a good idea. |
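To make that concrete, here's a toy forward-chaining sketch in Python. It's purely illustrative (the parent/grandparent facts, the single hard-coded rule, and the derive() helper are all made up, not from any real Datalog engine), but it shows the shape of the thing: facts go in, preset rules fire until nothing new can be derived, and every step is inspectable.

    # Datalog-style forward chaining over a tiny, made-up fact base.
    # Facts are tuples; the one rule below is hard-coded for illustration.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def derive(known):
        # Rule: parent(X, Y), parent(Y, Z) => grandparent(X, Z)
        new = set()
        for (p1, x, y1) in known:
            for (p2, y2, z) in known:
                if p1 == "parent" and p2 == "parent" and y1 == y2:
                    new.add(("grandparent", x, z))
        return new

    # Chain the rule to a fixed point: each pass is deterministic,
    # inspectable, and repeatable.
    while True:
        fresh = derive(facts) - facts
        if not fresh:
            break
        facts |= fresh

    print(facts)  # contains ("grandparent", "alice", "carol")

A real engine would parse rules and index facts, but the point is the same: the chaining procedure is fixed and auditable, so the LLM only has to decide which facts and queries to hand it.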
| ▲ | hackinthebochs 2 hours ago | parent [-] |
| Sure, reliability is a problem for the current state of LLMs. But I see no reason to think that's an in-principle limitation. |

| ▲ | logicprog 31 minutes ago | parent [-] |
| There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering that, while it may not be an in-principle limitation (in the sense that an autoregressive predictor given infinite data and compute would have to learn to simulate the universe to predict perfectly), in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut! |
|
|