sixdimensional | 7 days ago
I feel like the fundamental concept of symbolic logic[1] as a means of reasoning fits within the capabilities of LLMs. Whether it's a mirage or not, the ability to produce a symbolically logical result that carries valuable meaning seems real enough to me. And since most meaning is assigned by humans onto the world anyway, can't we likewise choose to assign meaning (or not) to the output of a chain of symbolic logic processing? Edit: maybe it is not so much that an LLM calculates or evaluates the result of symbolic logic as that it "follows" the pattern of logic encoded into the model.
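For concreteness, here is a minimal sketch of what "calculating/evaluating" symbolic logic actually means, as distinct from pattern-following. Everything here is illustrative (the function name and the fact strings are my own, not from any LLM internals): it mechanically chains modus ponens steps over explicit symbols, which is the kind of guaranteed derivation an LLM only approximates.

```python
# Minimal propositional-logic evaluator: forward-chaining modus ponens.
# All names are illustrative, not drawn from any real LLM tooling.

def modus_ponens(facts, rules):
    """Repeatedly apply rules of the form (premise, conclusion):
    whenever a premise is a known fact, add its conclusion as a fact.
    Stops when no rule produces anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "Socrates is a man; if a man, then mortal" as pure symbol-pushing:
derived = modus_ponens(
    facts={"man(socrates)"},
    rules=[("man(socrates)", "mortal(socrates)")],
)
print(derived)  # includes "mortal(socrates)"
```

The point of the contrast: this procedure derives `mortal(socrates)` by rule application, with a guarantee that follows from the algorithm; an LLM producing the same string is reproducing the statistical shape of such derivations, which is the "follows the pattern" distinction above.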