throwaway17_17 | 5 hours ago
I would love for this to turn out to be some internal constraint where the LLM cannot ‘reason’ about LEM and always falls back on an understanding rooted in constructive logic. But I am more inclined to accept that LLMs aren’t actually ‘reasoning’ about anything, and that this is an inherent flaw in how we talk about the algorithms as though they were actually thinking ‘minds’ rather than very fancy syntax-completion machines.
AnimalMuppet | 2 hours ago | parent
The problem is that both constructive logic and "normal" (classical) logic are in the training data. You might be able to say "using constructive logic, prove X", but even that depends on none of the non-constructive training data "leaking" into the part of the model used to answer such a query. I don't think LLMs have hard partitions like that, so you may not get a purely constructive proof even when that's what you asked for. Worse, the non-constructive step may not be obvious.
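As an illustration of how easily a classical step can hide in plain sight, here is a minimal Lean 4 sketch (theorem names are my own, chosen for the example). The two directions of double negation look almost symmetric, but only one is constructively provable; the other is equivalent to LEM and silently pulls in classical axioms, which `#print axioms` exposes:

```lean
-- Constructively fine: introducing a double negation uses no axioms.
theorem dn_intro (p : Prop) : p → ¬¬p :=
  fun hp hnp => hnp hp

-- Looks symmetric, but eliminating the double negation is equivalent
-- to the law of excluded middle and quietly depends on classical axioms.
theorem dn_elim (p : Prop) : ¬¬p → p :=
  fun hnn => Classical.byContradiction hnn

-- Axiom tracking makes the "leak" visible:
#print axioms dn_intro  -- 'dn_intro' does not depend on any axioms
#print axioms dn_elim   -- 'dn_elim' depends on axioms: Classical.choice, propext, Quot.sound
```

A proof assistant surfaces that dependency automatically; an informal prose proof from an LLM comes with no such audit, so a hidden appeal to LEM is easy to miss.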