bwfan123 | 6 days ago
Humans build theories of how things work; LLMs don't. Theories are deterministic symbolic representations of the chaotic world of meaning. Take the Turing machine as a theory of computation in general, Euclidean geometry as a theory of space, and Newtonian mechanics as a theory of motion. A theory gives 100% correct predictions within its own terms, even though the theory itself may not model the world accurately. That feedback between a theory and its application in the world drives iterations on the theory, from Newtonian mechanics to relativity and so on. Long story short, LLMs are a long way from any of this. And to be fair to them, the average human is not creating theories either; it takes a genius to create one (Newton, Turing, etc.). Understanding something == knowing the theory of it.
hodgehog11 | 6 days ago | parent
> Humans build theories of how things work; LLMs don't. Theories are deterministic symbolic representations of the chaotic world of meaning.

What made you believe this is true? Like it or not, yes, they do (at least to the extent that we can pin down the definitions of what you've said). There is a large body of literature exploring this question, and the general consensus is that all performant deep learning models adopt an internal representation that can be extracted as a symbolic representation.
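A minimal sketch of what "extracting" such a representation can look like in practice: train a linear probe on a frozen model's hidden states to recover a discrete, human-readable property of the input. The model choice, the toy feature, and the tiny dataset below are illustrative assumptions, not a specific result from that literature.

```python
# Linear-probe sketch (illustrative): does a frozen LM's hidden state
# linearly encode a simple symbolic property, e.g. "the sentence
# mentions a number"? Model and toy labels are assumptions.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2").eval()

sentences = ["There are 7 apples.", "The cat sat down.",
             "She bought 12 eggs.", "Rain fell all day."]
labels = [1, 0, 1, 0]  # 1 = sentence contains a number (toy symbolic feature)

feats = []
with torch.no_grad():
    for s in sentences:
        out = model(**tok(s, return_tensors="pt"), output_hidden_states=True)
        # Mean-pool the last hidden layer into one vector per sentence.
        feats.append(out.hidden_states[-1].mean(dim=1).squeeze(0).numpy())

# If a linear classifier separates the classes from hidden states alone,
# the feature is (at least linearly) represented inside the model.
probe = LogisticRegression(max_iter=1000).fit(np.array(feats), labels)
print("probe accuracy on training set:", probe.score(np.array(feats), labels))
```

Real probing studies of course use held-out data, controls, and far more examples; the sketch is only meant to show what "internal representation extracted as a symbolic one" concretely means.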