hodgehog11 | 6 days ago
> Humans build theories of how things work. LLMs don't. Theories are deterministic symbolic representations of the chaotic worlds of meaning.

What made you believe this is true? Like it or not, yes, they do (at least to the best extent of our definitions of what you've said). There is a large body of literature exploring this question, and the general consensus is that all performant deep learning models adopt an internal representation that can be extracted as a symbolic representation.
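To make "extracted as a symbolic representation" more concrete, one common technique in that literature is linear probing: train a simple classifier on a model's hidden activations and check whether some discrete property is decodable from them. Below is a minimal, hedged Python sketch of the idea; the hidden states, dimensions, and the planted binary label are all invented for illustration, not taken from any particular model or paper.

```python
# Toy "linear probe" sketch: can a simple classifier recover a symbolic
# (binary) property from a model's hidden activations? All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden states from one layer of a trained model:
# 1000 inputs, each represented by a 256-dimensional activation vector.
hidden_states = rng.normal(size=(1000, 256))

# Stand-in for a symbolic label per input (e.g. "is the statement negated?").
# We plant a weak linear signal so the probe has something to find.
direction = rng.normal(size=256)
labels = (hidden_states @ direction + rng.normal(scale=5.0, size=1000)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
# Accuracy well above chance is typically read as evidence that the
# representation encodes the property in a linearly extractable way.
```

This is only a sketch of the probing methodology, not a claim about any specific LLM; real studies use actual model activations and control for the probe learning the task on its own.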
bwfan123 | 6 days ago
> What made you believe this is true?

I have yet to see a theory coming out of an LLM that is sufficiently interesting.

My comment was answering your question of what it means to "understand something". My answer is: understanding something is knowing the theory of it. That raises the question of what a theory is. To answer that: a theory comprises building-block symbols and a set of rules for combining them. For example, the building blocks for space (and geometry) could be points, lines, etc. The key point in all of this is symbolism as abstraction to represent things in some world.
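As a toy illustration of "symbols plus combination rules", here is a short Python sketch of the geometry example; the names Point, Line, and two_points_determine_a_line are invented for this comment, not any standard library.

```python
# Toy "theory": building-block symbols (points) and one combination rule
# (two distinct points determine a line). Purely illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Point:
    name: str


@dataclass(frozen=True)
class Line:
    a: Point
    b: Point


def two_points_determine_a_line(p: Point, q: Point) -> Line:
    """Combination rule: any two distinct Point symbols yield a Line."""
    if p == q:
        raise ValueError("need two distinct points")
    return Line(p, q)


# Building-block symbols...
A, B = Point("A"), Point("B")
# ...combined by a rule into a new symbolic object.
print(two_points_determine_a_line(A, B))
```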