bwfan123 6 days ago

Humans build theories of how things work. LLMs don't. Theories are deterministic, symbolic representations of the chaotic world of meaning. Take, for example, the Turing machine as a theory of computation in general, Euclidean geometry as a theory of space, and Newtonian mechanics as a theory of motion.

A theory gives 100% correct predictions, although the theory itself may not model the world accurately. Feedback between the theory and its application in the world drives iterations of the theory: from Newtonian mechanics to relativity, and so on.

Long story short, the LLM is a long way away from any of this. And to be fair to LLMs, the average human is not creating theories either; it takes a genius to create them (Newton, Turing, etc.).

Understanding something == knowing the theory of it.

hodgehog11 6 days ago

> Humans build theories of how things work. LLMs don't. Theories are deterministic, symbolic representations of the chaotic world of meaning

What made you believe this is true? Like it or not, yes, they do (at least to the extent of the definitions you've given). There is a large body of literature exploring this question, and the general consensus is that all performant deep learning models adopt internal representations that can be extracted in symbolic form.

bwfan123 6 days ago

> What made you believe this is true?

I have yet to see a sufficiently interesting theory come out of an LLM. My comment was answering your question of what it means to "understand something". My answer: understanding something is knowing the theory of it.

Now, that raises the question of what a theory is. To answer that: a theory consists of building-block symbols and a set of rules for combining them. For example, the building blocks for space (and geometry) could be points, lines, etc. The key point in all of this is symbolism: abstractions that represent things in some world.
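
For concreteness, here is a toy sketch in Python of what I mean by symbols plus combination rules, using the geometry example (all of the names are my own, purely illustrative):

    # Toy sketch: a "theory" as building-block symbols (Point, Line) plus
    # deterministic rules for combining them, using plane geometry as the world.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    @dataclass(frozen=True)
    class Line:
        a: Point
        b: Point

    def line_through(p: Point, q: Point) -> Line:
        # Rule: two distinct points determine exactly one line.
        if p == q:
            raise ValueError("need two distinct points")
        return Line(p, q)

    def collinear(p: Point, q: Point, r: Point) -> bool:
        # Rule: three points are collinear iff the signed area of the
        # triangle they form is zero.
        area2 = (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x)
        return abs(area2) < 1e-9

    # The symbols and rules give deterministic answers for any input,
    # whether or not they model physical space accurately.
    print(collinear(Point(0, 0), Point(1, 1), Point(2, 2)))  # True

The point is not this particular encoding; it is that the predictions follow mechanically from a small set of symbols and rules.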

hodgehog11 6 days ago

The "sufficiently interesting" part is the most important qualifier here. My response was talking about theories and representations that we already know, either instinctively from near-birth, or from learned experience. We have not seen anything unique from LLMs because they do not appear to have an internal understanding (in the same sense that I was talking about) that is as broad as an adult human. But that doesn't mean it lacks any understanding.

> The key point in all of this is symbolism: abstractions that represent things in some world.

The difficulty is understanding how to extract this information from the model, since the output of the LLM is actually a very poor representation of its internal state.
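
To give a flavour of what "extracting" looks like in that literature, one common tool is a linear probe: train a simple classifier on hidden activations to test whether some symbolic property is linearly decodable from them. A minimal synthetic sketch, not tied to any real model (the data below is made up):

    # Minimal linear-probe sketch on synthetic "hidden states".
    # No real LLM is involved; the concept direction is planted by hand.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Pretend these are hidden states from a model (n examples, d dimensions)...
    hidden_states = rng.normal(size=(2000, 256))
    # ...and this is a symbolic property we care about, encoded along one direction.
    concept_direction = rng.normal(size=256)
    labels = (hidden_states @ concept_direction > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.25, random_state=0)

    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
    # High probe accuracy suggests the property is (roughly linearly) encoded in
    # the hidden states, even if the model's text output never states it directly.

With a real model you would replace the synthetic matrix with activations from a particular layer, which is exactly where the extraction difficulty shows up.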