> Symbols, by definition, only represent a thing. This is missing the lesson of the Yoneda Lemma: symbols are uniquely identified by their relationships with other symbols. If those relationships are represented in text, then in principle they can be inferred and navigated by an LLM. Some relationships are not represented well in text: tacit knowledge like how hard to twist a bottle cap to get it to come off, etc. We aren't capturing those relationships between all your individual muscles and your brain well in language, so an LLM will miss them or have very approximate versions of them, but... that's always been the problem with tacit knowledge: it's the exact kind of knowledge that's hard to communicate!
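The relational view of symbols in that quote is, in miniature, the idea behind distributional semantics. Here is a purely illustrative toy sketch (the corpus, function names, and counts are invented for this example, not anything from the thread): a word's "meaning" is approximated by its co-occurrence profile, so two symbols are similar, or distinguishable, only through their relations to other symbols.

```python
from collections import Counter

# Toy corpus: a symbol's "meaning" is approximated by which words it
# co-occurs with -- a crude distributional stand-in for the Yoneda-style
# idea that an object is determined by its relationships.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]

def profile(word):
    """Bag of words co-occurring with `word` across the corpus."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def similarity(a, b):
    """Overlap of two co-occurrence profiles: shared counts / total counts."""
    pa, pb = profile(a), profile(b)
    shared = sum((pa & pb).values())  # Counter & = element-wise min
    total = sum((pa | pb).values())   # Counter | = element-wise max
    return shared / total if total else 0.0

# "cat" and "dog" occupy similar relational positions (both chase and
# eat things), so their profiles overlap more than "cat" vs "fish".
print(similarity("cat", "dog"))
print(similarity("cat", "fish"))
```

In this tiny corpus the symbols are conflated or distinguished purely by their relationships, which is the sense in which relational structure expressed in text is, in principle, available to an LLM.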
drdeca 8 days ago: When I have a physical experience, sometimes it results in me saying a word. Now, maybe there are other possible experiences that would result in me behaving identically, such that from my behavior (including what words I say) it is impossible to distinguish between different potential experiences I could have had. But “caused me to say” is a relation, is it not? Unless you want to say that it wasn’t the experience that caused me to do something, but some physical thing that went along with the experience, either causing or co-occurring with the experience, and also causing me to say the word I said. But that would still be a relation, I think.
nomel 8 days ago: Yes, but it's a unidirectional relation: the word was the result of the experience. The word cannot represent the context (the experience) in any meaningful way. It's like trying to describe a color to a blind person: poetic, subjective nonsense.
drdeca 8 days ago: I don’t know what you mean by “unidirectional relation”. I get that you gave an explanation after the colon, but I still don’t quite get what you mean. Do you just mean that the words I use don’t pick out a unique possible experience? That’s true of course, but I don’t know why you call that “unidirectional”. I don’t think describing colors to a blind person is nonsense. One can speak of how the different colors relate to one another. A blind person can understand that a stop sign is typically “red”, and that something can be “borderline between red and orange”, but that things will not be “borderline between green and purple”. A person who has never had any color perception won’t know the experience of seeing something red or blue, but they can still have a mental model of the world that includes facts about the colors of things, and what effects these are likely to have, even though they themselves cannot imagine what it is like to see the colors.
akomtu 7 days ago: IMO, the GP's idea is that you can't explain sounds to a deaf man, or emotions to someone who doesn't feel them. All of that needs direct experience; words only point to our shared experience.
drdeca 6 days ago: Ok, but you can explain properties of sounds to deaf men, and properties of colors to blind men. You can’t give them a full understanding of what it is like to experience these things, but that doesn’t preclude deaf or blind men from having mental models of the world that take into account those senses. A blind man can still reason about what things a sighted person would be able to conclude based on what they see; likewise a deaf man can reason about what a person who can hear could conclude based on what they could hear.
|
|
|
semiquaver 8 days ago: Well shit, I better stop reading books then.
nomel 8 days ago: I think you've missed the concept here. You exist in the full experience. That lossy projection to words is still meaningful to you, in your reading, because you know the experience it's referencing.

What do I mean by "lossy projection"? It's the mapping from the experience of seeing the color blue to the word "blue". The word "blue" is meaningless without already having had the experience, because the word is not a description of the experience, it's a label. The experience itself can't be sufficiently described, as you'll find if you try to explain "blue" to a blind person, because it exists outside of words.

The concept here is that something like an LLM, trained on human text, can't have meaningful comprehension of some concepts, because some words are labels for things that exist entirely outside of text.

You might say "but multimodal models use tokens for color!", or even extend that to "you could replace the tokens used in multimodal models with color names!", and I would agree. But the understanding wouldn't come from the relation of words in human text; it would come from the positional relation of colors across a space, which is not much different from our experience of the color on our retina.

tl;dr: to get an AI to meaningfully understand something, you have to give it a meaningful relation. Meaningful relations sometimes aren't present in human writing.
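The "positional relation of colors across a space" can be made concrete. This is a hypothetical illustration (the RGB values and helper names are made up for the example), using Python's standard `colorsys` module: distances on the hue wheel capture relational facts like "red is near orange" and "green is far from purple" without any reference to the experience of seeing either color.

```python
import colorsys

# Approximate sRGB coordinates (0.0-1.0) for a few named colors.
# These specific values are illustrative, not canonical.
COLORS = {
    "red":    (1.0, 0.0, 0.0),
    "orange": (1.0, 0.65, 0.0),
    "green":  (0.0, 0.5, 0.0),
    "purple": (0.5, 0.0, 0.5),
}

def hue_degrees(rgb):
    """Hue angle in degrees, taken from the HSV representation."""
    h, _, _ = colorsys.rgb_to_hsv(*rgb)
    return h * 360.0

def hue_distance(a, b):
    """Angular distance between two named hues on the color wheel."""
    d = abs(hue_degrees(COLORS[a]) - hue_degrees(COLORS[b])) % 360.0
    return min(d, 360.0 - d)

print(hue_distance("red", "orange"))   # small: hue-neighbors, plausibly "borderline"
print(hue_distance("green", "purple")) # large: never "borderline"
```

These are exactly the relational facts the blind observer in the thread above (or a model trained only on such coordinates) could represent, even with no access to the experience the coordinates label.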
|
|