Chance-Device · 2 hours ago
> They aren't steganographically hiding useful computation state in words like "the" and "and". Do you know that is true? These aren't just tokens; they're tokens with specific position encodings, preceded by specific context. The position as a whole is a lot richer than you make it out to be. I think this is probably an unanswered empirical question, unless you've read otherwise.
dTal · 2 hours ago | parent
I am quite certain. The output is "just tokens"; the "position encodings" and "context" are inputs to the LLM function, not outputs. The information that a token can carry is bounded by the entropy of that token. A highly predictable token (given the context) simply can't communicate anything. Again: if a tiny language model or even a basic Markov model would also predict the same token, it's a safe bet it doesn't encode any useful thinking when the big model spits it out.
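To make the entropy bound concrete, here's a minimal sketch (the probabilities are hypothetical, not measured from any model) of how few bits a near-certain token can carry:

    import math

    def surprisal_bits(p: float) -> float:
        # Self-information of an outcome with probability p, in bits
        return -math.log2(p)

    # Hypothetical next-token probabilities given the context
    predictable = 0.99   # e.g. "the" where almost any model would also say "the"
    surprising  = 0.01   # a token the context does not strongly determine

    print(f"predictable token: {surprisal_bits(predictable):.3f} bits")  # ~0.014 bits
    print(f"surprising token:  {surprisal_bits(surprising):.3f} bits")   # ~6.644 bits

So if even a cheap model already assigns the token ~0.99 probability, emitting it can leak at most a few hundredths of a bit of hidden state.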