staminade | 7 hours ago
That’s why you need filler words that contribute little to the sentence’s meaning but give the model a chance to compute/think. This is part of why humans do the same when speaking.
dTal | 2 hours ago | parent
The LLM has no accessible state beyond its own output tokens; each pass generates a single token and does not otherwise communicate with subsequent passes. Therefore all information calculated in a pass must be encoded into the entropy of the output token. If the only output of a thinking pass is a dumb filler word with hardly any entropy, then all the thinking for that filler word is forgotten and cannot be reconstructed. | ||||||||||||||||||||||||||
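A toy sketch of this bottleneck (not a real LLM — `toy_model` is a hypothetical stand-in for a forward pass): each step is a pure function of the token sequence so far, all intermediate computation is discarded after the step, and the only channel to later passes is the single emitted token. A filler token that ignores the intermediate work carries none of it forward.

```python
# Toy illustration of autoregressive decoding where the ONLY state
# carried between passes is the growing token sequence. `toy_model`
# is a made-up stand-in, not any real model's API.

def toy_model(tokens):
    """One 'forward pass': computes a rich intermediate value, but can
    only communicate via the single token it returns; everything else
    is discarded when the pass ends."""
    intermediate = sum(hash(t) % 97 for t in tokens)  # the 'thinking'
    if len(tokens) % 2 == 0:
        return "um"                  # low-entropy filler: encodes ~0 bits
    return str(intermediate % 10)    # encodes (a sliver of) the work

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        nxt = toy_model(tokens)      # pure function of tokens so far
        tokens.append(nxt)           # the only channel to later passes
    return tokens
```

In this sketch, whenever the pass emits "um", the value of `intermediate` is lost; the next pass sees only the tokens and must redo any computation from scratch. That is the sense in which filler output "forgets" the thinking behind it.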
jaccola | 7 hours ago | parent
Do you have any evidence at all for this? I know how LLMs are trained, and this makes no sense to me. Otherwise you'd just put filler words in every input — e.g. instead of "The square root of 256 is" you'd enter "errr The er square um root errr of 256 errr is" and it would miraculously get better? The model can't differentiate between words you entered and words it generated itself...
| ||||||||||||||||||||||||||