wolfi1 | 5 days ago
Aren't LLMs some sort of Markov chain? Surprise means lower probability, which means more gibberish.
drdeca | 5 days ago | parent
Sorta? In the sense of "each term is randomly sampled from a probability distribution that depends on the current state," yes. But they aren't like an n-gram model (well, unless you actually build a very large n-gram model, but that's usually not what one is referring to when one says LLM).
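To make the contrast concrete, here's a minimal sketch of what an n-gram Markov model actually is (a bigram one, so the "state" is just the single previous token): it counts observed next-token frequencies and samples in proportion to them. An LLM is Markov-like only in the broad sense above, with its state being the entire context window and its distribution computed by a neural network rather than looked up from counts. All names here (`train_bigram`, `generate`, etc.) are illustrative, not from any particular library.

```python
import random
from collections import defaultdict

def train_bigram(tokens):
    """Count next-token frequencies for each token (an order-1 Markov chain)."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def sample_next(counts, state, rng):
    """Sample the next token in proportion to how often it followed `state`."""
    options = counts[state]
    toks = list(options)
    weights = [options[t] for t in toks]
    return rng.choices(toks, weights=weights, k=1)[0]

def generate(counts, start, length, rng):
    """Walk the chain: each step depends only on the single previous token."""
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:  # dead end: token never seen mid-corpus
            break
        out.append(sample_next(counts, out[-1], rng))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(" ".join(generate(model, "the", 8, random.Random(0))))
```

Note the key limitation this exposes: every adjacent pair in the output must have occurred verbatim in the training text, which is exactly the rigidity that makes classical n-gram models so unlike an LLM in practice.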