Yoric 2 days ago
I agree that non-deterministic isn't the right word, because that's not the property we care about, but unless I'm strongly missing something, LLM outputs are very much non-deterministic, both during the inference itself and when projecting the embeddings back into tokens.
energy123 2 days ago
I agree it isn't the main property we care about; we care about reliability. But at least in its theoretical construction the LLM should be deterministic: it outputs a fixed probability distribution across tokens with no RNG involvement. We then either sample from that fixed distribution non-deterministically for better performance, or we use greedy decoding and accept slightly worse performance in exchange for full determinism. Happy to be corrected if I am wrong about something.
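To make the distinction concrete, here's a toy sketch in plain NumPy (not any particular LLM stack; the function names and logits are made up for illustration). The model's forward pass yields fixed logits for a given prompt; greedy decoding over those logits is deterministic, while sampling from the softmax distribution is where randomness enters.

```python
# Toy sketch: deterministic greedy decoding vs. non-deterministic sampling.
import numpy as np

def softmax(logits):
    """Turn raw logits into a probability distribution over tokens."""
    z = np.asarray(logits) - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def greedy_decode(logits):
    """Deterministic: always pick the highest-probability token."""
    return int(np.argmax(logits))

def sample_decode(logits, temperature=1.0, rng=None):
    """Non-deterministic: draw a token from the (temperature-scaled) distribution."""
    rng = rng or np.random.default_rng()
    probs = softmax(np.asarray(logits) / temperature)
    return int(rng.choice(len(probs), p=probs))

# Hypothetical fixed logits the model produced for one decoding step.
logits = [2.0, 1.5, 0.3, -1.0]

print(greedy_decode(logits))   # 0, every single run
print(sample_decode(logits))   # usually 0, but sometimes 1, 2, or 3
```

In practice non-determinism can also creep in elsewhere (batching, floating-point reduction order on GPUs), so even greedy decoding isn't always bit-identical across runs, but that's an implementation detail rather than part of the model's theoretical construction.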