orbital-decay 3 hours ago |
> Yes, you get the same predicted token every time for a given context.

But why that token and not a different one? There are too many factors to reliably abstract. A fixed input-to-output mapping is determinism; prompt instability is not determinism by any definition of the word. Too many people conflate the two. Determinism is also a fairly niche property, needed only for reproducibility, and prompt instability/unpredictability is irrelevant for practical usage for the same reason it is in humans: if the model or the human misunderstands the input, you keep correcting the result until it meets your criteria. You never need to reroll the result, so you never see the stochastic side of LLMs.
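The distinction the comment draws can be sketched with a toy example (hypothetical hand-picked logits, not a real model): greedy argmax decoding is a fixed input-to-output mapping, while temperature sampling over the same logits is stochastic.

```python
# Toy sketch contrasting deterministic greedy decoding with
# stochastic temperature sampling. The logits are made up for
# illustration; a real LLM would produce them from the context.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy(logits):
    # Fixed mapping: identical logits always yield the same token index.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample(logits, temperature, rng):
    # Stochastic: identical logits can yield different tokens per call.
    probs = softmax([x / temperature for x in logits])
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.9, 0.5]  # pretend next-token logits, 3-token vocab

print([greedy(logits) for _ in range(5)])        # always [0, 0, 0, 0, 0]

rng = random.Random()
print([sample(logits, 1.0, rng) for _ in range(5)])  # varies run to run
```

With temperature near zero, `sample` collapses toward `greedy`, which is why "deterministic" deployments pin temperature to 0 (plus a fixed seed where the backend exposes one).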
spixy 7 minutes ago | parent |
But there is no fixed input-to-output mapping in current popular LLMs.