swyx 4 hours ago

"What might be more surprising is that even when we adjust the temperature down to 0This means that the LLM always chooses the highest probability token, which is called greedy sampling. (thus making the sampling theoretically deterministic), LLM APIs are still not deterministic in practice (see past discussions here, here, or here)"

https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
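(A minimal sketch of the failure mode being described, not code from the blog post: greedy sampling is deterministic given identical logits, but a kernel-level change in accumulation order can nudge a logit by a few millionths, and when two tokens are nearly tied that is enough to flip the argmax.)

    import numpy as np

    vocab = ["cat", "dog", "car"]  # toy vocabulary, purely illustrative

    def greedy_pick(logits):
        # Temperature-0 / greedy sampling: always take the highest-probability token.
        return vocab[int(np.argmax(logits))]

    # Identical logits give the same token every run -- the "theoretically deterministic" part.
    logits_a = np.array([2.000001, 2.000000, -1.0], dtype=np.float32)
    print(greedy_pick(logits_a))  # cat

    # A kernel that accumulates in a different order can shift a logit by ~1e-6.
    # With two tokens nearly tied, that tiny shift is enough to flip the argmax.
    logits_b = logits_a.copy()
    logits_b[0] -= np.float32(3e-6)
    print(greedy_pick(logits_b))  # dog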

wongarsu 3 hours ago | parent | next [-]

Also from the article:

"Note that this is “run-to-run deterministic.” If you run the script multiple times, it will deterministically return the same result. However, when a non-batch-invariant kernel is used as part of a larger inference system, the system can become nondeterministic. When you make a query to an inference endpoint, the amount of load the server is under is effectively “nondeterministic” from the user’s perspective"

That server load is a factor you can control when running your own local inference, and in many simple inference engines batching with other requests simply doesn't happen. In those cases you do get deterministic output at temperature=0 (provided everything else mentioned in the article is handled correctly).
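(A toy sketch of the batch-invariance point, not taken from the article: reducing the same request's numbers with a different, batch-dependent split often lands on a slightly different float32 value, because floating-point addition is not associative. Each split is run-to-run deterministic on its own.)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096).astype(np.float32)  # one request's activations

    def reduce_with_chunks(v, chunk):
        # Stand-in for a kernel whose reduction split depends on how much other
        # work happens to be batched alongside this request.
        total = np.float32(0.0)
        for i in range(0, len(v), chunk):
            total += v[i:i + chunk].sum(dtype=np.float32)
        return total

    # Same input, same code, different batch-dependent split sizes:
    print(reduce_with_chunks(x, 64))   # deterministic for this split size...
    print(reduce_with_chunks(x, 256))  # ...but often differs in the last digits,
                                       # since float32 addition is not associative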

dnautics 3 hours ago | parent | prev [-]

Having implemented LLM APIs: if you selected 0.0 as the temperature, my interface would drop the usual sampling algorithm and just select argmax(logits).
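(A hedged sketch of that kind of interface, with hypothetical names rather than dnautics' actual code: temperature 0.0 bypasses sampling entirely and returns the argmax over the logits.)

    import numpy as np

    def sample_next_token(logits, temperature, rng=np.random.default_rng()):
        # Hypothetical API-side sampler: at temperature 0.0, skip sampling entirely
        # and return the index of the single highest logit (greedy / argmax decoding).
        if temperature == 0.0:
            return int(np.argmax(logits))
        # Otherwise: temperature-scaled softmax, then sample from the distribution.
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    print(sample_next_token([2.0, 1.9, -1.0], temperature=0.0))  # always index 0
    print(sample_next_token([2.0, 1.9, -1.0], temperature=1.0))  # index varies run to run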