2ndorderthought 5 hours ago
That's not really true. If you turn a few knobs you can make them deterministic: namely, setting the temperature to zero and turning off all history. But none of the cloud providers do this, because it's not a product as far as they're concerned. So in practice, not so much.
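A minimal sketch of what the "temperature to zero" knob does at the sampling step (function and variable names are hypothetical, not any provider's API): at zero temperature, sampling degenerates to greedy argmax, so the RNG no longer matters.

```python
import numpy as np

def sample_token(logits, temperature, rng):
    # Temperature zero degenerates to greedy argmax: no randomness left.
    if temperature == 0.0:
        return int(np.argmax(logits))
    # Otherwise, softmax over temperature-scaled logits, then sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([1.0, 3.0, 2.0])
# Greedy decoding picks the same token on every call, regardless of RNG seed.
picks = {sample_token(logits, 0.0, np.random.default_rng(i)) for i in range(10)}
print(picks)  # {1}
```

With any positive temperature the `rng.choice` call makes the output seed-dependent, which is why history and sampling settings both have to be pinned before runs become repeatable.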
maplethorpe 5 hours ago
Can someone explain why this is? Do LLMs somehow contain a true random number generator? Why wouldn't they produce the same outputs given the same inputs (even at zero temperature)? edit: I'm not talking about an LLM as accessed through a provider. I'm just talking about using a model directly. Why wouldn't that be deterministic?
slashdave 4 hours ago
Eh, conceptually true, but in practice it is rather hard to get any decent performance out of a GPU and still produce a deterministic answer. And in any case, setting the temperature to zero will not produce a useful result, unless you don't mind your LLM constantly running into repetitive loops.
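The GPU non-determinism mentioned here usually comes down to floating-point addition not being associative: a parallel reduction combines partial sums in whatever order threads happen to finish, and different orders round differently. A minimal CPU-side sketch of the underlying effect:

```python
# IEEE-754 addition is not associative, so the grouping order of a
# parallel reduction changes the rounded result.
a = (0.1 + 0.2) + 0.3   # left-to-right grouping
b = 0.1 + (0.2 + 0.3)   # a different grouping, as a reordered reduction might produce
print(a == b)           # False: the two groupings round differently
print(a, b)
```

Across billions of accumulations in a forward pass, such one-ulp differences can shift a logit just enough to flip an argmax, so even "deterministic" greedy decoding can diverge between runs unless kernel execution order is fixed (which typically costs throughput).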