| ▲ | jumploops 3 hours ago | |
> why can't there be an LLM that would always give the exact same output for the exact same input

LLMs are inherently deterministic, but LLM providers add randomness through "temperature" and random seeds. Without the random seed and variable randomness (the temperature setting), LLMs will always produce the same output for the same input. Of course, the context you pass to the LLM also affects determinism in a production system. Theoretically, with a detailed enough spec, the LLM would produce the same output regardless of temp/seed.

Side note: a neat trick to force more "random" output for prompts (when temperature isn't variable enough) is to add some "noise" data to the input (i.e. off-topic data that the LLM "ignores" in its response).
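The claim above is roughly how sampling works at the decoding step. A minimal sketch (the `sample_token` helper and its signature are illustrative, not any provider's API): temperature rescales the logits before sampling, a fixed seed fixes the random draw, and temperature 0 degenerates to a deterministic argmax.

```python
import numpy as np

def sample_token(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits. temperature=0 means greedy argmax."""
    if temperature == 0.0:
        return int(np.argmax(logits))  # no randomness involved at all
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]
# Same seed + same temperature -> same draw every time.
assert sample_token(logits, 0.8, seed=42) == sample_token(logits, 0.8, seed=42)
# temperature=0 collapses to the highest logit, seed irrelevant.
assert sample_token(logits, 0.0) == 0
```

In this toy model, fixing (input, temperature, seed) fixes the output, which is the intuition the comment is appealing to; the replies below dispute whether that holds for real inference stacks.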
| ▲ | tacone 2 hours ago | parent | next [-] | |
No, setting the temperature to zero is still going to yield different results. One might think they add random seeds, but that makes no sense at temperature zero. One theory is that the distributed nature of their systems adds entropy and thus produces different results each time. Random seeds might be a thing, but from what I see there's a lot of demand for reproducibility and yet no certain way to achieve it.
| ▲ | EMM_386 34 minutes ago | parent | prev [-] | |
> Without the random seed and variable randomness (temperature setting), LLMs will always produce the same output for the same input

Except they won't. Even at temperature 0, you will not always get the same output for the same input. And it's not because of random noise from inference providers. There are papers that explore this subject because, for some use cases, this is extremely important. Everything from floating-point precision to hardware timing differences makes this difficult.
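The floating-point part is easy to demonstrate: addition is not associative in floating point, so if a GPU reduction sums partial results in a different order from one run to the next (which scheduling can cause), the logits come out slightly different, and a near-tie at temperature 0 can flip. A two-line sketch of the non-associativity itself:

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 1e20, -1e20, 1.0
left = (a + b) + c   # -> 1.0 : a and b cancel exactly, then c is added
right = a + (b + c)  # -> 0.0 : c is absorbed into -1e20 (below its precision)
print(left, right)
assert left != right
```

A parallel reduction effectively picks one such grouping per run; if the grouping varies, so do the low-order bits of the sum, which is one concrete source of the nondeterminism described above.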