| ▲ | madisonmay 4 hours ago |
| LLMs are not inherently non-deterministic during inference. I don't believe non-determinism implies lack of abstraction. Abstraction is simply hiding detail to manage complexity. |
|
| ▲ | danpalmer 4 hours ago | parent [-] |
| Non-determinism is configurable at the level of the mathematical model, but current production systems do not support deterministic evaluation of LLMs. |
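(Editor's sketch of the point above, not from the thread; all names are hypothetical and there is no real model here. At the level of the sampling math, non-determinism is just a knob: with temperature 0, conventionally treated as greedy argmax, the same logits always produce the same token.)

```python
import math
import random

# Minimal sketch (hypothetical names, no real model): the decoding step's
# non-determinism is controlled entirely by the sampling temperature.
def sample_token(logits, temperature, rng):
    if temperature == 0:
        # Greedy argmax: fully deterministic given the same logits.
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [1.2, 3.4, 0.7]
rng = random.Random(0)
# temperature=0 picks index 1 (the largest logit) every time.
print([sample_token(logits, 0, rng) for _ in range(3)])  # [1, 1, 1]
```

The thread's disagreement is about the layer below this: even when the sampling step is made deterministic, the logits themselves may not be bit-identical across runs.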
| ▲ | orbital-decay 2 hours ago | parent [-] |
| They do, though. Providers don't because batching makes it cheaper. Among the providers, DeepSeek seems to support it for v4 (and has actually optimized its kernels for batching), and Gemini Flash is "almost deterministic". |
| ▲ | danpalmer 41 minutes ago | parent [-] |
| I'm pretty sure the determinism issue is at the floating-point math level, or even the hardware level. Just disabling batching and reducing the temperature to 0 does not result in truly deterministic answers. |
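(Editor's sketch of the floating-point point above: IEEE-754 addition is not associative, so if a server's kernels reduce partial sums in a different order from run to run, e.g. because the batch shape changed, the logits need not be bit-identical even at temperature 0.)

```python
# Floating-point addition is not associative: the same three numbers
# summed in two different groupings give different answers.
a = (0.1 + 1e16) - 1e16   # 0.1 is absorbed: 0.1 + 1e16 rounds to 1e16
b = 0.1 + (1e16 - 1e16)   # cancellation happens first, so 0.1 survives
print(a, b)  # 0.0 0.1
```

This is why reduction order (and hence batching, kernel choice, and hardware) matters for bit-exact reproducibility.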
|
|