▲ Zambyte | 4 hours ago
Language models are deterministic unless you add random input. Most inference tools do inject randomness (sampling from the output distribution with a seed) because it makes for a more interesting user experience, but that is not a fundamental property of LLMs. I suspect determinism is not the issue you mean to highlight.
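The distinction is easy to see with a toy sketch (hypothetical fixed logits standing in for a real forward pass, which is a pure function of the context): argmax decoding never consults a random source, while seeded sampling makes the randomness an explicit, reproducible input.

```python
import math
import random

# Toy "model": fixed logits over a 4-token vocabulary for some context.
# (A hypothetical stand-in for a forward pass with no injected randomness.)
logits = [2.0, 1.0, 0.5, 0.1]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def greedy(logits):
    # Temperature-0 / argmax decoding: no random input, same token every call.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample(logits, seed):
    # Seeded sampling: the randomness is an explicit input, so a fixed
    # seed reproduces the same choice; different seeds may differ.
    rng = random.Random(seed)
    probs = softmax(logits)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

print(all(greedy(logits) == greedy(logits) for _ in range(100)))  # greedy is repeatable
print(sample(logits, seed=42) == sample(logits, seed=42))         # same seed, same token
```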
▲ dTal | 4 hours ago | parent | next [-]
Sort of. They are deterministic in the same way that flipping a coin is deterministic: predictable in principle, but in practice too chaotic. Yes, you get the same predicted token every time for a given context. But why that token and not a different one? Too many factors to abstract reliably.
▲ usernametaken29 | 4 hours ago | parent | prev [-]
Actually, at the hardware level floating-point operations are not associative, and parallel GPU kernels can reduce sums in a different order from one run to the next. So even at temperature 0 you're not mathematically guaranteed bit-identical logits, and hence not the same response. Not deterministic.
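The non-associativity itself is trivial to demonstrate even on a CPU (a minimal Python sketch; GPU parallel reductions differ in scale, not in kind):

```python
# Floating-point addition is not associative: grouping changes the
# intermediate rounding, so reductions that sum in different orders
# can produce different bits for the "same" mathematical sum.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

The two results differ only in the last bit, but an argmax over near-tied logits can flip on exactly that kind of difference.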