kazinator 2 days ago
This is a SaaS problem, not an LLM problem. If you have a local LLM that nobody is upgrading behind your back, it will calculate the same thing on the same inputs. Unless there is a bug somewhere, like using uninitialized memory, the floating-point calculations, the token embedding, and all the rest do the same thing each time.
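A minimal sketch of the point, assuming a local Hugging Face causal LM with greedy decoding (the model name is illustrative): with no sampling and no one swapping the weights out from under you, the same prompt yields the same tokens run after run.

```python
# Sketch: deterministic generation with a local model via greedy decoding.
# The model name is illustrative; any locally stored causal LM behaves the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: a small local model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "What's 2+2?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # do_sample=False means greedy decoding: no randomness is involved,
    # so repeated runs on the same weights and inputs produce identical tokens.
    out_a = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    out_b = model.generate(**inputs, max_new_tokens=20, do_sample=False)

assert torch.equal(out_a, out_b)  # same prompt, same weights -> same output
print(tokenizer.decode(out_a[0], skip_special_tokens=True))
```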
Cilvic 2 days ago
So couldn't SaaS or cloud/API LLMs offer this as an option: a guarantee that the same prompt will always produce the same result? Also, I usually interpret "non-deterministic" a bit more broadly. Say I have slightly different prompts: "what's 2+2?" vs. "can you please tell me what's 2 plus 2", or even "2+2=?" or "2+2". For most applications it would be useful if they all produced the same result.
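Some hosted APIs already expose a partial version of the first ask. A hedged sketch, assuming the OpenAI Python client (model name and seed value are illustrative): a fixed seed plus temperature=0 requests best-effort reproducibility, but the provider still does not promise bit-identical output across backend or model changes.

```python
# Sketch: requesting (best-effort) reproducible output from a hosted API.
# Assumes the OpenAI Python client; model name and seed are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # remove sampling randomness
        seed=42,        # best-effort determinism, not a hard guarantee
    )
    # system_fingerprint changes when the backend configuration changes,
    # which is exactly the "upgraded behind your back" case above.
    print(resp.system_fingerprint)
    return resp.choices[0].message.content

print(ask("What's 2+2?"))
```

The second ask (different phrasings of the same question mapping to the same answer) is a harder property: it is about semantic robustness of the model, not about removing sampling randomness, so no decoding setting can guarantee it.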
nativeit 2 days ago
Doesn’t that imply that LLMs are just “if this, then that” but bigger?