kazinator 2 days ago

This is a SaaS problem, not an LLM problem. If you have a local LLM that nobody is upgrading behind your back, it will calculate the same thing on the same inputs. Unless there is a bug somewhere, like using uninitialized memory, the floating-point calculations, the token embedding, and all the rest do the same thing each time.
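(A minimal sketch of the point above: with greedy argmax decoding and a frozen model, the same prompt always yields the same continuation. `toy_logits` is a hypothetical stand-in for a real forward pass; the only property that matters is that it is a pure function of its input.)

```python
import numpy as np

def greedy_decode(logits_fn, prompt_tokens, steps):
    """Greedy (argmax) decoding: no sampling randomness, so a fixed
    model and a fixed prompt produce an identical token sequence."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        logits = logits_fn(tokens)             # deterministic forward pass
        tokens.append(int(np.argmax(logits)))  # pick the top token, no dice roll
    return tokens

def toy_logits(tokens):
    """Hypothetical stand-in for a frozen model: deterministic in its input."""
    rng = np.random.default_rng(sum(tokens))
    return rng.normal(size=16)

a = greedy_decode(toy_logits, [1, 2, 3], steps=5)
b = greedy_decode(toy_logits, [1, 2, 3], steps=5)
assert a == b  # same weights, same prompt -> same output
```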

Cilvic 2 days ago | parent | next [-]

So could SaaS or cloud/API LLMs not offer this as an option: a guarantee that the same prompt will always produce the same result?

Also, I usually interpret this "non-deterministic" a bit more broadly.

Say I have slightly different prompts: "what's 2+2?" vs. "can you please tell me what's 2 plus 2", or even "2+2=?" or "2+2". For most applications it would be useful if they all produced the same result.

alphan0n 2 days ago | parent [-]

The form of the question determines the form of the outcome, even if the answer is the same. Asking the same question in a different way should produce output that adheres to the form of the question:

2+2 is 4

2 plus 2 is 4

4=2+2

4

Having the LLM pass the input to a tool (e.g. Python) will result in deterministic output.
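(A rough sketch of that tool-delegation idea, assuming the model has already normalized the question into a plain arithmetic expression. `eval_arith` is a hypothetical evaluator, not any specific framework's API; the point is that however the question was phrased, the tool computes the same value.)

```python
import ast
import operator

# Map AST operator nodes to their deterministic Python implementations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arith(expr):
    """Safely evaluate a plain arithmetic expression string."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# "what's 2+2?", "2 plus 2", "2+2=?" all normalize to the same tool call:
assert eval_arith("2+2") == 4
```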

nativeit 2 days ago | parent | prev [-]

Doesn’t that imply that LLMs are just “if then, then that” but bigger?

ezst 2 days ago | parent [-]

Sure, why would you expect it to be different?

bloqs 2 days ago | parent [-]

Well, I do, because not a day has passed since 2021 where the general popular discourse on AI has not described its functionality as fundamentally novel.