riazrizvi 5 days ago
Me: What’s an example of a dice roll?
LLM: 1

“Language ambiguity with determinism”? Sure, I can juxtapose the terms, but if the phrase is semantically inconsistent, then what we mean by it is not a deterministic, definitive thing. You’re chasing your tail on this ‘goal’.
Nevermark 5 days ago
Ambiguity: The request/prompt leaves a lot of room for interpretation. Many qualitatively different answers may be correct relative to the prompt, so different or non-deterministic models will return highly variant results.

Determinism: If a model is given the exact same request/prompt twice, its two responses will be identical, whether or not that consistent response qualifies as correct.

The two concepts are very different. (Ambiguous vs. precise prompt) x (Deterministic vs. non-deterministic model) = 4 different scenarios.

A model itself can be non-deterministic without being ambiguous. If you know exactly how it functions and why it is non-deterministic (batch-sensitive, for instance), that is not an ambiguous model: its operation is completely characterized, but it is non-deterministic.

An ambiguous model would simply be a model whose operation was not characterized, a black box model for instance. A black box model can be deterministic and yet ambiguous.
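The prompt/model distinction above can be sketched with two toy "models" answering the same ambiguous dice-roll prompt. This is only an illustration, not how any real LLM works; the function names and the idea of seeding by prompt text are assumptions made up for the sketch.

```python
import random

# Illustrative answer space for the ambiguous prompt
# "What's an example of a dice roll?" -- many answers are equally correct.
ANSWERS = [1, 2, 3, 4, 5, 6]

def deterministic_model(prompt: str) -> int:
    # Seeded by the prompt text, so the same prompt always yields the
    # same answer: a deterministic model, even on an ambiguous prompt.
    rng = random.Random(prompt)
    return rng.choice(ANSWERS)

def nondeterministic_model(prompt: str) -> int:
    # Fresh entropy each call, so repeated identical prompts can yield
    # different answers -- yet the model's operation is fully
    # characterized (we can read this code), so it is not "ambiguous".
    return random.choice(ANSWERS)

prompt = "What's an example of a dice roll?"
assert deterministic_model(prompt) == deterministic_model(prompt)
assert nondeterministic_model(prompt) in ANSWERS
```

The two axes stay independent: swapping which function the caller uses changes determinism without changing how ambiguous the prompt is.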
skybrian 5 days ago
If you really want that to work while being reproducible, maybe give it a random number tool and set the seed?
raincole 5 days ago
> LLM: 1

A perfectly acceptable answer. If it answers 1 every time, it's still a perfectly acceptable answer.