withinboredom 3 days ago
LLM sampling deliberately inserts randomness. If you run a model locally (or sometimes via an API), you can turn that off (greedy decoding, i.e. temperature 0) and get the same response for the same input every time.
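A minimal sketch of what turning it off looks like, assuming the Hugging Face transformers library ("gpt2" is just a stand-in for whatever model you run locally):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of France is", return_tensors="pt")
    # do_sample=False disables sampling: the highest-probability token
    # is picked at every step (greedy decoding), so the same input
    # produces the same output on every run.
    out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))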
layer8 3 days ago
True, but I'd argue that you can't get at an LLM's definite knowledge by turning off randomness or by fixing the seed. If you could, it would be a routinely employed feature: to determine what an LLM "truly knows", you'd remove the random noise distorting that knowledge, and turn randomness back on only for tasks requiring creativity, never for plain factual questions. But it doesn't work that way. Different seeds uncover different "knowledge", and no one sample is a truer representation of the LLM's knowledge than another.

Furthermore, even in the absence of randomness, asking an LLM the same question in different ways can yield different, potentially contradictory answers, even when the difference in prompting is perfectly benign.
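A quick way to see that last point, again with transformers and "gpt2" as a stand-in (whether these two particular prompts actually disagree depends on the model; the point is only that greedy decoding removes run-to-run variance, not prompt sensitivity):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def greedy(prompt: str) -> str:
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, do_sample=False, max_new_tokens=10)
        return tok.decode(out[0], skip_special_tokens=True)

    # Each prompt is individually deterministic (same prompt, same
    # answer, every run), but nothing forces the two answers to agree.
    print(greedy("Q: What year did World War II end?\nA:"))
    print(greedy("Q: In which year did WWII come to an end?\nA:"))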