didibus 5 days ago
I agree with everything you said except:

> Trying to eliminate cases where a stochastic model the size of an LLM gives "undesirable" or "untrue" responses seems rather odd.

Taking it back to what it is, as you say: this is a predictive model, and the work of any ML scientist is to iterate on the model toward the best possible accuracy on unseen data. It makes sense to want to tune the model to lower its rate of predictive errors. And because perfect predictive accuracy is rarely achievable, you have to make judgment calls between precision and recall, which, in the case of LLMs, directly affects how often the model hallucinates versus how often it stays silent or is overly cautious.
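To make that tradeoff concrete, here is a toy sketch (hypothetical numbers, not any real model): the model "answers" only when its confidence clears a threshold, and raising that threshold buys precision at the cost of recall.

```python
def precision_recall(predictions, threshold):
    """predictions: list of (confidence, is_correct) pairs.
    The model answers only when confidence >= threshold;
    below it, the model abstains (stays silent)."""
    answered = [(c, ok) for c, ok in predictions if c >= threshold]
    if not answered:
        return 1.0, 0.0  # no answers given: vacuous precision, zero recall
    correct = sum(ok for _, ok in answered)
    total_correct = sum(ok for _, ok in predictions)
    precision = correct / len(answered)   # of the answers given, how many are right
    recall = correct / total_correct      # of the right answers available, how many were given
    return precision, recall

# Hypothetical calibration data: high-confidence predictions are
# usually right; low-confidence ones are often hallucinations.
preds = [(0.95, True), (0.90, True), (0.85, True), (0.60, False),
         (0.55, True), (0.50, False), (0.40, False), (0.30, False)]

p_low, r_low = precision_recall(preds, threshold=0.3)    # answer everything
p_high, r_high = precision_recall(preds, threshold=0.8)  # answer cautiously
```

With the low threshold the model answers all eight questions (high recall, more hallucinations); with the high threshold it answers only three, all correct (high precision, but it stays silent on a question it could have gotten right).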
rubatuga 5 days ago
But we're getting into the limits of knowledge and of what counts as true or untrue. A stochastic model will be wrong sometimes.