Waraqa 5 days ago
The problem is that any surprise in the wrong place is considered a hallucination and counted against the LLM. A good starting point for improving that would be an experimental "Surprise Mode" that tries to guess the right kinds of surprises rather than minimizing them, and collects feedback from users. Over time it would learn which kinds of surprises users like, and those could feed into future training datasets.
qcnguy 5 days ago | parent
Hallucinations aren't surprising; that's why they're problematic. They tend to look exactly like what you'd expect to be true, but they just aren't.
wolfi1 5 days ago | parent
Aren't LLMs some sort of Markov chain? Surprise means lower probability, which means more gibberish.
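
A minimal sketch of what "surprise" usually means quantitatively: the surprisal of a token is the negative log of its probability under the model, so low-probability tokens are high-surprise. The example distribution below is invented for illustration, not taken from any model in the thread.

    import math

    def surprisal(prob: float) -> float:
        """Surprisal in bits: rarer (lower-probability) tokens are more surprising."""
        return -math.log2(prob)

    # Hypothetical next-token distribution for the prompt "The capital of France is".
    next_token_probs = {
        "Paris": 0.85,      # expected continuation, low surprise
        "Lyon": 0.10,       # plausible but less likely
        "Atlantis": 0.001,  # very unlikely, high surprise
    }

    for token, p in next_token_probs.items():
        print(f"{token}: p={p:.3f}, surprisal={surprisal(p):.2f} bits")

Sampling strategies that favor high-surprisal tokens (e.g. high temperature) tend toward gibberish, which is the trade-off the comment above is pointing at.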