Waraqa 5 days ago

Any surprise in the wrong place is considered a hallucination and counts against that LLM. I guess a good starting point to improve that is to add an experimental "Surprise Mode" that tries to guess the right kinds of surprises rather than minimizing them, and gathers feedback from users. Over time it would learn what kinds of surprises users like, and those could feed into future training datasets.

qcnguy 5 days ago | parent | next [-]

Hallucinations aren't surprising, that's why they're problematic. They tend to look like exactly what you'd expect to be true, they just aren't.

Waraqa 5 days ago | parent [-]

They aren't surprising when you're dealing with new knowledge. But when a hallucination occurs in something you're already familiar with, it is surprising and can even be funny. Remember when an AI was asked how many rocks you should eat per day?

wolfi1 5 days ago | parent | prev [-]

Aren't LLMs some sort of Markov chain? Surprise means lower probability, which means more gibberish.

drdeca 5 days ago | parent [-]

Sorta? In the sense of "each token is randomly sampled from a probability distribution that depends on the current state", yes, but they aren't like an n-gram model (well, unless you actually build a large n-gram model, but that's usually not what one is referring to when one says LLM).
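
A minimal sketch of that distinction, using toy data and hypothetical function names (the "LLM-style" sampler below is just a stand-in, not a real model): both samplers draw the next token from a distribution conditioned on a state, but an n-gram/Markov model's state is only the last few tokens, while an LLM's distribution can depend on the entire context so far.

    import random
    from collections import defaultdict, Counter

    # Toy corpus; wrap around so every token has at least one successor.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
        bigram_counts[prev][nxt] += 1

    # n-gram / Markov view: the "state" is only the previous token.
    def sample_next_markov(prev_token):
        options = bigram_counts[prev_token]
        return random.choices(list(options), weights=list(options.values()))[0]

    # LLM-style view: the "state" is the whole context so far. In a real LLM
    # this would be a neural network producing a distribution over the full
    # vocabulary; here it is a stand-in that is merely *allowed* to look at
    # every previous token, to show where the signatures differ.
    def sample_next_llm_style(context_tokens):
        last = context_tokens[-1]
        options = bigram_counts[last]  # stand-in distribution for the demo
        return random.choices(list(options), weights=list(options.values()))[0]

    # Both are "sample the next token from a conditional distribution";
    # the difference is how much history the condition captures.
    context = ["the"]
    for _ in range(6):
        context.append(sample_next_llm_style(context))
    print(" ".join(context))

In that sense an LLM is still autoregressive sampling, but the "state" is effectively the entire context window rather than a fixed, short n-gram history.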