papyrus9244 | 2 days ago
> This is why it's impossible to create a digital assistant, or really anything useful, via Markov Chain.

The fact that they only generate sequences that existed in the source means that they will never come up with anything creative. Or, in other words, a Markov chain won't hallucinate. Having a system that only repeats sentences from its source material and doesn't create anything new on its own is quite useful in some scenarios.
Sohcahtoa82 | 2 days ago
> Or, in other words, a Markov Chain won't hallucinate.

It very much can. Remember, the context windows used for Markov chains are very short, usually in the single digits of words. If you use a context length of 5, then when you ask it what the next word should be, it has no idea what the words were before the current context of 5 words. This results in incoherence, which can certainly mean hallucinations.
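As a rough illustration of the short-context point, here is a minimal sketch (not from the thread; the toy corpus and function names are made up) of a word-level Markov text generator with a fixed context window. Every word it emits was observed after that exact context somewhere in the source, and it forgets everything earlier than the last few words.

```python
import random
from collections import defaultdict

def build_model(words, context_len=2):
    """Map each context of `context_len` consecutive words to the words seen right after it."""
    model = defaultdict(list)
    for i in range(len(words) - context_len):
        context = tuple(words[i:i + context_len])
        model[context].append(words[i + context_len])
    return model

def generate(model, seed, context_len=2, n_words=20):
    """Repeatedly sample a next word for the current context, sliding the window forward."""
    out = list(seed)
    for _ in range(n_words):
        context = tuple(out[-context_len:])  # only the last `context_len` words are visible
        choices = model.get(context)
        if not choices:
            break  # this context never appeared in the source; nothing to sample
        out.append(random.choice(choices))
    return " ".join(out)

# Toy corpus purely for illustration.
corpus = "the cat sat on the mat and the cat ran off the mat".split()
model = build_model(corpus, context_len=2)
print(generate(model, seed=["the", "cat"], context_len=2))
```

Each individual transition comes straight from the corpus, but because the window is only a couple of words, longer-range coherence is lost, which is the incoherence described above.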
stavros | a day ago
A Markov chain certainly will not hallucinate, because we define hallucinations as garbage within otherwise correct output. A Markov chain doesn't produce enough correct output for its mistakes to count as "hallucinations", but only in the sense that nothing is a hallucination when everything is one.
vrighter | a day ago
You can very easily inject wrong information into the state transition function, and machine learning can, and regularly does, do so. That is not a difference between an LLM and a Markov chain.