inciampati 2 days ago

Markov chains have exponential falloff in correlations between tokens over time. That's dramatically different from real text, which contains extremely long-range correlations. They simply can't model long-range correlations, and as such, they can't be guided. They can memorize, but not generalize.
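The exponential falloff is easy to see in a toy case. Here's a minimal sketch (my own illustration, not from the comment above): in a symmetric two-state Markov chain with flip probability p, the correlation between tokens t steps apart is exactly (1 - 2p)**t, so it decays geometrically no matter how you tune p.

```python
import numpy as np

# Two-state chain, states coded as -1/+1, symmetric flip probability p.
p = 0.1
T = np.array([[1 - p, p],
              [p, 1 - p]])  # transition matrix

# Stationary distribution is uniform, so the mean is 0 and variance is 1;
# Corr(X_0, X_t) reduces to T^t[0,0] - T^t[0,1] = (1 - 2p)**t.
for t in [1, 5, 10, 50]:
    Tt = np.linalg.matrix_power(T, t)
    corr = Tt[0, 0] - Tt[0, 1]
    print(t, corr)
```

At t = 50 with p = 0.1 the correlation is about 0.8**50 ≈ 1.4e-5: effectively zero, while in natural text a token can still correlate with one thousands of positions back.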

kittikitti 2 days ago | parent | next [-]

As someone who developed chatbots with HMM's and the Transformers algorithms, this is a great and succinct answer. The paper, Attention Is All You Need, solved this drawback.

vjerancrnjak 2 days ago | parent [-]

Markov Random Fields also do that.

The difference is obviously there, but nothing prevents you from conditioning on long-range dependencies through undirected edges. There's no need to chain anything.

The problem from a math standpoint is that inference becomes intractable. The moment you start relaxing the joint optimization problem, you'll end up in a similar place.
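Both points can be shown in a few lines. This is a hypothetical sketch (variable names and coupling strengths are mine): a tiny binary MRF with an extra undirected edge directly coupling the first and last variables. Long-range dependence falls out for free, but the partition function Z sums over all 2**n configurations, which is the intractability in question.

```python
import numpy as np
from itertools import product

n = 8
beta_chain, beta_long = 0.5, 2.0  # assumed coupling strengths

def energy(x):
    # Nearest-neighbor chain potentials...
    e = -beta_chain * sum(x[i] * x[i + 1] for i in range(n - 1))
    # ...plus one direct long-range edge: no chaining required.
    e += -beta_long * x[0] * x[-1]
    return e

# Exact inference: enumerate all 2**n states (this is the intractable part;
# it's only feasible here because n is tiny).
states = list(product([-1, 1], repeat=n))
Z = sum(np.exp(-energy(x)) for x in states)
prob = {x: np.exp(-energy(x)) / Z for x in states}

# Endpoint correlation stays strong despite the distance between them.
corr = sum(prob[x] * x[0] * x[-1] for x in states)
print(corr)
```

The endpoint correlation here stays well above 0.9, whereas a pure chain of the same length would have it decay geometrically, which is the contrast the parent comments are drawing.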

zwaps 2 days ago | parent | prev [-]

This is the correct answer