lotyrin 2 days ago
Yeah, some of the failure modes are the same. This one in particular is fun because even a human, given "the the the" and asked to predict what comes next, will probably still answer "the". How a Markov chain starts the "the" train and how an LLM does are pretty different, though.
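A minimal sketch of that difference (the toy corpus and prompt are my own illustration, not any particular model's training data): a first-order Markov chain conditions only on the last word, so the repetition in "the the the" is invisible to it.

    # Minimal sketch: a first-order (bigram) Markov chain built from a toy corpus.
    # The corpus and the prompts are assumptions for illustration only.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the dog slept near the door".split()

    # Count how often each word follows each other word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def predict_next(prompt):
        # A first-order chain only ever looks at the final word of the prompt.
        last = prompt.split()[-1]
        return transitions[last].most_common(1)[0][0]

    print(predict_next("the"))          # "cat"
    print(predict_next("the the the"))  # same answer: the repetition carries no signal

An LLM, by contrast, attends to the whole prompt and can react to the repetition itself, which is roughly why the two fall into the loop for different reasons.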
arowthway a day ago
I wonder if the "X is not Y - it's Z" LLM shibboleth is just an artifact of "is not" being the third most common bigram starting with "is", just after "is a" and "is the" [0]. It doesn't follow as simply as it does with Markov chains, but maybe this is where the tendency originated, and it was later trained and RLHFed into a shape that kind of makes sense instead of getting eliminated.
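If you want to check the bigram claim against a corpus of your own, a rough sketch (the filename corpus.txt is a placeholder, and this is not the methodology behind [0]):

    # Rough sketch: count bigrams starting with "is" in a local plain-text file.
    # "corpus.txt" is a placeholder, not the source behind [0].
    import re
    from collections import Counter

    with open("corpus.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())

    # Tally every (is, next-word) pair.
    is_bigrams = Counter(w2 for w1, w2 in zip(words, words[1:]) if w1 == "is")

    # If the claim holds for this corpus, "a" and "the" should outrank "not".
    for w, n in is_bigrams.most_common(5):
        print(f"is {w}: {n}")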
psychoslave a day ago
I've never seen a human start looping "the" in response to any utterance, though. Personally, my concern is more about the narrative that LLMs produce "chains of thought", can "hallucinate", and that people should become "AI complements". They definitely make nice inferences most of the time, but they are still a totally different thing from human thoughts.