HarHarVeryFunny 2 hours ago

> LLMs are just seemingly intelligent autocomplete engines

Well, no, they are statistical predictors over the whole training set, not predictors of individual training samples (which is what autocomplete is).

The best mental model of what they are doing might be that you are talking to a football stadium full of people, where everyone gets to vote on the next word of the response being generated. You are not getting an "autocomplete" answer from any one coherent source, but instead a strange composite response where each word is the result of different people trying to steer the response in different directions.
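To make the "vote" idea concrete, here is a minimal sketch of how one next word gets picked. The vocabulary and logits are made up for illustration; in a real LLM the logits come out of the network's final layer and reflect statistics of the whole training set, not any single source document:

    import numpy as np

    # Hypothetical logits over a tiny vocabulary (illustrative values only).
    vocab = ["cat", "dog", "mat", "sat"]
    logits = np.array([2.0, 0.5, 1.2, 0.1])

    def sample_next_token(logits, temperature=1.0):
        # Softmax turns the logits into a probability distribution --
        # effectively the "vote tally" over every candidate next word.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Sample one word in proportion to its share of the vote.
        return np.random.choice(len(probs), p=probs)

    print(vocab[sample_next_token(logits)])

No single "voter" dictates the word; the sampled token is drawn from the aggregate distribution, which is why the output reads as a composite rather than a lookup from one source.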

An LLM will naturally generate responses that were not in the training set, even if it is ultimately limited by what was in it. The best way to think of this is perhaps that they are limited to the "generative closure" (cf. mathematical set closure) of the training data: they can produce combinations of words and partial samples that are novel relative to the training set, by combining statistical patterns from different sources that never occurred together in it.
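A toy bigram model shows the same "closure" effect in miniature (this is an assumption-laden sketch, obviously nothing like a transformer, but the statistical point carries over): train on two sentences that never co-occur, and it can emit a sentence found in neither.

    from collections import defaultdict
    import random

    # Toy corpus: two sources that never appear together.
    corpus = [
        "the cat sat on the mat",
        "the dog ran in the park",
    ]

    # Count word-to-next-word transitions across the whole corpus.
    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            transitions[a].append(b)

    def generate(start, length=6):
        out = [start]
        for _ in range(length - 1):
            options = transitions.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    # Can produce e.g. "the cat sat on the park" -- inside the generative
    # closure of the corpus, but never present in it verbatim.
    print(generate("the"))

The output is "novel" in the sense of new combinations of learned patterns, while still being bounded by what the training data contained.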