andy12_ an hour ago
You do understand that the mechanism through which an autoregressive transformer works (predicting one token at a time) is completely unrelated to how a model with that architecture behaves or how it's trained, right? You can have both:

- An LLM that works through completely different mechanisms, like predicting masked words, predicting the previous word, or predicting several words at a time.
- A normal traditional program, like a calculator, encoded as an autoregressive transformer that calculates its output one word at a time (compiled neural networks) [1][2].

So saying "it predicts the next word" is a nothing-burger. That a program calculates its output one token at a time tells you nothing about its behavior.
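To make the second point concrete, here's a minimal sketch (my own illustration, not taken from the compiled-networks papers): an ordinary calculator wrapped in a "predict the next token" interface. The output is produced strictly one token at a time, yet the program's behavior is plain deterministic arithmetic.

    def run_calculator(prompt):
        # The underlying "program": ordinary addition on a prompt like "123 + 456".
        a, _, b = prompt.split()
        return str(int(a) + int(b))

    def next_token(prompt, generated):
        # Autoregressive interface: given the prompt and the tokens emitted so
        # far, return the next token (here, the next digit), or None to stop.
        answer = run_calculator(prompt)
        if len(generated) == len(answer):
            return None
        return answer[len(generated)]

    def generate(prompt):
        out = ""
        tok = next_token(prompt, out)
        while tok is not None:
            out += tok                    # feed the output back in, one token at a time
            tok = next_token(prompt, out)
        return out

    print(generate("123 + 456"))          # -> "579", emitted digit by digit

Nothing about the token-by-token loop constrains what run_calculator does; the same wrapper works for any program, which is exactly the point.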