jaccola 2 hours ago
The person above was being a bit pedantic and overzealous in their anti-anthropomorphism. But LLMs are literally predicting the next token; they do nothing else. And if you think they were only predicting the next token back in 2021: there has been no fundamental architecture change since then. All the gains have come from scale and efficiency optimisations (not to discount those; there is an awful lot of complexity in both).
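To make "predicting the next token" concrete, here's a minimal sketch of the autoregressive decoding loop. The `toy_model` just returns random logits; in a real LLM it would be a transformer forward pass, but the surrounding loop is the same predict-sample-append cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["Mary", "had", "a", "little", "lamb", "goat", "."]

def toy_model(context: list[int]) -> np.ndarray:
    """Stand-in for a transformer forward pass: maps the token
    context to one logit per vocabulary entry. (This toy version
    ignores the context and returns random logits.)"""
    return rng.normal(size=len(VOCAB))

def sample_next(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over the logits, then sample one token id."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(VOCAB), p=probs))

# Autoregressive decoding: every output token, however novel the
# resulting text, is produced by this same predict-append loop.
tokens = [0, 1, 2, 3]  # "Mary had a little"
for _ in range(3):
    tokens.append(sample_next(toy_model(tokens)))
print(" ".join(VOCAB[t] for t in tokens))
```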
nearbuy 2 hours ago | parent
That's not what they said. They said:

> Its evaluation function simply returned the word "Most" as being the most likely first word in similar sentences it was trained on.

That is false under any reasonable interpretation. LLMs do not just return the word most similar to what they would find in their training data. They apply reasoning and can choose words that are totally unlike anything in their training data. If you prompt one with:

> Complete this sentence in an unexpected way: Mary had a little...

it won't say "lamb". And if you think whatever it does say was in the training data, just tighten the constraints until you're confident the output is original (e.g. tell it every word must start with a vowel and it must mention almonds).

"Predicting the next token" is also true but misleading. It's predicting tokens in the same sense that your brain is just minimizing prediction error under predictive coding theory.
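For the constraint trick, here's a runnable sketch of the check itself. The candidate completion is invented for illustration, not actual model output; the point is that each added constraint makes a verbatim training-data match less plausible.

```python
# Check the two example constraints from the comment above:
# every word starts with a vowel, and the text mentions almonds.
def satisfies_constraints(completion: str) -> bool:
    words = completion.lower().split()
    every_word_starts_with_vowel = all(w[0] in "aeiou" for w in words)
    mentions_almonds = any("almond" in w for w in words)
    return every_word_starts_with_vowel and mentions_almonds

# Invented example output that meets both constraints. If a model
# can reliably produce text like this on demand, it is not just
# retrieving a memorized continuation.
candidate = "an ancient aardvark eating aromatic almonds"
print(satisfies_constraints(candidate))  # True
```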
| ||||||||