grey-area 2 days ago

They do not understand. They predict a plausible next sequence of words.

bheadmaster 2 days ago | parent

I don't disagree with the conclusion; I disagree with the reasoning.

There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding", if that turned out to be the most efficient way to predict them.
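
For concreteness, the training objective really is just "guess the next token" and nothing more. A toy sketch (hypothetical PyTorch model, not any real LLM) of what that signal looks like:

    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 1000, 64

    class TinyLM(nn.Module):
        """Toy language model: the only supervision is next-token prediction."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
            self.head = nn.Linear(embed_dim, vocab_size)

        def forward(self, tokens):
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)  # logits over the vocabulary at every position

    model = TinyLM()
    tokens = torch.randint(0, vocab_size, (8, 32))   # a batch of token sequences
    logits = model(tokens[:, :-1])                   # predict token t+1 from tokens up to t
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
    )
    loss.backward()  # all gradients come from "guess the next token"; nothing else

The point being: the objective doesn't say anything about *how* the model should get good at that guess. Whatever internal structure makes the prediction cheaper is fair game.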