somenameforme 3 days ago:
Their output is in natural language, and that's about where the similarity with humans ends. They're token prediction algorithms, nothing more and nothing less. This can achieve some absolutely remarkable output, probably because our languages (both formal and natural) are absurdly redundant. But the next token being a word, instead of e.g. a ticker price, doesn't suddenly make them more like humans than computers.
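(For readers unfamiliar with the term: "next token prediction" just means repeatedly predicting the next symbol from what came before and appending it. A minimal, illustrative sketch follows, using a toy character-level bigram model rather than a real LLM; actual models condition on long contexts with learned transformer weights, but the outer loop has the same shape.)

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat. the dog sat on the log."

# "Training": count which character tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(prev: str) -> str:
    """Sample the next character in proportion to how often it followed `prev`."""
    counts = follows[prev]
    chars, weights = zip(*counts.items())
    return random.choices(chars, weights=weights)[0]

# Generation is just the prediction step run in a loop:
# predict, sample, append, repeat.
text = "t"
for _ in range(40):
    text += predict_next(text[-1])
print(text)
```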
nisegami 3 days ago (reply):
I see this "next token predictor" description being used to justify drawing a distinction between LLMs and human intelligence. While I agree with that description of LLMs, I think "next token predictor" comes much, much closer to describing human intelligence than most people realize.