mfbx9da4 6 days ago

How can you tell a human actually understands? Prove to me that human thought is not predicting the most probable next token. If it quacks like a duck. In psychology research, the only way to find out whether a human is happy is to ask them.

alpaca128 6 days ago | parent | next [-]

Does speaking in your native language, speaking in a second language, thinking about your life and doing maths feel exactly the same to you?

> Prove to me that human thought is not predicting the most probable next token.

Explain the concept of color to a completely blind person. If their brain does nothing but process tokens this should be easy.

> How can you tell a human actually understands?

What a strange question coming from a human. I would say if you are a human with consciousness, you are able to answer this for yourself, and if you aren't, no answer will help.

randallsquared 6 days ago | parent [-]

> What a strange question coming from a human.

Oh, I dunno. The whole "mappers vs packers" and "wordcels vs shape rotators" dichotomies point at an underlying truth, which is that humans don't always actually understand what they're talking about, even when they're saying all the "right" words. This is one reason why tech interviewing is so difficult: it's partly a task of figuring out if someone understands, or has just learned the right phrases and superficial exercises.

0xak 6 days ago | parent | prev [-]

That is ill-posed. Take any algorithm at all, e.g. a TSP solver. Make a "most probable next token predictor" that takes the given traveling salesman problem, runs the solver, and emits the first token of the solution, then reruns the solver and emits the next token, and so on.
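The wrapper described above can be sketched in a few lines of Python. This is just an illustration of the thought experiment, not anyone's actual system: the brute-force solver and the function names are made up, and the point is only that `next_token` has the interface of a "most probable next token predictor" while the real work is an arbitrary algorithm rerun from scratch on every call.

```python
from itertools import permutations

def solve_tsp(dists):
    """Toy brute-force TSP solver: return an optimal tour
    as a tuple of city indices (O(n!) - illustration only)."""
    n = len(dists)
    def tour_len(tour):
        return sum(dists[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return min(permutations(range(n)), key=tour_len)

def next_token(dists, emitted):
    """The 'token predictor' wrapper: rerun the whole solver,
    then emit just the token that follows the ones already emitted.
    Returns None once the answer is complete."""
    solution = solve_tsp(dists)  # full recomputation on every call
    return solution[len(emitted)] if len(emitted) < len(solution) else None

# Emit the complete answer one "token" at a time.
dists = [[0, 2, 9],
         [2, 0, 6],
         [9, 6, 0]]
out = []
while (tok := next_token(dists, out)) is not None:
    out.append(tok)
# out now holds the solver's full tour, produced token by token.
```

Externally this loop looks exactly like autoregressive token prediction; internally it is a TSP solver, which is the sense in which the label "token predictor" says nothing about the underlying computation.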

By this construction you can turn any computational process into "predicting the most probable next token" - at an extreme runtime cost. But if you do so, you arguably empty the concept "token predictor" of most of its meaning. So you would need to specify more precisely what you mean by a token predictor, so that the claim isn't trivially true for every kind of thought that is computation-like.