breuleux 4 hours ago

> These feel like they involve something beyond "predict the next token really well, with a reasoning trace."

I don't think there's anything you can't do by "predicting the next token really well". It's an extremely powerful and extremely general mechanism. Saying there must be "something beyond that" is a bit like saying physical atoms can't be enough to implement thought and there must be something beyond the physical. It underestimates the nearly unlimited power of the paradigm.

Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions? What else than a sequence of these tokens would a machine have to produce in response to its environment and memory?

bopbopbop7 2 hours ago | parent [-]

> Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions?

Ah yes, the brain is as simple as predicting the next token, you just cracked what neuroscientists couldn't for years.

breuleux 2 hours ago | parent | next [-]

The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.
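To make the point concrete: "predicting the next token" is just an autoregressive loop around some predictor. Here's a minimal sketch (nothing like a real LLM; the bigram table and function names are made up for illustration) showing that the loop itself is trivial. All the actual substance lives inside `predict`, which is exactly why the phrase explains so little.

```python
import random

def predict(context):
    # Hypothetical stand-in for a trained model: a trivial bigram lookup.
    # In a real LLM this would be a neural network producing a distribution
    # over tokens; the surrounding loop would be identical.
    table = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"],
             "sat": ["."], "ran": ["."]}
    return random.choice(table.get(context[-1], ["."]))

def generate(prompt, max_tokens=5):
    # The entire "next-token prediction" paradigm, as a loop:
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict(tokens)
        tokens.append(nxt)
        if nxt == ".":
            break
    return tokens

print(generate(["the"]))
```

The loop is the same whether `predict` is a five-line lookup table or a trillion-parameter network, which is the sense in which "it just predicts the next token" is a platitude.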

unshavedyak 2 hours ago | parent | prev | next [-]

I mean... I don't think that statement is far off. Much of what we do is about predicting the world around us, no? From physics (where the ball will land) to the emotional states of others in response to our actions (theory of mind), we operate very heavily on a predictive model of the world around us.

Couple that with all the automatic processes in our minds (blanks filled in that we never actually observed, yet that we're convinced we saw), and the hormonal states that drastically affect our thoughts and actions...

And the result? I'm not a big believer that we have the uniqueness or level of autonomy so many people assume.

With that said, I am in no way saying LLMs are even close to us, or even remotely close to the right implementation to get close to us. The level of complexity in our "stack" alone dwarfs LLMs. I'm not even sure LLMs are up to a worm's brain yet.

holoduke 2 hours ago | parent | prev [-]

Well, it's the prediction part that's complicated. How that works is a mystery. But even our LLMs are, to a certain extent, a mystery.