CamperBob2 3 hours ago

> I'm telling you how these technologies work. When a language model isn't performing inference, it is not doing anything. A language model is a function which takes a token stream as input and produces a token probability distribution as output. By definition, there is no thinking outside of producing words. The function isn't running.

If what you are saying is true, then LLMs wouldn't be able to handle out-of-distribution math problems without resorting to tool use. Yet they can. When you ask a current-generation model to multiply some 8-digit numbers, and forbid it from using tools or writing a script, it will almost certainly give you the right answer. That includes local models that can't possibly cheat. LLMs are stochastic, but they are not parrots.
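
As a concrete illustration (the specific numbers below are mine, picked at random, not from any benchmark):

    # Two random 8-digit operands. There are on the order of
    # 9e7 * 9e7 ~ 8.1e15 such pairs, so the odds that this exact
    # product appears verbatim anywhere in the training data are nil.
    a, b = 73_902_581, 46_217_309
    print(a * b)  # 3415578421974529

A lookup-table parrot has no way to produce that 16-digit answer; something in the network has to actually carry out the multiplication.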

At the risk of sounding like an LLM myself, whatever process makes this possible is not simply next-token prediction in the pejorative sense you're applying to it. It can't be. The tokens in a transformer network are evidently not just words in a Markov chain but a substrate for reasoning. The model is generalizing processes it learned, somehow, in the course of merely being trained to predict the next token.

Mechanically, yes, next-token prediction is what it's doing, but that turns out to be a much more powerful mechanism than it appeared at first. My position is that our brains likely employ a similar mechanism, albeit realized through very different means.
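
For what it's worth, here is the whole mechanism as a toy sketch (uniform probabilities standing in for a trained transformer, which this obviously isn't):

    import random

    def toy_model(tokens):
        # Stand-in for a trained transformer: a real model would run a
        # forward pass over `tokens` here. This one is just uniform
        # over a 4-token vocabulary.
        return [0.25, 0.25, 0.25, 0.25]

    def generate(model, tokens, n_steps):
        # The autoregressive loop: each sampled token is appended to
        # the context, so the model's own outputs become its next
        # inputs. That feedback is what lets intermediate tokens serve
        # as a scratchpad for later steps.
        for _ in range(n_steps):
            probs = model(tokens)
            tokens = tokens + [random.choices(range(len(probs)), weights=probs)[0]]
        return tokens

    print(generate(toy_model, [0], 8))

The loop itself is trivial; everything interesting lives inside the function call, and the context it builds up is the only working memory it has.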

It is scarcely believable that this abstraction process is limited to keeping track of intermediate results in math problems. The implications should give the stochastic-parrot crowd some serious cognitive dissonance, but...

(Edit: it occurs to me that you are really arguing that the continuous versus discrete nature of human thinking is what's important here. If so, that sounds like a motte-and-bailey thing that doesn't move the needle on the argument that originally kicked off the subthread.)

(Edit 2, again due to rate-limiting: it does sound like you've fallen back to a continuous-versus-discrete argument, and that's not something I've personally thought much about or read much about. I stand by my point that the ability to do arithmetic without external tools is sufficient to dispense with the stochastic-parrot school of thought, and that's all I set out to argue here.)

mort96 3 hours ago | parent [-]

> If what you are saying is true, then LLMs wouldn't be able to handle out-of-distribution math problems without resorting to tool use. Yet they can. When you ask a current-generation model to multiply some 8-digit numbers, and forbid it from using tools or writing a script, it will almost certainly give you the right answer. That includes local models that can't possibly cheat. LLMs are stochastic, but they are not parrots.

Okay, what do you think language models are doing when they're not producing token probability distributions? What processes do you think are going on when the function which predicts a token isn't running?

> At the risk of sounding like an LLM myself, whatever process makes this possible is not simply next-token prediction in the pejorative sense you're applying to it.

I don't know what pejorative sense you're implying here. I am, to the best of my ability, describing how the language model works. I genuinely believe that a language model is, in essence, a function which takes in a sequence of tokens and produces a token probability distribution as an output. If this is incorrect, please, correct me.
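
To be concrete about what I mean, a deliberately toy sketch (not any real architecture or API):

    def language_model(tokens: list[int]) -> list[float]:
        # A pure, stateless function: token sequence in, next-token
        # probability distribution out. A real model computes this with
        # a transformer; this stand-in is uniform over 4 tokens.
        # Between calls, nothing runs and nothing persists.
        return [0.25, 0.25, 0.25, 0.25]

That is the entirety of the model as I understand it; everything else is scaffolding around repeated calls to it.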

dpark an hour ago | parent [-]

> Okay, what do you think language models are doing when they're not producing token probability distributions? What processes do you think are going on when the function which predicts a token isn't running?

What are you doing when you are not outputting tokens? You have a thought, evaluate it, refine it, repeat.

You’re not wrong that the basic building block is just “next token prediction”, but clearly the emergent behaviors exceed our intuition about what this process can achieve. We’re seeing novel proofs come out of these models. Will this lead to AGI? That’s still TBD.

> I genuinely believe that a language model is, in essence, a function which takes in a sequence of tokens and produces a token probability distribution as an output. If this is incorrect, please, correct me.

The pejorative is that you imply this is a shallow and unthinking process. As I said earlier, you are literally a token generator on HN. You read someone’s comment, do some kind of processing, and output some tokens of your own.

mort96 an hour ago | parent [-]

> What are you doing when you are not outputting tokens? You have a thought, evaluate it, refine it, repeat.

I mean, I do think sometimes, even when I'm not typing?

> Will this lead to AGI? That’s still TBD.

This is literally what I have been saying this whole time.

Since we agree, I will consider this conversation concluded.