mort96 3 hours ago

> If what you are saying is true, then LLMs wouldn't be able to handle out-of-distribution math problems without resorting to tool use. Yet they can. When you ask a current-generation model to multiply some 8-digit numbers, and forbid it from using tools or writing a script, it will almost certainly give you the right answer. That includes local models that can't possibly cheat. LLMs are stochastic, but they are not parrots.

Okay, what do you think language models are doing when they're not producing token probability distributions? What processes do you think are going on when the function which predicts a token isn't running?

> At the risk of sounding like an LLM myself, whatever process makes this possible is not simply next-token prediction in the pejorative sense you're applying to it.

I don't know what pejorative sense you're implying here. I am, to the best of my ability, describing how the language model works. I genuinely believe that a language model is, in essence, a function which takes in a sequence of tokens and produces a token probability distribution as an output. If this is incorrect, please, correct me.
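For what it's worth, the "function" view being described here can be sketched in a few lines. This is a toy stand-in, not a real LLM: the hard-coded bigram table below is an invented placeholder for the learned network, and the names (`next_token_distribution`, `generate`) are illustrative, not from any library. The point it shows is just the shape of the interface: a function from a token sequence to a probability distribution, applied in a loop.

```python
import random

VOCAB = ["<eos>", "the", "cat", "sat"]

def next_token_distribution(tokens):
    """Toy stand-in for an LLM: token sequence -> probability
    distribution over the vocabulary. A real model computes this
    with a learned network; here it's a fixed bigram table."""
    last = tokens[-1] if tokens else "<eos>"
    table = {
        "<eos>": [0.0, 0.8, 0.1, 0.1],
        "the":   [0.0, 0.0, 0.7, 0.3],
        "cat":   [0.1, 0.0, 0.0, 0.9],
        "sat":   [1.0, 0.0, 0.0, 0.0],
    }
    return table[last]

def generate(prompt, max_new_tokens=10, seed=0):
    """Autoregressive decoding: repeatedly sample the next token
    from the distribution and append it to the sequence."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)
        tok = rng.choices(VOCAB, weights=probs)[0]
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))
```

Both sides of the thread arguably agree on this interface; the disagreement is over what it implies about the process behind it.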

dpark an hour ago | parent [-]

> Okay, what do you think language models are doing when they're not producing token probability distributions? What processes do you think are going on when the function which predicts a token isn't running?

What are you doing when you are not outputting tokens? You have a thought, evaluate it, refine it, repeat.

You’re not wrong that the basic building block is just “next token prediction”, but clearly the emergent behaviors exceed our intuition about what this process can achieve. We’re seeing novel proofs come out of these. Will this lead to AGI? That’s still TBD.

> I genuinely believe that a language model is, in essence, a function which takes in a sequence of tokens and produces a token probability distribution as an output. If this is incorrect, please, correct me.

The pejorative is that you imply this is a shallow and unthinking process. As I said earlier, you are literally a token generator on HN. You read someone’s comment, do some kind of processing, and output some tokens of your own.

mort96 an hour ago | parent [-]

> What are you doing when you are not outputting tokens? You have a thought, evaluate it, refine it, repeat.

I mean, I do think sometimes, even when I'm not typing?

> Will this lead to AGI? That’s still TBD.

This is literally what I have been saying this whole time.

Since we agree, I will consider this conversation concluded.