ainiriand 9 hours ago

Aren't LLMs supposed to just find the most probable next word, as many people here have touted? How can this be explained under that premise? Is this way of problem solving 'thinking'?

throw310822 7 hours ago | parent | next [-]

> just find the most probable word that follows next

Well, if in all situations you can predict which word Einstein would probably say next, then I think you're in a good spot.

This "most probable" stuff is just absurd handwaving. Every prompt of even a few words is unique, there simply is no trivially "most probable" continuation. Probable given what? What these machines learn to do is predicting what intelligence would do, which is the same as being intelligent.

qsera 7 hours ago | parent [-]

>Probable given what?

The training data.

>predicting what intelligence would do

No, it just predicts what the next word would be if an intelligent entity translated its thoughts into words, because it is trained on text written by intelligent entities.

If it were trained on text written by someone who loves to rhyme, you would get all rhyming responses.

It imitates the behavior -- in text -- of whatever entity generated the training data. Here the training data was made by intelligent humans, so we get an imitation of the same.

It is a clever party trick that works often enough.

throw310822 7 hours ago | parent | next [-]

> The training data

If the prompt is unique, it is not in the training data. True for basically every prompt. So how is this probability calculated?

cbovis 6 hours ago | parent | next [-]

The prompt is unique but the tokens aren't.

Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question." because the prompt doesn't correlate to the training data.

hmmmmmmmmmmmmmm 6 hours ago | parent [-]

...? what is the response supposed to be here?

qsera 7 hours ago | parent | prev | next [-]

Just using a scaled up and cleverly tweaked version of linear regression analysis...

red75prime an hour ago | parent [-]

That is, the probability distribution that the network should learn is defined by which probability distribution the network has learned. Brilliant!

hmmmmmmmmmmmmmm 6 hours ago | parent | prev [-]

Hamiltonian paths and previous work by Donald Knuth is more than likely in the training data.

red75prime an hour ago | parent [-]

The specific sequence of tokens that comprises Knuth's problem, together with an answer to it, is not in the training data. A naive probability distribution based on counting token sequences present in the training data would assign it zero probability. The trained network represents an extremely non-naive approach to estimating the ground-truth distribution (the distribution corresponding to what a human brain might have produced).
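A toy sketch of what "naive counting" means here, with a made-up miniature corpus. Any continuation not seen verbatim gets probability exactly zero, which is precisely why a useful model has to generalize rather than count:

```python
from collections import Counter

# Illustrative corpus and function names; not how real LLMs estimate anything.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of (prev, next) pairs
unigrams = Counter(corpus[:-1])              # counts of each "prev" word

def naive_prob(prev: str, nxt: str) -> float:
    """P(next | prev) by literal counting over the corpus."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, nxt)] / unigrams[prev]

print(naive_prob("the", "cat"))   # seen pair: nonzero (0.25 here)
print(naive_prob("the", "moon"))  # unseen pair: exactly 0.0
```

The zero for an unseen pair is the point: a pure lookup-table estimator can never say anything about a novel prompt.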

empath75 5 hours ago | parent | prev [-]

It is impossible to accurately imitate the action of intelligent beings without being intelligent. To believe otherwise is to believe that intelligence is a vacuous property.

slopinthebag an hour ago | parent | next [-]

An unintelligent device can accurately imitate the action of intelligent beings within a given scope, in the same way an actor can accurately imitate the action of a fictional character in a given scope (the stage or camera) without actually being that character.

If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people talk about with LLMs given their obvious limitations.

qsera 4 hours ago | parent | prev [-]

>It is impossible to accurately imitate the action of intelligent beings without being intelligent.

Wait what? So a robot who is accurately copying the actions of an intelligent human, is intelligent?

UltraSane an hour ago | parent | next [-]

How can you distinguish intelligence from a sufficiently accurate imitation of intelligence?

slopinthebag an hour ago | parent [-]

By "sufficiently accurate" do you mean identical? Because if so, it's not an imitation of intelligence at all, and the question is thus nonsensical.

UltraSane 27 minutes ago | parent [-]

"it's not an imitation of intelligence at all"

But that is the key insight, how can you tell when an imitation of intelligence becomes the real thing?

empath75 3 hours ago | parent | prev [-]

That was probably phrased poorly. If a robot can independently accurately do what an intelligent person would do when placed in a novel situation, then yes, I would say it is intelligent.

If it's just basically being a puppet, then no. You tell me what Claude Code is more like: a puppet, or a person?

dilap 8 hours ago | parent | prev | next [-]

That description is really only fair for base models†. Something like Opus 4.6 has all kinds of other training on top of that which teach it behaviors beyond "predict most probable token," like problem-solving and being a good chatbot.

(†And even then is kind of overly-dismissive and underspecified. The "most probable word" is defined over some training data set. So imagine if you train on e.g. mathematicians solving problems... To do a good job at predicting [w/o overfitting] your model will have to in fact get good at thinking like a mathematician. In general "to be able to predict what is likely to happen next" is probably one pretty good definition of intelligence.)

gpm 8 hours ago | parent | next [-]

I'd disagree: the other training on top doesn't alter the fundamental nature of the model, which is predicting the probabilities of the next token (followed by a sampling step that can roughly be described as picking the most probable one).

It just changes the probability distribution that it is approximating.

To the extent that thinking is making a series of deductions from prior facts, it seems to me that thinking can be reduced to "pick the next most probable token from the correct probability distribution"...

dilap 6 hours ago | parent | next [-]

The fundamental nature of the model is that it consumes tokens as input and produces token probabilities as output, but there's nothing inherently "predictive" about it -- that's just perspective hangover from the historical development of how LLMs were trained. It is, fundamentally, I think, a general-purpose thinking machine, operating over the inputs and outputs of tokens.

(With this perspective, I can feel my own brain subtly offering up a panoply of possible responses in a similar way. I can even turn up the temperature on my own brain, making it more likely to decide to say the less-obvious words in response, by having a drink or two.)

(Similarly, mimicry is in humans too a very good learning technique to get started -- kids learning to speak are little parrots, artists just starting out will often copy existing works, etc. Before going on to develop further into their own style.)

vidarh 7 hours ago | parent | prev [-]

Put a loop around an LLM and it can trivially be made Turing complete, so it boils down to whether thinking requires exceeding the Turing computable, and we have no evidence to suggest that is even possible.

gpm 7 hours ago | parent [-]

What are you doing in your loop?

As typically deployed [1], LLMs are not Turing complete. They're closer to linear bounded automata, but because transformers have a strict maximum input size they're actually a subset of the weaker class of deterministic finite automata [2]. These aren't like Python programs or something that can work on as much memory as you supply them; their architecture works on a fixed maximum amount of memory.

I'm not particularly convinced turing complete is the relevant property though. I'm rather convinced that I'm not turing complete either... my head is only so big after all.

[1] i.e. in a loop that appends output tokens to the input and has some form of sliding context window (perhaps with some inserted instructions to "compact" and then sliding the context window right to after those instructions once the LLM emits some special "done compacting" tokens).

[2] Common sampling procedures make them mildly non-deterministic, but I don't believe they do so in a way that changes the theoretical class of these machines from DFAs.

vidarh 6 hours ago | parent | next [-]

Context effectively provides an IO port, and so all the loop needs to do is simulate the tape head and provide a single token of state.

You can remain unconvinced that Turing completeness is relevant all you want: we don't know of any more expansive category of computable functions, so given that an LLM in the setup described is Turing complete, the fact that they aren't typically deployed that way is irrelevant.

They trivially can be, and that is enough to make the shallow dismissal of pointing out they're "just" predicting the next token meaningless.

roywiggins 6 hours ago | parent | prev | next [-]

Turing Machines don't need access to the entire tape all at once, it's sufficient for it to see one cell at a time. You could certainly equip an LLM with a "read cell", "write cell", and "move left/right" tool and now you have a Turing machine. It doesn't need to keep any of its previous writes or reads in context. A sliding context window is more than capacious enough for this.
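The read/write/move construction above can be made concrete. In the sketch below a plain lookup table stands in for the LLM-as-transition-function; the point is that the "model" only ever sees one cell at a time through its tools, yet the loop plus unbounded tape gives full Turing-machine power:

```python
from collections import defaultdict

def run_tm(transition, tape, state="start", head=0, max_steps=10_000):
    """Run a Turing machine whose transition function is a black box.

    The transition box (an LLM in the argument; a dict here) maps
    (state, symbol) -> (symbol_to_write, move L/R, next_state).
    """
    tape = defaultdict(lambda: "_", enumerate(tape))  # unbounded blank tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]                     # "read cell" tool
        write, move, state = transition[(state, symbol)]
        tape[head] = write                      # "write cell" tool
        head += 1 if move == "R" else -1        # "move left/right" tool
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1011"))  # "0100_"
```

Nothing about the transition box needs to remember earlier cells; all persistent memory lives on the tape, which is the parent comment's point about the sliding context window being sufficient.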

gpm 4 hours ago | parent [-]

You're right of course, but at the point where you're saying "well we can make a turing machine with the LLM as the transition function by defining some tool calls for the LLM to interact with the tape" it feels like a stretch to call the LLM itself turing complete.

Also, people definitely talk about them as "thinking" in contexts where they haven't put a harness capable of this around them. And in the common contexts where people do put a harness theoretically capable of this around the LLM (e.g. giving the LLM access to bash), the LLM basically never uses that theoretical capability as the extra memory it would need to actually emulate a Turing machine.

And meanwhile I can use external memory myself in a similar way (e.g. writing things down), but I think I'm perfectly capable of thinking without doing so.

So I persist in my stance that turing complete is not the relevant property, and isn't really there.

roywiggins 2 hours ago | parent [-]

Yeah, humans and LLMs and a TM transition function are all Turing complete in the same way, but it's also basically a useless fact. You could possibly train a sufficiently motivated rat to compute a TM transition function.

empath75 5 hours ago | parent | prev [-]

No physically realizable machine is technically turing complete.

But it is trivially possible to give systems-including-LLMs external storage that is accessible on demand.

ericd 8 hours ago | parent | prev [-]

I think it's pretty likely that "intelligence" is emergent behavior that comes when you predict what comes next in physical reality well enough, at varying timescales. Your brain has to build all sorts of world model abstractions to do that over any significant timescale. Big LLMs have to build internal world models, too, to do well at their task.

pvillano 32 minutes ago | parent | prev | next [-]

Does water flowing through a maze solve it by 'thinking'? No. The rules of physics eventually result in the water flowing out the exit. Water also hits every dead end along the way.

The power of LLMs is that by only selecting sequences of words that fit a statistical model, they avoid a lot of dead ends. [1]

I would not call that, by itself, thinking. However, if you start with an extrapolation engine and add the ability to try multiple times and build on previous results, you get something that's kind of like thinking.

[1]: Like, a lot of dead ends. There are an unfathomable number of dead ends in generating 500 characters of code, and it is a miracle of technology that Claude only hit 30.

tux3 8 hours ago | parent | prev | next [-]

>Are not LLMs supposed to just find the most probable word that follows next like many people here have touted?

The base models are trained to do this. If a web page contains a problem, and then the words "Answer: ", it is statistically very likely that what follows on that web page is an answer. If the base model wants to be good at predicting text, at some point learning the answers to common questions becomes a good strategy, so that it can complete text that contains them.

NN training tries to push models to generalize instead of memorizing the training set, so this creates an incentive for the model to learn a computation pattern that can answer many questions, instead of just memorizing. Whether they actually generalize in practice... it depends. Sometimes you still get output that was clearly pulled verbatim from the training set.

But that's only base models. The actual production LLMs you chat with don't predict the most probable word according to the raw statistical distribution. They output the words that RLHF has rewarded them to output, which includes acting as an assistant that answers questions instead of just predicting text. RLHF is also the reason there are so many AI SIGNS [1] like "you're absolutely right" and way more use of the word "delve" than is common in western English.

[1]: https://en.wikipedia.org/wiki/WP:AISIGNS

sega_sai 7 hours ago | parent | prev | next [-]

In some sense that is still correct, i.e. the words are taken from some probability distribution conditional on the previous words. But the key point is that this probability distribution is not just some internet-wide average of word probabilities. In the end, this probability distribution is really the whole point of intelligence, and I think the LLMs are learning it.

adamtaylor_13 7 hours ago | parent | prev | next [-]

That's the way many people reduce it, and mathematically, I think that's true. I think what we fail to realize is just how far that will actually take you.

"just the most probable word" is a pretty powerful mechanism when you have all of human knowledge at your fingertips.

I say that people "reduce it" that way because it neatly packs in the assumption that general intelligence is something other than next-token prediction. I'm not saying we've arrived at AGI; in fact, I do not believe we have. But it feels like people who use that framing are snarkily writing off something that they themselves do not fully comprehend, behind the guise of being "technically correct."

I'm not saying all people do this. But I've noticed many do.

IgorPartola 9 hours ago | parent | prev | next [-]

In some cases solving a problem is about restating the problem in a way that opens up a new path forward. “Why do planets move around the sun?” vs “What kind of force exists in the world that makes planets tethered to the sun with no visible leash?” (Obviously very simplified but I hope you can see what I am saying.) Given that a human is there to ask the right questions it isn’t just an LLM.

Further, some solutions are like running a maze. If you know all the wrong turns/next words to say and can just brute force the right ones you might find a solution like a mouse running through the maze not seeing the whole picture.

Whether this is thinking is more philosophical. To me this demonstrates more that we are closer to bio computers than an LLM is to having some sort of divine soul.

ainiriand 9 hours ago | parent [-]

Thanks for your input. The way I saw this, and how it looks like Knuth interpreted it, is that Claude took some reasoning steps independently. Some internal decisions in the model made it try different things, finally succeeding.

qsera 8 hours ago | parent | prev | next [-]

Yes, that is exactly what they do.

But that does not mean that the results cannot be dramatic. Just like stacking pixels can result in a beautiful image.

vjerancrnjak 6 hours ago | parent | prev | next [-]

No. There is good signal in IMO gold medal performance.

These models actually learn distributed representations of nontrivial search algorithms.

A whole field of theorem proving, after decades of refinement, couldn't even win a medal, yet 8B-parameter models are doing it very well.

The attention mechanism, a brute-force quadratic approach, combined with gradient descent is actually discovering very efficient distributed representations of algorithms. I don't think they can even be extracted and made into an imperative program.

kaiokendev 4 hours ago | parent | prev | next [-]

Given some intelligent system, an AI that perfectly reproduces any sequence that system could produce must encode a superset of that intelligence's patterns.

lijok an hour ago | parent | prev | next [-]

To get an answer to that you would first have to define 'thinking'

crocowhile 8 hours ago | parent | prev | next [-]

Those people still exist? I only know one guy who is still fighting those windmills

qsera 8 hours ago | parent | next [-]

Yes, I am one.

ezst 6 hours ago | parent | prev [-]

[flagged]

wrsh07 8 hours ago | parent | prev | next [-]

Imagine training a chess bot to predict a valid sequence of moves or valid game using the standard algebraic notation for chess

Great! It will now correctly structure chess games, but we've created no incentive for it to create a game where white wins or to make the next move be "good"

Ok, so now you change the objective. Now let's say "we don't just want valid games, we want you to predict the next move that will help that color win"

And we train towards that objective and it starts picking better moves (note: the moves are still valid)

You might imagine more sophisticated ways to optimize picking good moves. You continue adjusting the objective function, you might train a pool of models all based off of the initial model and each of them gets a slightly different curriculum and then you have a tournament and pick the winningest model. Great!

Now you might have a skilled chess-playing-model.

It is no longer correct to say it just finds a valid chess game, because the objective function changed several times throughout this process.

This is exactly how you should think about LLMs except the ways the objective function has changed are significantly significantly more complicated than for our chess bot.

So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first two generations of models, and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining does instill other objectives; almost like swapping our first step, "predict valid chess moves," with "predict Stockfish outputs").
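The objective-swapping idea can be sketched with a toy example. The candidate moves and evaluation scores below are entirely made up; the point is only that the same candidate pool gets re-ranked as the objective changes:

```python
# Hypothetical candidate moves with made-up validity flags and eval scores.
candidates = [
    {"move": "e2e4", "valid": True,  "eval": 0.3},
    {"move": "e2e5", "valid": False, "eval": 0.9},  # illegal two-square jump
    {"move": "d2d4", "valid": True,  "eval": 0.4},
]

def objective_v1(moves):
    """Stage 1 objective: any valid move is equally acceptable."""
    return [m for m in moves if m["valid"]]

def objective_v2(moves):
    """Stage 2 objective: among valid moves, prefer the highest evaluation."""
    return max(objective_v1(moves), key=lambda m: m["eval"])

print(objective_v2(candidates)["move"])  # "d2d4": valid AND best-scoring
```

Note that the stage-2 objective still only ever selects from the valid moves; later objectives refine earlier ones rather than replacing them, which mirrors how RL stages sit on top of pretraining.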

noslenwerdna 6 hours ago | parent | prev | next [-]

I find this kind of reduction silly.

All your brain is doing is bouncing atoms off each other, with some occasionally sticking together, how can it be really thinking?

See how silly it sounds?

esafak 8 hours ago | parent | prev | next [-]

Are you feigning ignorance? The best way to answer a question, like completing a sentence, is through reasoning; an emergent behavior in complex models.

8 hours ago | parent | prev | next [-]
[deleted]
adampunk 7 hours ago | parent | prev [-]

Thinking is a big word that sweeps up a lot of different human behavior, so I don't know if it's right to jump to that; HOWEVER, explanations of LLMs that depend heavily on next-token prediction are defunct. They stopped being fundamentally accurate with the rise of massive reinforcement learning, and with 'reasoning' models the analogy falls apart when you try to do work with it.

Be on the lookout for folks who tell you these machines are limited because they are "just predicting the next word." They may not know what they're talking about.