ozgung 5 days ago

This is not about _Large_ Language Models, though. This explains the math for word vectors and token embeddings. I see this is the source of confusion for many people. They think LLMs just do this to statistically predict the next word. That was pre-2020s. They ignore the 1.8+ trillion-parameter Transformer network. Embeddings are just the input of that giant machine. We don't know what is going on exactly in those trillions of parameters.
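For concreteness, a minimal PyTorch-style sketch of where embeddings sit in the pipeline (toy sizes and made-up token ids, not GPT's real ones):

    import torch
    import torch.nn as nn

    # Embeddings are just the lookup table that feeds the Transformer stack.
    vocab_size, d_model = 50_000, 768
    embedding = nn.Embedding(vocab_size, d_model)   # token id -> vector
    transformer = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=12, batch_first=True),
        num_layers=12,
    )

    token_ids = torch.tensor([[101, 2009, 2003]])   # some tokenized input (made-up ids)
    x = embedding(token_ids)                        # shape (1, 3, 768): the "math" in the post
    hidden = transformer(x)                         # the giant machine does everything else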

ants_everywhere 5 days ago | parent | next [-]

But surely you need this math to start understanding LLMs. It's just not the math you need to finish understanding them.

HarHarVeryFunny 5 days ago | parent | next [-]

It depends on what level of understanding, and who you are talking about. For the 99% of people outside of software development or machine learning, it is totally irrelevant, as are any details of the Transformer architecture or the mechanism by which a trained Transformer operates.

For the man in the street, inclined to view "AI" as some kind of artificial brain or sentient thing, the best explanation is that basically it's just matching inputs to training samples and regurgitating continuations. Not totally accurate of course, but for that audience at least it gives a good idea and is something they can understand, and perhaps gives them some insight into what it is, how it works/fails, and that it is NOT some scary sentient computer thingy.

For anyone in the remaining 1% (or much less - people who actually understand ANNs and machine learning), learning about the Transformer architecture and how a trained Transformer works (induction heads etc) is what they need in order to understand what a (Transformer-based, vs LSTM-based) LLM is and how it works.

Knowing about the "math" of Transformers/ANNs is only relevant to people who are actually implementing them from the ground up, not even to those who might just want to build one using PyTorch or some other framework/library where the math has already been done for you.

Finally, embeddings aren't about math - they are about representation, which is certainly important to understanding how Transformers and other ANNs work, but still a different topic.

* The US population of ~300M has ~1M software developers, of which a large fraction is going to be doing things like web development, with only a marginal advantage over someone smart outside of development in terms of learning how ANNs etc. work.

gpjt 5 days ago | parent | next [-]

Post author here. I agree 100%! The post is the basic maths for people digging into how LLMs work under the hood -- I wrote a separate one for non-techies who just want to know what they are, at https://www.gilesthomas.com/2025/08/what-ai-chatbots-are-doi...

ants_everywhere 5 days ago | parent | prev [-]

I agree that most people don't need to understand the mathematics or design of the transformer architecture, but that isn't a good description of what LLMs do from a technical perspective. Someone with that mental model would be worse off than someone who had no mental model at all and just used it as a black box.

HarHarVeryFunny 5 days ago | parent [-]

I disagree - I just had my non-technical sister staying with me, who said she was creeped out by "AI" and didn't like that it heard her in the background while her son was talking to Gemini.

An LLM is, at the end of the day, a next-word predictor, trying to predict according to training samples. We all understand that it's the depth/sophistication of context pattern matching that makes "stochastic parrot" an inadequate way to describe an LLM, but conceptually it is still more right than wrong, and it is the base level of understanding you need before beginning to understand why it is inadequate.
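In code terms, the "next-word predictor" view boils down to something like this sketch, with a hypothetical model(context) call standing in for the whole network:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(50_000)              # stand-in for model(context): one score per word
    probs = F.softmax(logits, dim=-1)         # probability distribution over the next token
    next_token = torch.multinomial(probs, 1)  # sample the next word from that distribution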

I think it's better for a non-technical person to understand "AI" as a stochastic parrot than to have zero understanding and think of it as a black box, or sentient computer, especially if that makes them afraid of it.

bonoboTP 5 days ago | parent [-]

She's right to be creeped out by the normalization of cloud-based processing of her audio and the increasing surveillance infrastructure. No AI tech understanding needed. Sometimes being more ignorant of the details can allow people to see the big picture better.

nickpsecurity 5 days ago | parent [-]

This 100%. The surveillance industry tries to normalize stalking people to exploit them. It's creepy and evil, not normal.

HSO 5 days ago | parent | prev [-]

"necessary but not sufficient"

ants_everywhere 5 days ago | parent [-]

yes exactly :)

cranx 5 days ago | parent | prev | next [-]

But we do. A series of mathematical functions are applied to predict the next tokens. It’s not magic although it seems like it is. People are acting like it’s the dark ages and Merlin made a rabbit disappear in a hat.

ekunazanu 5 days ago | parent | next [-]

Depends on your definition of knowing. Sure, we know it is predicting next tokens, but do we understand why they output the things they do? I am not well versed in LLMs, but I assume even for smaller models interpretability is a big challenge.

chongli 5 days ago | parent | next [-]

The answer is simple: the weights and biases make up a mathematical function which has been specifically built to approximate the training set. The methods of building such a function are very old and well known (from calculus).
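A minimal sketch of that recipe, assuming a toy one-layer model and a made-up training set (the calculus is in the gradient step):

    import torch

    # Fit a function to training samples by gradient descent -- the same basic
    # recipe whether the model has nine parameters or a trillion.
    X, y = torch.randn(100, 8), torch.randn(100, 1)   # stand-in training set
    W = torch.zeros(8, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)

    for _ in range(1000):
        loss = ((X @ W + b - y) ** 2).mean()          # how far we are from the training set
        loss.backward()                               # calculus: gradient of loss w.r.t. weights
        with torch.no_grad():
            W -= 0.01 * W.grad
            b -= 0.01 * b.grad
            W.grad.zero_()
            b.grad.zero_()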

There's no magic here. Most of people's awestruck reactions are due to our brain's own pattern recognition abilities and our association of language use with intelligence. But there's really no intelligence here at all, just like the "face on Mars" is just a random feature of a desert planet's landscape, not an intelligent life form.

lazide 5 days ago | parent | prev [-]

For any given set of model weights and inputs? Yes, we definitely do understand them.

Do we understand the emergent properties of almost-intelligence they appear to present, and what that means about them and us, etc. etc.?

No.

jvanderbot 5 days ago | parent [-]

Right. The machine works as designed and it's all assembly instructions on gates. The values in the registers change but not the instructions.

And it happens to do something weirdly useful to our own minds based on the values in the registers.

umanwizard 5 days ago | parent | prev [-]

Doesn’t this apply to any process (including human brains) that outputs sequences of words? There is some statistical distribution describing what word will come next.

clickety_clack 5 days ago | parent | prev | next [-]

That is what they do, though. It might reach levels of accuracy we can barely believe, but it is still a statistical process that predicts the next tokens.

ozgung 5 days ago | parent [-]

Not necessarily. They can generate letters, tokens, or words in any order. They can even write them all at once, as in a diffusion model. Next-token generation (auto-regression) is just a design choice of GPT, mostly for practical reasons. It fits the task at hand naturally (we humans also generate words in sequential order). Also, they have to train GPT in a self-supervised manner since we don’t have labeled internet-scale data. Auto-regression solves that problem as well.
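A rough sketch of that design choice, assuming a generic model(ids) that returns next-token logits (hypothetical helper, greedy decoding for brevity):

    import torch

    def generate(model, token_ids, n_new):
        # Auto-regression: each new token is predicted from everything generated so far.
        # A diffusion-style model would instead refine all positions in parallel.
        for _ in range(n_new):
            logits = model(token_ids)[:, -1, :]                 # scores for the next position only
            next_id = torch.argmax(logits, dim=-1, keepdim=True)
            token_ids = torch.cat([token_ids, next_id], dim=-1)
        return token_ids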

The distinction I want to emphasize is that they don’t just predict words statistically. They model the world, understand different concepts and their relationships, can think on them, can plan and act on the plan, can reason up to a point, in order to generate the next token. They learn all of this via that training scheme. They don’t learn just the frequency of word relationships, unlike the old algorithms. Trillions of parameters do much more than that.

griffzhowl 5 days ago | parent | next [-]

> The distinction I want to emphasize is that they don't just predict words statistically. They model the world, understand different concepts and their relationships, can think on them, can plan and act on the plan, can reason up to a point, in order to generate the next token.

This sounds way over-blown to me. What we know is that LLMs generate sequences of tokens, and they do this by clever ways of processing the textual output of millions of humans.

You say that, in addition to this, LLMs model the world, understand, plan, think, etc.

I think it can look like that, because LLMs are averaging the behaviours of humans who are actually modelling, understanding, thinking, etc.

Why do you think that this behaviour is more than simply averaging the outputs of millions of humans who understand, think, plan, etc.?

ozgung 5 days ago | parent [-]

> Why do you think that this behaviour is more than simply averaging the outputs of millions of humans who understand, think, plan, etc.?

This is why it’s important to make the distinction that Machine Learning is a different field from Statistics. Machine Learning models do not “average” anything. They learn to generalize. Deep Learning models can handle edge cases and unseen inputs very well.

In addition to that, OpenAI etc. probably use a specific post-training step (like RLHF or better) for planning, reasoning, following instructions step by step etc. This additional step doesn’t depend on the outputs of millions of humans.

HarHarVeryFunny 5 days ago | parent | prev | next [-]

How can an LLM model the world, in any meaningful way, when it has no experience of the world?

An LLM is a language model, not a world model. It has never once had the opportunity to interact with the real world and see how it responds - to emit some sequence of words (the only type of action it is capable of generating), predict what will happen as a result, and see if it was correct.

During training the LLM will presumably have been exposed to some second-hand accounts (as well as fictional stories) of how the world works, mixed up with sections of Stack Overflow code and Reddit rantings, but even those occasional accounts of real-world interactions (context, action + result) are only at best teaching it about the context that someone else, at that point in their life, saw as relevant to mention as causal/relevant to the action outcome. The LLM isn't even privy to the world model of the raconteur (let alone the actual complete real-world context in which the action was taken, or the detailed manner in which it was performed), so this is a massively impoverished source of second-hand experience from which to learn.

It would be like someone who had spent their whole life locked in a windowless room reading randomly ordered paragraphs from other people's diaries of daily experience (also randomly interspersed with chunks of fairy tales and Python code), without themselves ever having actually seen a tree or jumped in a lake, or ever having had the chance to test which parts of the mental model they had built, of what was being described, were actually correct or not, and how it aligned with the real outside world they had never laid eyes on.

When someone builds an AGI capable of continual learning, and sets it loose in the world to interact with it, then it'll be reasonable to say it has its own model of how the world works, but as far as pre-trained language models go, it seems closer to the mark to say that they are indeed just language models, modelling the world of words which is all they know, and the only kind of model for which they had access to feedback (next-word prediction errors) to build.

istjohn 5 days ago | parent [-]

We build mental models of things we have not personally experienced all the time. Such mental models lack the detail and vividness of that of someone with first-hand experience, but they are nonetheless useful. Indeed, a student of physics who has never touched a baseball may have a far more accurate and precise mental model of a curve ball than a major league pitcher.

HarHarVeryFunny 4 days ago | parent [-]

Sure, but the nature of the model can only reflect the inputs (incl. corrections) that it was built around. A theoretical model of the aerodynamics of a curve ball isn't going to make the physics prof an expert pitcher, and maybe not even able to throw a curve ball at all.

Given the widely different natures of a theoretical "book smart" model vs a hands-on model informed by the dynamics of the real world and how it responds to your own actions, it doesn't seem useful to call these the same thing.

For sure the LLM has, in effect, some sort of distributed statistical model of its training material, but this is not the same as the knowledge represented by someone/something that has hands-on world knowledge. You wouldn't train an autonomous car to drive by giving it an instruction manual and stories of people's near-miss experiences - you'd train it in a simulator (or better yet the real world), where it can learn a real world model - a model of the world you want it to know about and be effective in, not a WORD model of how drivers are likely to describe their encounters with black ice and deer on the road.

istjohn 4 days ago | parent [-]

You're moving the goal posts. OP wrote:

> The distinction I want to emphasize is that they don't just predict words statistically. They model the world, understand different concepts and their relationships, can think on them, can plan and act on the plan, can reason up to a point, in order to generate the next token.

You replied:

> How can an LLM model the world, in any meaningful way, when it has no experience of the world?

> An LLM is a language model, not a world model.

No one in this discussion has claimed that LLMs are effective general-purpose agents, able to throw a curve ball, or drive a vehicle. The claim is that they do model the world in a meaningful sense.

You may be able to make a case for that being false, but the assumption that direct experience is required to form a model of a certain domain is not an assumption we make of humans. Some domains, such as mathematics, can only be accessed through abstract reasoning, but it's clear that mathematicians form models of mathematical objects and domains that cannot be directly experienced.

I feel like you are arguing against a claim much stronger than what is being made. No one is arguing that LLMs understand the world in the same way humans do. But they do form models of the world.

jurgenaut23 5 days ago | parent | prev [-]

Can you provide sources for your claim that LLMs “model the world”?

ozgung 5 days ago | parent | next [-]

You are right that it is a bold claim but here is a relevant summary: https://en.wikipedia.org/wiki/Stochastic_parrot#Interpretabi...

I think "The Platonic Representation Hypothesis" is also related: https://phillipi.github.io/prh/

Unfortunately, large LLMs like ChatGPT and Claude are black boxes for researchers, who can't probe what is going on inside them.

lgas 5 days ago | parent | prev [-]

It seems somewhat obvious to me. Language models the world, and LLMs model language. If A models B and B models C then A models C, as well, no?

TurboTveit 5 days ago | parent [-]

Can you provide sources for your claim that language “models the world”?

measurablefunc 5 days ago | parent | prev | next [-]

It's exactly the same math. There is no mathematics in any neural network, regardless of its scale, that cannot be expressed with matrix multiplications and activation functions.
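A minimal numpy sketch of that claim - a layer is nothing but a matrix multiply followed by an elementwise nonlinearity (toy sizes, random weights):

    import numpy as np

    x = np.random.randn(1, 768)                                      # input vector, e.g. a token embedding
    W1, W2 = np.random.randn(768, 3072), np.random.randn(3072, 768)

    h = np.maximum(0, x @ W1)                                        # matrix multiplication + ReLU activation
    y = h @ W2                                                       # another matrix multiplication
    # Attention blocks and MLP blocks are stacks of exactly these two ingredients.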

libraryofbabel 5 days ago | parent | prev | next [-]

* You’re right that a lot of people take a cursory look at the math (or someone else’s digest of it) and their takeaway is “aha, LLMs are just stochastic parrots blindly predicting the next word. It’s all a trick!”

* So we find ourselves over and over again explaining that that might have been true once, but now there are (imperfect, messy, weird) models of large parts of the world inside that neural network.

* At the same time, the vector embedding math is still useful to learn if you want to get into LLMs. It’s just that the conclusions people draw from the architecture are often wrong.

baxtr 5 days ago | parent | prev [-]

Wait so you’re saying it’s not a high-dimensional matrix multiplication?

dmd 5 days ago | parent | next [-]

Everything is “just” ones and zeros, but saying that doesn’t help with understanding.

measurablefunc 5 days ago | parent [-]

If you know about Boolean algebra then it explains a lot more than you realize: https://boolean.dk.workers.dev/

tatjam 5 days ago | parent | prev [-]

Pretty much all problems can be reduced to some number of matrix multiplications ;)