BubbleRings 3 days ago

> …reused its embedding matrix as the weights for the linear layer that projects the context vectors from the last Transformers layer into vocab space to get the logits.

At first glance this claim sounds airtight, but it quietly collapses under its own techno-mythology. The so-called “reuse” of the embedding matrix assumes a fixed semantic congruence between representational space and output projection, an assumption that ignores well-known phase drift in post-transformer latent manifolds. In practice, the logits emerging from this setup tend to suffer from vector anisotropification and a mild but persistent case of vocab echoing, where probability mass sloshes toward high-frequency tokens regardless of contextual salience.

Just kidding, of course. The first paragraph above, from OP's article, makes about as much sense to me as the second one, which I (hopefully fittingly in y'all's view) had ChatGPT write. But I do want to express my appreciation for being able to "hang out in the back of the room" while you folks figure this stuff out. It is fascinating, I've learned a lot (even got a local LLM running on a NUC), and it's been a lot of fun. Thanks for letting me watch, I'll keep my mouth shut from now on ha!

tomrod 3 days ago | parent | next [-]

Disclaimer: working and occasionally researching in the space.

The first paragraph is clear linear algebra terminology. The second looked like deeper, subfield-specific jargon, and I was about to ask for a citation: the words are definitely real, but the claim sounded hyperspecific and unfamiliar.

I figure a person needs 12 to 18 months of linear algebra, enough to work through Horn and Johnson's "Matrix Analysis" or the more bespoke volumes from Jeffrey Humpherys, to get the math behind ML. Not necessarily to use AI/ML as a tech, which really can benefit from the grind towards commodification, but to be able to parse the technical side of about 90 to 95 percent of conference papers.

danielmarkbruce 3 days ago | parent | next [-]

One needs about 12 to 18 hours of linear algebra to work through the papers, not 12 to 18 months. The vast majority of stuff in AI/ML papers is just "we tried X and it worked!"

miki123211 3 days ago | parent | next [-]

You can understand 95+% of current LLM / neural network tech if you know what matrices are (on the "2d array" level, not the deeper lin alg intuition level), if you know how to multiply them, and if you have an intuitive understanding of why a matrix is a mapping between latent spaces and how a matrix can be treated as a list of vectors. Very basic matrix / tensor calculus comes in useful, but that's not really part of lin alg.
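A toy numpy illustration of those two views (all names and shapes made up):

    import numpy as np

    vocab_size, d_model, d_hidden = 10, 4, 8     # toy sizes

    # View 1: a matrix as a list of vectors -- each row of the
    # embedding matrix is one token's vector.
    E = np.random.randn(vocab_size, d_model)
    token_vec = E[3]                             # row lookup, shape (d_model,)

    # View 2: a matrix as a mapping between latent spaces -- multiplying
    # by W sends a d_model-dim vector to a d_hidden-dim one.
    W = np.random.randn(d_hidden, d_model)
    h = W @ token_vec                            # shape (d_hidden,)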

There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them).

devmor 3 days ago | parent | next [-]

I was about to respond with a similar comment. The majority of the underlying systems are the same and can be understood if you know a decent amount of vector math. That last 3-5% can get pretty mystical, though.

Honestly, where stuff gets the most confusing to me is when the authors of the newer generations of AI papers invent new terms for existing concepts, and then new terms for combining two of those concepts, then new terms for combining two of those combined concepts and removing one... etc.

Some of this redefinition is definitely useful, but it turns into word salad very quickly, and I don't often feel like teaching myself a new glossary just to understand a paper whose concepts I probably won't use.

buildbot 3 days ago | parent [-]

This happens so much! It's actually, imo, much more important to be able to let the math go and compare concepts rather than the exact algorithms. It's much more useful to have semantic intuition than concrete analysis.

Being really good at math does let you figure out whether two techniques are mathematically the same, but that's fairly rare (it happens, though!).

whimsicalism 3 days ago | parent | prev | next [-]

> There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them)

This stuff is part of modern optimizers. You can often view a lot of optimizers as doing something similar to what is called mirror/'spectral descent.'
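As a very rough toy sketch of what a "spectral"-style update does (my own illustration, not any particular optimizer's actual code): take the SVD of a weight's gradient, keep only its directions by setting the singular values to 1, and step along that.

    import numpy as np

    def spectral_style_step(W, grad, lr=0.01):
        # Toy "spectral descent"-style update on a 2D weight matrix:
        # U @ Vt is the gradient with every singular value replaced by 1,
        # i.e. only the gradient's directions are kept.
        U, _, Vt = np.linalg.svd(grad, full_matrices=False)
        return W - lr * (U @ Vt)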

tomrod 2 days ago | parent | prev [-]

Eigenvector/eigenvalue: a direction that a matrix only stretches (without rotating it), and the amount of that stretch.
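A quick numpy check of that picture (toy 2x2, just for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    vals, vecs = np.linalg.eig(A)
    v = vecs[:, 0]            # an eigenvector of A
    print(A @ v)              # same direction as v...
    print(vals[0] * v)        # ...just scaled by the eigenvalue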

cultofmetatron 3 days ago | parent | prev | next [-]

For anyone looking to get into it, Math Academy has a full zero-to-everything-you-need pathway that you can follow to mastery:

https://mathacademy.com/courses/mathematics-for-machine-lear...

DenisM a day ago | parent [-]

There's no mention of LLMs there?

cultofmetatron a day ago | parent [-]

If you want to use LLMs, just download one and play with it. If you want to understand LLMs well enough to push research forward, learn the underlying math.

gpjt 3 days ago | parent | prev [-]

OP here -- agreed! I tried to summarise (at least to my current level of knowledge) those 12-18 hours here: https://www.gilesthomas.com/2025/09/maths-for-llms

jhardy54 3 days ago | parent | prev [-]

> 12 to 18 months of linear algebra

Do you mean full-time study, or something else? I’ve been using inference endpoints but have recently been trying to go deeper and struggling, but I’m not sure where to start.

For example, when selecting an ASR model I was able to understand the various architectures through high-level descriptions and metaphors, but I’d like to have a deeper understanding/intuition instead of needing to outsource that to summaries and explainers from other people.

tomrod 2 days ago | parent [-]

I was projecting that as coursework, taken across 2 to 3 semesters.

You can gloss the basics pretty quickly from things like Khan Academy and other sources.

Knowing linear algebra doesn't guarantee understanding modern ML, but if you then go read seminal papers like "Attention Is All You Need", you have a baseline to dig deeper.

woadwarrior01 3 days ago | parent | prev | next [-]

It's just a long-winded way of saying "tied embeddings"[1]. IIRC, GPT-2, BERT, Gemma 2, Gemma 3, some of the smaller Qwen models and many other architectures use weight-tied input/output embeddings.

[1]: https://arxiv.org/abs/1608.05859
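In code, the tying is literally just sharing one weight tensor. A minimal PyTorch-style sketch (class and attribute names made up, not from any particular model):

    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab_size, d_model):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)   # (vocab, d_model)
            self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
            # Tied embeddings: the output projection reuses the embedding matrix.
            self.lm_head.weight = self.embed.weight

        def forward(self, hidden):
            # hidden: (batch, seq, d_model) from the last transformer block
            return self.lm_head(hidden)                      # logits over vocab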

jcims 3 days ago | parent | prev | next [-]

The turbo encabulator lives on.

empath75 3 days ago | parent | prev | next [-]

It's a 28-part series. If you start from the beginning, everything is explained in detail.

miki123211 3 days ago | parent | prev | next [-]

As somebody who understands how LLMs work pretty well, I can definitely feel your pain.

I started learning about neural networks when Whisper came out; at that point I literally knew nothing about how they worked. I started by reading the Whisper paper... which made about zero sense to me. I was wondering whether all of those fancy terms were truly necessary. Now, I can't even imagine how I'd describe similar concepts without them.

whimsicalism 3 days ago | parent | prev | next [-]

I consider it a bit rude to make people read AI output without flagging it immediately.

squigz 3 days ago | parent | prev | next [-]

I'm glad I'm not the only one who has a Turbo Encabulator moment when this stuff is posted.

QuadmasterXLII 2 days ago | parent | prev | next [-]

The second paragraph is highly derivative of the adversarial turbo encabulator, which Schmidhuber invented in the 90s. No citation, of course.

BubbleRings 2 days ago | parent [-]

Are you saying I should have attributed it, or ChatGPT should have? I suppose I would have, but my spurving bearings were rusty.

unethical_ban 3 days ago | parent | prev | next [-]

I was reading this thinking "Holy crap, this stuff sounds straight out of Norman Rockwell... wait, Rockwell Automation. Oh, it actually is"

ekropotin 3 days ago | parent | prev [-]

I have no idea what you’ve just said, so here is my upvote.