nvtop | 2 days ago
Despite the name, diffusion LMs have little to do with image diffusion and are much closer to BERT and good old masked language modeling. Recall how BERT is trained:

1. Take a full sentence ("the cat sat on the mat").
2. Replace 15% of the tokens with a [MASK] token ("the cat [MASK] on [MASK] mat").
3. Make the Transformer predict the tokens at the masked positions. It does this in parallel, in a single inference step.

Now, diffusion LMs take this idea further. BERT can recover 15% of masked tokens ("noise"), but why stop there? Let's train a model to recover texts with 30%, 50%, 90%, even 100% of the tokens masked.

Once you've trained that, to generate something from scratch you start by feeding the model all [MASK]s. It will produce mostly gibberish, but you can take some tokens (say, 10%) at random positions and declare them generated ("final"). Then you run another iteration of inference, this time with the input containing 90% masks and 10% "final" tokens. Again, you mark 10% of the new tokens as final. Continue, and in 10 steps you'll have generated the whole sequence. That's the core idea behind diffusion language models; a minimal sketch of the loop is below.

Of course, there are optimizations in the real world. If you need to generate really long text (over 200 tokens), it's better to split it into chunks and fully generate the first chunk in parallel before moving on to the next one. This semi-autoregressive generation is what Block Diffusion does.

You can also be smart about exactly which tokens you consider generated at each step, and what percentage. At earlier stages, when the sequence is mostly noise, you can take more; at the final stages you can do more iterations and take fewer tokens.

All in all, diffusion LMs are still iterative, but the number of steps is much lower than in autoregressive models. A nice thing is that you can choose how many steps you're going to take, trading quality for speed. In the extreme, you can even generate just the one leftmost masked token with a diffusion LM, effectively turning it into a traditional causal language model.
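To make the loop concrete, here is a minimal sketch in Python/PyTorch. This is not how Gemini Diffusion (or any real system) is implemented; the "model" is a stand-in that returns random logits, and MASK_ID, the 10-step schedule, and the random position picking are just the assumptions from the description above.

    import torch

    VOCAB_SIZE = 50_000   # hypothetical vocabulary size
    MASK_ID = 0           # assumed id of the [MASK] token
    SEQ_LEN = 32
    NUM_STEPS = 10        # fewer steps = faster generation, lower quality

    def dummy_model(tokens: torch.Tensor) -> torch.Tensor:
        """Stand-in for a trained masked-diffusion Transformer.
        Returns per-position logits of shape (seq_len, vocab_size)."""
        return torch.randn(tokens.shape[0], VOCAB_SIZE)

    def generate(model, seq_len: int = SEQ_LEN, num_steps: int = NUM_STEPS) -> torch.Tensor:
        tokens = torch.full((seq_len,), MASK_ID, dtype=torch.long)  # start from all [MASK]s
        final = torch.zeros(seq_len, dtype=torch.bool)              # positions frozen so far

        for step in range(num_steps):
            logits = model(tokens)              # one parallel pass over the whole sequence
            prediction = logits.argmax(dim=-1)  # best guess for every position

            # Freeze an even share of the remaining masked positions each step.
            # (Real schedulers are smarter: more tokens early, fewer late,
            # often picked by model confidence rather than at random.)
            remaining = (~final).nonzero(as_tuple=True)[0]
            k = max(1, remaining.numel() // (num_steps - step))
            chosen = remaining[torch.randperm(remaining.numel())[:k]]

            tokens[chosen] = prediction[chosen]  # accept these tokens as "final"
            final[chosen] = True
            tokens[~final] = MASK_ID             # everything else goes back to [MASK]

        return tokens

    print(generate(dummy_model))

Halving NUM_STEPS halves the number of forward passes, which is exactly the quality-for-speed trade-off mentioned above.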
yahoozoo | 2 days ago
Great explanation. I think I have seen that text diffusion models can "edit" as they run inference. In other words, a "final" token isn't necessarily final and could still change, until at some later iteration the model decides it truly is. How does that work?
oliwary | 2 days ago
Fascinating, and great explanation. What about insert and delete operations, though? Isn't there a risk of having too few tokens to properly finish the code in between the "final" tokens?
Workaccount2 | 2 days ago
Can you have a hybrid model that does both autoregression and diffusion? It doesn't seem like there is anything that would fundamentally prevent this. A model with diffusion CoT for rapid "thought" generation, and then autoregression for the answer at the output.
shawntan | a day ago
I'm curious how the speed is achieved if this is the technique used. I'd have expected this "masked language model" technique to be far slower, since the full vocab projection needs to be computed every iteration. I always thought the eventual technique would be some form of diffusion in continuous space, then decoding into the discrete tokens. Also, I'm guessing this is a "best guess" at how Gemini Diffusion is done?
victorbjorklund | 2 days ago
Thanks. Best explanation of text diffusion.
ctxc | 2 days ago
Thank you for the explanation!
moralestapia | 2 days ago
Whoa man, thanks. This is a great explanation. |