mattnewton 2 days ago
Tokens in a diffusion model typically get encoder-style attention: tokens earlier in the sentence can "see" tokens later in the sentence and attend to their values. Noise is removed iteratively from the entire buffer at once, over a handful of steps. Versus autoregressive models, which take one step per token and only attend to previous tokens.
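A minimal sketch of the attention-mask difference being described (the array names here are illustrative, not from any real library): a causal mask lets token i attend only to positions <= i, while the encoder-style mask used in a diffusion step lets every position attend to every other.

```python
import numpy as np

T = 5  # sequence length

# Autoregressive (decoder-style): token i may only attend to tokens 0..i,
# i.e. the lower triangle of the attention matrix is allowed.
causal_mask = np.tril(np.ones((T, T), dtype=bool))

# Diffusion (encoder-style): every token attends to every other token,
# so position 0 can "see" position 4 during each denoising step.
full_mask = np.ones((T, T), dtype=bool)

print(int(causal_mask.sum()))  # 15 allowed (query, key) pairs
print(int(full_mask.sum()))    # 25 allowed (query, key) pairs
```

The generation loops differ the same way: the autoregressive model runs T forward passes (one new token each), while the diffusion model runs a small fixed number of passes, each refining all T positions at once.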