pama | 5 days ago |
It is the other way around. If the data is causal and presented in causal order, it is impossible to beat the loss of a pure auto-regressive model, because the AR factorization already gives the correct probability distribution for the dataset. Language data is mostly causal (words follow in the context of the words spoken or written before them). Most of the additional information that diffusion models extract by extreme oversampling of the same data should also be available to AR models via fill-in-the-middle or order-reversal strategies, with significant compute savings during training.
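To make the fill-in-the-middle idea concrete, here is a minimal sketch of the kind of data transform meant (my own illustration; the sentinel strings and the split strategy are placeholders, not the exact recipe from any particular paper): the document is cut into prefix/middle/suffix, rearranged so the middle comes last, and the model still trains with ordinary next-token prediction.

    import random

    # Illustrative sentinel tokens (placeholders, not a specific model's vocabulary).
    PREFIX, MIDDLE, SUFFIX = "<FIM_PREFIX>", "<FIM_MIDDLE>", "<FIM_SUFFIX>"

    def fim_transform(tokens, rng=random):
        # Rearrange a document so an AR model learns to infill the middle.
        if len(tokens) < 3:
            return list(tokens)  # too short to split into prefix/middle/suffix
        i, j = sorted(rng.sample(range(1, len(tokens)), 2))
        prefix, middle, suffix = tokens[:i], tokens[i:j], tokens[j:]
        # Prefix and suffix are presented first; the middle is generated last,
        # still purely left to right, so the training objective stays next-token prediction.
        return [PREFIX, *prefix, SUFFIX, *suffix, MIDDLE, *middle]

    print(fim_transform("the cat sat on the mat".split()))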
cma | 4 days ago | parent |
I mean models like BERT, not diffusion.

> Language data is mostly causal (as words follow in the context of previous words when they are spoken/written).

But where it isn't, the old KV cache is frozen in place and has to be amended by whatever follows, whereas BERT-like models take the whole context into account at every position. I have definitely heard they reach lower loss for the same number of training tokens, but they are less efficient to compute, and running next-token prediction with them would be much more expensive.
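A toy way to see the difference being described (my own sketch in numpy, not anything from the thread): with a causal mask a token can only attend to earlier positions, so its cached K/V never changes when later context arrives; a BERT-style full mask lets every position attend everywhere, at the cost of recomputing everything and losing cheap left-to-right generation.

    import numpy as np

    def attention_mask(seq_len, causal):
        # 1 where query position i may attend to key position j, else 0.
        if causal:
            return np.tril(np.ones((seq_len, seq_len), dtype=int))  # lower-triangular
        return np.ones((seq_len, seq_len), dtype=int)                # full, BERT-style

    T = 4
    print("causal (AR) mask:")
    print(attention_mask(T, causal=True))
    print("bidirectional (BERT-like) mask:")
    print(attention_mask(T, causal=False))
    # In the causal mask, row i is zero for j > i: token i's representation
    # (and its cached K/V) is fixed no matter what comes later, so any later
    # disambiguation has to be carried by subsequent tokens instead.
    # With the full mask every token sees the whole sequence, but there is
    # no cheap left-to-right sampling from such a model.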