zozbot234 | 4 hours ago
> for a 1T model you'd need to stream something like 2TB of weights per forward pass

Isn't this missing the point of MoE models completely? MoE inference is sparse: you only read a small fraction of the weights per layer. There is still the problem that each individual expert layer is quite small (a few MiB each, give or take), but those reads are large enough for NVMe to serve efficiently.
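A rough back-of-the-envelope sketch of the point being made: with sparse top-k routing, the weights actually read per token are a small slice of the total. The expert count, top-k value, shared-parameter fraction, and quantization below are illustrative assumptions, not figures from the thread (only the 1T total is):

```python
# Hypothetical MoE sizing; only the 1T total comes from the comment above.
TOTAL_PARAMS = 1_000_000_000_000  # 1T parameters (from the thread)
SHARED_FRACTION = 0.05            # assumed dense share (attention, embeddings, routers)
NUM_EXPERTS = 256                 # assumed experts per MoE layer
TOP_K = 8                         # assumed experts activated per token

shared_params = TOTAL_PARAMS * SHARED_FRACTION
expert_params = TOTAL_PARAMS - shared_params
# Only TOP_K of NUM_EXPERTS experts are read per token, per layer.
active_params = shared_params + expert_params * TOP_K / NUM_EXPERTS

BYTES_PER_PARAM = 1  # assume 8-bit quantized weights
print(f"active params per token: {active_params / 1e9:.1f}B")
print(f"weight bytes read per token: {active_params * BYTES_PER_PARAM / 1e9:.1f} GB")
```

Under these assumptions a forward pass touches on the order of 80 GB of weights rather than the full 1 TB, which is the gap the comment is pointing at.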
visarga | 4 hours ago | parent
But across a sequence you still have to load most of them: different tokens route to different experts, so over many tokens nearly every expert gets read at least once.
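A quick simulation illustrates this counterpoint: even with sparse per-token routing, the fraction of experts touched across a sequence approaches 1. The expert count, top-k, and sequence length are assumed for illustration, and routing is modeled as uniform-random (real routers are more skewed, which would only soften, not eliminate, the effect):

```python
import random

NUM_EXPERTS = 256  # assumed experts in one MoE layer
TOP_K = 8          # assumed experts activated per token
SEQ_LEN = 512      # assumed sequence length

random.seed(0)
touched = set()
for _ in range(SEQ_LEN):
    # Model the router as picking TOP_K distinct experts uniformly at random.
    touched.update(random.sample(range(NUM_EXPERTS), TOP_K))

# Expected coverage under uniform routing: 1 - (1 - k/E)^T
analytic = 1 - (1 - TOP_K / NUM_EXPERTS) ** SEQ_LEN
print(f"experts touched: {len(touched)}/{NUM_EXPERTS}")
print(f"analytic expected coverage: {analytic:.6f}")
```

With these numbers the analytic coverage is effectively 1.0, so streaming from NVMe would still have to page in almost the whole expert set over the course of a sequence, just not all at once per token.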