connorbrinton 4 hours ago
Speculative decoding takes advantage of the fact that it's faster to validate that a big model would have produced a particular sequence of tokens than to generate that sequence from scratch, because validation can score all of the drafted positions in a single parallel pass. So the process is: generate a draft with the small model -> validate it with the big model in one parallel pass -> keep the longest prefix the big model agrees with, and fall back to the big model's own prediction at the point where validation fails.

More info:

* https://research.google/blog/looking-back-at-speculative-dec...
* https://pytorch.org/blog/hitchhikers-guide-speculative-decod...
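To make the generate/validate/accept loop concrete, here's a minimal sketch of the greedy variant. It uses toy stand-in "models" (plain Python callables over a small integer vocabulary) so the control flow runs without any ML dependencies; the names `toy_model`, `draft_model`, `target_model`, and `speculative_decode` are illustrative, not from any library, and real systems use real LMs plus a probabilistic accept/reject rule rather than exact greedy matching.

```python
# Minimal sketch of greedy speculative decoding with toy stand-in models.
# A "model" maps a token sequence to greedy next-token predictions for
# every position: prediction i is the token that should follow tokens[: i + 1].
from typing import Callable, List

Model = Callable[[List[int]], List[int]]

def toy_model(shift: int, vocab: int = 10) -> Model:
    """Deterministic toy model: next token = (last token + shift) % vocab."""
    def predict(tokens: List[int]) -> List[int]:
        return [(t + shift) % vocab for t in tokens]
    return predict

def speculative_decode(
    prompt: List[int],
    draft_model: Model,
    target_model: Model,
    num_new_tokens: int,
    draft_len: int = 4,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) < len(prompt) + num_new_tokens:
        # 1. Generate: the small draft model proposes draft_len tokens serially.
        draft: List[int] = []
        context = list(tokens)
        for _ in range(draft_len):
            next_tok = draft_model(context)[-1]
            draft.append(next_tok)
            context.append(next_tok)

        # 2. Validate: ONE pass of the big model over prompt + draft scores
        #    every drafted position at once (this is the parallelism win).
        target_preds = target_model(tokens + draft)

        # 3. Accept the longest prefix of the draft the big model agrees with.
        accepted = 0
        for i, tok in enumerate(draft):
            if target_preds[len(tokens) - 1 + i] == tok:
                accepted += 1
            else:
                break
        tokens.extend(draft[:accepted])

        # 4. At the first mismatch (or after a fully accepted draft), the big
        #    model's own prediction comes free from the same validation pass.
        tokens.append(target_preds[len(tokens) - 1])
    return tokens[: len(prompt) + num_new_tokens]

if __name__ == "__main__":
    small = toy_model(shift=3)  # cheap draft model
    big = toy_model(shift=3)    # here they agree, so most drafts are accepted
    print(speculative_decode([1, 2, 3], small, big, num_new_tokens=8))
```

Changing `small` to `toy_model(shift=2)` makes every draft get rejected, and you can see the fallback path: progress then comes one token per iteration from the big model's validation pass, which is the worst case the linked posts describe.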
sails 4 hours ago
See also speculative cascades, which is a nice read and furthered my understanding of how it all works: https://research.google/blog/speculative-cascades-a-hybrid-a...