jmward01 · 2 hours ago
I built guided window attention (literally predict the position of the window) a while ago and it works great. Why are we still stuck on forms of attention that look at the entire context in any meaningful way? Do humans work that way? Do I need a whole book to predict the next word? Who out there is working on really new, unique ways to deal with infinite history, other than me of course :)
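(The comment doesn't spell out how the window position is predicted, so the following is only a minimal sketch of the general idea, assuming a small learned head that picks a window center per query and a hard mask around it; the class name, shapes, and masking scheme are illustrative assumptions, not the commenter's actual implementation.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedWindowAttention(nn.Module):
    """Attend only inside a window whose position is predicted per query."""

    def __init__(self, dim, window=128):
        super().__init__()
        self.window = window
        self.qkv = nn.Linear(dim, 3 * dim)
        self.pos_head = nn.Linear(dim, 1)  # predicts where in the past to look

    def forward(self, x):                        # x: (batch, seq, dim)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Predicted window center for each query, as a position in [0, t-1].
        center = torch.sigmoid(self.pos_head(x)).squeeze(-1) * (t - 1)   # (b, t)
        pos = torch.arange(t, device=x.device)
        # Keep keys within `window` of the predicted center and not in the
        # future; always allow the query's own position so no row is empty.
        dist = (pos.view(1, 1, t) - center.unsqueeze(-1)).abs()          # (b, t, t)
        causal = pos.view(1, 1, t) <= pos.view(1, t, 1)
        mask = (dist <= self.window) & causal
        mask = mask | torch.eye(t, dtype=torch.bool, device=x.device)
        # For clarity this still builds the full (t x t) score matrix; a real
        # implementation would gather only the windowed keys to save compute.
        scores = (q @ k.transpose(-1, -2)) / d ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v
```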
cs702 · 2 hours ago
> Who out there is working on ... infinite history?

Many people are still working on improving RNNs, mostly in academia. Examples off the top of my head:

* RWKV: https://arxiv.org/abs/2305.13048 / https://arxiv.org/abs/2404.05892
* Linear attention: https://arxiv.org/abs/2006.16236 / https://arxiv.org/abs/2503.14456
* State space models: https://arxiv.org/abs/2312.00752 / https://arxiv.org/abs/2405.21060
* Linear RNNs: https://arxiv.org/abs/2410.01201

Industry, OTOH, has gone all-in on Transformers.
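(To make concrete what the recurrent alternatives above buy you, here is a minimal sketch of the recurrent form of linear attention, the idea behind arXiv:2006.16236: each token is folded into a fixed-size state instead of attending over the whole history. The feature map and shapes are illustrative choices, not any specific paper's recipe.)

```python
import numpy as np

def elu_plus_one(x):
    # Simple positive feature map phi(x) = elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_recurrent(q, k, v):
    # q, k, v: (seq, dim) arrays; returns outputs of shape (seq, dim).
    seq, dim = q.shape
    S = np.zeros((dim, dim))            # running sum of outer(phi(k_t), v_t)
    z = np.zeros(dim)                   # running sum of phi(k_t), for normalization
    out = np.zeros_like(v, dtype=float)
    for t in range(seq):
        phi_k = elu_plus_one(k[t])
        phi_q = elu_plus_one(q[t])
        S += np.outer(phi_k, v[t])      # constant-size state update
        z += phi_k
        out[t] = (phi_q @ S) / (phi_q @ z + 1e-6)
    return out
```

The point of the recurrence is that memory stays at a fixed dim×dim state no matter how long the context grows, which is what makes "infinite history" at least plausible for this family of models.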