westurner 7 hours ago

ScholarlyArticle: "Attention Residuals" (2026) https://arxiv.org/abs/2603.15031 :

> Abstract: Residual connections with PreNorm are standard in modern LLMs, yet they accumulate all layer outputs with fixed unit weights. This uniform aggregation causes uncontrolled hidden-state growth with depth, progressively diluting each layer's contribution. We propose Attention Residuals (AttnRes), which replaces this fixed accumulation with softmax attention over preceding layer outputs, allowing each layer to selectively aggregate earlier representations with learned, input-dependent weights. To address the memory and communication overhead of attending over all preceding layer outputs for large-scale model training, we introduce Block AttnRes, which partitions layers into blocks and attends over block-level representations, reducing the memory footprint while preserving most of the gains of full AttnRes. [...]
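The abstract's core idea can be sketched in a few lines: a standard PreNorm residual stream sums all preceding layer outputs with fixed unit weights, while AttnRes replaces that sum with a softmax-weighted (convex) combination. The paper's exact parameterization isn't given in the abstract, so this toy numpy sketch assumes simple dot-product scores through a hypothetical key projection `key_proj`; it only illustrates the norm-growth contrast the abstract describes.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def standard_residual(layer_outputs):
    """PreNorm-style accumulation: every layer output added with weight 1."""
    return np.sum(np.stack(layer_outputs), axis=0)

def attn_residual(layer_outputs, query, key_proj):
    """Softmax attention over preceding layer outputs (toy sketch).

    query: (d,) current state; key_proj: (d, d) hypothetical key projection
    (an assumption, not from the paper). The softmax weights sum to 1, so
    the result is a convex combination of earlier representations rather
    than an unbounded running sum.
    """
    H = np.stack(layer_outputs)        # (L, d) preceding layer outputs
    scores = H @ key_proj @ query      # (L,) input-dependent scores
    w = softmax(scores)                # learned-style mixing weights
    return w @ H                       # replaces sum(H)

rng = np.random.default_rng(0)
d, L = 8, 16
outs = [rng.normal(size=d) for _ in range(L)]
q = rng.normal(size=d)
K = rng.normal(size=(d, d)) / np.sqrt(d)

s = standard_residual(outs)
a = attn_residual(outs, q, K)
# The plain sum's norm tends to grow with depth ("uncontrolled hidden-state
# growth"); the convex combination is bounded by the largest single
# layer-output norm.
print(np.linalg.norm(s), np.linalg.norm(a))
```

By the triangle inequality, a convex combination's norm can never exceed the largest input norm, which is one way to read the abstract's claim about avoiding uncontrolled growth; Block AttnRes would apply the same idea over block-level summaries instead of every preceding layer.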

czbond 5 hours ago | parent [-]

Ah - now I understand how this has 2k+ (supposedly legitimate) GitHub stars in less than a week. Thank you - I was more skeptical before.