Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation(arxiv.org)
95 points by fheinsen 3 hours ago | 46 comments
amluto an hour ago | parent | next [-]

I skimmed the paper, and I think I completely lost the plot.

Sections 2.1 through 2.4 talk about decomposing the per-token-pair attention (key vector from the ith token with query vector from the jth token, where, in inference, the jth token is the one being sampled) into an approximation that is only mildly outrageous in size compared to the original exponential-of-a-dot-product. And they get something that's a polynomial (in the mathematical sense -- you're literally evaluating a polynomial) and has a size that's manageable at 4th order.

Okay, great, they took something simple and made it bigger and nastier but less transcendental without losing too much precision. (As far as I know, there is really nothing special about the exp in attention in the first place, so trying to approximate it well seems mostly useful insofar as it will keep existing models working.)

But the reason that attention is quadratic is that each token gets evaluated with respect to each other token. They haven't changed this at all. Section 2.5 seems like it's deferring this to an appendix. Section 2.6 gives the hidden state size per token, which, on first read, is strictly larger than the hidden state in normal attention (in normal attention it's d_v * d_k -- I'm not sure where their +1 comes from).

So what did the paper gain? Is there some detail that I missed or that the paper completely glossed over that explains why there is any gain of efficiency at all?

For what it's worth, the paper's overall claim is, in some sense, impossible. You can think of attention as being a sort of vector database, and this gets more accurate the sharper you make the exponential. If you replace softmax with actual max, a query locates the key that is the closest match to the query and returns the associated value. This operation is a plain linear search; it's possible (in principle anyway) to do lots of queries and recover the entire contents of the database, and I think that any paper claiming to do it faster than linear time should explain how it's compressing the data and where the loss is.
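A toy illustration of that vector-database view (my own sketch, not anything from the paper): with random keys and a sharpened softmax, a single query does O(n) work and pulls out essentially one stored value, which is exactly the lookup behavior that would have to be preserved.

    import numpy as np

    # Toy sketch: attention as an approximate key-value lookup that approaches
    # an exact lookup as the softmax gets sharper. O(n) work per query.
    rng = np.random.default_rng(0)
    n, d = 1000, 64
    keys = rng.normal(size=(n, d))
    values = rng.normal(size=(n, d))
    query = keys[123] + 0.01 * rng.normal(size=d)    # query close to key 123

    for scale in (1.0, 4.0, 16.0):                   # sharper and sharper softmax
        logits = scale * (keys @ query) / np.sqrt(d)
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        out = weights @ values
        print(f"scale={scale:5.1f}  weight on key 123 = {weights[123]:.3f}  "
              f"|out - value 123| = {np.linalg.norm(out - values[123]):.3f}")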

In language model terms, imagine a prompt like so:

    1: [string 1]
    2: [string 2]
    3: [string 3]
    ...
    n: [string n]
    
    Tell me the string associated with the number k.
As long as there's enough precision and enough query/key space to fit some embedding of the number k that will match the right thing (and there is a lot of room in high-dimensional spaces), one might expect a transformer to be able to answer this question. But this obviously requires memory with size linear in the prompt length. If you try to get rid of that, you necessarily lose something. (This is not to say that nice attention scaling is impossible -- one could imagine schemes where it takes the model multiple tokens to answer the question, and the number of tokens needed could scale, say, logarithmically with prompt size. But you still need that linear memory.)
csense 8 minutes ago | parent | next [-]

This paper combines two different insights; the second one is buried in the appendix.

Let's say you consider the 3 most-recent tokens. The first insight is that you can use a Taylor approximation: At token position 3 you compute A_3 = ((q1, q2, q3) . (k1, k2, k3))^1, B_3 = ((q1, q2, q3) . (k1, k2, k3))^2, C_3 = ((q1, q2, q3) . (k1, k2, k3))^3, etc. [1] [2]

The second insight is that you can compute e.g. B_{i+1} incrementally from B_i, with far fewer FLOPs than computing B_{i+1} from scratch. [3]
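A rough numpy sketch of the flavor of both insights (my reading, not the paper's actual formulation, which works with tensor-product chains and exploits their symmetries to avoid the blow-up below): truncating the Taylor series of exp(q.k) turns attention into running sums over per-token features, and each new token updates those sums at fixed cost instead of re-reading the whole prefix.

    import numpy as np
    from math import factorial

    def taylor_features(x, P):
        """Feature map with phi(q) @ phi(k) == sum_{p=0..P} (q.k)^p / p!.
        Naive version: the degree-p block is x^{tensor p} flattened, so it grows
        like d^p; avoiding that growth is (as I read it) the paper's contribution."""
        feats, block = [np.ones(1)], np.ones(1)      # p = 0 term
        for p in range(1, P + 1):
            block = np.kron(block, x)                # x^{tensor p}, flattened
            feats.append(block / np.sqrt(factorial(p)))
        return np.concatenate(feats)

    rng = np.random.default_rng(0)
    d, P = 4, 4
    S, z = None, None                                # constant-size running state

    for t in range(16):                              # stream of tokens
        q, k, v = rng.normal(size=(3, d)) / np.sqrt(d)
        phi_k, phi_q = taylor_features(k, P), taylor_features(q, P)
        if S is None:
            S, z = np.zeros((phi_k.size, d)), np.zeros(phi_k.size)
        S += np.outer(phi_k, v)                      # update cost independent of t
        z += phi_k
        out = (phi_q @ S) / (phi_q @ z)              # approximate attention at step t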

[1] I'd buy that it's empirically "good enough" that you don't need to go beyond D_3 (fourth degree polynomial).

[2] I'd also buy that it's empirically "good enough" to assume the inputs aren't extreme enough for E_3, F_3 etc. to matter. I agree with other posters that radius of convergence worries aren't addressed. I find it plausible that these issues don't sink the paper. I'd not be surprised to learn that either it doesn't matter in practice, or workarounds can be implemented without much performance impact.

[3] The author's choice to bury this insight in an appendix rather than putting it front and center is baffling pedagogically, but it's a small issue in the grand scheme of things. Perhaps that second insight is prior work (possibly by others) that experts in the latest LLM linear algebra could reasonably be expected to be familiar with, but is included as an appendix because it's not universally known in e.g. HN comment sections?

fheinsen 16 minutes ago | parent | prev [-]

This is a form of linear attention (https://arxiv.org/abs/2006.16236) that approximates standard scaled dot-product attention to arbitrary precision, by adding Taylor terms in an efficient manner. Each additional Taylor term improves the approximation. Efficiency is achieved by exploiting certain mathematical symmetries that become evident only after decomposing the standard formulation of attention into an expression over chains of tensor products. The github repository's README walks through examples. The first example is with 8 Taylor terms.

thomasahle 2 hours ago | parent | prev | next [-]

There's a graveyard of 100s of papers with "approximate near linear time attention."

They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.

Indeed, there are lower bounds showing that sub n^2 algorithms can't work: https://arxiv.org/pdf/2302.13214

WhitneyLand 4 minutes ago | parent | next [-]

The 2023 paper, even if true, doesn’t preclude the 2026 paper from being true; it just sets constraints on how a faster attention solution would have to work.

jcarreiro an hour ago | parent | prev | next [-]

The paper says that:

> In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications.

i.e., the claim is that this method reproduces the results of conventional attention, up to float16 numerical precision.

fheinsen 44 minutes ago | parent | next [-]

The method is more general. The github repository's first example is with eight Taylor terms (P = 8).

energy123 28 minutes ago | parent | prev [-]

It converges on conventional attention as P goes up.
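A toy scalar check of that convergence (just truncating exp directly, with an assumed range for the scaled dot products; not the paper's experiment): the truncation error shrinks rapidly as P grows, and how P = 4 compares to float16 resolution depends on how large those dot products actually get.

    import numpy as np
    from math import factorial

    def taylor_exp(x, P):
        # exp(x) truncated to P Taylor terms beyond the constant: sum_{p=0..P} x^p / p!
        return sum(x**p / factorial(p) for p in range(P + 1))

    fp16_eps = np.finfo(np.float16).eps        # ~9.8e-4, float16 resolution
    xs = np.linspace(-1.0, 1.0, 201)           # assumed range of scaled q.k values

    for P in (2, 4, 6, 8):
        err = np.max(np.abs(np.exp(xs) - taylor_exp(xs, P)))
        print(f"P={P}: max truncation error on [-1, 1] = {err:.1e}   (float16 eps = {fp16_eps:.1e})")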

kristjansson 2 hours ago | parent | prev | next [-]

> self-attention is efficiently computable to arbitrary precision with constant cost per token

This paper at least aspires to reproduce 'true' attention, which distinguishes it from many of the others. TBD if it's successful in that.

energy123 2 hours ago | parent | next [-]

It's like claims of room temperature superconductors or millennium prize solutions. Earth-shattering if true. It'd be such a black swan. Terrible for Nvidia.

SeanAnderson an hour ago | parent [-]

Well, we solved one of the Millennium Prize problems (honestly kinda quickly) so maybe there's hope :)

logicchains an hour ago | parent | prev [-]

It can't be successful at that any more than 1+1 can equal 3. Fundamentally, if every token wants to be able to look at every previous token without loss of information, it must be O(n^2); N tokens looking at N tokens is quadratic. Any sub-quadratic attention must hence necessarily lose some information and be unable to support perfect recall on longer sequences.

hellohello2 35 minutes ago | parent | next [-]

I'm not saying if the paper is correct or not (since I can't tell), but I don't think your argument really holds. Consider applying it to multiplication:

Fundamentally, multiplication needs to look at every pair of digits from the two input numbers. It must be O(n^2); N digits looking at N other digits is quadratic. Any sub-quadratic multiplication must hence necessarily lose some information.
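A concrete version of that analogy (textbook Karatsuba, nothing specific to the paper): sub-quadratic multiplication exists and loses no information; it just reorganizes the pairwise digit work.

    def karatsuba(x: int, y: int) -> int:
        """Multiply non-negative integers in ~O(n^1.585) digit operations instead
        of the O(n^2) schoolbook method, with no loss of information."""
        if x < 10 or y < 10:
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10**m)
        high_y, low_y = divmod(y, 10**m)
        z0 = karatsuba(low_x, low_y)
        z2 = karatsuba(high_x, high_y)
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
        return z2 * 10**(2 * m) + z1 * 10**m + z0

    assert karatsuba(123456789, 987654321) == 123456789 * 987654321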

oasisaimlessly 9 minutes ago | parent | prev [-]

That argument could also be used to say that the FFT's time complexity of O(n log n) should be impossible.

cobolexpert 2 hours ago | parent | prev | next [-]

Dumb question: is the quadratic time complexity for training, inference, or both?

dave_universetf 26 minutes ago | parent | next [-]

Both, with caveats. The attention computation is fundamentally quadratic: for every token in the sequence, you're doing a computation that has to read every other token in the sequence. So it's O(N) per token, O(N^2) for the whole sequence.

The big mitigation for this is that in causal transformers (i.e. all the chatbot type applications, where each token is only allowed to see tokens before it), you're running inference repeatedly on the same prefix in order to grow it by one token at a time. So if you cache the computations for tokens 0..N-1, on each inference pass you only have to compute O(N) for the newly added token at the end of the sequence.
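A minimal single-head numpy sketch of that picture (illustrative only, not any particular implementation): processing the whole prefix from scratch costs O(N^2) in total, while extending an existing KV cache by one token costs O(N).

    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 64, 512

    def attend(q, K, V):
        # One query against all cached keys/values: O(len(K)) work.
        w = np.exp(q @ K.T / np.sqrt(d))
        w /= w.sum()
        return w @ V

    # "Prefill": every token attends to its whole prefix -> O(N^2) total.
    Q, K, V = rng.normal(size=(3, N, d))
    outputs = [attend(Q[t], K[:t + 1], V[:t + 1]) for t in range(N)]  # causal via slicing

    # Decoding with a KV cache: each new token only pays O(N) by reusing K and V.
    q_new, k_new, v_new = rng.normal(size=(3, d))
    K, V = np.vstack([K, k_new]), np.vstack([V, v_new])               # append to cache
    next_output = attend(q_new, K, V)                                 # O(N + 1), not O(N^2)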

That's why caching (and caching charges) appear so prominently everywhere in the pricing of inference.

In practice, caching is most beneficial at inference time, because you typically have relatively long conversations that start with the same cacheable prefix (the system prompt). At training time the same optimization can apply, but you're typically not pushing the same prefixes through the model repeatedly so you end up paying the quadratic cost more often.

The quadratic cost of attention is the fundamental compute bottleneck for transformer architectures, which is why there's research like this trying to find shortcuts in computing attention, as well as research into completely new primitives to replace attention (e.g. SSM, which is O(N) on a cold cache and O(1) on a warm cache).

omneity 2 hours ago | parent | prev [-]

Attention is calculated during the forward pass of the model, which happens in both inference (forward only) and training (forward & backward).

SubiculumCode an hour ago | parent [-]

Dumb question: Can inference be done in a reverse pass? Outputs predicting inputs?

gpm an hour ago | parent | next [-]

Not as trivially as in the forward direction; unsurprisingly, information is lost, but it works better than you might expect. See for example https://arxiv.org/pdf/2405.15012

root_axis an hour ago | parent | prev [-]

Sounds like a great premise for a sci-fi short story.

anu7df 42 minutes ago | parent [-]

Sci-fi ? You mean historical fiction!

fheinsen 2 hours ago | parent | prev | next [-]

As the error of the linear approximation approaches the same magnitude as the numerical error of the quadratic computation, don’t the two start becoming comparable in practice?

I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats.

Numerical error, in fact, may be a key factor as to why quadratic attention, in practice, exhibits context rot as context gets longer, analogous to an RNN:

https://www.anthropic.com/engineering/effective-context-engi...

naasking 2 hours ago | parent | prev | next [-]

I think any kind of innovation here will have to take advantage of some structure inherent to the problem, like eliminating attention in favour of geometric structures like Grassman flows [1].

[1] Attention Is Not What You Need, https://arxiv.org/abs/2512.19428

findalex an hour ago | parent [-]

Right - e.g., if you're modeling a physical system it makes sense to bake in some physics - like symmetry.

naasking 23 minutes ago | parent [-]

Indeed, and I think natural language and reasoning will have some kind of geometric properties as well. Attention is just a sledgehammer that lets us brute force our way around not understanding that structure well. I think the next step change in AI/LLM abilities will be exploiting this geometry somehow [1,2].

[1] GrokAlign: Geometric Characterisation and Acceleration of Grokking, https://arxiv.org/abs/2510.09782

[2] The Geometry of Reasoning: Flowing Logics in Representation Space, https://arxiv.org/abs/2506.12284

cubefox 2 hours ago | parent | prev [-]

I think DeepSeek V3.2 is sub n^2, but it clearly performs quite well, refuting the alleged lower bounds in the paper.

andy12_ 2 hours ago | parent [-]

It really isn't sub-N^2. The main attention is O(Nk), but only thanks to a lightning indexer that still has O(N^2) complexity. So overall it still has the same complexity, just with a smaller constant factor. [1]

> DSA reduces the core attention complexity of the main model from O(L^2) to O(Lk), where k (<< L) is the number of selected tokens. Although the lightning indexer still has a complexity of O(L^2), it requires much less computation compared with MLA in DeepSeek-V3.1-Terminus

[1] https://arxiv.org/pdf/2512.02556

riemannzeta 42 minutes ago | parent | prev | next [-]

Neat result. The symmetry exploitation here reminds me of recent work connecting neural network training dynamics to renormalization group theory. Charles Martin's SETOL paper https://arxiv.org/abs/2507.17912 shows that well-trained layers converge to something like an RG fixed point—the eigenvalue spectrum of the weight matrix develops power-law tails with exponent α ≈ 2, which is the signature of scale invariance. At this fixed point, the "effective correlation space" is low-dimensional: you can truncate the SVD aggressively and recover nearly identical test accuracy.

I wonder if there's a connection to your Taylor truncation order. In RG terms, higher-order polynomial interactions are "irrelevant operators"—they get suppressed as you flow toward the fixed point. If trained attention heads are sitting near this fixed point, that might explain why modest truncation orders work: the network has already learned to concentrate its computation in the lower-order terms. A testable prediction: layers with α closer to 2 (measurable via weightwatcher https://github.com/CalculatedContent/WeightWatcher) might need fewer Taylor terms for accurate approximation than layers with α far from 2. If true, you could potentially use the spectral statistics to adaptively choose truncation order per-head.

bluecoconut 3 hours ago | parent | prev | next [-]

I almost feel like this goes opposite to what attention is good at. This would be good at approximating all the places where attention is low / not sharp. Where attention/the exponential is key is when it selects out / needle-in-haystack / winner-takes-all focus (the word "attention" itself sorta implies this), and this is where the Taylor expression would fail to represent the values well. This just... softens attention's ability to attend?

(I'm imagining that if in the context there's ~4-8 "similar" attention-targets that should be sharp, and regular attention learns to select the correct one, this Taylor approximation version would wash out any difference and they'd all loosely be attended to, and it'd fail to isolate the correct signal)
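A toy version of that worry (my own construction, not an experiment from the paper; whether real heads produce logits this large after scaling is exactly the open question): with a few near-duplicate targets and large logits, the true softmax concentrates on the best match while a low-order truncated-exp surrogate spreads the weight out.

    import numpy as np
    from math import factorial

    def taylor_exp(x, P=4):
        # Truncated Taylor series of exp, standing in for a low-order approximation.
        return sum(x**p / factorial(p) for p in range(P + 1))

    # A handful of "similar" attention targets; the first is only slightly best.
    logits = np.array([10.0, 8.0, 8.0, 6.0, 0.0, 0.0])   # assumed, deliberately sharp

    softmax = np.exp(logits) / np.exp(logits).sum()
    surrogate = taylor_exp(logits) / taylor_exp(logits).sum()

    print("softmax  :", np.round(softmax, 3))    # most of the mass on the best match
    print("surrogate:", np.round(surrogate, 3))  # noticeably flatter across the near-duplicates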

Really wish this had some downstream tests -- apply it to a pretrained model and see how performance degrades, train a fresh one, etc. The tests are worth doing, but I somehow don't feel that hopeful this is the unlock required for sub-quadratic attention. It's possible that a freshly trained model with this learns to attend without the sharp attention signals, but that seems a bit dubious to me.

But also, maybe this combined with some other selective (sparse attention) trick, means that the hybrid model gets the "fuzzy long tail" of attention well represented as well as the sharpness well represented, and all together it could actually be a part of the larger solution.

energy123 3 hours ago | parent | next [-]

> this is where the taylor expression would fail to represent the values well.

"In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution"

seanhunter 2 hours ago | parent [-]

I read that too, but I wondered whether elementwise error is the right metric. Surely the actual error metric should be to evaluate model performance for a conventional transformer model and then the same model with the attention mechanism replaced by this 4th order Taylor approximation?

vlovich123 an hour ago | parent [-]

Bounded elementwise error on the weights is, by definition, a stricter evaluation criterion than "performance" metrics obtained by running the model.

tehsauce 2 hours ago | parent | prev | next [-]

Right, and when they compare to floating point accuracy they seem to be using the number of decimals supported by the mantissa, but the exponent is important too, no?

seanhunter 2 hours ago | parent [-]

When someone says the error is of a certain magnitude they mean the absolute value of the difference between the two things, so what they're saying is that the values they produced with their approximation are about as wrong as the difference between the actual values and those values cast to float16. The exponent is most definitely important and would be included in that.

mapontosevenths 2 hours ago | parent | prev [-]

> This just... softens attentions ability to attend?

I think this does soften, but not linearly. That is to say the fixed state size limitation means that it softens more as it gets further into the past.

abeppu 2 hours ago | parent | prev | next [-]

I haven't tried to follow the math closely but should there not be some concern about the region of convergence? It looks like they don't specifically discuss it. Or is there some reason this isn't a problem in this context?

reactordev 2 hours ago | parent [-]

I fear they have completely overlooked it.

andes314 2 hours ago | parent | prev | next [-]

Linear-time attention doesn’t work, as a matter of principle. It's a dead-end pursuit. There's much great research on more efficient quadratic-time inference.

smokel an hour ago | parent [-]

What about n log n?

mapontosevenths 3 hours ago | parent | prev | next [-]

This uses the Taylor approximation to approximate softmax, but that IS only an approximation. I wonder exactly how much that trade-off costs in terms of accuracy vs performance? I note that they say it's close to float16 with four Taylor terms.

My other concern would be that Taylor itself is fairly complex. I wonder how well GPUs handle this in comparison to good old-fashioned softmax? The last time I used Taylor with a custom Triton kernel it was still very slow. That could just have been my own jank vibe-coded implementation though.

observationist 2 hours ago | parent | prev | next [-]

This could turbocharge ByT5 and other tokenless architectures, whose big downside was the increase in compute over longer sequences. It's easy to imagine a bunch of strategies with variable levels of "focus" and so on with a fixed compute budget assigned on the fly with learned optimizers informing the distribution.

NedCode an hour ago | parent | prev | next [-]

Reference implementation: https://github.com/glassroom/sata_attention

alyxya 2 hours ago | parent | prev | next [-]

The best and proven linear attention is the Gated DeltaNet or variations of it, used by Kimi and Qwen. Anyone who thinks linear attention can't work is forgetting that models are a fixed size, so attention should always be compressible to be linear. Another way to think of the feasibility of linear attention is that the standard attention mechanism can be made linear simply by removing the softmax, so the kv cache stores the kv product as a constant-size matrix instead. Softmax just normalizes attention, but it's not theoretically required.
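A minimal sketch of that "just drop the softmax" view (generic unnormalized linear attention, not Gated DeltaNet; real layers add a feature map and a normalizer on top of this): the value-weighted sum factors through a constant-size running matrix, so per-token cost never grows with context length.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64
    S = np.zeros((d, d))       # running sum of outer(k_i, v_i): a constant-size "kv cache"

    for t in range(1000):                        # arbitrarily long stream of tokens
        q, k, v = rng.normal(size=(3, d)) / np.sqrt(d)
        S += np.outer(k, v)                      # O(d^2) update, independent of t
        out = q @ S                              # == sum_i (q . k_i) v_i, computed without
                                                 # ever revisiting the earlier tokens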

spacewhales 3 hours ago | parent | prev | next [-]

Github here: https://github.com/glassroom/sata_attention

yanosh_kunsh 3 hours ago | parent | prev | next [-]

So does that mean that LLM inference could go down significantly in price and/or context length would dramatically increase?

rvz 2 hours ago | parent | prev [-]

> Our work enables unbounded token generation at modest fixed cost, substantially reducing the infrastructure and energy demands of large-scale Transformer models. The mathematical techniques we introduce are of independent interest.

Now this is a very interesting paper, which hopefully addresses AI's chronic lack of efficient methods for reducing its computational and energy demands, which are off the charts.

> These factors penalize performance relative to what a fused, hardware-optimized implementation could achieve, and the reported runtime results should therefore be interpreted conservatively.

It's still early, with several limitations, but wasting billions on GPUs will soon stop making sense.